content stringlengths 86 994k | meta stringlengths 288 619 |
|---|---|
What are downbursts?
A. Precipitation in the form of balls or lumps of ice
B. Intense rotating updrafts
C. Storms that occur primarily in the western United States
D. Strong, localized winds associated with thunderstorms
What are downbursts? A. Precipitation in the form of balls or lumps of ice B. Intense rotating updrafts C. Storms that occur primarily in the western United States D. Strong, localized winds
associated with thunderstorms
Downbursts are: Strong, localized winds associated with thunderstorms.
Asked 1/21/2018 9:58:23 AM
Updated 1/21/2018 10:33:40 AM
1 Answer/Comment | {"url":"https://www.weegy.com/?ConversationId=L0XTQR7Z&Link=i&ModeType=2","timestamp":"2024-11-06T15:22:39Z","content_type":"application/xhtml+xml","content_length":"60885","record_id":"<urn:uuid:a8989d88-9639-4ec3-9536-d07607138f31>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00250.warc.gz"} |
Mathematics Made Visible
This image, created by Greg Egan, shows the ‘discriminant’ of the symmetry group of the icosahedron. This group acts as linear transformations of \(\mathbb{R}^3\) and thus also \(\mathbb{C}^3\). By a
theorem of Chevalley, the space of orbits of this group action is again isomorphic to \(\mathbb{C}^3\). Each point in the surface shown here corresponds to a ‘nongeneric’ orbit: an orbit with fewer
than the maximal number of points. More precisely, the space of nongeneric orbits forms a complex surface in \(\mathbb{C}^3\), called the discriminant, whose intersection with \(\mathbb{R}^3\) is
shown here.
Involutes of a Cubical Parabola
This animation by Marshall Hampton shows the involutes of the curve \(y = x^3\). It lies at a fascinating mathematical crossroads, which we shall explore in a series of three posts.
Barth Sextic
A sextic surface is one defined by a polynomial equation of degree 6. The Barth sextic, drawn above by Craig Kaplan, is the sextic surface with the maximum possible number of ordinary double points:
that is, points where it looks like the origin of the cone in 3-dimensional space defined by \(x^2 + y^2 = z^2\).
Rectified Truncated Icosahedron
The rectified truncated icosahedron is a surprising new polyhedron discovered by Craig S. Kaplan. It has a total of 60 triangles, 12 pentagons and 20 hexagons as faces.
Zamolodchikov Tetrahedron Equation
The Zamolodchikov tetrahedron equation, illustrated above by J. Scott Carter and Masahico Saito, is a fundamental law governing surfaces embedded in 4-dimensional space. It also arises purely
algebraically in the theory of braided monoidal 2-categories.
Clebsch Surface
This is an image of the Clebsch surface, created by Greg Egan. The Clebsch surface owes its fame to the fact that while all smooth cubic surfaces defined over the complex numbers contain 27 lines,
for this particular example all the lines are real, and thus visible to the eye. However, it has other nice properties as well.
27 Lines on a Cubic Surface
This animation by Greg Egan shows 27 lines on a surface defined by cubic equations: the Clebsch surface. It illustrates a remarkable fact: any smooth cubic surface contains 27 lines.
Hoffman–Singleton Graph
This is the Hoffman–Singleton graph, a remarkably symmetrical graph with 50 vertices and 175 edges. There is a beautiful way to construct the Hoffman–Singleton graph by connecting 5 pentagons to 5
Cairo Tiling
The Cairo tiling is a tiling of the plane by non-regular pentagons which is dual to the snub square tiling.
Free Modular Lattice on 3 Generators
This is the free modular lattice on 3 generators, as drawn by Jesse McKeown. First discovered by Dedekind in 1900, this structure turns out to have an interesting connection to 8-dimensional
Euclidean space. | {"url":"https://blogs.ams.org/visualinsight/page/2/","timestamp":"2024-11-05T16:10:02Z","content_type":"text/html","content_length":"65651","record_id":"<urn:uuid:7bcbc6df-c96d-4278-b210-a76dab86c304>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00215.warc.gz"} |
Quantum Physics | Encyclopedia.com
Quantum Mechanics
Quantum mechanics has the distinction of being considered both the most empirically successful and the most poorly understood theory in the history of physics.
To take an oft-cited example of the first point: The theoretically calculated value of the anomalous magnetic moment of the electron using quantum electrodynamics matches the observed value to twelve
decimal places, arguably the best confirmed empirical prediction ever made. To illustrate the second point, we have the equally oft-cited remarks of Niels Bohr, "Anyone who says that they can
contemplate quantum mechanics without becoming dizzy has not understood the concept in the least," and of Richard Feynman, "[We] have always had (secret, secret, close the doors!) we always have had
a great deal of difficulty in understanding the world view that quantum mechanics represents." How could both of these circumstances obtain?
For the purposes of making predictions, quantum theory consists in a mathematical apparatus and has clear enough rules of thumb about how to apply the mathematical apparatus in various experimental
situations. If one is doing an experiment or observing something, one must first associate a mathematical quantum state or wave function with the system under observation. For example, if one
prepares in the laboratory an electron beam with a fixed momentum, then the quantum state of each electron in the beam will be something like a sine wave. In the case of a single particle it is
common to visualize this wave function as one would a water wave: as an object extended in space. Although this visualization works for a single particle, it does not work in general, so care must be
taken. But for the moment, this simple visualization works. The wave function for the electron is "spread out" in space.
The second part of the mathematical apparatus is a dynamical equation that specifies how the quantum state changes with time so long as no observation or measurement is made on the system. These
equations have names like the Schrödinger equation (for nonrelativistic quantum mechanics) and the Dirac equation (for relativistic quantum field theory). In the case of the electron mentioned
earlier the dynamical equation is relevantly similar to the dynamical equation for water waves, so we can visualize the quantum state as a little plane water wave moving in a certain direction. If
the electron is shot at a screen with two slits in it, then the quantum state will behave similarly to a water wave that hits such a barrier: circularly expanding waves will emerge from each slit,
and there will be constructive and destructive interference where those waves overlap. If beyond the slits there is a fluorescent screen, we can easily calculate what the quantum state "at the
screen" will look like: It will have the peaks and troughs characteristic of interfering water waves.
Finally comes the interaction with the screen. Here is where things get tricky. One would naively expect that the correct way to understand what happens when the electron wave function reaches the
screen is to build a physical model of the screen and apply quantum mechanics to it. But that is not what is done. Instead, the screen is treated as a measuring device and the interaction with the
screen as a measurement, and new rules are brought into play.
The new rules require that one first decide what property the measuring device measures. In the case of a fixed screen it is taken that the screen measures the position of a particle. If instead of a
fixed screen we had an absorber on springs, whose recoil is recorded, then the device would measure the momentum of the particle. These determinations are typically made by relying on classical
judgments: There is no algorithm for determining what a generic (physically specified) object "measures," or indeed whether it measures anything at all. But laboratory apparatus for measuring
position and momentum have been familiar from before the advent of quantum theory, so this poses no real practical problem.
Next, the property measured gets associated with a mathematical object called a Hermitian operator. Again, there is no algorithm for this, but for familiar classical properties like position and
momentum the association is established. For each Hermitian operator there is an associated set of wave functions called the eigenstates of the operator. It is purely a matter of mathematics to
determine the eigenstates. Each eigenstate has associated with it an eigenvalue : The eigenvalues are supposed to correspond to the possible outcomes of a measurement of the associated property, such
as the possible values of position, momentum, or energy. (Conversely, it is typically assumed that for every Hermitian operator, there corresponds a measurable property and possible laboratory
operations that would measure it, although there is no general method for specifying these.)
The last step in the recipe for making predictions can now be taken. When a system is measured, the wave function for the system is first expressed as a sum of terms, each term being an eigenstate of
the relevant Hermitian operator. Any wave function can be expressed as a sum of such terms, with each term given a weight, which is a complex number. For example, if an operator has only two
eigenstates, call them |1> and |2>, then any wave function can be expressed in the form α|1> + β |1>, with α and β complex numbers such that |α|^2 + |β |^2 = 1. (This is the case, for example, when
we measure the so-called spin of an electron in a given direction, and always get one of two results: spin up or spin down.) Recall that each eigenstate is associated with a possible outcome of the
measurement: |1>, for example, could be associated with getting spin up, and |2> with getting spin down. The quantum mechanical prediction is now typically a probabilistic one: the chance of getting
the result associated with |1> is |α|^2, and the chance of getting the result associated with |2> is |β |^2. In general, one writes out the wave function of the system in terms of the appropriate
eigenstates, and then the chance of getting the result associated with some eigenstate is just the square of the complex number that weights the state.
We can now see how quantum theory makes empirical predictions: So long as one knows the initial quantum state of the system and the right Hermitian operator to associate with the measurement, the
theory will allow one to make probabilistic predictions for the outcome. Those predictions turn out to be exquisitely accurate.
If a Hermitian operator has only a finite number of eigenstates, or the eigenvalues of the operator are discrete, then any associated measurement should have only a discrete set of possible outcomes.
This has already been in the case of spin; for a spin-1/2 particle such as an electron, there are only two eigenstates for the spin in a given direction. Physically, this means that when we do an
experiment to measure spin (which may involve shooting a particle through an inhomogeneous magnetic field) we will get only one of two results: Either the particle will be deflected up a given amount
or down a given amount (hence spin up and spin down). In this case the physical quantity is quantized ; it takes only a discrete set of values. But quantum theory does not require all physical
magnitudes to be quantized in this way; the position, momentum, or energy of a free particle is not. So the heart of quantum theory is not a theory of discreteness, it is rather just the mathematical
apparatus and the rules of application described earlier.
The Measurement Problem
Why, then, is the quantum theory so puzzling, or so much more obscure than, say, classical mechanics? One way that it differs from classical theory is that it provides only probabilistic predictions
for experiments, and one might well wonder, as Albert Einstein famously did, whether this is because "God plays dice with the universe" (i.e., the physical world itself is not deterministic) or
whether the probabilities merely reflect our incomplete knowledge of physical situation. But even apart from the probabilities, the formulation of the theory is rather peculiar. Rules are given for
representing the physical state of a system and for how that physical state evolves and interacts with other systems when no measurement takes place. This evolution is perfectly deterministic. A
different set of rules is applied to derive predictions for the outcomes of experiments, and these rules are not deterministic. Still, an experiment in a laboratory is just a species of physical
interaction, and ought to be treatable as such. There should be a way to describe the physical situation in the lab, and the interaction of the measured system with the measuring device, that relies
only on applying, say, the Schrödinger equation to the physical state of the system plus the lab.
John S. Bell put this point succinctly, "If you make axioms, rather than definitions and theorems, about the 'measurement' of anything else, then you commit redundancy and risk inconsistency" (1987,
p. 166). You commit redundancy because while the axioms about measurement specify what should happen in a measurement situation, the measurement situation, considered as a simple physical
interaction, ought also to be covered by the general theory of such interactions. You risk inconsistency because the redundancy produces the possibility that the measurement axioms will contradict
the results of the second sort of treatment. This is indeed what happens in the standard approaches to quantum mechanics. The result is called the measurement problem.
The measurement problem arises from a conflict in the standard approach between treating a laboratory operation as a normal physical interaction and treating it as a measurement. To display this
conflict, we need some way to represent the laboratory apparatus as a physical device and the interaction between the device and the system as a physical interaction. Now this might seem to be a
daunting task; a piece of laboratory apparatus is typically large and complicated, comprising astronomically large numbers of atoms. By contrast, exact wave functions are hard to come by for anything
much more complicated than a single hydrogen atom. How can we hope to treat the laboratory operation at a fundamental level?
Fortunately, there is a way around this problem. Although we cannot write down, in detail, the physical state of a large piece of apparatus, there are conditions that we must assume if we are to
regard the apparatus as a good measuring device. There are necessary conditions for being a good measuring device, and since we do regard certain apparatus as such devices, we must be assuming that
they meet these conditions.
Take the case of spin. If we choose a direction in space, call it the x– direction, then there is a Hermitian operator that gets associated with the quantity x– spin. That operator has two
eigenstates, which we can represent as |x– up>[S] and |x– down>[S]. The subscript s indicates that these are states of the system to be measured. We have pieces of laboratory equipment that can be
regarded as good devices for measuring the x– spin of a particle. We can prepare such an apparatus in a state, call it the "ready" state, in which it will function as a good measuring device. Again,
we do not know the exact physical details of this ready state, but we must assume such states exist and can be prepared. What physical characteristics must such a ready state have?
Besides the ready state, the apparatus must have two distinct indicator states, one of which corresponds to getting an "up" result of the measurement and the other that corresponds to getting a
"down" result. And the key point about the physics of the apparatus is this: It must be that if the device in its ready state interacts with a particle in the state |x– up>[S], it will evolve into
the indicator state that is associated with the up result, and if it interacts with a particle in state |x– down>[S], it will evolve into the other indicator state.
This can be put in a formal notation. The ready state of the apparatus can be represented by |ready>[A], the up indicator state by |"up">[A], and the down indicator state by |"down">[A]. If we feed
an x– spin up particle into the device, the initial physical state of the system plus apparatus is represented by |x– up>[S]|ready>[A], if we feed in an x– spin down particle the initial state is |x–
down>[S]|ready>[A]. If the apparatus is, in fact, a good x– spin measuring device, then the first initial state must evolve into a state in which the apparatus indicates up, that is, it must evolve
into |x– up>[S]|"up">[A], and the second initial state must evolve into a state that indicates down, that is, |x– down>[S]|"down">[A]. Using an arrow to represent the relevant time evolution, then,
we have for any good x– spin measuring device
|x– up>[S]|ready>[A] → |x– up>[S]|"up">[A] and
|x– down>[S]|ready>[A] → |x– down>[S]|"down">[A].
We have not done any real physics yet, we have just indicated how the physics must come out if there are to be items that count as good x– spin measuring devices, as we think there are.
The important part of the physics that generates the measurement problem is the arrow in the representations listed earlier, the physical evolution that takes one from the initial state of the system
plus apparatus to the final state. Quantum theory provides laws of evolution for quantum states such as the Schrödinger and Dirac equations. These would be the equations one would use to model the
evolution of the system plus apparatus as a normal physical evolution. And all these dynamical equations have a common mathematical feature; they are all linear equations. It is this feature of the
quantum theory that generates the measurement problem, so we should pause over the notion of linearity.
The set of wave functions used in quantum theory form a vector space. This means that one can take a weighted sum of any set of wave functions and get another wave function. (The weights in this case
are complex numbers, hence it is a complex vector space.) This property was mentioned earlier when it was noted that any wave function can be expressed as a weighted sum of the eigenvectors of an
observable. An operator on a vector space is just an object that maps a vector as input to another vector as output. If the operator O maps the vector A to the vector B , we can write that as
O (A ) = B .
A linear operator has the feature that you get the same result whether to operate on a sum of two vectors or you first operate on the vectors and then takes the sum. That is, if O is a linear
operator, then for all vectors A and B ,
O (A + B ) = O (A ) + O (B ).
The dynamical equations evidently correspond to operators; they take as input the initial physical state and give as output the final state, after a specified period has elapsed. But further, the
Schrödinger and Dirac equations correspond to linear operators. Why is this important?
We have already seen how the physical state of a good x– spin measuring device must evolve when fed a particle in the state |x– up>[S] or the state |x– down>[S]. But these are not the only spin
states that the incoming particle can occupy. There is an infinitude of spin states, which correspond to all the wave functions that can be expressed as α|x– up>[S] + β|x– down>[S], with α and β
complex numbers such that |α |^2 + |β |^2 = 1. Correspondingly, there is an infinitude of possible directions in space in which one can orient a spin measuring device, and each of the directions is
associated with a different Hermitian operator. For a direction at right angles to the x– direction, call it the y– direction, there are eigenstates |y– up>[S] and |y– down>[S]. These states can be
expressed as weighted sums of the x– spin eigenstates, and in the usual notation
|y– up>[S] = 1/√2|x– up>[S] + 1/√2|x– down>[S] and
|y– down>[S] = 1/√2|x– up>[S] − 1/√2|x– down>[S].
So what happens if we feed a particle in the state |y– up>[S] into the good x– spin measuring device?
Empirically, we know what happens: About half the time the apparatus ends up indicating "up" and about half the time it ends up indicating "down." There is nothing we are able to do to control the
outcome: y– up eigenstate particles that are identically prepared nonetheless yield different outcomes in this experiment.
If we use the usual predictive apparatus, we also get this result. The "up" result from the apparatus is associated with the eigenstate |x– up>[S] and the "down" result associated with |x– down>[S].
The general recipe tells us to express the incoming particle in terms of these eigenstates as 1/√2|x– up>[S] + 1/√2|x– down>[S], and then to take the squares of the weighting factors to get the
probabilities of the results. This yields a probabilistic prediction of 50 percent chance "up" and 50 percent chance "down," which corresponds to what we see in the lab.
But if instead of the usual predictive apparatus we use the general account of physical interactions, we get into trouble. In that case, we would represent the initial state of the system plus
apparatus as |y– up>[S]|ready>[A]. The dynamical equation can now be used to determine the physical state of the system plus apparatus at the end of the experiment.
But the linearity of the dynamical equations already determines what the answer must be. For
|y– up>[S]|ready>[A] = (1/√2|x– up>[S] + 1/√2|x– down>[S])|ready>[A]
= 1/√2|x– up>[S]|ready>[A] + 1/√2|x– down>[S]|ready>[A].
But we know how each of the two terms of this superposition must evolve, since the apparatus is a good x– spin measuring device. By linearity, this initial state must evolve into the final state
1/√2|x– up>[S]|"up">[A] + 1/√2|x– down>[S]|"down">[A].
That is, the final state of the apparatus plus system must be a superposition of a state in which the apparatus yields the result "up" and a state in which the apparatus yields the result "down."
That is what treating the measurement as a normal physical interaction must imply.
So by making axioms about measurements, we have both committed redundancy and achieved inconsistency. The axioms say that the outcome of the experiment is not determined by the initial state; each of
two outcomes is possible, with a 50 percent chance of each. But the treatment of the measurement as a normal physical interaction implies that only one final physical state can occur. And
furthermore, that final physical state is an extremely difficult one to understand. It appears to be neither a state in which the measuring apparatus is indicating "up" nor a state in which the
apparatus is indicating "down," but some sort of symmetric combination of the two. If all the physical facts about the apparatus are somehow represented in its wave function, then it seems that at
the end of the experiment the apparatus can neither be indicating up (and not down) nor down (and not up). But we always see one or the other when we do this experiment.
At this point our attention must clearly be turned to the mathematical object we have called the wave function. The wave function is supposed to represent the physical state of a system. The question
is whether the wave function represents all of the physical features of a system, or whether systems represented by the same wave function could nevertheless be physically different. If one asserts
the former, then one believes that the wave function is complete, if the latter, then the wave function is incomplete. The standard interpretations of the quantum formalism take the wave function to
be complete, interpretations that take it to be incomplete are commonly called hidden variables theories (although that is a misleading name).
The wave function 1/√2|x– up>[S]|"up">[A] + 1/√2|x– down>[S]|"down">[A] does not represent the apparatus as indicating up (and not down) or as indicating down (and not up). So if the wave function is
complete, the apparatus, at the end of the experiment, must neither be indicating up (and not down) nor down (and not up). But that flatly contradicts our direct experience of such apparatus. This is
the measurement problem. As Bell puts it, "Either the wave function, as given by the Schrödinger equation, is not everything, or it is not right" (1987, p. 201).
Collapse Interpretations
collapse tied to observation
What is one to do? From the beginning of discussions of these matters, Einstein held the argument to show that the wave function is not everything and hence that quantum mechanics is incomplete. The
wave function might represent part of the physical state of a system, or the wave function might represent some features of ensembles, collections, or systems, but the wave function cannot be a
complete representation of the physical state an individual system, like the particular x– spin measuring device in the laboratory after a particular experiment is done. For after the experiment, the
apparatus evidently either indicates "up" or it indicates "down," but the wave function does not represent it as doing so.
By contrast, the founders of the quantum theory, especially Bohr, insisted that the wave function is complete. And they did not want to deny that the measuring device ends up indicating one
determinate outcome. So the only option left was to deny that the wave function, as given by the Schrödinger equation, is right. At some times, the wave function must evolve in a way that is not
correctly described by the Schrödinger equation. The wave function must "collapse." The standard interpretation of quantum mechanics holds that the wave function evolves, at different times, in
either of two different ways. This view was given its canonical formulation in John von Neumann's Mathematical Foundations of Quantum Mechanics (1955). Von Neumann believed (incorrectly, as we will
see) that he had proven the impossibility of supplementing the wave function with hidden variables, so he thought the wave function must be complete. When he comes to discuss the time evolution of
systems, Von Neumann says "[w]e therefore have two fundamentally different types of interventions which can occur in a system S . … First, the arbitrary [i.e., nondeterministic] changes by
measurement. … Second, the automatic [i.e., deterministic] changes which occur with the passage of time" (p. 351). The second type of change is described by, for example, the Schrödinger equation,
and the first by an indeterministic process of collapse.
What the collapse dynamics must be can be read off from the results we want together with the thesis that the wave function is complete. For example, in the x– spin measurement of the y– spin up
electron, we want there to be a 50 percent chance that the apparatus indicates "up" and a 50 percent chance that it indicates "down." But the only wave function that represents an apparatus
indicating "up" is |"up">[A], and the only wave function for an apparatus indicating "down" is |"down">[A]. So instead of a deterministic transition to the final state
1/√ 2|x– up>[S]|"up">[A] + 1/√2|x– down>[S]|"down">[A]
we must postulate an indeterministic transition with a 50 percent chance of yielding |x– up>[S]|"up">[A] and a 50 percent chance of yielding |x– down>[S]|"down">[A].
It is clear what the collapse dynamics must do. What is completely unclear, though, is when it must do it. All Von Neumann's rules say is that we get collapses when measurements occur and
deterministic evolutions "with the passage of time." But surely measurements also involve the passage of time; so under exactly what conditions do each of the evolutions obtain? Collapse theories,
which postulate two distinct and incompatible forms of evolution of the wave function, require some account of when each type of evolution occurs.
Historically, this line of inquiry was influenced by the association of the problem with "measurement" or "observation." If one begins with the thought that the non-linear evolution happens only when
a measurement or observation occurs, then the problem becomes one of specifying when a measurement or observation occurs. And this in turn suggests that we need a characterization of an observer who
makes the observation. Pushing even further, one can arrive at the notion that observations require a conscious observer of a certain kind, folding the problem of consciousness into the mix. As Bell
asks, "What exactly qualifies some physical systems to play the role of 'measurer'? Was the wave function of the world waiting to jump for thousands of millions of years until a single-celled living
creature appeared? Or did it have to wait a little longer, for some better qualified system … with a Ph.D.?" (1987, p. 117).
This line of thought was discussed by Eugene Wigner, "This way out of the difficulty amounts to the postulate that the equations of motion of quantum mechanics cease to be linear, in fact that they
are grossly non-linear if conscious beings enter the picture" (1967, p. 183). Wigner suggests that the quantum measurement problem indicates "the effect of consciousness on physical phenomena," a
possibility of almost incomprehensible implications (not the least of which: How could conscious beings evolve if there were no collapses, since the universe would surely be in a superposition of
states with and without conscious beings!). In any case, Wigner's speculations never amounted to a physical theory, nor could they unless a physical characterization of a conscious system was
So if one adopts a collapse theory, and if the collapses are tied to measurements or observations, then one is left with the problem of giving a physical characterization of an observation or a
measurement. Such physicists as Einstein and Bell were incredulous of the notion that conscious systems play such a central role in the physics of the universe.
spontaneous collapse theories
Nonetheless, precise theories of collapse do exist. The key to resolving the foregoing puzzle is to notice that although collapses must be of the right form to make the physical interactions called
"observations" and "measurements" have determinate outcomes, there is no reason that the collapse dynamics itself need mention observation or measurement. The collapse dynamics merely must be of such
a kind as to give outcomes in the right situations.
The most widely discussed theory of wave function collapse was developed by Gian Carlo Ghirardi, Alberto Rimini, and Tulio Weber (1986) and is called the spontaneous localization theory or, more
commonly, the GRW theory. The theory postulates an account of wave function collapse that makes no mention of observation, measurement, consciousness, or anything of the sort. Rather, it supplies a
universal rule for both how and when the collapse occurs. The "how" of the collapse involves localization in space; when the collapse occurs, one takes a single particle and multiplies its wave
function, expressed as a function of space, by a narrow Gaussian (bell curve). This has the effect of localizing the particle near the center of the Gaussian, in the sense that most of the wave
function will be near the center. If the wave function before the collapse is widely spread out over space, after the collapse it is much more heavily weighted to a particular region. The likelihood
that a collapse will occur centered at a particular location depends on the square amplitude of the precollapse wave function for that location. The collapses, unlike Schrödinger evolution, are
fundamentally nondeterministic, chancy events.
The GRW collapse does not perfectly locate the wave function at a point. It could not do so for straightforward physical reasons: The localization process will violate the conservation of energy, and
the more narrowly the postcollapse wave function is confined, the more new energy is pumped into the system. If there were perfect localizations, the energy increase would be infinite—and immediately
evident. (It follows from these same observations that even in the "standard" theory there are never collapses to perfectly precise positions—even after a so-called position measurement.)
Therefore, the GRW theory faces a decision: Exactly how localized should the localized wave function be? This corresponds to choosing a width for the Gaussian: The narrower the width, the more energy
that is added to the system on collapse. The choice for this width is bounded in one direction by observation—the energy increase for the universe must be below observed bounds, and particular
processes, such as spontaneous ionization, should be rare—and in the other direction by the demand that the localization solve the measurement problem. As it happens, Ghirardi, Rimini, and Weber
chose a value of about 10^–5 centimeters for the width of the Gaussian. This is a new constant of nature.
Beside the "how" of the collapse, the GRW theory must specify the "when." It was here that we saw issues such as consciousness getting into the discussion: If collapses occur only when measurements
or observations occur, then we must know when measurements or observations occur. The GRW theory slices through this problematic neatly; it simply postulates that the collapses take place at random,
with a fixed probability per unit time. This introduces another new fundamental constant: the average time between collapses per particle. The value of that constant is also limited in two
directions; on the one hand, we know from interference experiments that isolated individual particles almost never suffer collapses on the time scale of laboratory operations. On the other hand, the
collapses must be frequent enough to resolve the measurement problem. The GRW theory employs a value of 10^15 seconds, or about 100 million years, for this constant.
Clearly, the constant has been chosen large enough to solve one problem: Individual isolated particles will almost never suffer collapses in the laboratory. It is less clear, though, how it solves
the measurement problem.
The key here is to note that actual experiments record their outcomes in the correlated positions of many, many particles. In our spin experiment we said that our spin measuring device must have two
distinct indicator states: |"up"> and |"down">. To be a useful measuring device, these indicator states must be macroscopically distinguishable. This is achieved with macroscopic objects—pointers,
drops of ink, and so on—to indicate the outcome. And a macroscopic object will have on the order of 10^23 particles.
So suppose the outcome |"up"> corresponds to a pointer pointing to the right and the outcome |"down"> corresponds to the pointer pointing to the left. If there are no collapses, the device will end
up with the wave function 1/√2|x– up>[S]|"up">[A] + 1/√2|x– down>[S]|"down">[A]. Now although it is unlikely that any particular particle in the pointer will suffer a collapse on the time scale of
the experiment, because there are so many particles in the pointer, it is overwhelmingly likely that some particle or other in the pointer will suffer a collapse quickly: within about 10^–8 seconds.
And (this is the key), since in the state 1/√2|x– up>[S]|"up">[A] + 1/√2|x– down>[S]|"down">[A] all the particle positions are correlated with one another, if the collapse localizes a single particle
in the pointer, it localizes all of them. So, if having the wave functions of all the particles in the pointer highly concentrated on the right (or on the left) suffices to solve the measurement
problem, the problem will be solved before 10^–4 seconds has elapsed.
The original GRW theory has been subject to much discussion. In a technical direction there have been similar theories, by Ghirardi and Rimini and by Philip Perle, that make the collapses to be
continuous rather than discrete. More fundamentally, there have been two foundational questions: First, does the only approximate nature of the "localization" vitiate its usefulness in solving the
measurement problem, and second, does the theory require a physical ontology distinct from the wave function? Several suggestions for such an additional ontology have been put forward, including a
mass density in space-time, and discrete events ("flashes") in space-time.
The addition of such extra ontology, beyond the wave function, reminds us of the second horn of Bell's dilemma: Either the wave function as given by the Schrödinger equation is not right or it is not
everything. The versions of the GRW theory that admit a mass density or the flashes postulate that the wave function is not everything, do so in such a way that the exact state of the extra ontology
can be recovered from the wave function. The more radical proposal is that there is extra ontology, and its state cannot be read off the wave function. These are the so-called hidden variables
Additional Variables Theories
According to an additional variables theory, the complete quantum state of the system after a measurement is indeed 1/√2|x– up>[S]|"up">[A] + 1/√2|x– down>[S]|"down">[A]. The outcome of the
measurement cannot be read off of that state because the outcome is realized in the state of the additional variables, not in the wave function. It immediately follows that for any such theory, the
additional ontology, the additional variables, had best not be "hidden": since the actual outcome is manifest, the additional variables had best be manifest. Indeed, on this approach the role of the
wave function in the theory is to determine the evolution of the additional variables. The wave function, since it is made manifest only through this influence, is really the more "hidden" part of
the ontology.
The best known and most intensively developed additional variables theory goes back to Louis de Broglie, but is most intimately associated with David Bohm. In its nonrelativistic particle version,
Bohmian mechanics, physical objects are constituted of always-located point particles, just as was conceived in classical mechanics. At any given time, the physical state of a system comprises both
the exact positions of the particles and a wave function. The wave function never collapses: it always obeys a linear dynamical equation like the Schrödinger equation. Nonetheless, at the end of the
experiment the particles in the pointer will end up either all on the right or all on the left, thus solving the measurement problem. This is a consequence of the dynamics of the particles as
determined by the wave function.
It happens that the particle dynamics in Bohmian mechanics is completely deterministic, although that is not fundamentally important to the theory and indeterministic versions of Bohm's approach have
been developed. The dynamical equation used in Bohmian mechanics is much more importantly the simplest equation that one can write down if one assumes that the particle trajectories are to be
determined by the wave function and that various symmetries are to be respected. If one starts with idea that there are particles and that quantum theory should be a theory of the motion of those
particles that reproduces the predictions of the standard mathematical recipe, Bohmian mechanics is the most direct outcome.
Since Bohmian mechanics is a deterministic theory, the outcome of any experiment is fixed by the initial state of the system. The probabilities derived from the standard mathematical recipe must
therefore be interpreted purely epistemically: they reflect our lack of knowledge of the initial state. This lack of knowledge turns out to have a physical explanation in Bohmian mechanics: Once one
models any interaction designed to acquire information about a system as a physical interaction between a system and an observer, it can be shown to follow that initial uncertainty about the state of
the target system cannot be reduced below a certain bound, given by the Heisenberg uncertainty relations.
This illustrates the degree to which the ontological "morals" of quantum theory are held hostage to interpretations. In the standard interpretation, when the wave function of a particle is spread
out, there is no further fact about exactly where the particle is. (Because of this, position measurements in the standard theory are not really measurements, i.e., they do not reveal preexisting
facts about positions.) In Bohm's interpretation, when the wave function is spread out, there is a fact about exactly where the particle is, but it follows from physical analysis that one cannot find
out more exactly where it is without thereby altering the wave function (more properly, without altering the effective wave function that we use to make predictions). Similarly, in the standard
interpretation, when we do a position measurement on a spread out particle, there is an indeterministic collapse that localizes the particle—it gives it an approximate location. According to Bohm's
theory the same interaction really is a measurement: It reveals the location that the particle already had. So it is a fool's errand to ask after "the ontological implications of quantum theory": the
account of the physical world one gets depends critically on the interpretation of the formalism.
Bohm's approach has been adapted to other choices for the additional variables. In particular, interpretations of field theory have been pursued in two different ways: with field variables that
evolve indeterministically, and with the addition to Bohmian mechanics the possibility of creating and annihilating particles in an indeterministic way. Each of these provides the wherewithal to
treat standard field theory.
There have been extensive examinations of other ways to add additional variables to a noncollapse interpretation, largely under the rubric of modal interpretations. Both rules for specifying what the
additional variables are and rules for the dynamics of the new variables have been investigated.
A Third Way?
There are also some rather radical attempts to reject each of Bell's two options and to maintain both that the wave function, as given by the Schrödinger equation, is right and that it is everything—
that is, it is descriptively complete. Since a wave function such as 1/√2|x– up>[S]|"up">[A] + 1/√2|x– down>[S]|"down">[A] does not indicate that one outcome rather than the other occurred, this
requires maintaining that it is not the case that one outcome rather than the other occurred.
This denial can come in two flavors. One is to maintain that neither outcome occurred, or even seemed to occur, and one is only somehow under the illusion that one did. David Z. Albert (1992)
investigated this option under the rubric the bare theory. Ultimately, the bare theory is insupportable, since any coherent account must at least allow that the quantum mechanical predictions appear
to be correct.
The more famous attempt in this direction contends that, in some sense, both outcomes occur, albeit in different "worlds." Evidently, the wave function 1/√2|x– up>[S]|"up">[A] + 1/√2|x– down>[S]|
"down">[A] can be written as the mathematical sum of two pieces, one of which corresponds to a situation with the apparatus indicating "up" and the other to a situation with the apparatus indicating
"down." The many worlds theory attempts to interpret this as a single physical state, which somehow contains or supports two separate "worlds," one with each outcome.
The many worlds interpretation confronts several technical and interpretive hurdles. The first technical hurdle arises because any wave function can be written as the sum of other wave functions in
an infinitude of ways. For example, consider the apparatus state 1/√2 |"up">[A] + 1/√2 |"down">[A]. Intuitively, this state does not represent the apparatus as having fired one way or another. This
state can be called |D[1]>[A]. Similarly, |D[2]>[A] can represent the state 1/√2 |"up">[A] − 1/√2 |"down">[A], which also does not correspond to an apparatus with a definite outcome. The state 1/√2|x
– up>[S]|"up">[A] + 1/√2|x– down>[S]|"down">[A], which seems to consist in two "worlds," one with each outcome, can be written just as well as 1/√2|y– up>[S]|D[1]>[A] + 1/√2|y– down>[S]|D[2]>[A].
Written in this way, the state seems to comprise two worlds: one in which the electron has y– spin up and the apparatus is not in a definite indicator state, the other in which the electron has y–
spin down, and the apparatus is in a distinct physical state that is equally not a definite indicator state. If these are the "two worlds," then the measurement problem has not been solved, it has
been merely traded as a single world without a definite outcome for a pair of worlds neither of which has a definite outcome.
So the many worlds theory would first have to maintain that there is a preferred way to decompose the global wave function into "worlds." This is known as the preferred basis problem.
A more fundamental difficulty arises when one tries to understand the status of the probabilities in the many worlds theory. In a collapse theory the probabilities are probabilities for collapses to
occur one way rather than another, and there is a physical fact about how the collapses occur, and therefore about frequencies of outcomes. In an additional variables theory the probabilities are
about which values the additional variables take, and there is a physical fact about the values they take and therefore about frequencies of outcomes. But in the many worlds theory, whenever one does
an experiment like the spin measurement described earlier, the world splits: There is no frequency with which one outcome occurs as opposed to the other. And more critically, that the world "splits"
has nothing to do with the amplitude assigned to the two daughter worlds.
Suppose, for example, that instead of feeding a y– spin up electron into our x– spin measuring device, we feed in an electron whose state is 1/2|x– up>[S] + √3/2 |x– down>[S]. By linearity, at the
end of the experiment, the state of the system plus apparatus is 1/2|x– up>[S]|"up">[A] + √3/2 |x– down>[S]|"down">[A]. Even if we have solved the preferred basis problem and can assert that there
are now two worlds, one with each outcome, notice that we are evidently in exactly the same situation as in the original experiment: Whenever we do the experiment, the universe "splits." But the
quantum formalism counsels us to have different expectations in the two cases: in the first case, we should expect to get an "up" outcome 50 percent of the time, in the second case only 25 percent of
the time. It is unclear, in the many worlds theory, what the expectations are for, and why they should be different.
Another interpretation of the quantum formalism that has been considered is the many minds theory of Barry Loewer and Albert. Despite the name, the many minds theory is not allied in spirit with the
many worlds theory: It is rather an additional variables theory in which the additional variables are purely mental subjective states. This is somewhat akin to Wigner's appeal to consciousness to
solve the measurement problem, but where Wigner's minds affect the development of the wave function, the minds in this theory (as is typical for additional variables theories) do not. The physical
measurement apparatus in the problematic case does not end up in a definite indicator state, but a mind is so constituted that it will, in this situation, have the subjective experience of seeing a
particular indicator state. Which mental state the mind evolves into is indeterministic. The preferred basis problem is addressed by stipulating that there is an objectively preferred basis of
physical states that are associated with distinct mental states.
The difference between the many worlds and the many minds approaches is made most vivid by noting that the latter theory does not need more than one mind to solve the measurement problem, where the
problem is now understood as explaining the determinate nature of our experience. A multiplicity of minds are added to Loewer and Albert's theory only to recover a weak form of mind-body
supervenience: Although the experiential state of an individual mind does not supervene on the physical state of the body with which it is associated, if one associates every body with an infinitude
of minds, the distribution of their mental states can supervene on the physical state of the body.
A final attempt to address the problems of quantum mechanics deserves brief mention. Some maintain that the reason quantum mechanics is so confusing is not because the mathematical apparatus requires
emendation (e.g., by explicitly adding a collapse or additional variables) or an interpretation (i.e., an account of exactly which mathematical objects represent physical facts), but because we
reason about the quantum world in the wrong way. Classical logic, it is said, is what is leading us astray. We merely need to replace our patterns of inference with quantum logic.
There is a perfectly good mathematical subject that sometimes goes by the name quantum logic, which is the study, for example, of relations between subspaces of Hilbert space. These studies, like all
mathematics, employ classical logic. There is, however, no sense in which these studies, by themselves, afford a solution to the measurement problem or explain how it is that experiments like those
described earlier have unique, determinate outcomes.
The Wave Function, Entanglement, epr, and Non-Locality
For the purposes of this discussion, the wave function has been treated as if it were something like the electromagnetic field: a field defined on space. Although this is not too misleading when
discussing a single particle, it is entirely inadequate when considering collections of particles. The wave function for N particles is a function not on physical space, but on the 3N-dimensional
configuration space, each point of which specifies the exact location of all the N particles. This allows for the existence of entangled wave functions, in which the physical characteristics of even
widely separated particles cannot be specified independently of one another.
Consider R and L, a pair of widely separated particles. Among the wave functions available for this pair is one that ascribes x– spin up to R and x– spin down to L, which is written as |x– up>[R]|x–
down>[L], and one that attributes x– spin down to R and x– spin up to L:|x– down>[R]|x– up>[L]. These are called product states, and all predictions from these states about how R will respond to a
measurement are independent of what happens to L, and vice versa.
But besides these product states, there are entangled states like the singlet state : 1/√2|x– up>[R]|x– down>[L] - 1/√2|x– down>[R]|x– up>[L]. In this state the x– spins of the two particles are said
to be anticorrelated since a measurement of their x– spins will yield either up for R and down for L or down for R and up for L (with a 50 percent chance for each outcome). Even so, if the wave
function is complete, then neither particle in the singlet state has a determinate x– spin: the state is evidently symmetrical between spin up and spin down for each particle considered individually.
How can the x– spins of the particles be anticorrelated if neither particle has an x– spin? The standard answer must appeal to dispositions: although in the singlet state neither particle is disposed
to display a particular x– spin on measurement, the pair is jointly disposed to display opposite x– spins if both are measured. Put another way, on the standard interpretation, before either particle
is measured neither has a determinate x– spin, but after one of them is measured, and, say, displays x– spin up, the other acquires a surefire disposition to display x– spin down. And this change
occurs simultaneously, even if the particles happen to be millions of miles apart.
Einstein found this to be a fundamentally objectionable feature of the standard interpretation of the wave function. In a paper coauthored with Boris Podolsky and Nathan Rosen (EPR 1935), Einstein
pointed out this mysterious, instantaneous "spooky action-at-a-distance" built into the standard approach to quantum theory. It is uncontroversial that an x– spin measurement carried out on L with,
say, an "up" outcome" will result in a change of the wave function assigned to R: It will now be assigned the state |x– down>[R]. If the wave function is complete, then this must reflect a physical
change in the state of R because of the measurement carried out on L, even though there is no physical process that connects the two particles. What EPR pointed out (using particle positions rather
than spin, but to the same effect) was that the correlations could easily be explained without postulating any such action-at-a-distance. The natural suggestion is that when we assign a particular
pair of particles the state 1/√2|x– up>[R]|x– down>[L] − 1/√2|x– down>[R]|x– up>[L], it is a consequence of our ignorance of the real physical state of the pair: The pair is either in the product
state |x– up>[R]|x– down>[L] or in the product state |x– down>[R]|x– up>[L], with a 50 percent chance of each. This simple expedient will predict the same perfect anticorrelations without any need to
invoke a real physical change of one particle consequent to the measurement of the other.
So matters stood until 1964, when Bell published his famous theorem. Bell showed that Einstein's approach could not possibly recover the full range of quantum mechanical predictions. That is, no
theory can make the same predictions as quantum mechanics if it postulates (1) that distant particles, such as R and L, have each their own physical state definable independently of the other and (2)
measurements made on each of the particles have no physical affect on the other. Entanglement of states turns out to be an essential feature—arguably the central feature—of quantum mechanics. And
entanglement between widely separated particles implies non-locality: The physics of either particle cannot be specified without reference to the state and career of the other.
The spooky action-at-a-distance that Einstein noted is not just an artifact of an interpretation of the quantum formalism; it is an inherent feature of physical phenomena that can be verified in the
laboratory. A fundamental problem is that the physical connection between the particles is not just spooky (unmediated by a continuous space-time process), it is superluminal. It remains unclear to
this day how to reconcile this with the theory of relativity.
See also Bohm, David; Bohmian Mechanics; Many Worlds/Many Minds Interpretation of Quantum Mechanics; Modal Interpretation of Quantum Mechanics; Non-locality; Philosophy of Physics; Quantum Logic and
Albert, David Z. Quantum Mechanics and Experience. Cambridge, MA: Harvard University Press, 1992.
Bell, John S. Speakable and Unspeakable in Quantum Mechanics: Collected Papers on Quantum Philosophy. Cambridge, U.K.: Cambridge University Press, 1987.
Dürr, Detlef, Sheldon Goldstein, and Nino Zanghi. "Quantum Equilibrium and the Origin of Absolute Uncertainty." Journal of Statistical Physics 67 (1992): 843–907.
Ghirardi, GianCarlo, Alberto Rimini, and Tulio Weber. "Unified Dynamics for Microscopic and Macroscopic Systems." Physical Review 34 (2) (1986): 470–491.
Maudlin, Tim. Quantum Non-locality and Relativity: Metaphysical Intimations of Modern Physics. Malden, MA: Blackwell, 2002.
Von Neumann, John. Mathematical Foundations of Quantum Mechanics. Translated by Robert T. Beyer. Princeton, NJ: Princeton University Press, 1955.
Wheeler, John Archibald, and Wojciech Hubert Zurek, eds. Quantum Theory and Measurement. Princeton, NJ: Princeton University Press, 1983.
Wigner, Eugene. Symmetries and Reflections. Westport, CT: Greenwood Press, 1967.
Tim Maudlin (2005)
Physics, Quantum
Physics, Quantum
Quantum theory is one of the most successful theories in the history of physics. The accuracy of its predictions is astounding. The breath of its application is impressive. Quantum theory is used to
explain how atoms behave, how elements can combine to form molecules, how light behaves, and even how black holes behave. There can be no doubt that there is something very right about quantum theory
But at the same time, it is difficult to understand what quantum theory is really saying about the world. In fact, it is not clear that quantum theory gives any consistent picture of what the
physical world is like. Quantum theory seems to say that light is both wavelike and particlelike. It seems to say that objects can be in two places at once, or even that cats can be both alive and
dead, or neither alive nor dead, or—what? There can be no doubt that there is something troubling about quantum theory.
Early research
Quantum theory, more or less as it is known at the beginning of the twenty-first century, was developed during the first quarter of the twentieth century in response to several problems that had
arisen with classical mechanics. The first is the problem of blackbody radiation. A blackbody is any physical body that absorbs all incident radiation. As the blackbody continues to absorb radiation,
its internal energy increases until, like a bucket full of water, it can hold no more and must re-emit radiation equal in energy to any additional incident radiation. The problem is, most simply,
that the classical prediction for the energy of the emitted radiation as a function of its frequency is wrong. The problem was well known but unsolved until the German physicist Max Planck (1858–
1947) proposed in 1900 the hypothesis that the energy absorbed and emitted by the blackbody could come only in discrete amounts, multiples of some constant, finite, amount of energy. While Planck
himself never felt satisfied with this hypothesis as more than a localized, phenomenological description of the behavior of blackbodies, others eventually accepted Planck's hypothesis as a
revolution, a claim that energy itself can come in only discrete amounts, the quanta of quantum theory.
A second problem with classical mechanics was the challenge of describing the spectrum of hydrogen, and eventually, other elements. Atomic spectra are most easily understood in light of a fundamental
formula linking the energy of light with its frequency: E = h ν, where E is the energy of light, h is a constant (Planck's constant, as it turns out), and ν is the frequency of the light (which
determines the color of the visible light).
Suppose, now, that the energy of some atom (for example, an atom of hydrogen) is increased. If the atom is subsequently allowed to relax, it releases the added energy in the form of (electromagnetic)
radiation. The relationship E = h ν reveals that the frequency of the light depends on the amount of energy that the atom emits as it relaxes. Prior to the development of quantum theory, the best
classical theory of the atom was Ernest Rutherford's (1871–1937), according to which negatively charged electrons orbit a central positively charged nucleus. The energy of a hydrogen atom (which has
only one electron) corresponds to the distance of the electron from the nucleus. (The further the electron is, the higher its energy is.) Rutherford's model predicts that the radiation emitted by a
hydrogen atom could have any of a continuous set of possible energies, depending on the distance of its electron from the nucleus. Hence a large number of hydrogen atoms with energies randomly
distributed among them will emit light of many frequencies. However, in the nineteenth century it was well known that hydrogen emits only a few frequencies of visible light.
In 1913, Niels Bohr (1885–1962) introduced the hypothesis that the electrons in an atom can be only certain distances from the nucleus; that is, they can exist in only certain "orbits" around the
nucleus. The differences in the energies of these orbits correspond to the possible energies of the radiation emitted by the atom. When an electron with high energy "falls" to a lower orbit, it
releases just the amount of energy that is the difference between the energies of the higher and lower orbits. Because only certain orbits are possible, the atom can emit only certain frequencies of
The crucial part of Bohr's proposal is that electrons cannot occupy the space between the orbits, so that when the electron passes from one orbit to another, it "jumps" between them without passing
through the space in between. Thus, Bohr's model violates the principle of classical mechanics that particles always follow continuous trajectories. In other words, Bohr's model left little doubt
that classical mechanics had to be abandoned.
Over the next twelve years, the search was on for a replacement. By 1926, as the result of considerable experimental and theoretical work on the part of numerous physicists, two theories—
experimentally equivalent—were introduced, namely, Werner Heisenberg's (1901–1976) matrix mechanics and Erwin Schrödinger's (1887–1961) wave mechanics.
Matrix mechanics. Heisenberg's matrix mechanics arose out of a general approach to quantum theory advocated already by Bohr and Wolfgang Pauli (1900–1958), among others. In Heisenberg's hands, this
approach became a commitment to remove from the theory any quantities that cannot be observed. Heisenberg took as his "observable" such things as the transition probabilities of the hydrogen atom
(the probability that an electron would make a transition from a higher to a lower orbit). Heisenberg introduced operators that, in essence, represented such observable quantities mathematically.
Soon thereafter, Max Born (1882–1970) recognized Heisenberg's operators as matrices, which were already well understood mathematically.
Heisenberg's operators can be used in place of the continuous variables of Newtonian physics. Indeed, one can replace Newtonian position and momentum with their matrix "equivalents" and obtain the
equations of motion of quantum theory, commonly called (in this form) Heisenberg's equations. The procedure of replacing classical (Newtonian) quantities with the analogous operators is known as
quantization. A complete understanding of quantization remains elusive, due primarily to the fact that quantum-mechanical operators can be incompatible, which means in particular that they cannot be
Wave mechanics. Schrödinger's wave mechanics arose from a different line of reasoning, primarily due to Louis de Broglie (1892–1987) and Albert Einstein (1879–1955). Einstein had for some time
expressed a commitment to a physical world that can be adequately described causally, which meant that it could be described in terms of quantities that evolve continuously in time. Einstein, who was
primarily responsible for showing that light has both particlelike and wavelike properties, hoped early on for a theory that somehow "fused" these two aspects of light into a single consistent
In 1923, de Broglie instituted the program of wave mechanics. He was impressed by the Hamilton-Jacobi approach to classical physics, in which the fundamental equations are wave equations, but the
fundamental objects of the theory are still particles, whose trajectories are determined by the waves. Recalling this formalism, de Broglie suggested that the particlelike and wavelike properties of
light might be reconcilable in similar fashion. Einstein's enthusiasm for de Broglie's ideas—both because de Broglie's waves evolved continuously and because the theory fused the wavelike and
particlelike properties of light and matter—stimulated Schrödinger to work on the problem from that point of view, and in 1926 Schrödinger published his wave mechanics.
It was quickly realized that matrix mechanics and wave mechanics are experimentally equivalent. Shortly thereafter, in 1932, John von Neumann (1903–1957) showed their equivalence rigorously by
introducing the Hilbert space formalism of quantum theory. The Uncertainty Principle serves to illustrate the equivalence. The Uncertainty Principle follows immediately from Heisenberg's matrix
mechanics. Indeed, in only a few lines of argument, one can arrive at the mathematical statement of the Uncertainty Principle for any operators (physical quantities) A and B : ΔA ΔB ≥ Kh, where K is
a constant that depends on A and B, and h is Planck's constant. The symbol ΔA means "root mean square deviation of A " and is a measure of the statistical dispersion (uncertainty) in a set of values
of A. So the Uncertainty Principle says that the statistical dispersion in values of A times the statistical dispersion in values of B are always greater than or equal to some constant. If (and only
if) A and B are incompatible (see above) then this constant is greater than zero, so that it is impossible to measure a both A and B on an ensemble of physical systems in such a way as to have no
dispersion in the results.
Schrödinger's wave mechanics gives rise to the same result. It is easiest to see how it does so in the context of the classic example involving position and momentum, which are incompatible
quantities. In the context of Schrödinger's wave mechanics, the probability of finding a particle at a given location is determined by the amplitude (height) of the wave at that location. Hence, a
particle with a definite position is represented by a "wave" that is zero everywhere except at the location of the particle. On the other hand, a particle with definite momentum is represented by a
wave that is flat (i.e., has the same amplitude at all points), and, conversely to position, momentum becomes more and more "spread" as the wave becomes more sharply peaked. Hence the more precisely
one can predict the location of a particle, the less precisely one can predict its momentum. A more quantitative version of these considerations leads, again, to the Uncertainty Principle.
Quantum field theory. Perhaps the major development after the original formulation of quantum theory by Heisenberg and Schrödinger (with further articulation by many others) was the extension of
quantum mechanics to fields, resulting in quantum field theory. Paul Dirac (1902–1984) and others extended the work to relativistic field theories. The central idea is the same: The quantities of
classical field theory are quanticized in an appropriate way. Work on quantum field theory is ongoing, a central unresolved issue being how one can incorporate the force of gravity, and specifically
Einstein's relativistic field theory of gravity, into the framework of relativistic quantum field theory. A related, though even more speculative, area of research is quantum cosmology, which is,
more or less, the attempt to discern how Big Bang theory (itself derived from Einstein's Theory of Gravity) will have to be modified in the light of quantum gravity.
Contemporary research
Contemporary research in the interpretation of quantum theory focuses on two key issues: the "measurement problem" and locality (Bell's Theorem).
Schrödinger's cat. Although the essence of the measurement problem was clear to several researchers even before 1925, it was perhaps first clearly stated in 1935 by Schrödinger. In his famous
example, Schrödinger imagines a cat in the following unfortunate situation. A box, containing the cat, also contains a sample of some radioactive substance that has a probability of 1/2 to decay
within one hour. Any decay is detected by a Geiger counter, which releases poison into the box if it detects a decay. At the end of an hour, the state of the cat is indeterminate between "alive" and
"dead," in much the same way that a state of definite position is indeterminate with regard to momentum.
The cat is said to be in a superposition of the alive state and the dead state. In standard quantum theory, such a superposition is interpreted to mean that the cat is neither determinately alive,
nor determinately dead. But, says Schrödinger, while one might be able to accept that particles such as electrons are somehow indeterminate with respect to position or momentum, one can hardly accept
indeterminacy in the state of a cat.
More generally, Schrödinger's point is that indeterminacy at the level of the usual objects of quantum theory (electrons, protons, and so on) can easily be transformed into indeterminacy at the level
of everyday objects (such as cats, pointers on measuring apparatuses, and so on) simply by coupling the state of the everyday object to the state of the quantum object. Such couplings are exactly the
source of our ability to measure the quantum objects in the first place. Hence, the problem that Schrödinger originally raised with respect to the cat is now called the measurement problem : Everyday
objects such as cats and pointers can, according to standard quantum theory, be indeterminate in state. For example, a cat might be indeterminate with respect to whether it is alive. A pointer might
be indeterminate with respect to its location (i.e., it is pointing in no particular direction).
Approaches to the measurement problem. Thus, the interpretation of quantum theory faces a serious problem, the measurement problem, to which there have been many approaches. One approach, apparently
advocated by Einstein, is to search for a hidden-variables theory to underwrite the probabilities of standard quantum theory. The central idea here is that the indeterminate description of physical
systems provided by quantum theory is incomplete. Hidden variables (so-called because they are "hidden" from standard quantum theory) complete the quantum-mechanical description in a way that renders
the state of the system determinate in the relevant sense. The most famous example of a successful hidden-variables theory is the 1952 theory of David Bohm (1917–1992), itself an extension of a
theory proposed by Louis de Broglie in the 1920s. In the Broglie-Bohm theory, particles always have determinate positions, and those positions evolve deterministically as a function of their own
initial position and the initial positions of all the other particles in the universe. The probabilities of standard quantum theory are obtained by averaging over the possible initial positions of
the particles, so that the probabilities of standard quantum theory are due to ignorance of the initial conditions, just as in classical mechanics. According to some, the problematic feature of this
theory is its nonlocality —the velocity of a given particle can depend instantaneously on the positions of particles arbitrarily far away.
Other hidden-variables theories exist, both deterministic and indeterministic. They have some basic features in common with the de Broglie-Bohm theory, although they do not all take position to be
"preferred"—some choose other preferred quantities. In the de Broglie-Bohm theory, position is said to be "preferred" because all particles always have a definite position, by stipulation.
There are other approaches to solving the measurement problem. One set of approaches involves so-called Many-worlds interpretations, according to which each of the possibilities inherent in a
superposition is in fact actual, though each in its own distinct and independent "world." There is a variant, the Many-minds theory, according to which each observer observes each possibility, though
with distinct and independent "minds." These interpretations have a notoriously difficult time reproducing the probabilities of quantum theory in a convincing way. A slightly more technical, but
perhaps even more troubling, issue arises from the fact that any superposition can be "decomposed" into possibilities in an infinity of ways. So, for example, a superposition of "alive" and "dead"
can also be decomposed into other pairs of possibilities. It is unclear how Many-worlds interpretations determine which decomposition is used to define the "worlds," though there are various
Yet another set of approaches to the measurement problem is loosely connected to the Copenhagen Interpretation of quantum theory. According to these approaches, physical quantities have meaning only
in the context of an experimental arrangement designed to measure them. These approaches insist that the standard quantum-mechanical state is considered to describe our ignorance about which
properties a system has in cases where the possible properties are determined by the experimental context. Only those properties that could be revealed in this experimental context are considered
"possible." In this way, these interpretations sidestep the issue of which decomposition of a superposition one should take to describe the possibilities over which the probabilities are defined.
Once a measurement is made, the superposition is "collapsed" to the possibility that was in fact realized by the measurement. In this context, the collapse is a natural thing to do, because the
quantum mechanical state represents our ignorance about which experimental possibility would turn up. The major problem facing these approaches is to define measurement and experimental context in a
sufficiently rigorous way.
Another set of approaches are the realistic collapse proposals. Like the Copenhagen approaches, they take the quantum-mechanical state of a system to be its complete description, but unlike them,
these approaches allow the meaningfulness of physical properties even outside of the appropriate experimental contexts. The issue of how to specify when collapse will occur is thus somewhat more
pressing for these approaches because the collapse represents not a change in our knowledge, but a physical change in the world. There are several attempts to provide an account of when collapse will
occur, perhaps the two most famous being observer-induced collapse and spontaneous localization theories. According to the former, notably advocated by Eugene Wigner (1902–1995), the act of
observation by a conscious being has a real effect on the physical state of the world, causing it to change from a superposition to a state representing the world as perceived by the conscious
observer. This approach faces the very significant problem of explaining why there should be any connection between the act of conscious observation and the state of, for example, some electron in a
hydrogen atom.
The spontaneous-localization theories define an observer-independent mechanism for collapse that depends, for example, on the number of particles in a physical system. For low numbers of particles
the rate of collapse is very slow, whereas for higher values, the rate of collapse is very high. The collapse itself occurs continuously, by means of a randomly distributed infinitesimal deformation
of the quantum state. The dynamics of the collapse are designed to reproduce the probabilities of quantum theory to a very high degree of accuracy.
The problem of nonlocality. The other major issue facing the interpretation of quantum theory is nonlocality. In 1964, John Bell (1928–1990) proved that, under natural conditions, any interpretation
of quantum theory must be nonlocal. More precisely, in certain experimental situations, the states of well-separated pairs of particles are correlated in a way that cannot be explained in terms of a
common cause. One can think, here, of everyday cases to illustrate the point. Suppose you write the same word on two pieces of paper and send them to two people, who open the envelopes simultaneously
and discover the word. There is a correlation between these two events (they both see the same word), but the correlation is easily explained in terms of a common cause, you.
Under certain experimental circumstances, particles exhibit similar correlations in their states, and yet those correlations cannot be explained in terms of a common cause. It seems, instead, that
one must invoke nonlocal explanations, explanations that resort to the idea that something in the vicinity of one of the particles instantaneously influences the state of the other particle, even
though the particles are far apart.
On the face of it, nonlocality contradicts special relativity. According to standard interpretations of the theory of relativity, causal influences cannot travel faster than light, and in particular,
events in one region of space cannot influence events in other regions of space if the influence would have to travel faster than light to get from one region to the other in time to influence the
However, the matter is not so simple as a direct contradiction between quantum theory and relativity. The best arguments for the absence of faster-than-light influences in relativity are based on the
fact that faster-than-light communication—more specifically, transfer of information—can lead to causal paradoxes. But in the situations to which Bell's theorem applies, the purported
faster-than-light influences cannot be exploited to enable faster-than-light communication. This result is attributable to the indeterministic nature of standard quantum theory. In de Broglie and
Bohm's deterministic hidden-variable theory, one could exploit knowledge of the values of the hidden variables to send faster-than-light signals; however, such knowledge is, in Bohm's theory,
physically impossible in principle.
Other areas of research. There are of course many other areas of research in the interpretation of quantum theory. These include traditional areas of concern, such as the classical limit of quantum
theory. How do the nonclassical predictions of quantum theory become (roughly) equivalent to the (roughly accurate) predictions of classical mechanics in some appropriate limit? How is this limit
defined? In general, what is the relationship between classical and quantum theory? Other areas of research arise from work in quantum theory itself, perhaps the most notable being the work in
quantum computation. It appears that a quantum computer could perform computations in qualitatively faster time than a classical computer. Apart from obvious practical considerations, the possibility
of quantum computers raises questions about traditional conceptions of computation, and possibly, thereby, about traditional philosophical uses of those conceptions, especially concerning the
analogies often drawn between human thought and computation.
Applications to religious thought
Quantum theory was the concern of numerous religious thinkers during the twentieth century. Given the obviously provisional status of the theory, not to mention the extremely uncertain state of its
interpretation, one must proceed with great caution here, but we can at least note some areas of religious thought to which quantum theory, or its interpretation, has often been taken to be relevant.
Perhaps the most obvious is the issue of whether the world is ultimately deterministic or not. Several thinkers, including such scientists as Isaac Newton (1642–1727) and Pierre-Simon Laplace (1749–
1827), have seen important ties to religious thought. In the case of classical mechanics, Newton had good reason to believe that his theory did not completely determine the phenomena, whereas Laplace
(who played a key role in patching up the areas where Newton saw the theory to fail) had good reason to think that the theory did completely and deterministically describe the world. Newton thus saw
room for God's action in the world; Laplace did not.
In the case of quantum theory the situation is considerably more difficult because there exist both indeterministic and deterministic interpretations of the theory, each of which is empirically
adequate. Indeed, they are empirically equivalent. Those who, for various reasons, have adopted one or the other interpretation, though, have gone on to investigate the consequences for religious
thought. Some, for example, see in quantum indeterminism an explanation of the possibility of human free will. Others have suggested that quantum indeterminism leaves an important role for God in the
universe, namely, as the source of the agreement between actual relative frequencies and the probabilistic predictions of quantum theory.
Other thinkers have seen similarities between aspects of quantum theory and Eastern religions, notably various strains of Buddhism and Daoism. Fritjof Capra (1939–), who is perhaps most famous in
this regard, has drawn analogies between issues that arise from the measurement problem and quantum nonlocality and what he takes to be Eastern commitments to the "connectedness" of all things. Other
thinkers have seen in the interpretive problems of quantum theory evidence of a limitation in science's ability to provide a comprehensive understanding of the world, thus making room for other,
perhaps religious, modes of understanding. Still others, drawing on views such as Wigner's (according to which conscious observation plays a crucial role in making the world determinate), see in
quantum theory a justification of what they take to be traditional religious views about the role of conscious beings in the world. Others, including Capra, see affinities between wave-particle
duality, or more generally, the duality implicit in the Uncertainty Principle, and various purportedly Eastern views about duality (for example, the Taoist doctrine of yin and yang, or the Buddhist
use of koans).
Finally, quantum cosmology has provided some with material for speculation. One must be extraordinarily careful here because there is, at present, no satisfactory theory of quantum gravity, much less
of quantum cosmology. Nonetheless, a couple of (largely negative) points can be made. First, it is clear that the standard Big Bang theory will have to be modified, somehow or other, in light of
quantum theory. Hence, the considerable discussion to date of the religious consequences of the Big Bang theory will also need to be reevaluated. Second, due to considerations that arise from the
time-energy Uncertainty Principle, even a satisfactory quantum cosmology is unlikely to address what happened in the early universe prior to the Planck time (approximately 10–43 seconds) because
quantum theory itself holds that units of time less than the Planck time are (perhaps) meaningless. Some have seen here a fundamental limit in scientific analysis, a limit that is implied by the
science itself. Of course, others see an opportunity for a successor theory.
This situation is, in fact, indicative of the state of quantum theory as a whole. While it is an empirically successful theory, its interpretations, and hence any consequences it might have for
religious thought, remain matters of speculation.
See also Copenhagen Interpretation; EPR Paradox; Heisenberg's Uncertainty Principle; Indeterminism; Locality; Many-worlds Hypothesis; Phase Space; Planck Time; Quantum Cosmologies; Quantum Field
Theory; SchrÖdinger's Cat; Wave-particle Duality
bohm, david. quantum theory. new york: dover, 1989.
gribbin, john. in search of schrödinger's cat: quantum physics and reality. new york: bantam, 1984.
heisenberg, werner. physical principles of the quantum theory. new york: dover, 1930.
shankar, ramamurti. principles of quantum mechanics. new york: plenum, 1994.
w. michael dickson
Quantum Mechanics
Quantum Mechanics
Theoretical implications of quantum mechanics
Quantum mechanics is a fundamental part of theoretical physics that involves the theory used to provide an understanding of the behavior of microscopic particles such as electrons and atoms. More
importantly, quantum mechanics describes the relationships between energy and matter on atomic and subatomic scales. Thus, it replaces classical mechanics and electromagnetism when dealing with these
very small scales. Quantum mechanics is used in such scientific fields as computational chemistry, condensed matter, molecular physics, nuclear physics, particle physics, and quantum chemistry.
At the beginning of the twentieth century, German physicist Maxwell Planck (1858–1947) proposed that atoms absorb or emit electromagnetic radiation in bundles of energy termed quanta. This quantum
concept seemed counter-intuitive to well-established Newtonian physics. Ultimately, advancements associated with quantum mechanics (e.g., the uncertainty principle) also had profound implications
with regard to the philosophical scientific arguments regarding the limitations of human knowledge.
Planck proposed that atoms absorb or emit electromagnetic radiation in defined and discrete units (quanta). Planck’s quantum theory also asserted that the energy of light was directly proportional to
its frequency, and this proved a powerful observation that accounted for a wide range of physical phenomena.
Planck’s constant relates the energy of a photon with the frequency of light. Along with the constant for the speed of light (c), Planck’s constant (h = 6.626 x 10^-34 joule-second) is a fundamental
constant of nature.
Prior to Planck’s work, electromagnetic radiation (light) was thought to travel in waves with an infinite number of available frequencies and wavelengths. Planck’s work focused on attempting to
explain the limited spectrum of light emitted by hot objects and to explain the absence of what was termed the violet catastrophe predicted by nineteenth century theories developed by Prussian
physicist Wilhelm Wien (1864-1928) and English physicist Baron (John William Strutt) Rayleigh (1842-1919).
Danish physicist Niels Bohr (1885-1962) studied Planck’s quantum theory of radiation and worked in England with physicists J. J. Thomson (1856-1940), and Ernest Rutherford (1871-1937) improving their
classical models of the atom by incorporating quantum theory. During this time, Bohr developed his model of atomic structure. To account for the observed properties of hydrogen, Bohr proposed that
electrons existed only in certain orbits and that, instead of traveling between orbits, electrons made instantaneous quantum leaps or jumps between allowed orbits. According to the Bohr model, when
an electron is excited by energy it jumps from its ground state to an excited state (i.e., a higher energy orbital). The excited atom can then emit energy only in certain (quantized) amounts as its
electrons jump back to lower energy orbits located closer to the nucleus. This excess energy is emitted in quanta of electromagnetic radiation (photons of light) that have exactly the same energy as
the difference in energy between the orbits jumped by the electron.
The electron quantum leaps between orbits proposed by the Bohr model accounted for Plank’s observations that atoms emit or absorb electromagnetic radiation in quanta. Bohr’s model also explained many
important properties of the photoelectric effect described by German-American mathematician and physicist Albert Einstein (1879-1955).
Using probability theory, and allowing for a wave-particle duality, quantum mechanics also replaced classical mechanics as the method by which to describe interactions between subatomic particles.
Quantum mechanics replaced electron orbitals of classical atomic models with allowable values for angular momentum (angular velocity multiplied by mass) and depicted electrons position in terms of
probability clouds and regions.
In the 1920s, the concept of quantization and its application to physical phenomena was further advanced by more mathematically complex models based on the work of French physicist Louis Victor de
Broglie (1892-1987) and Austrian physicist Erwin Schrodinger (1887-1961) that depicted the particle and wave nature of electrons. De Broglie showed that the electron was not merely a particle but a
waveform. This proposal led Schrodinger to publish his wave equation in 1926. Schrodinger’s work described electrons as a standing wave surrounding the nucleus and his system of quantum mechanics is
called wave mechanics. German physicist Max Born (1882-1970) and English physicist P.A.M Dirac (1902-1984) made further advances in defining the subatomic particles (principally the electron) as a
wave rather than as a particle and in reconciling portions of quantum theory with relativity theory.
Working at about the same time, German physicist Werner Heisenberg (1901-1976) formulated the first complete and self-consistent theory of quantum mechanics. Matrixmathematics was well-established by
the 1920s, and Heisenberg applied this powerful tool to quantum mechanics. In 1926, Heisenberg put forward his uncertainty principle that states that two complementary properties of a system, such as
position and momentum, can never both be known exactly. This proposition helped cement the dual nature of particles (e.g., light can be described as having both wave and particle characteristics).
Electromagnetic radiation (one region of the spectrum of which comprises visible light) is now understood as having both particle and wave-like properties.
In 1925, Austrian-born physicist Wolfgang Pauli (1900-1958) published the Pauli exclusion principle that states that no two electrons in an atom can simultaneously occupy the same quantum state
(i.e., energy state). Pauli’s specification of spin (+1/2 or — 1/2) on an electron gave the two electrons in any suborbital differing quantum numbers (a system used to describe the quantum state) and
made completely understandable the structure of the periodic table in terms of electron configurations (i.e., the energy related arrangement of electrons in energy shells and suborbitals). In 1931,
American chemist Linus Pauling (1901-1994)published a paper that used quantum mechanics to explain how two electrons, from two different atoms, are shared to make a covalent bond between the two
atoms. Pauling’s work provided the connection needed in order to fully apply the new quantum theory to chemical reactions.
Quantum mechanics posed profound questions for scientists and philosophers. The concept that particles such as electrons making quantum leaps from one orbit to another, as opposed to simply moving
between orbits, seems counter-intuitive, that is, outside the human experience with nature. Like much of quantum theory, the proofs of how nature works at the atomic level are mathematical. Bohr
himself remarked, “Anyone who is not shocked by quantum theory has not understood it.”
Quantum mechanics requires advanced mathematics to give numerical predictions for the outcome of measurements. However, one can understand many significant results of the theory from the basic
properties of the probability waves. An important example is the behavior of electrons within atoms. Since such electrons are confined in some manner, scientists expect that they must be represented
by standing waves that correspond to a set of allowed frequencies. Quantum mechanics states that for this new type of wave, its frequency is proportional to the energy associated with the microscopic
particle. Thus, one reaches the conclusion that electrons within atoms can only exist in certain states, each of which corresponds to only one possible amount of energy. The energy of an electron in
an atom is an example of an observable which is quantized, that is it comes in certain allowed amounts, called quanta (like quantities).
When an atom contains more than one electron, quantum mechanics predicts that two of the electrons both exist in the state with the lowest energy, called the ground state. The next eight electrons
are in the state of the next highest energy, and so on following a specific relationship. This is the origin of the idea of electron shells, or orbits, although these are just convenient ways of
talking about the states. The first shell is filled by two electrons, the second shell is filled by another eight, etc. This explains why some atoms try to combine with other atoms in chemical
This idea of electron states also explains why different atoms emit different colors of light when they are heated. Heating an object gives extra energy to the atoms inside it and this can transform
an electron within an atom from one state to another of higher energy. The atom eventually loses the energy when the electron transforms back to the lower-energy state. Usually the extra energy is
carried away in the form of light which is said was produced by the electron making a transition, or a change of its state. The difference in energy between the two states of the electron (before and
after the transition) is the same for all atoms of the same kind. Thus, those atoms will always give off a wavelength and frequency of light (i.e.,
Classical mechanics— A collection of theories, all derived from a few basic principles, that can be used to describe the motion of macroscopic objects.
Macroscopic— This term describes large-scale objects like those humans directly interact with on an everyday basis.
Microscopic— This term describes extremely small-scale objects such as electrons and atoms with which humans seldom interact on an individual basis as humans do with macroscopic objects.
Observable— A physical quantity, like position, velocity or energy, which can be determined by a measurement.
Planck’s constant— A constant written as h, which was introduced by Max Planck in his quantum theory and that appears in every formula of quantum mechanics.
Probability— The likelihood that a certain event will occur. If something happens half of the time, its probability is 1/2 = 0.5 = 50%.
Quantum— The amount of radiant energy in the different orbits of an electron around the nucleus of an atom.
Wave— A motion, inwhichenergy and momentum is carried away from some source, which repeats itself in space and time with little or no change.
color) that corresponds to that energy. Another element’s atomic structure contains electron states with different energies (since the electron is confined differently) and so the differing energy
levels produce light in other regions of the electromagnetic spectrum. Using this principle, scientists can determine which elements are present in stars by measuring the exact colors in the emitted
Quantum mechanics theory has been extremely successful in explaining a wide range of phenomena, including a description of how electrons move in materials (e.g., through chips in a personal computer
). Quantum mechanics is also used to understand superconductivity, the decay of nuclei, and how lasers work.
The standard model of quantum physics offers an theoretically and mathematically sound model of particle behavior that serves as an empirically validated middle-ground between the need for
undiscovered hidden variables that determine particle behavior, and a mystical anthropocentric universe where it is the observations of humans that determine reality. Although the implications of the
latter can be easily dismissed, the debate over the existence of hidden variables in quantum theory remained a subject of serious scientific debate during the twentieth century and, now, early into
the twenty-first century. Based upon everyday experience, well explained by the deterministic concepts of classical physics, it is intuitive that there be hidden variables to determine quantum
states. Nature is not, however, obliged to act in accord with what is convenient or easy to understand. Although the existence and understanding of heretofore hidden variables might seemingly explain
Einstein’s “spooky” forces, the existence of such variables would simply provide the need to determine whether they, too, included their own hidden variables.
Quantum theory breaks this never-ending chain of causality by asserting (with substantial empirical evidence) that there are no hidden variables. Moreover, quantum theory replaces the need for a
deterministic evaluation of natural phenomena with an understanding of particles and particle behavior based upon statistical probabilities. Although some philosophers and metaphysicists would like
to keep the hidden variable argument alive, the experimental evidence is persuasive, compelling, and conclusive that such hidden variables do not exist.
See also Quantum number.
Bohr, Niels. The Unity of Knowledge. New York: Doubleday & Co., 1955.
Duck, Ian. 100 Years of Planck’s Quantum. Singapore and River Edge, NJ: World Scientific, 2001.
Feynman, Richard P. QED: The Strange Theory of Light and Matter. New Jersey: Princeton University Press, 1985.
_____. The Character of Physical Law. MIT Press, 1985.
Huang, Fannie, ed. Quantum Physics: An Anthology of Current Thought. New York: Rosen Publishing Group, 2006.
Lewin, Roger. Making Waves: Irving Darkik and his Superwave Principle. Emmaus, PA: Rodale, 2005.
Liboff, Richard L. Introductory Quantum Mechanics, 4th ed. Addison-Wesley Publishing, 2002.
Mehra, Jagdish. The Golden Age of Theoretical Physics. Singapore and River Edge, NJ: World Scientific, 2000.
Phillips, A.C. Introduction to Quantum Mechanics. New York: John Wiley & Sons, 2003.
K. Lee Lerner
Quantum Mechanics
Quantum mechanics
Quantum mechanics is the theory used to provide an understanding of the behavior of microscopic particles such as electrons and atoms . More importantly, quantum mechanics describes the relationships
between energy and matter on atomic and subatomic scale.
At the beginning of the twentieth century, German physicist Maxwell Planck (1858–1947) proposed that atoms absorb or emit electromagnetic radiation in bundles of energy termed quanta. This quantum
concept seemed counter-intuitive to well-established Newtonian physics . Ultimately, advancements associated with quantum mechanics (e.g., the uncertainty principle) also had profound implications
with regard to the philosophical scientific arguments regarding the limitations of human knowledge.
Planck proposed that atoms absorb or emit electro-magnetic radiation in defined and discrete units (quanta). Planck's quantum theory also asserted that the energy of light was directly proportional
to its frequency, and this proved a powerful observation that accounted for a wide range of physical phenomena.
Planck's constant relates the energy of a photon with the frequency of light. Along with constant for the speed of light, Planck's constant (h = 6.626 10–34 Joule-second) is a fundamental constant of
Prior to Planck's work, electromagnetic radiation (light) was thought to travel in waves with an infinite number of available frequencies and wavelengths. Planck's work focused on attempting to
explain the limited spectrum of light emitted by hot objects and to explain the absence of what was termed the "violet catastrophe" predicted by 19th century theories developed by Prussian physicist
Wilhelm Wien (1864–1928) and English physicist Baron (John William Strutt) Rayleigh (1842–1919).
Danish physicist Niels Bohr (1885–1962) studied Planck's quantum theory of radiation and worked in England with physicists J. J. Thomson (1856–1940), and Ernest Rutherford (1871–1937) improving their
classical models of the atom by incorporating quantum theory. During this time Bohr developed his model of atomic structure. To account for the observed properties of hydrogen , Bohr proposed that
electrons existed only in certain orbits and that, instead of traveling between orbits, electrons made instantaneous quantum leaps or jumps between allowed orbits. According to the Bohr model , when
an electron is excited by energy it jumps from its ground state to an excited state (i.e., a higher energy orbital). The excited atom can then emit energy only in certain (quantized) amounts as its
electrons jump back to lower energy orbits located closer to the nucleus. This excess energy is emitted in quanta of electromagnetic radiation (photons of light) that have exactly same energy as the
difference in energy between the orbits jumped by the electron.
The electron quantum leaps between orbits proposed by the Bohr model accounted for Plank's observations that atoms emit or absorb electromagnetic radiation in quanta. Bohr's model also explained many
important properties of the photoelectric effect described by Albert Einstein (1879–1955).
Using probability theory , and allowing for a wave-particle duality, quantum mechanics also replaced classical mechanics as the method by which to describe interactions between subatomic particles .
Quantum mechanics replaced electron "orbitals" of classical atomic models with allowable values for angular momentum (angularvelocity multiplied by mass ) and depicted electrons position in terms of
probability "clouds" and regions.
In the 1920s, the concept of quantization and its application to physical phenomena was further advanced by more mathematically complex models based on the work of the French physicist Louis Victor
de Broglie (1892–1987) and Austrian physicist Erwin Schrödinger (1887–1961) that depicted the particle and wave nature of electrons. De Broglie showed that the electron was not merely a particle but
a wave form. This proposal led Schrodinger to publish his wave equation in 1926. Schrödinger's work described electrons as "standing wave" surrounding the nucleus and his system of quantum mechanics
is called wave mechanics. German physicist Max Born (1882–1970) and English physicist P.A.M Dirac (1902–1984) made further advances in defining the subatomic particles (principally the electron) as a
wave rather than as a particle and in reconciling portions of quantum theory with relativity theory.
Working at about the same time, German physicist Werner Heisenberg (1901–1976) formulated the first complete and self-consistent theory of quantum mechanics. Matrix mathematics was well-established
by the 1920s, and Heisenberg applied this powerful tool to quantum mechanics. In 1926, Heisenberg put forward his uncertainty principle that states that two complementary properties of a system, such
as position and momentum, can never both be known exactly. This proposition helped cement the dual nature of particles (e.g., light can be described as having both wave and a particle
characteristics). Electromagnetic radiation (one region of the spectrum of which comprises visible light) is now understood as having both particle and wave-like properties.
In 1925, Austrian-born physicist Wolfgang Pauli (1900–1958) published the Pauli exclusion principle that states that no two electrons in an atom can simultaneously occupy the same quantum state
(i.e., energy state). Pauli's specification of spin (+1/2 or −1/2) on an electron gave the two electrons in any suborbital differing quantum numbers (a system used to describe the quantum state) and
made completely understandable the structure of the periodic table in terms of electron configurations (i.e., the energy related arrangement of electrons in energy shells and suborbitals). In 1931,
American chemist Linus Pauling published a paper that used quantum mechanics to explain how two electrons, from two different atoms, are shared to make a covalent bond between the two atoms.
Pauling's work provided the connection needed in order to fully apply the new quantum theory to chemical reactions .
Quantum mechanics posed profound questions for scientists and philosophers. The concept that particles such as electrons making quantum leaps from one orbit to another, as opposed to simply moving
between orbits, seems counter-intuitive, that is, outside the human experience with nature. Like much of quantum theory, the proofs of how nature works at the atomic level are mathematical. Bohr
himself remarked, "Anyone who is not shocked by quantum theory has not understood it."
Quantum results
Quantum mechanics requires advanced mathematics to give numerical predictions for the outcome of measurements. However, one can understand many significant results of the theory from the basic
properties of the probability waves. An important example is the behavior of electrons within atoms. Since such electrons are confined in some manner, we expect that they must be represented by
standing waves that correspond to a set of allowed frequencies. Quantum mechanics states that for this new type of wave, its frequency is proportional to the energy associated with the microscopic
particle. Thus, we reach the conclusion that electrons within atoms can only exist in certain states, each of which corresponds to only one possible amount of energy. The energy of an electron in an
atom is an example of an observable which is quantized, that is it comes in certain allowed amounts, called quanta (like quantities).
When an atom contains more than one electron, quantum mechanics predicts that two of the electrons both exist in the state with the lowest energy, called the ground state. The next eight electrons
are in the state of the next highest energy, and so on following a specific relationship. This is the origin of the idea of electron "shells" or "orbits," although these are just convenient ways of
talking about the states. The first shell is "filled" by two electrons, the second shell is filled by another eight, etc. This explains why some atoms try to combine with other atoms in chemical
This idea of electron states also explains why different atoms emit different colors of light when they are heated. Heating an object gives extra energy to the atoms inside it and this can transform
an electron within an atom from one state to another of higher energy. The atom eventually loses the energy when the electron transforms back to the lower-energy state. Usually the extra energy is
carried away in the form of light which we say was produced by the electron making a transition, or a change of its state. The difference in energy between the two states of the electron (before and
after the transition) is the same for all atoms of the same kind. Thus, those atoms will always give off a wavelength and frequency of light (i.e., color ) that corresponds to that energy. Another
element's atomic structure contains electron states with different energies (since the electron is confined differently) and so the differing energy levels produce light in other regions of the
electromagnetic spectrum . Using this principle, scientists can determine which elements are present in stars by measuring the exact colors in the emitted light.
Quantum mechanics theory has been extremely successful in explaining a wide range of phenomena, including a description of how electrons move in materials (e.g., through chips in a personal computer
). Quantum mechanics is also used to understand superconductivity, the decay of nuclei, and how lasers work.
Theoretical implications of quantum mechanics
The standard model of quantum physics offers an theoretically and mathematically sound model of particle behavior that serves as an empirically validated middle-ground between the need for
undiscovered hidden variables that determine particle behavior, and a mystical anthropocentric universe where it is the observations of humans that determine reality. Although the implications of the
latter can be easily dismissed, the debate over the existence of hidden variables in quantum theory remained a subject of serious scientific debate during the twentieth century. Based upon our
everyday experience, well explained by the deterministic concepts of classical physics, it is intuitive that there be hidden variables to determine quantum states. Nature is not, however, obliged to
act in accord with what is convenient or easy to understand. Although the existence and understanding of heretofore hidden variables might seemingly explain Albert Einstein's "spooky" forces, the
existence of such variables would simply provide the need to determine whether they, too, included their own hidden variables.
Quantum theory breaks this never-ending chain of causality by asserting (with substantial empirical evidence) that there are no hidden variables. Moreover, quantum theory replaces the need for a
deterministic evaluation of natural phenomena with an understanding of particles and particle behavior based upon statistical probabilities. Although some philosophers and metaphysicists would like
to keep the hidden variable argument alive, the experimental evidence is persuasive, compelling, and conclusive that such hidden variables do not exist.
See also Quantum number.
Albert, A. Z. Quantum Mechanics and Experience. Cambridge, MA: Harvard University Press, 1992.
Bohr, Niels. The Unity of Knowledge. New York: Doubleday & Co., 1955.
Feynman, Richard P. QED: The Strange Theory of Light and Matter. New Jersey: Princeton University Press, 1985.
Feynman, Richard P. The Character of Physical Law. MIT Press, 1985.
Gregory, B. Inventing Reality: Physics as Language. New York: John Wiley & Sons, 1990.
Han, M.Y. The Probable Universe. Blue Ridge Summit, PA: TAB Books, 1993.
Liboff, Richard L. Introductory Quantum Mechanics. 4th ed. Addison-Wesley Publishing, 2002.
Phillips, A.C. Introduction to Quantum Mechanics. New York: John Wiley & Sons, 2003.
K. Lee Lerner
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Classical mechanics
—A collection of theories, all derived from a few basic principles, that can be used to describe the motion of macroscopic objects.
—This term describes large-scale objects like those we directly interact with on an everyday basis.
—This term describes extremely small-scale objects such as electrons and atoms with which we seldom interact on an individual basis as we do with macroscopic objects.
—A physical quantity, like position, velocity or energy, which can be determined by a measurement.
Planck's constant
—A constant written as h which was introduced by Max Planck in his quantum theory and which appears in every formula of quantum mechanics.
—The likelihood that a certain event will occur. If something happens half of the time, its probability is 1/2 = 0.5 = 50%.
—The amount of radiant energy in the different orbits of an electron around the nucleus of an atom.
—A motion, in which energy and momentum is carried away from some source, which repeats itself in space and time with little or no change.
Quantum Mechanics
What is quantum mechanics? An answer to this question can be found by contrasting quantum and classical mechanics. Classical mechanics is a frame-work—a set of rules—used to describe the behavior of
ordinary-sized things: footballs, specks of dust, planets. Classical mechanics is familiar to everyone through commonplace activities like tossing balls, driving cars, and chewing food. Physicists
have studied classical mechanics for centuries (whence the name "classical") and developed elaborate mathematical tools to make accurate predictions involving complex situations: situations like the
motion of satellites, the twist of a spinning top, or the jiggle of jello. Sometimes (as in the spinning top) the results of classical mechanics are unexpected, but always the setting is familiar due
to one's daily interaction with ordinary-sized things.
Quantum mechanics is a parallel framework used to describe the behavior of very small things: atoms, electrons, quarks. When physicists began exploring the atomic realm (starting around 1890), the
obvious thought was to apply the familiar classical framework to the new atomic situation. This resulted in disaster; the classical mechanics that had worked so well in so many other situations
failed spectacularly when applied to atomic-sized situations. The obvious need to find a new framework remained the central problem of physics until 1925, when that new framework—the framework of
quantum mechanics—was discovered by Werner Heisenberg. Quantum mechanics does not involve familiar things, so it is not surprising that both the results and the setting are often contrary to anything
that we would have expected from everyday experience. Quantum mechanics is not merely unfamiliar; it is counterintuitive.
The fact that quantum mechanics is counterintuitive does not mean that it is unsuccessful. On the contrary, quantum mechanics is the most remarkably successful product of the human mind, the
brightest jewel in our intellectual crown. To cite just one example, quantum mechanics predicts that an electron behaves in some ways like a tiny bar magnet. The strength of that magnet can be
measured with high accuracy and is found to be, in certain units,
1.001 159 652 188
with a measurement uncertainty of about four in the last digit. The strength of the electron's magnet can also be predicted theoretically through quantum mechanics. The predicted strength is
1.001 159 652 153
with about seven times as much uncertainty. The agreement between experiment and quantum theory is magnificent: if I could measure the distance from New York to Los Angeles to this accuracy, my
measurement would be accurate to within the thickness of a silken strand.
So, what does this unfamiliar quantum mechanical framework look like? Why did it take thirty-five years of intense effort to discover? The framework has four pillars: quantization, probability,
interference, and entanglement.
A classical marble rolling within a bowl can have any energy at all (as long as it's greater than or equal to the minimum energy of a stationary marble resting at the bottom of the bowl). The faster
the marble moves, the more energy it has, and that energy can be increased or decreased by any amount, whether large or small. But a quantal electron moving within a bowl can have only certain
specified amounts of energy. The electron's energy can again be increased or decreased, but the electron cannot accept just any arbitrary amount of energy: it can only absorb or emit energy in
certain discrete lumps. If an attempt is made to increase its energy by less than the minimum lump, it will not accept any energy at all. If an attempt is made to increase its energy by two and a
half lumps, the electron will ac cept only two of them. If an attempt is made to decrease its energy by four and two-thirds lumps, it will give up only four. This phenomena is an example of
quantization, a word derived from the Latin quantus, meaning "how much."
Quantization was the first of the four pillars to be uncovered, and it gave its name to the topic, but today quantization is not regarded as the most essential characteristic of quantum mechanics.
There are atomic quantities, like momentum, that do not come in lumps, and under certain circumstances even energy doesn't come in lumps.
Furthermore, quantization of a different sort exists even within the classical domain. For example, a single organ pipe cannot produce any tone but only those tones for which it is tuned.
Suppose a gun is clamped in a certain position with a certain launch angle. A bullet is shot from this gun, and a mark is made where the bullet lands. Then a second, identical, bullet is shot from
the gun at the same position and angle. The bullet leaves the muzzle with the same speed. And the bullet lands exactly where the first bullet landed. This unsurprising fact is called determinism:
identical initial conditions always lead to identical results, so the results are determined by the initial conditions. Indeed, using the tools of classical mechanics one can, if given sufficient
information about the system as it exists, predict exactly how it will behave in the future.
It often happens that this prediction is very hard to execute or that it is very hard to find sufficient information about the system as it currently exists, so that an exact prediction is not always
a practical possibility—for example, predicting the outcome when one flips a coin or rolls a die. Nevertheless, in principle the prediction can be done even if it's so difficult that no one would
ever attempt it.
But it is an experimental fact that if one shoots two electrons in sequence from a gun, each with exactly the same initial condition, those two electrons will probably land at different locations
(although there is some small chance that they will go to the same place). The atomic realm is probabilistic, not deterministic. The tools of quantum mechanics can predict probabilities with
exquisite accuracy, but it cannot predict exactly what will happen because nature itself doesn't know exactly what will happen.
The second pillar of quantum mechanics is probability: Even given perfect information about the current state of the system, no one can predict exactly what the future will hold. This is indeed an
important hallmark distinguishing quantum and classical mechanics, but even in the classical world probability exists as a practical matter—every casino operator and every politician relies upon it.
A gun shoots a number of electrons, one at a time, toward a metal plate punched with two holes. On the far side of the plate is a bank of detectors to determine where each electron lands. (Each
electron is launched identically, so if one were launching classical bullets instead of quantal electrons, each would take an identical route to an identical place. But in quantum mechanics the
several electrons, although identically launched, might end up at different places.)
First the experiment is performed with the right hole blocked. Most of the electrons strike the metal plate and never reach the detectors, but those that do make it through the single open hole end
up in one of several different detectors—it's more likely that they will hit the detectors toward the left than those toward the right. Similar results hold if the left hole is blocked, except that
now the rightward detectors are more likely to be hit.
What if both holes are open? It seems reasonable that an electron passing through the left hole when both holes are open should behave exactly like an electron passing through the left hole when the
right hole is blocked. After all, how could such an electron possibly know whether the right hole were open or blocked? The same should be true for an electron passing through the right hole. Thus,
the pattern of electron strikes with both holes open would be the sum of the pattern with the right hole blocked plus the pattern with the left hole blocked.
In fact, this is not what happens at all. The distribution of strikes breaks up into an intricate pattern with bands of intense electron bombardment separated by gaps with absolutely no strikes.
There are some detectors which are struck by many electrons when the right hole is blocked, by some electrons when the left hole is blocked, but by no electrons at all when neither hole is blocked.
And this is true even if at any instant only a single electron is present in the apparatus!
What went wrong with the above reasoning? In fact, the flaw is not in the reasoning but in an unstated premise. The assumption was made that an electron moving from the gun to the detector bank would
pass through either the right hole or the left. This simple, common-sense premise is—and must be—wrong. The English language was invented by people who didn't understand quantum mechanics, so there
is no concise yet accurate way to describe the situation using everyday language. The closest approximation is "the electron goes through both holes." In technical terms, the electron in transit is a
superposition of an electron going through the right hole and an electron going through the left hole. It is hard to imagine what such an electron would look like, but the essential point is that the
electron doesn't look like the classic "particle": a small, hard marble.
The phenomenon of entanglement is difficult to describe succinctly. It always involves two (or more) particles and usually involves the measurement of two (or more) different properties of those
particles. There are circumstances in which the measurement results from one particle are correlated with the measurement results from the other particle, even though the particles may be very far
away from each other. In some cases, one can prove that these correlations could not occur for any classical system, no matter how elaborate. The best experimental tests of quantum mechanics involve
entanglement because it is in this way that the atomic world differs most dramatically from the everyday, classical world.
Mathematical Formalism
Quantum physics is richer and more textured than classical physics: quantal particles can, for example, interfere or become entangled, options that are simply unavailable to classical particles. For
this reason the mathematics needed to describe a quantal situation is necessarily more elaborate than the mathematics needed to describe a corresponding classical situation. For example, suppose a
single particle moves in three-dimensional space. The classical description of this particle requires six numbers (three for position and three for velocity). But the quantal description requires an
infinite number of numbers—two numbers (a "magnitude" and a "phase") at every point in space.
Classical limit
Classical mechanics holds for ordinary-sized objects, while quantum mechanics holds for atomic-sized objects. So at exactly what size must one framework be switched for another? Fortunately, this
difficulty doesn't require a resolution. The truth is that quantum mechanics holds for objects of all sizes, but that classical mechanics is a good approximation to quantum mechanics when quantum
mechanics is applied to ordinary-sized objects. As an analogy, the surface of the Earth is nearly spherical, but sheet maps, not globes, are used for navigation over short distances. This "flat Earth
approximation" is highly accurate for journeys of a few hundred miles but quite misleading when applied to journeys of ten thousand miles. Similarly, the "classical approximation" is highly accurate
for ordinary-sized objects but not for atomic-sized objects.
The Subatomic Domain
When scientists first investigated the atomic realm, they found that a new physical framework (namely quantum mechanics) was needed. What about the even smaller domain of elementary particle physics?
The surprising answer is that, as far as is known, the quantum framework holds in this domain as well. As physicists have explored smaller and smaller objects (first atoms, then nuclei, then
neutrons, then quarks), surprises were encountered and new rules were discovered—rules with names like quantum electrodynamics and quantum chromodynamics. But these new rules have always fit
comfortably within the framework of quantum mechanics.
See also:Quantum Chromodynamics; Quantum Electrodynamics; Quantum Field Theory; Quantum Tunneling; Virtual Processes
Feynman, R. QED: The Strange Theory of Light and Matter (Princeton University Press, Princeton, New Jersey, 1985).
Milburn, G. J. Schrödinger's Machines: The Quantum Technology Reshaping Everyday Life (W.H. Freeman, New York, 1997).
Styer, D. F. The Strange World of Quantum Mechanics (Cambridge University Press, Cambridge, UK, 2000).
Treiman, S. The Odd Quantum (Princeton University Press, Princeton, New Jersey, 1999).
Daniel F. Styer
Quantum Mechanics
Quantum mechanics, which is primarily concerned with the structures and activities of subatomic, atomic, and molecular entities, had a European provenance, and its story is in some ways as strange as
the ideas it espouses. Although the German physicist Max Planck (1858–1947) is often credited with originating quantum theory, and although this theory's fundamental constant, which ushered in the
disjunction between macroscopic and quantum realms, is named in his honor, it was the German Swiss physicist Albert Einstein (1879–1955) who really grasped the revolutionary consequences of Planck's
quantum as a discrete quantity of electromagnetic radiation (later named the photon). Ironically, Einstein would later distance himself from the mainstream interpretation of quantum mechanics.
The Danish physicist Niels Bohr (1885–1962), by combining the nuclear model of the atom with quantum ideas, developed an enlightening explanation of the radiative regularities of the simple hydrogen
atom, but the paradoxes of his theory (for example, nonradiating electron orbits) and its failure to make sense of more complex atoms led to a new quantum theory, which, in its first form of matrix
mechanics, was the work of the German physicist Werner Heisenberg (1901–1976), whose arrays of numbers (matrices) represented observable properties of atomic constituents. Heisenberg's matrix model
was highly mathematical, unlike the visualizable models favored by many scientists. However, in 1926 the Austrian physicist Erwin Schrödinger (1887–1961), basing his theory on a wave interpretation
of the electron developed by the French physicist Louis de Broglie (1892–1987), proposed a wave mechanics in which he treated the electron in an atom not as a particle but by means of a wave
function. Within a short time physicists proved that both matrix and wave mechanics gave equivalent quantum mechanical answers to basic questions about the atom.
Quantum mechanics proved extremely successful in providing physicists with detailed knowledge, confirmed by many experiments, of all the atoms in the periodic table, and it also enabled chemists to
understand how atoms bond together in simple and complex compounds. Despite its successes quantum mechanics provoked controversial interpretations and philosophical conundrums. Such quantum
physicists as Max Born (1882–1970) rejected the strict causality underlying Newtonian science and gave a probabilistic interpretation of Schrödinger's wave equation. Then, in 1927, Heisenberg
introduced his uncertainty principle, which stated that an electron's position and velocity could not be precisely determined simultaneously. Impressed by Heisenberg's proposal, Bohr, in Copenhagen,
developed an interpretation of quantum mechanics that became standard for several decades. This "Copenhagen interpretation," even though for some it was more a philosophical proposal than a
scientific explanation, garnered the support of such physicists as Heisenberg, Born, and Wolfgang Pauli (1900–1958). But its unification of objects, observers, and measuring devices; its acceptance
of discontinuous action; and its rejection of classical causality were unacceptable to such scientists as Einstein, Planck, and Schrödinger. To emphasize the absurdity of the Copenhagen
interpretation, Schrödinger proposed a thought experiment involving a cat in a covered box containing a radioactive isotope with a fifty-fifty chance of decaying and thereby triggering the release of
a poison gas. For Copenhagen interpreters, "Schrödinger's cat" remains in limbo between life and death until an observer uncovers the box; for Copenhagen critics, the idea of a cat who is somehow
both alive and dead is ridiculous.
This and other quantum quandaries led some physicists to propose other interpretations of quantum mechanics. For example, David Bohm (1917–1992), an American physicist who worked in England in the
period of American anticommunist hysteria associated with Senator Joseph McCarthy (1908–1957), proposed that Schrödinger's wave function described a real wave "piloting" a particle, and that the
paradoxes of quantum mechanics could be explained in terms of "hidden variables" that would preserve causality. Einstein, who had been critical of the Copenhagen interpretation since its founding (he
stated that "God does not play dice," and a mouse cannot change the world simply by observing it), proposed, with two collaborators, a thought experiment in which distantly separated particles could,
if the Copenhagen interpretation were true, instantaneously communicate with each other when an attribute of one of them is measured. Einstein would have been surprised when, much later, this
experiment was actually done and resulted in quantum nonlocal communication being verified. The paradoxes of this instantaneous "entanglement" have become largely accepted by both physicists and
After the early achievements of quantum mechanics it was natural for physicists to attempt to unify it with the other great modern theory of physics, relativity. In the late 1920s the Swiss-born
English physicist Paul Dirac (1902–1984) developed a relativistic wave equation whose significance some scholars compared to the discoveries of Newton and Einstein. The Dirac equation was not only
elegant but it also successfully predicted the positive electron. Even though Dirac declared that the general theory of quantum mechanics was "almost complete," the full union of quantum mechanics
and general relativity had not been achieved. Einstein spent the final decades of his life searching for a way to unify his general theory of relativity and Scottish physicist James Clerk Maxwell's
(1831–1879) theory of electromagnetism, and many theoreticians after Einstein have proposed ideas attempting to join together quantum mechanics, a very successful theory of the atomic world, and
general relativity, a very successful theory of the cosmic world. Superstring theory is one of these "theories of everything," and its assertion that everything, from gigantic galaxies to
infinitesimal quarks, can be explained by the vibrations of minuscule lines and loops of energy in ten dimensions has generated enthusiastic supporters as well as ardent critics, who maintain that
the theory, though elegant, is unverifiable and unfalsifiable (and hence not even a scientific theory).
The British cosmologist Stephen Hawking (b. 1942) has brought his interpretation of quantum physics and general relativity together to deepen astronomers' understanding of black holes, regions of
spacetime in which gravitational forces are so strong that not even photons can escape. Some optimists claim that the unification of quantum mechanics and general relativity has already been achieved
in superstring theory, whereas pessimists claim that this quest is really attempting to reconcile the irreconcilable. As Wolfgang Pauli, Einstein's colleague at the Institute for Advanced Study, once
said of his friend's search for a unified field theory: "What God has put asunder, let no man join together."
See alsoBohr, Niels; Einstein, Albert; Science.
Al-Khalili, Jim. Quantum. London, 2003. There have been many popularizations of quantum theory, and this illustrated vade mecum by an English physicist is a good example of the genre.
Mehra, Jagdish, and Helmut Rechenberg. The Historical Development of Quantum Theory. 6 vols. New York, 1982. Some historians of science, wary of the authors' uncritical approach, have expressed
reservations about the nine books of this set (some volumes have two parts), but the massive amount of scientific, historical, and biographical material collected by the authors can be helpful if
used judiciously.
Penrose, Roger. The Road to Reality: A Complete Guide to the Laws of the Universe. New York, 2005. In this comprehensive mathematical and historical account of scientists' search for the basic laws
underlying the universe, an important theme is the exploration of the compatibility of relativity and quantum mechanics.
Robert J. Paradowski
Quantum Mechanics
Quantum mechanics
Quantum mechanics is a method of studying the natural world based on the concept that waves of energy also have certain properties normally associated with matter, and that matter sometimes has
properties that we usually associate with energy. For example, physicists normally talk about light as if it were some form of wave traveling through space. Many properties of light—such as
reflection and refraction—can be understood if we think of light as waves bouncing off an object or passing through the object.
But some optical (light) phenomena cannot be explained by thinking of light as if it traveled in waves. One can only understand these phenomena by imagining tiny discrete particles of light somewhat
similar to atoms. These tiny particles of light are known as photons. Photons are often described as quanta (the plural of quantum) of light. The term quantum comes from the Latin word for "how
much." A quantum, or photon, of light, then, tells how much light energy there is in a "package" or "atom" of light.
The fact that waves sometimes act like matter and waves sometimes acts like waves is now known as the principle of duality. The term duality means that many phenomena have two different faces,
depending on the circumstances in which they are being studied.
Macroscopic and submicroscopic properties
Until the 1920s, physicists thought they understood the macroscopic properties of nature rather well. The term macroscopic refers to properties that can be observed with the five human senses, aided
or unaided. For example, the path followed by a bullet as it travels through the air can be described very accurately using only the laws of classical physics, the kind of physics originally
developed by Italian scientist Galileo Galilei (1564–1642) and English physicist Isaac Newton (1642–1727).
But the methods of classical physics do not work nearly as well—and sometimes they don't work at all—when problems at the submicroscopic level are studied. The submicroscopic level involves objects
and events that are too small to be seen even with the very best microscopes. The movement of an electron in an atom is an example of a submicroscopic phenomenon.
Words to Know
Classical mechanics: A collection of theories and laws that was developed early in the history of physics and that can be used to describe the motion of most macroscopic objects.
Macroscopic: A term describing objects and events that can be observed with the five human senses, aided or unaided.
Photon: A unit of energy.
Quantum: A discrete amount of any form of energy.
Wave: A disturbance in a medium that carries energy from one place to another.
In the first two decades of the twentieth century, physicists found that the old, familiar tools of classical physics produced peculiar answers or no answers at all in dealing with submicroscopic
phenomena. As a result, they developed an entirely new way of thinking about and dealing with problems on the atomic level.
Uncertainty principle
Some of the concepts involved in quantum mechanics are very surprising, and they often run counter to our common sense. One of these is another revolutionary concept in physics—the uncertainty
principle. In 1927, German physicist Werner Heisenberg (1901–1976) made a remarkable discovery about the path taken by an electron in an atom. In the macroscopic world, we always see objects by
shining light on them. Why not shine light on the electron so that its movement could be seen?
But the submicroscopic world presents new problems, Heisenberg said. The electron is so small that the simple act of shining light on it will knock it out of its normal path. What a scientist would
see, then, is not the electron as it really exists in an atom but as it exists when moved by a light shining on it. In general, Heisenberg went on, the very act of measuring very small objects
changes the objects. What we see is not what they are but what they have become as a result of looking at them. Heisenberg called his theory the uncertainty principle. The term means that one can
never be sure as to the state of affairs for any object or event at the submicroscopic level.
A new physics
Both the principle of duality and the uncertainty principle shook the foundations of physics. Concepts such as Newton's laws of motion still held true for events at the macroscopic level, but they
were essentially worthless in dealing with submicroscopic phenomena. As a result, physicists essentially had to start over in thinking about the ways they studied nature. Many new techniques and
methods were developed to deal with the problems of the submicroscopic world. Those techniques and methods are what we think of today as quantum physics or quantum mechanics.
[See also Light; Subatomic particles ]
quantum mechanics
quantum mechanics
Branch of physics that uses the
quantum theory
to explain the behaviour of
elementary particles
. According to
quantum theory
, all radiant energy emits and absorbs in multiples of tiny ‘packets’ or quanta. Atomic particles have wavelike properties and thereby exhibit a wave-particle duality. Sometimes the wave properties
dominate, and other times the particle aspects dominate. The
quantum theory
uses four
quantum numbers
to classify
and their atomic states: energy level, angular momentum, energy in a magnetic field and
. The
exclusion principle
says any two electrons in an atom cannot have the same energy and spin. A change in an electron, atom or molecule from one quantum state to another, called a
quantum jump
, is accompanied by the absorption or emission of a quantum. The
quantum field theory
seeks to explain this exchange. The strong interactions between
and between
are described by
quantum chronodynamics
. The idea that energy radiates and absorbs in packets was first proposed by German theoretical physicist Max
in 1900 to explain
black body
radiation. Using Planck's work, German-born US physicist Albert
quantized light radiation, and in 1905 explained the
photoelectric effect
. He chose the name of
for a quantum of light energy. In 1913, Danish physicist Niels
used quantum theory to explain atomic structure and atomic spectra, showing the relationship between the energy levels of an atom's electrons and the frequencies of radiation emitted or absorbed by
the atom. In 1924, French physicist Louis de
suggested that particles have wave properties, the converse having been postulated in 1905 by Albert Einstein. In 1926, Austrian physicist Erwin
used this hypothesis of
wave mechanics
to predict particle behaviour on the basis of wave properties, but a year earlier German physicist Werner
had produced a mathematical equivalent to Schrödinger's theory without using wave concepts at all. In 1928, English physicist Paul
unified these approaches while incorporating
into quantum mechanics (especially when large speeds are involved). This predicted the existence of
and helped develop the
quantum electrodynamics
theory of how charged subatomic particles interact within electric and magnetic fields. The
superstring theory
provides a possible answer to gravitational interaction. The complete, modern theory of quantum mechanics is the quantum field theory of
quantum electrodynamics
, also known as the quantum theory of light. It was derived by US theoretical physicist Richard
in the 1940s. The theory predicts that a collision between an electron and a proton should result in the production of a photon of
electromagnetic radiation
, which is exchanged between the colliding particles. Quantum mechanics remains a difficult system because the
uncertainty principle
, formulated in 1927 by Heisenberg, states that nothing on the atomic scale can be measured or observed without disturbing it. This makes it impossible to know the position and momentum of a particle
at the same time.
quantum mechanics
quan·tum me·chan·ics • pl. n. [treated as sing.] Physics the branch of mechanics that deals with the mathematical description of the motion and interaction of subatomic particles, incorporating the
concepts of quantization of energy, wave-particle duality, the uncertainty principle, and the correspondence principle.DERIVATIVES: quan·tum-me·chan·i·cal adj.
More From encyclopedia.com
About this article
Quantum Physics | {"url":"https://www.encyclopedia.com/science-and-technology/physics/physics/quantum-physics","timestamp":"2024-11-11T06:49:40Z","content_type":"text/html","content_length":"307954","record_id":"<urn:uuid:f75a4241-ff76-4ba5-affc-9d3fa435c17d>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00754.warc.gz"} |
Absolute Value - Meaning, How to Find Absolute Value, Examples - [[company name]] [[target location]], [[stateabr]]
Absolute ValueMeaning, How to Calculate Absolute Value, Examples
Many comprehend absolute value as the length from zero to a number line. And that's not incorrect, but it's by no means the entire story.
In mathematics, an absolute value is the magnitude of a real number without regard to its sign. So the absolute value is at all time a positive number or zero (0). Let's observe at what absolute
value is, how to calculate absolute value, few examples of absolute value, and the absolute value derivative.
Definition of Absolute Value?
An absolute value of a number is constantly positive or zero (0). It is the magnitude of a real number irrespective to its sign. That means if you have a negative number, the absolute value of that
number is the number disregarding the negative sign.
Definition of Absolute Value
The prior explanation states that the absolute value is the distance of a number from zero on a number line. So, if you think about that, the absolute value is the length or distance a number has
from zero. You can visualize it if you take a look at a real number line:
As shown, the absolute value of a figure is the distance of the figure is from zero on the number line. The absolute value of negative five is five due to the fact it is 5 units away from zero on the
number line.
If we plot negative three on a line, we can see that it is 3 units apart from zero:
The absolute value of negative three is 3.
Now, let's check out another absolute value example. Let's assume we hold an absolute value of 6. We can graph this on a number line as well:
The absolute value of 6 is 6. Hence, what does this tell us? It states that absolute value is at all times positive, even if the number itself is negative.
How to Find the Absolute Value of a Number or Expression
You need to know few points prior going into how to do it. A handful of closely linked properties will help you understand how the number within the absolute value symbol functions. Thankfully, what
we have here is an meaning of the ensuing 4 fundamental properties of absolute value.
Essential Properties of Absolute Values
Non-negativity: The absolute value of any real number is constantly positive or zero (0).
Identity: The absolute value of a positive number is the expression itself. Alternatively, the absolute value of a negative number is the non-negative value of that same expression.
Addition: The absolute value of a total is lower than or equivalent to the sum of absolute values.
Multiplication: The absolute value of a product is equivalent to the product of absolute values.
With above-mentioned four fundamental properties in mind, let's look at two more useful properties of the absolute value:
Positive definiteness: The absolute value of any real number is always zero (0) or positive.
Triangle inequality: The absolute value of the variance between two real numbers is less than or equivalent to the absolute value of the sum of their absolute values.
Now that we went through these characteristics, we can in the end start learning how to do it!
Steps to Discover the Absolute Value of a Expression
You are required to obey a couple of steps to calculate the absolute value. These steps are:
Step 1: Note down the expression whose absolute value you want to find.
Step 2: If the expression is negative, multiply it by -1. This will make the number positive.
Step3: If the expression is positive, do not alter it.
Step 4: Apply all properties significant to the absolute value equations.
Step 5: The absolute value of the figure is the expression you get after steps 2, 3 or 4.
Bear in mind that the absolute value symbol is two vertical bars on both side of a number or expression, similar to this: |x|.
Example 1
To start out, let's assume an absolute value equation, such as |x + 5| = 20. As we can observe, there are two real numbers and a variable inside. To solve this, we need to locate the absolute value
of the two numbers in the inequality. We can do this by following the steps mentioned above:
Step 1: We are given the equation |x+5| = 20, and we have to calculate the absolute value inside the equation to find x.
Step 2: By using the essential characteristics, we understand that the absolute value of the addition of these two figures is as same as the sum of each absolute value: |x|+|5| = 20
Step 3: The absolute value of 5 is 5, and the x is unidentified, so let's get rid of the vertical bars: x+5 = 20
Step 4: Let's calculate for x: x = 20-5, x = 15
As we see, x equals 15, so its distance from zero will also be as same as 15, and the equation above is genuine.
Example 2
Now let's try one more absolute value example. We'll utilize the absolute value function to get a new equation, like |x*3| = 6. To get there, we again need to observe the steps:
Step 1: We have the equation |x*3| = 6.
Step 2: We are required to find the value of x, so we'll begin by dividing 3 from both side of the equation. This step offers us |x| = 2.
Step 3: |x| = 2 has two possible results: x = 2 and x = -2.
Step 4: So, the original equation |x*3| = 6 also has two potential results, x=2 and x=-2.
Absolute value can include several intricate numbers or rational numbers in mathematical settings; still, that is something we will work on separately to this.
The Derivative of Absolute Value Functions
The absolute value is a continuous function, this refers it is varied everywhere. The ensuing formula gives the derivative of the absolute value function:
For absolute value functions, the domain is all real numbers except zero (0), and the range is all positive real numbers. The absolute value function increases for all x<0 and all x>0. The absolute
value function is consistent at 0, so the derivative of the absolute value at 0 is 0.
The absolute value function is not distinctable at 0 because the left-hand limit and the right-hand limit are not uniform. The left-hand limit is given by:
I'm →0−(|x|/x)
The right-hand limit is given by:
I'm →0+(|x|/x)
Because the left-hand limit is negative and the right-hand limit is positive, the absolute value function is not distinguishable at zero (0).
Grade Potential Can Guide You with Absolute Value
If the absolute value seems like a lot to take in, or if you're struggling with math, Grade Potential can guide you. We offer face-to-face tutoring by professional and authorized teachers. They can
guide you with absolute value, derivatives, and any other theories that are confusing you.
Call us today to learn more about how we can help you succeed. | {"url":"https://www.kansascityinhometutors.com/blog/absolute-value-meaning-how-to-find-absolute-value-examples","timestamp":"2024-11-11T19:14:15Z","content_type":"text/html","content_length":"78867","record_id":"<urn:uuid:a39752fb-2548-48b4-8d80-e858e04b0844>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00891.warc.gz"} |
2017 AMC 12A Problems/Problem 18
Let $S(n)$ equal the sum of the digits of positive integer $n$. For example, $S(1507) = 13$. For a particular positive integer $n$, $S(n) = 1274$. Which of the following could be the value of $S(n+1)
$\textbf{(A)}\ 1 \qquad\textbf{(B)}\ 3\qquad\textbf{(C)}\ 12\qquad\textbf{(D)}\ 1239\qquad\textbf{(E)}\ 1265$
Solution 1
Note that $n\equiv S(n)\bmod 9$, so $S(n+1)-S(n)\equiv n+1-n = 1\bmod 9$. So, since $S(n)=1274\equiv 5\bmod 9$, we have that $S(n+1)\equiv 6\bmod 9$. The only one of the answer choices $\equiv 6\bmod
9$ is $\boxed{(D)=\ 1239}$.
Solution 2
One possible value of $S(n)$ would be $1275$, but this is not any of the choices. Therefore, we know that $n$ ends in $9$, and after adding $1$, the last digit $9$ carries over, turning the last
digit into $0$. If the next digit is also a $9$, this process repeats until we get to a non-$9$ digit. By the end, the sum of digits would decrease by $9$ multiplied by the number of carry-overs but
increase by $1$ as a result of the final carrying over. Therefore, the result must be $9x-1$ less than original value of $S(n)$, $1274$, where $x$ is a positive integer. The only choice that
satisfies this condition is $\boxed{1239}$, since $(1274-1239+1) \bmod 9 = 0$. The answer is $\boxed{D}$.
Solution 3
Another way to solve this is to realize that if you continuously add the digits of the number $1274 (1 + 2 + 7 + 4 = 14, 1 + 4 = 5)$, we get $5$. Adding one to that, we get $6$. So, if we assess each
option to see which one attains $6$, we would discover that $1239$ satisfies the requirement, because $1 + 2 + 3 + 9 = 15$. $1 + 5 = 6$. The answer is $\boxed{D}$.
Solution 4(Similar to Solution 1)
Note that a lot of numbers can have a sum of $1274$, but what we use wishful thinking and want is some simple number $n$ where it is easy to compute the sum of the digits of $n+1$. This number would
consists of basically all digits $9$, since when you add $1$ a lot of stuff will cancel out and end up at $0$(ex: $399+1=400$). We see that the maximum number of $9$s that can be in $1274$ is $141$
and we are left with a remainder of $5$, so $n$ is in the form $99...9599...9$. If we add $1$ to this number we will get $99...9600...0$ so this the sum of the digits of $n+1$ is congruent to $6 \mod
9$. The only answer choice that is equivalent to $6 \mod 9$ is $1239$, so our answer is $\boxed{D}$ -srisainandan6
Notice that $S(n+1)=S(n)+1-9k$, where $k$ is the # of carry overs that happen
Video Solution by OmegaLearn
~ pi_is_3.14
See Also
The problems on this page are copyrighted by the Mathematical Association of America's American Mathematics Competitions. | {"url":"https://artofproblemsolving.com/wiki/index.php/2017_AMC_12A_Problems/Problem_18","timestamp":"2024-11-05T04:37:02Z","content_type":"text/html","content_length":"57478","record_id":"<urn:uuid:6a72b8ca-c8ce-4def-a5fb-43993d769816>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00797.warc.gz"} |
The BESSELK Function | SumProduct are experts in Excel Training: Financial Modelling, Strategic Data Modelling, Model Auditing, Planning & Strategy, Training Courses, Tips & Online Knowledgebase
A to Z of Excel Functions: The BESSELK Function
Welcome back to our regular A to Z of Excel Functions blog. Today we look at the BESSELK function.
The BESSELK function
Bessel functions were first defined by the mathematician Daniel Bernoulli and then generalised by Friedrich Bessel as the canonical solutions y(x) of the differential equation
(known as Bessel's differential equation) for an arbitrary complex number α, the order of the Bessel function. Although α and −α produce the same differential equation for real α, it is conventional
to define different Bessel functions for these two values in such a way that the Bessel functions are mostly smooth functions of α.
This is not meant to be a mathematical lecture. I will be out of my depth very quickly. Essentially, Excel has four modified Bessel functions, which may be used by specialists as and when needed.
BESSELK returns the modified Bessel function, which is equivalent to the Bessel functions evaluated for purely imaginary arguments.
The BESSELK function employs the following syntax:
The BESSELK function has the following arguments:
• x: required. This is the value at which to evaluate the function
• n: also required. This represents the order of the Bessel function. If n is not an integer, it is truncated accordingly.
It should be further noted that:
• If x is nonnumeric, BESSELK returns the #VALUE! error value
• If n is nonnumeric, BESSELK returns the #VALUE! error value
• If n < 0, BESSELK returns the #NUM! error value
• The n^th order modified Bessel function of the variable x is:
where J[n] and Y[n] are the J (BESSELJ) and Y (BESSELY) Bessel functions, respectively.
Please see my highly informative example below:
We’ll continue our A to Z of Excel Functions soon. Keep checking back – there’s a new blog post every other business day. | {"url":"https://www.sumproduct.com/blog/article/a-to-z-of-excel-functions/the-besselk-function","timestamp":"2024-11-10T14:35:35Z","content_type":"text/html","content_length":"22962","record_id":"<urn:uuid:fb9b309a-b62c-41ca-9407-d07fdd1d05aa>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00764.warc.gz"} |
LCM and HCF tricks, problems and formulas
LCM i.e. least common multiple is a number which is multiple of two or more than two numbers. For example: The common multiples of 3 and 4 are 12,24 and so on. Therefore, l.c.m.is smallest positive
number that is multiple of both. Here, l.c.m. is 12.. HCF i.e. highest common factor are those integral values of number that can divide that number. LCM and HCF problems are very important part of
all competitive exams.
Some important l.c.m. and h.c.f. tricks:
1) Product of two numbers = Their h.c.f. * Their l.c.m.
2) h.c.f. of given numbers always divides their l.c.m.
3) h.c.f. of given fractions =
h.c.f. of numerator
l.c.m. of denominator
4) l.c.m. of given fractions =
l.c.m. of numerator
h.c.f. of denominator
5) If d is the h.c.f. of two positive integer a and b, then there exist unique integer m and n, such that
d = am + bn
6) If p is prime and a,b are any integer then
,This implies
ab a b
7) h.c.f. of a given number always divides its l.c.m.
Most important points about l.c.m. and h.c.f. problems :
1) Largest number which divides x,y,z to leave same remainder = h.c.f. of y-x, z-y, z-x.
2) Largest number which divides x,y,z to leave remainder R (i.e. same) = h.c.f of x-R, y-R, z-R.
3) Largest number which divides x,y,z to leave same remainder a,b,c = h.c.f. of x-a, y-b, z-c.
4) Least number which when divided by x,y,z and leaves a remainder R in each case = ( l.c.m. of x,y,z) + R
HCF and LCM questions:
Problem 1
: Least number which when divided by 35,45,55 and leaves remainder 18,28,38; is?
: i) In this case we will evaluate l.c.m.
ii) Here the difference between every divisor and remainder is same i.e. 17.
Therefore, required number = l.c.m. of (35,45,55)-17 = (3465-17)= 3448.
Problem 2
: Least number which when divided by 5,6,7,8 and leaves remainder 3, but when divided by 9, leaves no remainder?
: l.c.m. of 5,6,7,8 = 840
Required number = 840 k + 3
Least value of k for which (840 k + 3) is divided by 9 is 2
Therefore, required number = 840*2 + 3
= 1683
Problem 3
: Greater number of 4 digits which is divisible by each one of 12,18,21 and 28 is?
: l.c.m. of 12,18,21,28 = 254
Therefore, required number must be divisible by 254.
Greatest four digit number = 9999
On dividing 9999 by 252, remainder = 171
Therefore, 9999-171 = 9828. | {"url":"https://www.bankexamstoday.com/2013/07/lcm-and-hcf-tricks-problems-and-formulas.html","timestamp":"2024-11-02T18:02:46Z","content_type":"application/xhtml+xml","content_length":"123651","record_id":"<urn:uuid:2c447093-2330-4232-8a3f-1e01ee4bad2c>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00022.warc.gz"} |
Optimal maintenance policy for a deteriorating system based on the improved δ-shock maintenance model
In the classic $\delta$ model, there is only one threshold for maintenance decision. Taking the different levels of failures causing by shocks into account, the interval between successive shocks and
different thresholds should be considered when making replacement decision. This paper proposes a replacement policy based on multi-threshold for a deteriorating system which is subject to $\delta$
-shock. This approach assumes the failure thresholds will geometrically decrease with the increasing repair times, and. The optimal replacement policy $N$ is determined by using this method, and the
average cost in life cycle is minimized. A numerical case study is conducted to validate the related concept and maintenance decision-making model. The results indicate that the proposed method can
effectively optimize the maintenance policy and minimize the life-cycle cost.
1. Introduction
It is generally known that the maintenance decision-making is based on the assumption “A repairable system is as good as new after maintenance” in the earlier research. That is so-called “the perfect
repair”. For a deteriorating system, it is not always as true as it is. With the performance degradation of the system, it maybe need much more time to restore it. Barlow and Hunter [1]. first
introduced the minimal repair model. In this model, the system continued to run after repair, while the failure rate after repair would be as well as before. Brown and Proschan [2] put forward an
imperfect repair model, and a perfect repair with probability $p$ and minimal maintenance with probability $1-p$. In addition, Lam and Stanley et al has done much related research [3-4].
$\delta$-shock is one of the shock model, and proposed by Lam Yeh at first. In the $\delta$-shock model, a shock is fatal if the interval between this and last shock arrival time is not greater than
a specified threshold $\delta$. The threshold $\delta$ is usually a constant. Wang Guan Jun [5] studied the $\delta$-shock model for the optimal replacement policy based on the random variable $\
delta$. Wang Xiao Lin et al. [7] assumed that the system has two kinds of failure mode, and established an imperfect repair model by constraint of availability. Recently, Chen guoqing and Lam Ling et
al. [7] proposed a multi-threshold method for reducing the system operation cost and optimizing the replacement policy [7].
In summary, the $\delta$-shock model has been applied in many cases of deteriorating law, and so has geometric process in maintenance. However, there are still some problems: 1) Rarely considering
the combine the $\delta$-shock model with geometric process to establish the deteriorating system maintenance policy; 2) Seldom considering the maintenance policy by constraint of availability; 3)
Although some literatures have already considered different failure state, the probability of different failure state is not clear. To solve the above problems, we propose the improved $\delta$-shock
2. Definition and assumption
Definition 1 [8]. Given two random variables $X$ and $Y$, $P\left(X>a\right)\ge P\left(Y>a\right)$ if for all real $t$;Then $X$ is called stochastically larger than $Y$ or $Y$ is stochastically less
than $X$. This is denoted by ${X}_{n}\ge \left(\le \right){X}_{n+1}$.
Definition 2. A stochastic process $\left\{{\tau }_{n},n=1,2,\cdots \right\}$ is called a geometric process (GP), if there exists a real $a>0$, such that $\left\{{a}^{n-1}{\tau }_{n},n=1,2,\cdots \
right\}$ forms a renewal process (RP). Real number $a$ called the ratio of the GP.
The $\delta$-shock maintenance model for a deteriorating system is introduced here by making the following assumptions.
Assumption 1. At the beginning a new system is installed. Whenever the system fails, it will be repaired or replaced. The system will be replaced by an identical new one sometime later. The system
requirement is that the system steady availability is not less than ${A}_{0}$.
Assumption 2. In $\delta$-shock model, the shocks will arrive according to a renewal process with inter-arrival times having a general distribution $F\left(t\right)$.
Assumption 3. In $\delta$-shock model, if the system has been repaired for $n$ times $\left(n=1,2,\cdots \right)$, the threshold of a failure shock will be ${b}^{n}\delta$$\left(b>1\right)$ where $b$
is the rate and$\mathrm{}\delta$ is the threshold of a failure shock for a new system. This means that whenever the time to the first shock following the nth repair or an inter-arrival time of two
successive shocks after the $n$th repair is less than ${b}^{n}\delta$, the system will fail. During the repair time, any shock arriving when the system is ineffective, under repair
Assumption 4. In the improving $\delta$-shock model, Assume that the system has two failure threshold ${\delta }_{1}$, ${\delta }_{2}$, $\left({{\delta }_{1}>\delta }_{2}\right)$. If shock
interarrival time of two successive shocks $t\in \left({\delta }_{2},{\delta }_{1}\right)$, the failure system is the first type of failure state, and the system need repair; If the failure system is
the second type of failure state, we need replace a new system.
Assumption 5. Assume that ${p}_{ik}$ is condition probability of the $i$th type $\left(i=1,2\right)$ failure state when system occurs the $k$th failure. Then under Assumptions 2, the system failure
probability is $p\left(t<{b}^{k-1}{\delta }_{1}\right)=F\left({b}^{k-1}{\delta }_{1}\right)$, and:
${p}_{1k}=\frac{F\left({b}^{k-1}{\delta }_{1}\right)-F\left({b}^{k-1}{\delta }_{2}\right)}{F\left({b}^{k-1}{\delta }_{1}\right)},{p}_{2k}=\frac{F\left({b}^{k-1}{\delta }_{2}\right)}{F\left({b}^{k-1}
{\delta }_{1}\right)}.$
Assumption 6. By applying the replacement policy $N$, the system has been replaced by an identical new one at the time following the $N$th failure of the 1th failure. The replacement time is a random
variable ${Z}_{1}$ with $E\left({Z}_{1}\right)={\beta }_{1}$; When the system is subject to the 2th kind failure, the failure is deadly, the system must be replaced immediately; The replacement time
is a random variable ${Z}_{2}$ with $E\left({Z}_{2}\right)={\beta }_{2}$.
Assumption 7. Let ${Y}_{i}$ be the repair time of the system after the$i$th failure. Then the repair time constitutes a GP with ratio $\theta$. Thus $E\left({Y}_{n}\right)={\theta }^{n-1}\mu$.
Assumption 8. The repair cost has a constant rate ${c}_{r}$, and the unit reward when the system is operating has a GP with ratio $k$$\left(0<k\le 1\right)$, this is ${c}_{w}^{n}={c}_{w}{k}^{n}$. The
replacement cost comprises two parts: the basic replacement cost$R,$and The other ones are cost proportional to the replacement time ${Z}_{1}$ at a constant rate ${c}_{1}$ and the replacement time $
{Z}_{2}$ at a constant rate ${c}_{2}$.
First, Policy $N$ we adopt in Assumption 3 we explain the reason of using replacement policy $N$. Besides policy $N$, policy $T$ is also applied, by which the system will be replaced by an identical
new one at a stopping time $T$. However, for the long-run average cost case, policy ${N}^{*}$ is at least as good as an optimal policy ${T}^{*}$. Thereafter Lam proved that the above result is true.
Second, for a deteriorating system, it will be more fragile and easier to break down after repair. As a result, the failure threshold $\delta$ of the system will be increasing with the number of
repairs taken; the operating reward of the system will be smaller and smaller, while the consecutive repair times of the system will be longer and longer. Assumption 3, 7 and 8 are approximated to
the geometric process of the above situations.
Third, due to ageing effect and accumulated wearing, it is reasonable to assume that the reward operating for a deteriorating system form a decreasing GP, and the consecutive repair times for the
system constitute an increasing GP. This is not only based on our general knowledge but also on the result in real data analysis, and Lam and Chan have studied related research. It was shown that on
average the GP model has been applied to the maintenance problems is reasonable.
3. Determining the length of a renewal cycle
First of all, we define a cycle is completed if a replacement is completed. Therefore, a cycle is actually either a time interval between the installation of a system and the first replacement or a
time interval between two consecutive replacements.
Now, assume a replacement policy $N$ is adopted. Let $W$ be the length of a cycle under replacement policy $N$; Assume that ${P}_{1}$ is $N$ times of failure are first one and ${P}_{2}$ is before the
$\left(m-1\right)$ times of failure are first failure, the first is the second in the $m$ time; it follows from Assumption 6 that:
$E\left(W\right)=E\left(\sum _{n=1}^{N}{X}_{n}+\sum _{n=1}^{N-1}{Y}_{n}+{Z}_{1}\right){P}_{1}+\sum _{m=1}^{N}E\left(\sum _{n=1}^{m}{X}_{n}+\sum _{n=1}^{m-1}{Y}_{n}+{Z}_{2}\right){P}_{2}.$
It follows from Assumption 4 and Assumption 5 that ${P}_{1}=\prod _{k=1}^{N}{p}_{1k}$, ${P}_{2}=\prod _{k=1}^{m-1}{p}_{1k}\bullet {p}_{2m}$.
According to the analysis method of the length of a renewal cycle, we can get the operating time in a renewal cycle $U\left(N\right)$, then:
$E\left(U\left(N\right)\right)=E\left(\sum _{n-1}^{N}{X}_{n}\right)\prod _{k=1}^{N}{p}_{1k}+\sum _{m=1}^{N}E\left(\sum _{n=1}^{m}{X}_{n}\right)\prod _{k=1}^{m-1}{p}_{1k}\bullet {p}_{2m}.$
Further, based on the renewal theorem, we can get the system of steady-state availability $A\left(N\right):$
Let the cost be $C\left(N\right)$, It follows from Assumption 7 with the help of Eq. (1) that it is easy to get the cost of a renewal cycle:
$\mathrm{C}\left(N\right)=\mathrm{}\left({c}_{r}\sum _{n=1}^{N-1}{Y}_{n}-{c}_{w}\sum _{n=1}^{N}{k}^{n}{X}_{n}+R+{c}_{1}{Z}_{1}\right)\prod _{k=1}^{N}{p}_{1k}$$\mathrm{}\mathrm{}\mathrm{}\mathrm{}\
mathrm{}\mathrm{}+\sum _{m=1}^{N}\left({c}_{r}\sum _{n=1}^{m-1}{Y}_{n}-{c}_{w}\sum _{n=1}^{m}{k}^{m}{X}_{n}+\mathrm{}R+{c}_{2}{Z}_{2}\right)\bullet \prod _{k=1}^{m-1}{p}_{1k}^{m-1}\bullet {p}_{2m}$
$E\left(C\left(N\right)\right)=\left({c}_{r}\sum _{n=1}^{N-1}E\left({Y}_{n}\right)-{c}_{w}\sum _{n=1}^{N}{k}^{n}\bullet E\left({X}_{n}\right)+R+{c}_{1}E\left({Z}_{1}\right)\right)\bullet \prod _{k=1}
^{N}{p}_{1k}$$+\sum _{m=1}^{N}\left({c}_{r}\sum _{n}^{m-1}E\left({Y}_{n}\right)-{c}_{w}\sum _{n=1}^{m}{k}^{m}\bullet E\left({X}_{n}\right)+R+{c}_{1}E\left({Z}_{1}\right)\right)\prod _{k=1}^{N}{p}_
{1k}$$+\sum _{m=1}^{N}\left({c}_{r}\sum _{n=1}^{m-1}E\left({Y}_{n}\right)-{c}_{w}\sum _{n=1}^{m}{k}^{m}E\left({X}_{n}\right)+R+{c}_{2}E\left({Z}_{2}\right)\right)\bullet \prod _{k=1}^{m-1}{p}_{1k}\
bullet {p}_{2m}.$
Let the average cost be $c\left(N\right)$, according to the renewal theorem, $c\left(N\right)$ is given by:
4. Establishing maintenance policy model
Now, by using Eq. (2)-(6), it is easy to get the explicit expression of the long-run average cost per unit time is derived on the constraint of availability:
ge {A}_{0}.$
To solve the above model, the main problem is to solve out $E\left(C\left(N\right)\right)$, $E\left(U\left(N\right)\right)$, $E\left(W\right)$, it from Eq. (4) and (5) know that it is key to get $E\
left({X}_{n}\right)$, $E\left({Y}_{n}\right)$, $E\left({Z}_{1}\right)$, $E\left({Z}_{2}\right)$ and $E\left({Z}_{1}\right)={\beta }_{1}$, $E\left({Z}_{2}\right)={\beta }_{2}$, thus, the problem is
reduced to find the values $E\left({X}_{n}\right).$
Let ${M}_{n}$ be the shock time of the system following the $n$th operating, denote the system is subject to ${M}_{n}-1$ time shock without failure, and in the ${M}_{n}$ shock to fail. It is easy
that ${M}_{n}$ obey to the geometric distribution. Therefore:
$p\left({M}_{n}=l\right)=p{\left(\tau >{b}_{n-1}{\delta }_{1}\right)}^{l-1}\bullet p\left(\tau <{b}_{n-1}{\delta }_{1}\right)={\left[1-F\left({b}_{n-1}{\delta }_{1}\right)\right]}^{l-1}F\left({b}_
{n-1}{\delta }_{1}\right),$$l=1,2,\cdots .$
Let ${V}_{ni}$ be the shock inter-arrival time of the $i$th shocks time of the system following the $n$th operating $\left(i=1,2,\cdots ,{M}_{n}\right)$. Hence:
$E\left({V}_{ni}|{V}_{ni}>{b}^{n-1}{\delta }_{1}\right)=\frac{{\int }_{{b}^{n-1}{\delta }_{1}}^{+\mathrm{\infty }}tf\left(t\right)dt}{1-F\left({b}^{n-1}{\delta }_{1}\right)},$
$E\left({V}_{n{M}_{n}}|{V}_{n{M}_{n}}>{b}^{n-1}{\delta }_{1}\right)=\frac{{\int }_{0}^{{b}^{n-1}{\delta }_{1}}tf\left(t\right)dt}{F\left({b}^{n-1}{\delta }_{1}\right)}.$
It from Eq. (9), (10) know that it is easy to get $E\left({X}_{n}\right)$:
$E\left({X}_{n}\right)=\sum _{l=1}^{\infty }E\left({X}_{n}|{M}_{n}=l\right)p\left({M}_{n}=l\right)$$\mathrm{}\mathrm{}\mathrm{}\mathrm{}\mathrm{}\mathrm{}=\sum _{l=1}^{\infty }\left[\left(l-1\right)\
frac{{\int }_{{b}^{n-1}{\delta }_{1}}^{+\infty }tf\left(t\right)dt}{1-F\left({b}^{n-1}{\delta }_{1}\right)}+\frac{{\int }_{0}^{{b}^{n-1}{\delta }_{1}}tf\left(t\right)dt}{F\left({b}^{n-1}{\delta }_{1}
\right)}\right]{\left[1-F\left({b}^{n-1}{\delta }_{1}\right)\right]}^{l-1}F\left({b}^{n-1}{\delta }_{1}\right)$$=\frac{{\int }_{0}^{+\infty }tf\left(t\right)dt}{F\left({b}^{n-1}{\delta }_{1}\right)}=
\frac{{\int }_{0}^{+\infty }exp\left(-{\left(t{\eta }^{-1}\right)}^{\gamma }\right)\bullet {\left(\frac{t}{\eta }\right)}^{\gamma -1}\bullet \frac{\gamma }{\eta }dt}{1-exp\left(-{\left(t{\eta }^{-1}\
right)}^{\gamma }\right)}.$
From what has been discussed above, we can get $A\left(N\right)$ and $c\left(N\right)$.
5. Example
In this section, we through a numerical example for the validity of this model. Assume that the shocks will arrive according to a renewal process with interval having Weibull distribution $F\left(t\
right)=1-exp\left(-{\left(t{\eta }^{-1}\right)}^{\gamma }\right)$ related parameters settings are shown in Table 1.
Table 1Parameter setting table
${\delta }_{1}$ ${\delta }_{2}$ ${c}_{r}$ ${c}_{w}$ ${c}_{1}$ ${c}_{2}$ $R$ $k$ $b$ $\theta$ $\mu$ ${\beta }_{1}$ ${\beta }_{2}$ $\eta$ $\gamma$
4.5 1.5 30 80 50 80 400 0.85 1.02 0.75 6 2 4 10 2
In combination the above parameters with the explicit expression of $A\left(N\right)$ and $c\left(N\right)$, by the numerical calculation, we can know the concrete result $A\left(N\right)$ sand $c\
left(N\right)$, seeing in Table 2, and the trend of $A\left(N\right)$ with $N$ is shown in Fig. 1, and the $c\left(N\right)$ trend of with N is shown in Fig. 1. From Fig. 1 and Fig. 2 with help Table
2, it is easy that the min cost radio be $c\left(1\right)=-46.93$ with the system of steady-state availability $A\left(N\right)$ to the max, this is $A\left(1\right)=0.95$. In conclusion, the optimal
replacement policy ${N}^{*}$ is get.
Table 2The Result of cN and AN
$N$ $c\left(N\right)$ $N$ $c\left(N\right)$ $N$ $c\left(N\right)$ $N$ $A\left(N\right)$ $N$ $A\left(N\right)$ $N$ $A\left(N\right)$
1 -46.93 6 -24.93 11 -13.44 1 0.95 6 0.82 11 0.59
2 -40.22 7 -22.48 12 -12.11 2 0.91 7 0.79 12 0.53
3 -35.14 8 -20.31 13 -10.91 3 0.88 8 0.75 13 0.48
4 -31.08 9 -18.34 14 -9.84 4 0.86 9 0.71 14 0.43
5 -27.75 10 -14.92 15 -8.89 5 0.84 10 0.65 15 0.39
Fig. 1c(N) against N for Weibull distribution
Fig. 2A(N) against N for Weibull distribution
6. Conclusions
In this paper, we proposed a $\delta$-shock model based on multi-threshold. Based on different degrees of failure caused by different interval of two successive shocks, the different failure
thresholds were made. The failure level was determined by the failure threshold $\delta$. For a deteriorating system, it would be more fragile and much easier to break down after repair. We
characterized the deterioration as following: 1) the failure threshold of the system will increase; 2) the repair time will increase; 3) the reward of system operation will decrease, to make it much
more close to the reality. In this paper, the optimal replacement policy N was presented analytically. And it provided a numerical example to illustrate the proposed model, and validated the
rationality of the method. It would have theoretical and practical significance for the analysis of the degradation system maintenance policy.
In addition, the model applies to many reliability systems, for example electronic equipment, machinery and computer systems, which makes that the model is successful in a larger scale. In this
model, we do not need specific distribution of correlated random variables, thereby, the model needs less constrains. To make the model more perfect and practical significance, further research is
that preventive repair is taken into the model. Besides, as the system parameter, the threshold value can be estimated. Few papers studied it. A continuation of this work intends to investigate the
• Barlow R. E., Hunter L. C. Optimum preventive maintenance policy. Operations Research, Vol. 8, 1960, p. 99-100.
• Brown M., Proscham F. Imperfect repair. Application Probability, Vol. 20, 1983, p. 851-859.
• Lam Y. A geometric process maintenance model. Southeast Asian Bulletin of Mathematics, Vol. 27, 2003, p. 295-305.
• Lam Y. An optimal repairable replacement model for deteriorating systems. Journal of Applied Probability, Vol. 28, 1991, p. 843-851.
• Wang Guanjun, Zhang Yuan General δ-shock model and its optimal replacement policy. OR Transactions, Vol. 7, Issue 3, 2003, p. 75-82.
• Wang Xiaolin, Cheng Zhijun, Guo Bo, et al. Imperfect maintenance decision for a deteriorating system based on shock model. Systems Engineering Theory and Practice, Vol. 31, Issue 12, 2011, p.
• Cheng Guoqing, Li Ling, Liu Bingxiang, et al. Optimal maintenance strategy for an extended δ-shock model with multi-failure thresholds. Mathematics applicata, Vol. 26, Issue 1, 2013, p. 165-171.
• Ross S. M. Stochastic Processes. New York, Wiley, 1996.
About this article
15 September 2014
03 November 2014
improved δ-shock model
failure threshold δ
geometric process
replacement policy N
Copyright © 2014 JVE International Ltd.
This is an open access article distributed under the
Creative Commons Attribution License
, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. | {"url":"https://www.extrica.com/article/15553","timestamp":"2024-11-14T13:45:06Z","content_type":"text/html","content_length":"144242","record_id":"<urn:uuid:49a8fb8b-1bdb-4671-8a11-61c3ceaa4ee1>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00029.warc.gz"} |
How to Calculate the Difference Between Two Dates using PHP? - Local Coder
How to Calculate the Difference Between Two Dates using PHP?
When working with dates in PHP, you may come across situations where you need to calculate the difference between two dates. This can be useful in a variety of scenarios, such as calculating the age
of a person, determining the duration of an event, or finding the time passed since a specific date.
In this article, we will explore different methods to calculate the difference between two dates using PHP. We will discuss the use of built-in PHP functions, such as strtotime() and date_diff(), as
well as explore some custom approaches.
Using strtotime() Function
The strtotime() function in PHP is a powerful function that can convert a given string representation of a date and time into a Unix timestamp. By utilizing this function, we can easily calculate the
difference between two dates.
Here's an example of how to use strtotime() to calculate the difference between two dates:
$startDate = strtotime('2007-03-24');
$endDate = strtotime('2009-06-26');
$differenceInSeconds = $endDate - $startDate;
$years = floor($differenceInSeconds / (365 * 24 * 60 * 60));
$months = floor(($differenceInSeconds - ($years * 365 * 24 * 60 * 60)) / (30 * 24 * 60 * 60));
$days = floor(($differenceInSeconds - ($years * 365 * 24 * 60 * 60) - ($months * 30 * 24 * 60 * 60)) / (24 * 60 * 60));
echo $years . ' years, ' . $months . ' months, ' . $days . ' days';
In this code snippet, we first use the strtotime() function to convert the start and end dates into Unix timestamps. Then, we calculate the overall difference in seconds by subtracting the start date
from the end date.
Next, we calculate the number of years by dividing the difference in seconds by the number of seconds in a year (365 days * 24 hours * 60 minutes * 60 seconds). We use the floor() function to round
down the result to the nearest whole number.
Similarly, we calculate the number of months and days by using the same logic. We subtract the already calculated years and months from the overall difference in seconds to get the remaining time in
seconds, and then divide it by the number of seconds in a month (30 days * 24 hours * 60 minutes * 60 seconds) or a day (24 hours * 60 minutes * 60 seconds) respectively.
Finally, we print the calculated years, months, and days using the echo statement.
Using the date_diff() Function
If you are using PHP version 5.3.0 or higher, you have access to the date_diff() function, which provides a more streamlined way to calculate the difference between two dates.
Here's an example:
$startDate = new DateTime('2007-03-24');
$endDate = new DateTime('2009-06-26');
$difference = $startDate->diff($endDate);
echo $difference->y . ' years, ' . $difference->m . ' months, ' . $difference->d . ' days';
In this code snippet, we create two new DateTime objects representing the start and end dates. We then use the diff() method of the start date object, passing in the end date object, to calculate the
The diff() method returns a DateInterval object, which provides access to the calculated years, months, and days through its properties: y, m, and d.
Finally, we use the echo statement to display the calculated difference.
Alternative Approaches
Aside from using the built-in PHP functions, you can also implement your own custom logic to calculate the difference between two dates. However, this approach may require more effort and can be more
One possible approach is to convert the start and end dates into Julian Day Numbers (JDN) and find the difference between them. Here's an example:
$start = new DateTime('2007-03-24');
$end = new DateTime('2009-06-26');
$startJdn = gregoriantojd($start->format('n'), $start->format('j'), $start->format('Y'));
$endJdn = gregoriantojd($end->format('n'), $end->format('j'), $end->format('Y'));
$differenceJdn = $endJdn - $startJdn;
$years = floor($differenceJdn / 365);
$months = floor(($differenceJdn - ($years * 365)) / 30);
$days = $differenceJdn - ($years * 365) - ($months * 30);
echo $years . ' years, ' . $months . ' months, ' . $days . ' days';
In this example, we first create two new DateTime objects representing the start and end dates. We then use the format() method to extract the month, day, and year components of the dates.
Next, we convert the extracted components into Julian Day Numbers using the gregoriantojd() function. This function takes the month, day, and year as input parameters and returns the corresponding
Julian Day Number.
After obtaining the Julian Day Numbers for both dates, we subtract the start date's Julian Day Number from the end date's Julian Day Number to get the overall difference in days.
Finally, we calculate the number of years and months by dividing the difference in days by 365 and 30 respectively, and calculate the remaining days by subtracting the already calculated years and
months from the difference in days.
We then use the echo statement to display the calculated difference.
Calculating the difference between two dates can be a common task when working with dates in PHP. In this article, we explored three different methods to accomplish this task: using the strtotime()
function, the date_diff() function, and a custom approach with Julian Day Numbers.
By using these methods, you should be able to easily calculate the difference between two dates in various formats. You can choose the approach that best fits your needs based on your PHP version and
the desired level of customization. | {"url":"https://localcoder.net/how-to-calculate-the-difference-between-two-dates-using-php","timestamp":"2024-11-05T19:46:31Z","content_type":"text/html","content_length":"27987","record_id":"<urn:uuid:98b578d2-dfc5-45a8-9009-9977689d6358>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00823.warc.gz"} |
Radford-Majid bosonisation
nLab Radford-Majid bosonisation
Suppose $H$ is a cocommutative Hopf algebra in a braided monoidal category $C$ and suppose $H$ acts on another Hopf algebra $B$ in $C$. Then a smash product and smash coproduct of the two has a
structure of an ordinary Hopf algebra called a Radford biproduct or Majid bosonisation of $H$. In an intuitive description, it turns the braided statistics into a bosonic statistics. A particular
case is the case of super-Hopf algebras related to the more traditional physical concept of bosonization.
Majid uses English spelling bosonisation rather than bosonization, so it is a tradition that the following literature uses the same spelling for the algebraic concept.
One can perform bosonisation on Nichols algebras.
• David E. Radford, Hopf algebras with projection, J. Algebra 92 (1985) 322–347 doi
• Shahn Majid, Cross products by braided groups and bosonization, Journal of Algebra 163:1 (1994) 165–190 doi
• Shahn Majid, Double-bosonization of braided groups and the construction of $U_q(g)$, Math. Proc. Cambridge Philos. Soc. 125 (1999) 151–192
Last revised on October 20, 2024 at 18:33:33. See the history of this page for a list of all contributions to it. | {"url":"https://ncatlab.org/nlab/show/Radford-Majid+bosonisation","timestamp":"2024-11-09T05:01:06Z","content_type":"application/xhtml+xml","content_length":"16049","record_id":"<urn:uuid:50c48ec1-d258-43bf-a96d-303f68d4de8d>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00623.warc.gz"} |
Factorial Program in Python(Factorial of a Number in Python)
The factorial of a number is an essential concept in mathematics and programming. It is denoted as n!, which represents the product of all positive integers from 1 to n. For example, the factorial of
5 (written as 5!) is calculated as:
In Python, calculating the factorial of a number can be done in several ways: using loops, recursion, or built-in functions. In this article, we will explore various ways to write a program to find
the factorial of a number in Python, covering loops and recursion.
What is the Factorial of a Number?
Factorial is the product of all positive integers up to a given number. Mathematically, it is expressed as:
Special Case:
• The factorial of 0 is defined as 1 (0! = 1).
Factorial Program in Python Using For Loop
A for loop is one of the simplest ways to calculate the factorial of a number in Python. Here's how you can implement it:
def factorial_for_loop(n):
factorial = 1
for i in range(1, n + 1):
factorial *= i
return factorial
# Example
number = 5
print(f"Factorial of {number} using for loop is:", factorial_for_loop(number))
• The factorial_for_loop function takes an integer n as input.
• A loop runs from 1 to n, multiplying each number and storing the result in the factorial variable.
• The final result is returned after the loop completes.
Factorial of 5 using for loop is: 120
Factorial Program in Python Using While Loop
The while loop can also be used to calculate the factorial of a number. Here's a Python program to achieve that:
def factorial_while_loop(n):
factorial = 1
while n > 0:
factorial *= n
n -= 1
return factorial
# Example
number = 5
print(f"Factorial of {number} using while loop is:", factorial_while_loop(number))
• The factorial_while_loop function uses a while loop that continues to multiply the current value of n and decrements n in each iteration until it reaches 0.
Factorial of 5 using while loop is: 120
Factorial Program in Python Using Recursion
Recursion is a powerful concept where a function calls itself to solve smaller instances of the same problem. Here's how you can calculate the factorial using recursion:
def factorial_recursion(n):
if n == 0 or n == 1:
return 1
return n * factorial_recursion(n - 1)
# Example
number = 5
print(f"Factorial of {number} using recursion is:", factorial_recursion(number))
• The factorial_recursion function checks for the base case (n == 0 or n == 1), where it returns 1.
• Otherwise, it multiplies n by the factorial of n-1, recursively calculating the factorial.
Factorial of 5 using recursion is: 120
Recursive Process Breakdown:
For n = 5, the recursive function operates as follows:
factorial_recursion(5) = 5 * factorial_recursion(4)
factorial_recursion(4) = 4 * factorial_recursion(3)
factorial_recursion(3) = 3 * factorial_recursion(2)
factorial_recursion(2) = 2 * factorial_recursion(1)
factorial_recursion(1) = 1 (base case)
Factorial Program in Python Using Function
It’s common to encapsulate the factorial logic into a reusable Python function, which can be invoked multiple times with different values. Here’s an example:
def factorial(n):
if n == 0:
return 1
result = 1
for i in range(1, n + 1):
result *= i
return result
# Example
number = 6
print(f"Factorial of {number} is:", factorial(number))
• The factorial function is a general implementation that uses a for loop inside a function, returning the factorial result.
Factorial of 6 is: 720
Using Python's Built-in Function
If you prefer not to implement your own function, Python's math library provides a built-in function to calculate factorials.
import math
number = 5
print(f"Factorial of {number} using math.factorial is:", math.factorial(number))
Factorial of 5 using math.factorial is: 120
The math.factorial function handles the calculation internally and is highly optimized.
Use Cases of Factorial in Python
Factorials are important in mathematics and computer science because they help solve problems involving counting, arrangements, and probabilities. Here are a few key use cases:
1. Combinatorics:
Factorials are used to calculate permutations (arrangements) and combinations (selections) of objects. For example, the number of ways to arrange n objects is given by n!.
2. Probability:
Factorials are essential in probability calculations, especially in problems involving random selections or outcomes. They help compute the number of possible outcomes in events like lotteries or
3. Algorithms:
In computer science, factorials are used in recursive algorithms and problems like the traveling salesman problem, where all possible solutions must be evaluated.
4. Mathematical Series:
Factorials appear in expansions like the Taylor series, used to approximate functions in calculus.
5. Biology and Genetics:
Factorials help compute the possible sequences in DNA or protein structures, vital in genetics and bioinformatics.
6. Optimization Problems:
Factorials are used in scheduling, logistics, and pathfinding problems, where all possible routes or arrangements need to be considered.
Enhance your skills with our Python tutorial, explore the Python cheat sheet, and try our latest tool the online Python compiler!
In this article, we explored multiple ways to calculate the factorial of a number in Python and python programs to find factorial of a number, including the use of for loops, while loops, recursion,
and Python's built-in function. Each method has its use case:
• For loops: Easy to understand and efficient for small to medium-sized inputs.
• While loops: Offers flexibility if you need more control over the loop termination condition.
• Recursion: Elegant but may face limitations due to Python's recursion depth limits for large inputs.
• Built-in functions: Fast and optimized, recommended for large-scale applications.
Take the next step in your career with our data science course or advance further with a masters in data science! | {"url":"https://www.almabetter.com/bytes/articles/factorial-program-in-python","timestamp":"2024-11-13T22:20:28Z","content_type":"text/html","content_length":"1048986","record_id":"<urn:uuid:d373468a-816b-4341-8811-6d8d172ffbf8>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00159.warc.gz"} |
Reasoning Ability Quiz For SBI Clerk Prelims 2021- 13th May
Q1. A person starts walking towards point Y which in North direction of starting point. From point Y, he start walking in west direction then he takes right turn to reach at point W. From point W, he
takes a right turn and stop at point J. Find in which direction person is facing now?
(a) North
(b) South
(c) East
(d) West
(e) None of these
Q2. Point H is 5m west of J. Point J is 10m south of point T. Point T is 12m west of G. Point G is 6m north of Point D. Point D is 15m east of Point Y. What is the shortest distance between Point J
and Point Y?
(a) 10m
(b) 4m
(c) 9m
(d) 7m
(e) None of these
Directions (3-7): Study the information carefully and answer the questions given below.
Amit starts his journey from point S, walks 20m towards west to reach at point D, then takes a left turn and walks for 8m to reach point N. From point N he takes a left turn and walks for 18m to
reach at point X. From point X, he takes a left turn and walks 12m to reach at point Y, then takes a right turn and walks 6m to reach point T. From point T, He walks 4m in south direction and reached
point M.
Q3. What is the shortest distance between point S and M?
(a) 4m
(b) 6m
(c) 2m
(d) 5m
(e) None of these
Q4. If point H is 6m west of point M then in which direction point Y with respect to point H?
(a) South
(b) North-west
(c) South-east
(d) East
(e) North
Q5. In which direction point T with respect of D?
(a) North
(b) South-west
(c) North-east
(d) East
(e) South-east
Q6. What is the shortest distance between Point D and Point X?
(a) √388m
(b) 10m
(c) 15m
(d) 20m
(e) None of these
Q7. In which direction point N is with respect of Y?
(a) North-east
(b) South-west
(c) North-west
(d) south
(e) None of these
Directions (8-10): Study the information carefully and answer the questions given below.
Point A is 12m west of point B. Point F is 8m east of point D. Point G is 10m north of point E. Point C is 8m south of point B. Point D is 3m north of point C. Point E is 6m west of point C.
Q8. What is the shortest distance between D and A?
(a) 13m
(b) 10m
(c) 12m
(d) 14m
(e) None of these
Q9. Point D is in which direction with respect to G?
(a) North-East
(b) South-west
(c) East
(d) North
(e) South-east
Q10. If Point Y is 7m south of point G than what is the shortest distance between Point Y and Point F?
(a) 16m
(b) 14m
(c) 13m
(d) 10m
(e) None of these
Directions (11-13): Study the information carefully and answer the questions given below.
Dheeraj starts his journey from point S, walks 10m towards North to reach at point D, then takes a right turn and walks 6m to reach point F. From point F, He start walking in west direction and walk
4m to reach at point N, From point N he takes a left turn and walks 9m to reach at point M, then takes a left turn again and walks 4m to reach point T.
Q11. What is the shortest distance between point T and F?
(a) 6m
(b) 5m
(c) 8m
(d) 9m
(e) None of these
Q12. In which direction point N with respect to T?
(a) North
(b) North east
(c) South east
(d) North west
(e) Can’t be determined
Q13. If point K is 2m north of point S and the shortest distance between point K and F is 10m then, what is the distance between point D and K?
(a) 6m
(b) 8m
(c) 3m
(d) 10m
(e) None of these
Directions (14-15): Study the information carefully and answer the questions given below.
A person starts walking from point X to the east direction, after walking of 5km to reach point P from there he turns towards left and then walk 4km to reach point Q. Then turn to his right and walk
5km to reach point R then turn to his right and walk 4km to reach point S then move towards his left and walk 3km to reach point T then again turn right and walk 3km to reach point U finally turn to
his right and walk 8km to reach point Y.
Q14. How far (Shortest Distance) and which direction is point X with respect to point Y?
(a) √30 km North
(b) 34 km North-East
(c) 2√34 km South-West
(d) √34 km North-West
(e) None of these.
Q15. In which direction is point T with respect to point Y?
(a) North east
(b) South east
(c) North
(d) West
(e) South west
Practice More Questions of Reasoning for Competitive Exams: | {"url":"https://www.bankersadda.com/reasoning-ability-quiz-for-sbi-clerk-prelims-2021-13th-may/","timestamp":"2024-11-11T23:36:21Z","content_type":"text/html","content_length":"613335","record_id":"<urn:uuid:bf0dba4b-c70b-4617-9bc2-393d959fa7e5>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00148.warc.gz"} |
P-V diagram in context of pressure volume work
31 Aug 2024
Journal of Thermodynamics and Mechanics
Volume 12, Issue 3, 2022
The P-V Diagram: A Fundamental Tool for Understanding Pressure-Volume Work
The P-V diagram is a graphical representation of the relationship between pressure (P) and volume (V) in a thermodynamic system. It plays a crucial role in understanding the concept of
pressure-volume work, which is essential in various fields such as mechanical engineering, chemical engineering, and physics. In this article, we will discuss the P-V diagram, its significance, and
the formulae associated with it.
The P-V diagram is a graphical representation of the relationship between pressure (P) and volume (V) in a thermodynamic system. It is a fundamental tool for understanding the behavior of gases and
liquids under various conditions. The diagram is typically plotted on a coordinate plane, where the x-axis represents the volume (V) and the y-axis represents the pressure (P).
The P-V relationship can be described by the following formula:
P = -dU/dV
where P is the pressure, dU is the change in internal energy, and dV is the change in volume.
This equation shows that the pressure of a system is directly proportional to the negative change in internal energy per unit change in volume.
Another important formula related to the P-V diagram is:
W = ∫P dV
where W is the work done on or by the system, and the integral represents the area under the P-V curve.
The P-V diagram has significant implications for understanding pressure-volume work. It allows us to visualize the relationship between pressure and volume in a thermodynamic system, which is
essential for designing and optimizing various systems such as engines, compressors, and pumps.
In addition, the P-V diagram provides valuable information about the stability of a system under different conditions. For example, if the P-V curve shows a sudden increase in pressure at a specific
volume, it may indicate an unstable condition that requires attention.
The P-V diagram is a fundamental tool for understanding pressure-volume work and its significance in various fields such as mechanical engineering, chemical engineering, and physics. The formulae
associated with the P-V diagram provide valuable insights into the behavior of thermodynamic systems under different conditions.
Related articles for ‘pressure volume work’ :
• Reading: P-V diagram in context of pressure volume work
Calculators for ‘pressure volume work’ | {"url":"https://blog.truegeometry.com/tutorials/education/bf1c3341321cdc59118f7e0bc5b9340b/JSON_TO_ARTCL_P_V_diagram_in_context_of_pressure_volume_work.html","timestamp":"2024-11-04T08:09:59Z","content_type":"text/html","content_length":"16973","record_id":"<urn:uuid:d5bd54fc-5c42-490c-ab97-e2089367579d>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00450.warc.gz"} |
What is a Bar Graph ? Definition and Examples
What is a bar graph ?
A bar graph is a kind of graph that we use to compare quantities. Rectangular bars are used to compare these quantities and we can arrange these bars either vertically or horizontally.
The vertical bar graph below shows people's favorite seasons of the year.
1. What is the least favorite season according to the bar graph?
The least favorite season is Winter.
2. What is the most favorite season according to the bar graph?
The most favorite season is summer.
3. About how many people like summer more than winter?
Seven people said they preferred summer and four people said they preferred summer. Therefore, three more people preferred summer than winter.
We can also arrange the information with a horizontal bar graph. | {"url":"https://www.math-dictionary.com/what-is-a-bar-graph.html","timestamp":"2024-11-14T00:37:46Z","content_type":"text/html","content_length":"23687","record_id":"<urn:uuid:1e8b9831-dca5-4757-898b-85966f55ed98>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00851.warc.gz"} |
The prediction of pseudo-resonance positions in the Schwinger variational principle
The Schwinger Variational Principle is appled to s-wave electron-hydrogen atom scattering. It is shown computationally, that, consistent with a paper by B. Apagyi, P. Levay, and K. Ladanyi, there are
pseudo-resonances at the static exchange level of approximation, but not at the static level. The T-matrix was employed, as well as, the K-matrix version of the Schwinger Principle, with a real
Slater basis, and obtained the same results in both. The origin of the pseudo-resonances as resulting from singularities in the separable potential that is effectively employed in the
Lippmann-Schwinger equation from which the Schwinger Variational Principle can be derived. The determination of the pseudo-resonance parameters from the separable potential is computationally
inexpensive and may be used to predict the pseudo-resonance parameters for the scattering calculations so that they may be avoided.
Space Science and Engineering Research Forum
Pub Date:
□ S Waves;
□ Scattering;
□ Static Characteristics;
□ Variational Principles;
□ Low Cost;
□ Matrices (Mathematics);
□ Singularity (Mathematics);
□ Atomic and Molecular Physics | {"url":"https://ui.adsabs.harvard.edu/abs/1989sser.proc...77W/abstract","timestamp":"2024-11-11T17:28:25Z","content_type":"text/html","content_length":"35874","record_id":"<urn:uuid:a6f0227f-d932-4dc0-94b9-9bbaeacde7a2>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00462.warc.gz"} |
Simple Scheduling Problem | AIMMS Community
Hello, i am new to aimms and completed the beginners tutorial and tried to understand the machine planning example, but it was to complicated for me to understand all of it.
I want to solve a scheduling problem but i have problems creating the model. Let me explain my situation: In the production there are build articles from different orders and they are transported
inside the company within containers. The transport limitation is 8 containers an hour. I want to solve the problem and want the best order which is <= 8 containers an hour, if not possible, than the
next following orders should be so small, that it will be compensated, if not possible, a gap with no production time should be inserted.
It would be ideal, that the same article is produced in one step (e.g. Article 52107 has to be build for 3 different customer orders). After all, i want to visualize it. My first problem is, that i
don’t know how to produce on order after another and make the timeline to calculate the demand of containers per hour and how i iterate the different variations and store the best possibility.
I have attached the sample data. I would be glad, if anybody can provide me an example or help :) | {"url":"https://community.aimms.com/aimms-language-12/simple-scheduling-problem-595?sort=dateline.desc","timestamp":"2024-11-07T14:07:05Z","content_type":"text/html","content_length":"158654","record_id":"<urn:uuid:715cbea5-6bbb-4ed7-be04-d3229795e593>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00229.warc.gz"} |
large iron ore mining process optimization
The simultaneous stochastic optimization of mining complexes optimizes various components of the related mineral value chain jointly while considering material supply (geological) uncertainty. As a
result, the optimization process capitalizes on the synergies between the components of the system while not only quantifying and considering geological uncertainty, but also producing strategic ... | {"url":"https://legitemauve.fr/04-15/large-iron-ore-mining-process-optimization.html","timestamp":"2024-11-03T12:23:17Z","content_type":"text/html","content_length":"48226","record_id":"<urn:uuid:3ea37ee4-25c7-448d-9c4f-0ab6d65d0882>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00344.warc.gz"} |
Universitat de Girona. Departament d’Informà tica i Matemà tica Aplicada
Boezio, M.N.M.
Costa, J.F.C.L.
Koppe, J.C.
Risk assessment and economic evaluation of mining projects are mainly affected by the determination of grades and tonnages. In the case of iron ore, multiple variables must be determined for ore
characterization which estimation must satisfy the original mass balances and stoichiometry among granulometric fractions and chemical species. Models of these deposits are generally built from
estimates obtained using ordinary kriging or cokriging, most time using solely the global grades and determining the ones present at different granulometric partitions by regression. Alternative
approaches include determining the totality of the chemical species and distributing the closing error or leaving one variable aside and determining it by difference afterwards, adding up the error
of previous determinations. Furthermore, the estimates obtained are outside the interval of the original variables or even exhibiting negative values. These inconsistencies are generally overridden
by post-processing the estimates to satisfy the closed sum condition and positiveness. In this paper, cokriging of additive log-ratios (alr) is implemented to determine global grades of iron,
silica, alumina, phosphorous, manganese and loss by ignition and masses of three different granulometric partitions, providing better results than the ones obtained through cokriging of the original
variables, with all the estimates within the original data values interval and satisfying the considered mass balances
Universitat de Girona. Departament d’Informà tica i Matemà tica Aplicada
Tots els drets reservats
EstadÃstica matemà tica -- Congressos
Mathematical statistics -- Congresses
GeoquÃmica -- Mètodes estadÃstics -- Congressos
Geochemistry -- Statistical methods -- Congresses
Geologia -- Mètodes estadÃstics -- Congressos
Geology -- Statistical methods -- Congresses
Ordinary Cokriging of Additive Log-Ratios for Estimating Grades in Iron Ore Deposits | {"url":"http://dugi.udg.edu/item/http:@@@@hdl.handle.net@@2072@@299048","timestamp":"2024-11-06T01:20:44Z","content_type":"application/xhtml+xml","content_length":"32050","record_id":"<urn:uuid:ef050029-70f9-4ca4-9ef4-663a1f05dd97>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00867.warc.gz"} |
Simultaneous Stoquasticity
Title Simultaneous Stoquasticity
Publication Journal Article
Year of 2022
Authors Bringewatt, J, Brady, LT
Journal Phys. Rev. A
Volume 105
Issue 062601
Date 06/09/2022
Keywords FOS: Physical sciences, Quantum Physics (quant-ph)
Stoquastic Hamiltonians play a role in the computational complexity of the local Hamiltonian problem as well as the study of classical simulability. In particular, stoquastic Hamiltonians
can be straightforwardly simulated using Monte Carlo techniques. We address the question of whether two or more Hamiltonians may be made simultaneously stoquastic via a unitary
transformation. This question has important implications for the complexity of simulating quantum annealing where quantum advantage is related to the stoquasticity of the Hamiltonians
Abstract involved in the anneal. We find that for almost all problems no such unitary exists and show that the problem of determining the existence of such a unitary is equivalent to identifying
if there is a solution to a system of polynomial (in)equalities in the matrix elements of the initial and transformed Hamiltonians. Solving such a system of equations is NP-hard. We
highlight a geometric understanding of this problem in terms of a collection of generalized Bloch vectors.
URL https://arxiv.org/abs/2202.08863
DOI 10.1103/PhysRevA.105.062601 | {"url":"https://quics.umd.edu/publications/simultaneous-stoquasticity","timestamp":"2024-11-10T09:28:12Z","content_type":"text/html","content_length":"21575","record_id":"<urn:uuid:619d97a3-2dd7-4e6e-87a5-a7b10790ed8c>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00202.warc.gz"} |
62.8 kg to lbs
To convert kilograms (kg) to pounds (lbs), you can use the following step-by-step instructions:
Step 1: Understand the conversion factor
1 kilogram (kg) is equal to 2.20462 pounds (lbs). This means that to convert kg to lbs, you need to multiply the weight in kg by 2.20462.
Step 2: Set up the conversion equation
Let’s denote the weight in kg as W_kg and the weight in lbs as W_lbs. The conversion equation can be written as:
W_lbs = W_kg * 2.20462
Step 3: Plug in the given weight in kg
In this case, the given weight is 62.8 kg. So, we substitute W_kg = 62.8 in the equation:
W_lbs = 62.8 * 2.20462
Step 4: Perform the calculation
Multiply 62.8 by 2.20462 using a calculator or by hand:
W_lbs = 138.891376
Step 5: Round the result (if necessary)
Since weight is typically rounded to the nearest whole number, you can round 138.891376 to the nearest pound:
W_lbs ≈ 139 lbs
Therefore, 62.8 kg is approximately equal to 139 lbs.
Visited 2 times, 1 visit(s) today | {"url":"https://unitconvertify.com/weight/62-8-kg-to-lbs/","timestamp":"2024-11-02T01:30:13Z","content_type":"text/html","content_length":"43020","record_id":"<urn:uuid:22273b36-c9d6-4b93-88dc-e5c3ed3c8a2d>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00285.warc.gz"} |
BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Sabre//Sabre VObject 4.5.5//EN CALSCALE:GREGORIAN X-WR-CALNAME:Zahlentheorie BEGIN:VTIMEZONE TZID:Europe/Zurich X-LIC-LOCATION:Europe/Zurich TZURL:http://
tzurl.org/zoneinfo/Europe/Zurich BEGIN:DAYLIGHT TZOFFSETFROM:+0100 TZOFFSETTO:+0200 TZNAME:CEST DTSTART:19810329T020000 RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU END:DAYLIGHT BEGIN:STANDARD
TZOFFSETFROM:+0200 TZOFFSETTO:+0100 TZNAME:CET DTSTART:19961027T030000 RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU END:STANDARD END:VTIMEZONE BEGIN:VEVENT UID:news1740@dmi.unibas.ch DTSTAMP;TZID=Europe/
Zurich:20241010T184436 DTSTART;TZID=Europe/Zurich:20241031T141500 SUMMARY:Number Theory Seminar: Alina Ostafe (University of New South Wales) DESCRIPTION:Title: On the frequency of primes preserving
dynamical irreduci bility of polynomials\\r\\nAbstract: In this talk we address an open quest ion in arithmetic dynamics regarding the frequency of primes modulo which all the iterates of an integer
polynomial remain irreducible. More precise ly\, for a class of integer polynomials $f$\, which in particular includes all quadratic polynomials\, we show that\, under some natural conditions\ , the
set of primes $p$ such that all iterates of $f$ are irreducible modu lo $p$ is of relative density zero. Our results rely on a combination of a nalytic (Selberg's sieve) and Diophantine (finiteness
of solutions to cert ain hyperelliptic equations) tools\, which we will briefly describe. Joint wok with Laszlo Mérai and Igor Shparlinski (2021\, 2024).\\r\\nSpiegelga sse 5\, Seminarraum 05.002
Title: On the frequency of primes preserving dynamical irredu cibility of polynomials
Abstract: In this talk we address an open question in arithmetic dynamics regarding the frequency of primes modulo w hich all the iterates of an integer polynomial remain irreducible. More pr ecisely
\, for a class of integer polynomials $f$\, which in particular inc ludes all quadratic polynomials\, we show that\, under some natural condit ions\, the set of primes $p$ such that all iterates of
$f$ are irreducible modulo $p$ is of relative density zero. Our results rely on a combination of analytic (Selberg's sieve) and Diophantine (finiteness of solutions to certain hyperelliptic
equations) tools\, which we will briefly describe. Joint wok with Laszlo Mérai and Igor Shparlinski (2021\, 2024).
S piegelgasse 5\, Seminarraum 05.002
DTEND;TZID=Europe/Zurich:20241031T151500 END:VEVENT BEGIN:VEVENT UID:news1739@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20241018T121606 DTSTART;TZID=Europe/Zurich:20241024T141500 SUMMARY:Number Theory
Seminar: Michael Stoll (Universität Bayreuth) DESCRIPTION:Titel: Conjectural asymptotics of prime orders of points on ell iptic curves over number fields Abstract: Define\, for a positive integer
~$d$\, $S(d)$ to be the set of all primes $p$ that occur as the order of a point $P \\in E(K)$ on an elliptic curve $E$ defined over a number field $K$ of degree $d$. We discuss how some plausible
conjectures on the sparsi ty of newforms with certain properties would allow us to deduce a fairly p recise result on the asymptotic behavior of $\\max S(d)$ as $d$ tends to i nfinity. This is joint
work with Maarten Derickx.\\r\\nLocation: Spiegelg asse 5\, Seminarraum 05.002 X-ALT-DESC:
Titel: Conjectural asymptotics of prime orders of points on e lliptic curves over number fields
Abstract: Define\, for a po sitive integer~$d$\, $S(d)$ to be the set of all primes $p$ that occur as the order of a point $P \\in E(K)$ on an elliptic curve $E$ defined over a number field $K$ of
degree $d$. We discuss how some plausible conjectures on the sparsity of newforms with certain properties would allow us to ded uce a fairly precise result on the asymptotic behavior of $\\max S(d)$
as $d$ tends to infinity.
This is joint work with Maarten Derick x.
Location: Spiegelgasse 5\, Seminarraum 05.002
DTEND;TZID=Europe/Zurich:20241024T151500 END:VEVENT BEGIN:VEVENT UID:news1738@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20241009T144613 DTSTART;TZID=Europe/Zurich:20241010T104500 SUMMARY:Number Theory
Seminar: Stefan Kebekus (Universität Freiburg) DESCRIPTION:Title: Extension Theorems for differential forms and applicatio ns\\r\\nAbstract: We present new extension theorems for differential forms
on singular complex spaces and explain their use in the study of minimal varieties. We survey a number of applications\, pertaining to classificati on and characterisation of special varieties\,
non-Abelian Hodge Theory in the singular setting\, and quasi-étale uniformization.\\r\\nLocation: H örsaal 114\, Kollegienhaus\\r\\nPlease carefully note the unusual time an d location. X-ALT-DESC:
Title: Extension Theorems for differential forms and applicat ions
Abstract: We present new extension theorems for differential forms on singular complex spaces and explain their use in the study of min imal varieties. We survey a number of applications\, pertaining
to classif ication and characterisation of special varieties\, non-Abelian Hodge Theo ry in the singular setting\, and quasi-étale uniformization.
Loca tion: Hörsaal 114\, Kollegienhaus
Please carefully note t he unusual time and location.
DTEND;TZID=Europe/Zurich:20241010T120000 END:VEVENT BEGIN:VEVENT UID:news1679@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20240320T213152 DTSTART;TZID=Europe/Zurich:20240411T093000 SUMMARY:Rhine Seminar
on Transcendence Basel-Freiburg-Strasbourg DESCRIPTION:More information on the website:\\r\\nhttps://rhine-transcenden ce.github.io/meeting5 [https://rhine-transcendence.github.io/meeting5]
More information on the website:
https://rhine-transcendence.github.io /meeting5
DTEND;TZID=Europe/Zurich:20240411T161000 END:VEVENT BEGIN:VEVENT UID:news1592@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20231016T134156 DTSTART;TZID=Europe/Zurich:20231110T140000 SUMMARY:Number Theory
Days 2023 DESCRIPTION:Weitere Informationen zur Konferenz finden Sie hier: https://nu mbertheory.dmi.unibas.ch/ntd2023/index.html [https://numbertheory.dmi.unib as.ch/ntd2023/index.html]\\r\\nDie
Registrierung ist kostenlos\, aber obli gatorisch. Registrieren Sie sich bitte hier: Registrierung [https://forms. gle/FWiTsAM5mP6MQhjFA]. X-ALT-DESC:
Weitere Informationen zur Konferenz finden Sie hier: https://numbertheo ry.dmi.unibas.ch/ntd2023/index.html
Die Registrierung ist kost enlos\, aber obligatorisch. Registrieren Sie sich bitte hier: Registrierung.
DTEND;TZID=Europe/Zurich:20231111T113000 END:VEVENT BEGIN:VEVENT UID:news1371@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20220519T114925 DTSTART;TZID=Europe/Zurich:20220526T170000 SUMMARY:Number Theory
Web Seminar: Yunqing Tang (Princeton University) DESCRIPTION:In this talk\, we will discuss the proof of the unbounded denom inators conjecture on Fourier coefficients of SL_2(Z)-modular forms\, and
the proof of irrationality of 2-adic zeta value at 5. Both proofs use an a rithmetic holonomicity theorem\, which can be viewed as a refinement of An dré’s algebraicity criterion. If time permits\,
we will give a proof of the arithmetic holonomicity theorem via the slope method a la Bost.\\r\\n This is joint work with Frank Calegari and Vesselin Dimitrov.\\r\\nFor fur ther information about the
seminar\, please visit this webpage [https://ww w.ntwebseminar.org/]. X-ALT-DESC:
In this talk\, we will discuss the proof of the unb ounded denominators conjecture on Fourier coefficients of SL_2(Z)-modular forms\, and the proof of irrationality of 2-adic zeta value at 5. Both
pro ofs use an arithmetic holonomicity theorem\, which can be viewed as a refi nement of André’s algebraicity criterion. If time permits\, we will giv e a proof of the arithmetic holonomicity theorem
via the slope method a la Bost.
This is joint work with Frank Calegari and Vessel in Dimitrov.
For further information about the seminar\, please vi sit this webpage.
DTEND;TZID=Europe/Zurich:20220526T180000 END:VEVENT BEGIN:VEVENT UID:news1370@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20220519T114626 DTSTART;TZID=Europe/Zurich:20220519T170000 SUMMARY:Number Theory
Web Seminar: Jeffrey Vaaler (University of Texas at A ustin) DESCRIPTION:The abstract of the talk is here [https://drive.google.com/file /d/1VDQLDlcC3IDEMduR6H-X9Rf0jRxSZ_J-/view] available.\\r\\nFor
further inf ormation about the seminar\, please visit this webpage [https://www.ntwebs eminar.org/]. X-ALT-DESC:
The abstract of the talk is here available.
\n< p>For further information about the seminar\, please visit this webpage. DTEND;TZID=Europe/Zurich:20220519T180000 END:VEVENT BEGIN:VEVENT UID:news1363@dmi.unibas.ch DTSTAMP;TZID=Europe/
Zurich:20220502T112414 DTSTART;TZID=Europe/Zurich:20220512T170000 SUMMARY:Number Theory Web Seminar: Robert Charles Vaughan (Pennsylvania St ate University) DESCRIPTION:The abstract of the talk is
here [https://drive.google.com/file /d/17K_PLvpAkfZ3S5nw2yOWQKC18MgLl2rC/view] available:\\r\\nFor further inf ormation about the seminar\, please visit this webpage [https://www.ntwebs eminar.org/].
The abstract of the talk is here available:
\n< p>For further information about the seminar\, please visit this webpage. DTEND;TZID=Europe/Zurich:20220512T180000 END:VEVENT BEGIN:VEVENT UID:news1306@dmi.unibas.ch DTSTAMP;TZID=Europe/
Zurich:20220502T111049 DTSTART;TZID=Europe/Zurich:20220505T170000 SUMMARY:Number Theory Web Seminar: Levent Alpöge (Harvard University) DESCRIPTION:It's easy that 0% of integers are the sum of two
integral cubes (allowing opposite signs!).\\r\\nI will explain joint work with Bhargava and Shnidman in which we show:\\r\\n1. At least a sixth of integers are no t the sum of two rational cubes\,\\r
\\nand\\r\\n2. At least a sixth of odd integers are the sum of two rational cubes!\\r\\n(--- with 2. relying on new 2-converse results of Burungale-Skinner.)\\r\\nThe basic principle is that "there
aren't even enough 2-Selmer elements to go around" to contradi ct e.g. 1.\, and we show this by using the circle method "inside" the usua l geometry of numbers argument applied to a particular
coregular represent ation. Even then the resulting constant isn't small enough to conclude 1.\ , so we use the clean form of root numbers in the family x^3 + y^3 = n and the p-parity theorem of
Nekovar/Dokchitser-Dokchitser to succeed.\\r\\nFo r further information about the seminar\, please visit this webpage [https ://www.ntwebseminar.org/]. X-ALT-DESC:
It's easy that 0% of integers are the sum of two in tegral cubes (allowing opposite signs!).
I will explain joint work with Bhargava and Shnidman in which we show:
1. At least a sixth of integers are not the sum of two rational cubes\,
2. At least a sixth of odd integers are the sum of two rational cu bes!
(--- with 2. relying on new 2-converse results of Burungale-S kinner.)
The basic principle is that "there aren't even enough 2-Selmer elements to go around" to contradict e.g. 1.\, and we show this by using the circle method "inside" the usual geometry of numbers ar
gument applied to a particular coregular representation. Even then the res ulting constant isn't small enough to conclude 1.\, so we use the clean fo rm of root numbers in the family x^3 + y^3 = n
and the p-parity theorem of Nekovar/Dokchitser-Dokchitser to succeed.
For further information about the seminar\, please visit this webpage.
DTEND;TZID=Europe/Zurich:20220505T180000 END:VEVENT BEGIN:VEVENT UID:news1307@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20220425T100745 DTSTART;TZID=Europe/Zurich:20220428T170000 SUMMARY:Number Theory
Web Seminar: Andrew Granville (Université de Montré al) DESCRIPTION:In 1878\, in the first volume of the first mathematics journal published in the US\, Edouard Lucas wrote 88 pages (in French) on
linear r ecurrence sequences\, placing Fibonacci numbers and other linear recurrenc e sequences into a broader context. He examined their behaviour locally as well as globally\, and asked several
questions that influenced much resea rch in the century and a half to come.\\r\\nIn a sequence of papers in the 1930s\, Marshall Hall further developed several of Lucas' themes\, includ ing studying
and trying to classify third order linear divisibility sequen ces\; that is\, linear recurrences like the Fibonacci numbers which have t he additional property that $F_m$ divides $F_n$ whenever $m$
divides $n$. Because of many special cases\, Hall was unable to even conjecture what a general theorem should look like\, and despite developments over the years by various authors\, such as Lehmer\,
Morgan Ward\, van der Poorten\, Bez ivin\, Petho\, Richard Guy\, Hugh Williams\,... with higher order linear d ivisibility sequences\, even the formulation of the classification has rem ained
mysterious.\\r\\nIn this talk we present our ongoing efforts to clas sify all linear divisibility sequences\, the key new input coming from a w onderful application of the Schmidt/Schlickewei
subspace theorem from the theory of diophantine approximation\, due to Corvaja and Zannier.\\r\\nFor further information about the seminar\, please visit this webpage [https: //www.ntwebseminar.org
/]. X-ALT-DESC:
In 1878\, in the first volume of the first mathemat ics journal published in the US\, Edouard Lucas wrote 88 pages (in French) on linear recurrence sequences\, placing Fibonacci numbers and other
line ar recurrence sequences into a broader context. He examined their behaviou r locally as well as globally\, and asked several questions that influence d much research in the century and a half to
In a sequence of papers in the 1930s\, Marshall Hall further developed several of Lucas' themes\, including studying and trying to classify third order l inear divisibility sequences\; that is\,
linear recurrences like the Fibon acci numbers which have the additional property that $F_m$ divides $F_n$ w henever $m$ divides $n$. Because of many special cases\, Hall was unable t o even
conjecture what a general theorem should look like\, and despite de velopments over the years by various authors\, such as Lehmer\, Morgan War d\, van der Poorten\, Bezivin\, Petho\, Richard Guy\,
Hugh Williams\,... w ith higher order linear divisibility sequences\, even the formulation of t he classification has remained mysterious.
In this talk we present our ongoing efforts to classify all linear divisibility sequenc es\, the key new input coming from a wonderful application of the Schmidt/ Schlickewei subspace theorem from
the theory of diophantine approximation\ , due to Corvaja and Zannier.
For further information about the se minar\, please visit this webpage< /a>.
DTEND;TZID=Europe/Zurich:20220428T180000 END:VEVENT BEGIN:VEVENT UID:news1304@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20220325T101357 DTSTART;TZID=Europe/Zurich:20220421T170000 SUMMARY:Number Theory
Web Seminar: Joni Teräväinen (University of Turku) DESCRIPTION:I will discuss the short interval behaviour of the von Mangoldt and Möbius functions twisted by exponentials. I will in particular menti
on new results on sums of these functions twisted by polynomial exponentia l phases\, or even more general nilsequence phases. I will also discuss co nnections to Chowla's conjecture. This is based
on joint works with Kaisa Matomäki\, Maksym Radziwiłł\, Xuancheng Shao\, Terence Tao and Tamar Zi egler.\\r\\nFor further information about the seminar\, please visit this webpage [https://
www.ntwebseminar.org/]. X-ALT-DESC:
I will discuss the short interval behaviour of the von Mangoldt and Möbius functions twisted by exponentials. I will in part icular mention new results on sums of these functions twisted by polynomia
l exponential phases\, or even more general nilsequence phases. I will als o discuss connections to Chowla's conjecture. This is based on joint works with Kaisa Matomäki\, Maksym Radziwiłł\,
Xuancheng Shao\, Terence Tao and Tamar Ziegler.
For further information about the seminar\, ple ase visit this webpage.
DTEND;TZID=Europe/Zurich:20220421T180000 END:VEVENT BEGIN:VEVENT UID:news1303@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20220411T135326 DTSTART;TZID=Europe/Zurich:20220414T170000 SUMMARY:Number Theory
Web Seminar: Ram Murty (Queen's University) DESCRIPTION:There is a probability distribution attached to the Riemann zet a function which allows one to formulate the Riemann hypothesis in terms o f
the cumulants of this distribution and is due to Biane\, Pitman and Yor. The cumulants can be related to generalized Euler-Stieltjes constants and to Li's criterion for the Riemann hypothesis. We
will discuss these resul ts and present some new results related to this theme.\\r\\nFor further in formation about the seminar\, please visit this webpage [https://www.ntweb seminar.org/].
There is a probability distribution attached to the Riemann zeta function which allows one to formulate the Riemann hypothesi s in terms of the cumulants of this distribution and is due to Biane\,
Pit man and Yor. The cumulants can be related to generalized Euler-Stieltjes c onstants and to Li's criterion for the Riemann hypothesis. We will discuss these results and present some new results
related to this theme.
For further information about the seminar\, please visit this webpage.
DTEND;TZID=Europe/Zurich:20220414T180000 END:VEVENT BEGIN:VEVENT UID:news1302@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20220325T101305 DTSTART;TZID=Europe/Zurich:20220407T170000 SUMMARY:Number Theory
Web Seminar: Ana Caraiani (Imperial College London) DESCRIPTION:Shimura varieties are certain highly symmetric algebraic variet ies that generalise modular curves and that play an important role in
the Langlands program. In this talk\, I will survey recent vanishing conjectur es and results about the cohomology of Shimura varieties with torsion coef ficients\, under both local and global
representation-theoretic conditions . I will illustrate the geometric ingredients needed to establish these re sults using the toy model of the modular curve. I will also mention severa l
applications\, including to (potential) modularity over CM fields.\\r\\n For further information about the seminar\, please visit this webpage [htt ps://www.ntwebseminar.org/]. X-ALT-DESC:
Shimura varieties are certain highly symmetric alge braic varieties that generalise modular curves and that play an important role in the Langlands program. In this talk\, I will survey recent
vanishi ng conjectures and results about the cohomology of Shimura varieties with torsion coefficients\, under both local and global representation-theoreti c conditions. I will illustrate the
geometric ingredients needed to establ ish these results using the toy model of the modular curve. I will also me ntion several applications\, including to (potential) modularity over CM f ields.
For further information about the seminar\, please visit th is webpage.
DTEND;TZID=Europe/Zurich:20220407T180000 END:VEVENT BEGIN:VEVENT UID:news1339@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20220325T101040 DTSTART;TZID=Europe/Zurich:20220331T170000 SUMMARY:Number Theory
Web Seminar: William Chen (Institute for Advanced Stu dy) DESCRIPTION:In this talk we will show that the integral points of the Marko ff equation x^2 + y^2 + z^2 - xyz = 0 surject onto its F_p-points
for all but finitely many primes p. This essentially resolves a conjecture of Bour gain\, Gamburd\, and Sarnak\, and a question of Frobenius from 1913. The p roof relates the question to the
classical problem of classifying the conn ected components of the Hurwitz moduli spaces H(g\,n) classifying finite c overs of genus g curves with n branch points. Over a century ago\, Clebsch and
Hurwitz established connectivity for the subspace classifying simply branched covers of the projective line\, which led to the first proof of t he irreducibility of the moduli space of curves of a
given genus. More rec ently\, the work of Dunfield-Thurston and Conway-Parker establish connecti vity in certain situations where the monodromy group is fixed and either g or n are allowed to be
large\, which has been applied to study Cohen-Lens tra heuristics over function fields. In the case where (g\,n) are fixed an d the monodromy group is allowed to vary\, far less is known. In our case
we study SL(2\,p)-covers of elliptic curves\, only branched over the origi n\, and establish connectivity\, for all sufficiently large p\, of the sub space classifying those covers with ramification
indices 2p. The proof bui lds upon asymptotic results of Bourgain\, Gamburd\, and Sarnak\, the key n ew ingredient being a divisibility result on the degree of a certain forge tful map between moduli
spaces\, which provides enough rigidity to bootstr ap their asymptotics to a result for all sufficiently large p.\\r\\nFor fu rther information about the seminar\, please visit this webpage [https://
w ww.ntwebseminar.org/]. X-ALT-DESC:
In this talk we will show that the integral points of the Markoff equation x^2 + y^2 + z^2 - xyz = 0 surject onto its F_p-poi nts for all but finitely many primes p. This essentially resolves a
conjec ture of Bourgain\, Gamburd\, and Sarnak\, and a question of Frobenius from 1913. The proof relates the question to the classical problem of classify ing the connected components of the Hurwitz
moduli spaces H(g\,n) classify ing finite covers of genus g curves with n branch points. Over a century a go\, Clebsch and Hurwitz established connectivity for the subspace classif ying simply
branched covers of the projective line\, which led to the firs t proof of the irreducibility of the moduli space of curves of a given gen us. More recently\, the work of Dunfield-Thurston and
Conway-Parker establ ish connectivity in certain situations where the monodromy group is fixed and either g or n are allowed to be large\, which has been applied to stud y Cohen-Lenstra heuristics
over function fields. In the case where (g\,n) are fixed and the monodromy group is allowed to vary\, far less is known. In our case we study SL(2\,p)-covers of elliptic curves\, only branched ov er
the origin\, and establish connectivity\, for all sufficiently large p\ , of the subspace classifying those covers with ramification indices 2p. T he proof builds upon asymptotic results of Bourgain
\, Gamburd\, and Sarnak \, the key new ingredient being a divisibility result on the degree of a c ertain forgetful map between moduli spaces\, which provides enough rigidit y to bootstrap their
asymptotics to a result for all sufficiently large p.
For further information about the seminar\, please visit this webpage.
DTEND;TZID=Europe/Zurich:20220331T180000 END:VEVENT BEGIN:VEVENT UID:news1301@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20220317T094916 DTSTART;TZID=Europe/Zurich:20220324T170000 SUMMARY:Number Theory
Web Seminar: Winnie Li (Pennsylvania State University ) DESCRIPTION:The theme of this survey talk is zeta functions which count clo sed geodesics on objects arising from real and p-adic groups. Our
focus is on PGL(n). For PGL(2)\, these are the Selberg zeta function for compact q uotients of the upper half-plane and the Ihara zeta function for finite re gular graphs. We shall explain the
identities satisfied by these zeta func tions\, which show interconnections between combinatorics\, group theory a nd number theory. Comparisons will be made for zeta identities from differ ent
background. Like the Riemann zeta function\, the analytic behavior of a group based zeta function governs the distribution of the prime geodesic s in its definition.\\r\\nFor further information
about the seminar\, plea se visit this webpage [https://www.ntwebseminar.org/]. X-ALT-DESC:
The theme of this survey talk is zeta functions whi ch count closed geodesics on objects arising from real and p-adic groups. Our focus is on PGL(n). For PGL(2)\, these are the Selberg zeta function
f or compact quotients of the upper half-plane and the Ihara zeta function f or finite regular graphs. We shall explain the identities satisfied by the se zeta functions\, which show interconnections
between combinatorics\, gr oup theory and number theory. Comparisons will be made for zeta identities from different background. Like the Riemann zeta function\, the analytic behavior of a group
based zeta function governs the distribution of the pr ime geodesics in its definition.
For further information about the seminar\, please visit this webpa ge.
DTEND;TZID=Europe/Zurich:20220324T180000 END:VEVENT BEGIN:VEVENT UID:news1300@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20220310T151447 DTSTART;TZID=Europe/Zurich:20220317T170000 SUMMARY:Number Theory
Web Seminar: Aaron Levin (Michigan State University) DESCRIPTION:The classical Weil height machine associates heights to divisor s on a projective variety. I will give a brief\, but gentle\,
introduction to how this machinery extends to objects (closed subschemes) in higher co dimension\, due to Silverman\, and discuss various ways to interpret the h eights. We will then discuss several
recent results in which these ideas p lay a prominent and central role.\\r\\nFor further information about the s eminar\, please visit this webpage [https://www.ntwebseminar.org/]. X-ALT-DESC:
The classical Weil height machine associates height s to divisors on a projective variety. I will give a brief\, but gentle\, introduction to how this machinery extends to objects (closed subschemes)
in higher codimension\, due to Silverman\, and discuss various ways to int erpret the heights. We will then discuss several recent results in which t hese ideas play a prominent and central role.
For further informat ion about the seminar\, please visit this webpage.
DTEND;TZID=Europe/Zurich:20220317T180000 END:VEVENT BEGIN:VEVENT UID:news1299@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20220301T152844 DTSTART;TZID=Europe/Zurich:20220310T170000 SUMMARY:Number Theory
Web Seminar: Dmitry Kleinbock (Brandeis University) DESCRIPTION:Let $\\psi$ be a decreasing function defined on all large posit ive real numbers. We say that a real $m \\times n$ matrix $Y$ is "$\\
psi$- Dirichlet" if for every sufficiently large real number $T$ there exist non -trivial integer vectors $(p\,q)$ satisfying $\\|Yq-p\\|^m < \\psi(T)$ and $\\|q\\|^n < T$ (where $\\|\\cdot\\|$
denotes the supremum norm on vector s). This generalizes the property of $Y$ being "Dirichlet improvable" whic h has been studied by several people\, starting with Davenport and Schmidt in 1969. I
will present results giving sufficient conditions on $\\psi$ t o ensure that the set of $\\psi$-Dirichlet matrices has zero (resp.\, full ) measure. If time allows I will mention a geometric
generalization of the set-up\, where the supremum norm is replaced by an arbitrary norm. Joint work with Anurag Rao\, Andreas Strombergsson\, Nick Wadleigh and Shuchweng Yu.\\r\\nFor further
information about the seminar\, please visit this we bpage [https://www.ntwebseminar.org/]. X-ALT-DESC:
Let $\\psi$ be a decreasing function defined on all large positive real numbers. We say that a real $m \\times n$ matrix $Y$ is "$\\psi$-Dirichlet" if for every sufficiently large real number $T$ the
re exist non-trivial integer vectors $(p\,q)$ satisfying $\\|Yq-p\\|^m < \; \\psi(T)$ and $\\|q\\|^n <\; T$ (where $\\|\\cdot\\|$ denotes the sup remum norm on vectors). This generalizes the
property of $Y$ being "Dirich let improvable" which has been studied by several people\, starting with D avenport and Schmidt in 1969. I will present results giving sufficient con ditions on $\\psi$
to ensure that the set of $\\psi$-Dirichlet matrices ha s zero (resp.\, full) measure. If time allows I will mention a geometric g eneralization of the set-up\, where the supremum norm is replaced by
an ar bitrary norm. Joint work with Anurag Rao\, Andreas Strombergsson\, Nick Wa dleigh and Shuchweng Yu.
For further information about the seminar \, please visit this webpage.< /p> DTEND;TZID=Europe/Zurich:20220310T180000 END:VEVENT BEGIN:VEVENT UID:news1291@dmi.unibas.ch DTSTAMP;TZID=Europe/
Zurich:20220301T152634 DTSTART;TZID=Europe/Zurich:20220303T170000 SUMMARY:Number Theory Web Seminar: Ekin Özman (Boğaziçi University) DESCRIPTION:Understanding solutions of Diophantine equations over
rationals or more generally over any number field is one of the main problems of nu mber theory. By the help of the modular techniques used in the proof of Fe rmat’s last theorem by Wiles and its
generalizations\, it is possible to solve other Diophantine equations too. Understanding quadratic points on the classical modular curve play a central role in this approach. It is al so possible to
study the solutions of Fermat type equations over number fi elds asymptotically. In this talk\, I will mention some recent results abo ut these notions for the classical Fermat equation as well as
some other D iophantine equations.\\r\\nFor further information about the seminar\, ple ase visit this webpage [https://www.ntwebseminar.org/]. X-ALT-DESC:
Understanding solutions of Diophantine equations ov er rationals or more generally over any number field is one of the main pr oblems of number theory. By the help of the modular techniques used in
the proof of Fermat’s last theorem by Wiles and its generalizations\, it is possible to solve other Diophantine equations too. Understanding quadrati c points on the classical modular curve play a
central role in this approa ch. It is also possible to study the solutions of Fermat type equations ov er number fields asymptotically. In this talk\, I will mention some recent results about these
notions for the classical Fermat equation as well as some other Diophantine equations.
For further information about th e seminar\, please visit this webp age.
DTEND;TZID=Europe/Zurich:20220303T180000 END:VEVENT BEGIN:VEVENT UID:news1290@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20220209T111633 DTSTART;TZID=Europe/Zurich:20220224T170000 SUMMARY:Number Theory
Web Seminar: Igor Shparlinski (UNSW Sydney) DESCRIPTION:We present some old and more recent results which suggest that Kloosterman and Salie sums exhibit a pseudorandom behaviour similar to the
behaviour which is traditionally attributed to the Mobius function. In pa rticular\, we formulate some analogues of the Chowla Conjecture for Kloost erman and Salie sums. We then describe several
results about the non-corre lation of Kloosterman and Salie sums between themselves and also with some classical number-theoretic functions such as the Mobius function\, the di visor function and the
sums of binary digits. Various arithmetic applicati ons of these results\, including to asymptotic formulas for moments of var ious L-functions\, will be outlined as well.\\r\\nFor further
information about the seminar\, please visit this webpage [https://www.ntwebseminar.or g/]. X-ALT-DESC:
We present some old and more recent results which suggest tha t Kloosterman and Salie sums exhibit a pseudorandom behaviour similar to t he behaviour which is traditionally attributed to the Mobius
function. In particular\, we formulate some analogues of the Chowla Conjecture for Kloo sterman and Salie sums. We then describe several results about the non-cor relation of Kloosterman and Salie
sums between themselves and also with so me classical number-theoretic functions such as the Mobius function\, the divisor function and the sums of binary digits. Various arithmetic applica tions of
these results\, including to asymptotic formulas for moments of v arious L-functions\, will be outlined as well.
For further informa tion about the seminar\, please visit this webpage.
DTEND;TZID=Europe/Zurich:20220224T180000 END:VEVENT BEGIN:VEVENT UID:news1289@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20220209T111403 DTSTART;TZID=Europe/Zurich:20220217T170000 SUMMARY:Number Theory
Web Seminar: Harry Schmidt (University of Basel) DESCRIPTION:In this talk I will give an overview of the history of the Andr é-Oort conjecture and its resolution last year after the final steps were
made in work of Pila\, Shankar\, Tsimerman\, Esnault and Groechenig as we ll as Binyamini\, Yafaev and myself. I will focus on the key insights and ideas related to model theory and transcendence
theory.\\r\\nFor further i nformation about the seminar\, please visit this webpage [https://www.ntwe bseminar.org/]. X-ALT-DESC:
In this talk I will give an overview of the history of the An dré-Oort conjecture and its resolution last year after the final steps we re made in work of Pila\, Shankar\, Tsimerman\, Esnault and
Groechenig as well as Binyamini\, Yafaev and myself. I will focus on the key insights an d ideas related to model theory and transcendence theory.
For furt her information about the seminar\, please visit this webpage.
DTEND;TZID=Europe/Zurich:20220217T180000 END:VEVENT BEGIN:VEVENT UID:news1288@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20220131T140633 DTSTART;TZID=Europe/Zurich:20220210T170000 SUMMARY:Number Theory
Web Seminar: Zeev Rudnick (Tel Aviv University) DESCRIPTION:The study of uniform distribution of sequences is more than a c entury old\, with pioneering work by Hardy and Littlewood\, Weyl\, van der
Corput and others. More recently\, the focus of research has shifted to m uch finer quantities\, such as the distribution of nearest neighbor gaps a nd the pair correlation function. Examples of
interesting sequences for wh ich these quantities have been studied include the zeros of the Riemann ze ta function\, energy levels of quantum systems\, and more. In this exposit ory talk\, I will
discuss what is known about these examples and discuss t he many outstanding problems that this theory has to offer.\\r\\nFor furth er information about the seminar\, please visit this webpage
[https://www. ntwebseminar.org/]. X-ALT-DESC:
The study of uniform distribution of sequences is more than a century old\, with pioneering work by Hardy and Littlewood\, Weyl\, van d er Corput and others. More recently\, the focus of research has
shifted to much finer quantities\, such as the distribution of nearest neighbor gaps and the pair correlation function. Examples of interesting sequences for which these quantities have been studied
include the zeros of the Riemann zeta function\, energy levels of quantum systems\, and more. In this expos itory talk\, I will discuss what is known about these examples and discuss the many
outstanding problems that this theory has to offer.
For further information about the seminar\, please visit this webpage.
DTEND;TZID=Europe/Zurich:20220210T180000 END:VEVENT BEGIN:VEVENT UID:news1287@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20220131T135710 DTSTART;TZID=Europe/Zurich:20220203T170000 SUMMARY:Number Theory
Web Seminar: Peter Humphries (University of Virginia) DESCRIPTION:A major area of study in analysis involves the distribution of mass of Laplacian eigenfunctions on a Riemannian manifold. A key
result to wards this is explicit L^p-norm bounds for Laplacian eigenfunctions in ter ms of their Laplacian eigenvalue\, due to Sogge in 1988. Sogge's bounds ar e sharp on the sphere\, but need not be
sharp on other manifolds. I will d iscuss some aspects of this problem for the modular surface\; in this sett ing\, the Laplacian eigenfunctions are automorphic forms\, and certain L^p -norms can be
shown to be closely related to certain mixed moments of L-fu nctions. This is joint with with Rizwanur Khan.\\r\\nFor further informati on about the seminar\, please visit this webpage [https://
www.ntwebseminar .org/]. X-ALT-DESC:
A major area of study in analysis involves the distribution o f mass of Laplacian eigenfunctions on a Riemannian manifold. A key result towards this is explicit L^p-norm bounds for Laplacian
eigenfunctions in t erms of their Laplacian eigenvalue\, due to Sogge in 1988. Sogge's bounds are sharp on the sphere\, but need not be sharp on other manifolds. I will discuss some aspects of this
problem for the modular surface\; in this se tting\, the Laplacian eigenfunctions are automorphic forms\, and certain L ^p-norms can be shown to be closely related to certain mixed moments of L-
functions. This is joint with with Rizwanur Khan.
For further info rmation about the seminar\, please visit this webpage.
DTEND;TZID=Europe/Zurich:20220203T180000 END:VEVENT BEGIN:VEVENT UID:news1286@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20220119T084242 DTSTART;TZID=Europe/Zurich:20220127T170000 SUMMARY:Number Theory
Web Seminar: Larry Guth (MIT) DESCRIPTION:The Vinogradov mean value conjecture concerns the number of sol utions of a system of diophantine equations. This number of solutions can also be written as
a certain moment of a trigonometric polynomial. The con jecture was proven in the 2010s by Bourgain-Demeter-Guth and by Wooley\, a nd recently there was a shorter proof by Guo-Li-Yang-Zorin-Kranich.
The de tails of each proof involve some intricate estimates. The goal of the talk is to try to reflect on the proof(s) in a big picture way. A key ingredie nt in all the proofs is to combine
estimates at many different scales\, us ually by doing induction on scales. Why does this multi-scale induction he lp? What can multi-scale induction tell us and what are its limitations?\\ r\\nFor
further information about the seminar\, please visit this webpage [https://www.ntwebseminar.org/]. X-ALT-DESC:
The Vinogradov mean value conjecture concerns the number of s olutions of a system of diophantine equations. This number of solutions ca n also be written as a certain moment of a trigonometric
polynomial. The c onjecture was proven in the 2010s by Bourgain-Demeter-Guth and by Wooley\, and recently there was a shorter proof by Guo-Li-Yang-Zorin-Kranich. The details of each proof involve
some intricate estimates. The goal of the ta lk is to try to reflect on the proof(s) in a big picture way. A key ingred ient in all the proofs is to combine estimates at many different scales\,
usually by doing induction on scales. Why does this multi-scale induction help? What can multi-scale induction tell us and what are its limitations?
For further information about the seminar\, please visit this webpage.
DTEND;TZID=Europe/Zurich:20220127T180000 END:VEVENT BEGIN:VEVENT UID:news1285@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20220117T122020 DTSTART;TZID=Europe/Zurich:20220120T170000 SUMMARY:Number Theory
Web Seminar: Jozsef Solymosi (University of British C olumbia) DESCRIPTION:We establish lower bounds on the rank of matrices in which all but the diagonal entries lie in a multiplicative group of
small rank. Appl ying these bounds we show that the distance sets of finite pointsets in ℝ^d generate high rank multiplicative groups and that multiplicative gro ups of small rank cannot contain
large sumsets. (Joint work with Noga Alon )\\r\\nFor further information about the seminar\, please visit this webpa ge [https://www.ntwebseminar.org/]. X-ALT-DESC:
We establish lower bounds on the rank of matrices in which al l but the diagonal entries lie in a multiplicative group of small rank. Ap plying these bounds we show that the distance sets of finite
pointsets in ℝ^d generate high rank multiplicative groups and that multiplicative gro ups of small rank cannot contain large sumsets. (Joint work with Noga Alon )
For further information about the seminar\, please visit this webpage.
END:VEVENT BEGIN:VEVENT UID:news1310@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20220112T102603 DTSTART;TZID=Europe/Zurich:20220113T170000 SUMMARY:Number Theory Web Seminar: Péter Varjú (University of
Cambridge) DESCRIPTION:Consider random polynomials of degree d whose leading and const ant coefficients are 1 and the rest are independent taking the values 0 or 1 with equal probability. A
conjecture of Odlyzko and Poonen predicts tha t such a polynomial is irreducible in Z[x] with high probability as d grow s. This conjecture is still open\, but Emmanuel Breuillard and I proved it
assuming the Extended Riemann Hypothesis. I will briefly recall the metho d of proof of this result and will discuss later developments that apply t his method to other models of random polynomials.\
\r\\nFor further informa tion about the seminar\, please visit this webpage [https://www.ntwebsemin ar.org/]. X-ALT-DESC:
Consider random polynomials of degree d whose leading and con stant coefficients are 1 and the rest are independent taking the values 0 or 1 with equal probability. A conjecture of Odlyzko and Poonen
predicts t hat such a polynomial is irreducible in Z[x] with high probability as d gr ows. This conjecture is still open\, but Emmanuel Breuillard and I proved it assuming the Extended Riemann
Hypothesis. I will briefly recall the met hod of proof of this result and will discuss later developments that apply this method to other models of random polynomials.
For further in formation about the seminar\, please visit this webpage.
DTEND;TZID=Europe/Zurich:20220113T180000 END:VEVENT BEGIN:VEVENT UID:news1270@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20211206T164028 DTSTART;TZID=Europe/Zurich:20211216T170000 SUMMARY:Number Theory
Web Seminar: Sarah Zerbes (University College London\ , UK) DESCRIPTION:Euler systems are one of the most powerful tools for proving ca ses of the Bloch--Kato conjecture\, and other related problems
such as the Birch and Swinnerton-Dyer conjecture. I will recall a series of recent wo rks (variously joint with Loeffler\, Pilloni\, Skinner) giving rise to an Euler system in the cohomology of
Shimura varieties for GSp(4)\, and an ex plicit reciprocity law relating the Euler system to values of L-functions. I will then recent work with Loeffler\, in which we use this Euler system to prove
new cases of the BSD conjecture for modular abelian surfaces ove r Q\, and modular elliptic curves over imaginary quadratic fields.\\r\\nFo r further information about the seminar\, please visit this
webpage [https ://www.ntwebseminar.org/]. X-ALT-DESC:
Euler systems are one of the most powerful tools for proving cases of the Bloch--Kato conjecture\, and other related problems such as t he Birch and Swinnerton-Dyer conjecture.
I will recall a series of r ecent works (variously joint with Loeffler\, Pilloni\, Skinner) giving ris e to an Euler system in the cohomology of Shimura varieties for GSp(4)\, a nd an explicit
reciprocity law relating the Euler system to values of L-fu nctions. I will then recent work with Loeffler\, in which we use this Eule r system to prove new cases of the BSD conjecture for modular
abelian surf aces over Q\, and modular elliptic curves over imaginary quadratic fields.
For further information about the seminar\, please visit this webpage.
DTEND;TZID=Europe/Zurich:20211216T180000 END:VEVENT BEGIN:VEVENT UID:news1269@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20211201T103850 DTSTART;TZID=Europe/Zurich:20211209T170000 SUMMARY:Number Theory
Web Seminar: Samir Siksek (University of Warwick) DESCRIPTION:The asymptotic Fermat conjecture (AFC) states that for a number field K\, and for sufficiently large primes p\, the only solutions to the
Fermat equation X^p+Y^p+Z^p=0 in K are the obvious ones. We sketch recent work that connects the Fermat equation to the far more elementary unit eq uation\, and explain how this surprising connection
can be exploited to pr ove AFC for several infinite families of number fields. This talk is based on joint work with Nuno Freitas\, Alain Kraus and Haluk Sengun.\\r\\nFor further information about
the seminar\, please visit this webpage [https:/ /www.ntwebseminar.org/]. X-ALT-DESC:
The asymptotic Fermat conjecture (AFC) states that for a numb er field K\, and for sufficiently large primes p\, the only solutions to t he Fermat equation X^p+Y^p+Z^p=0 in K are the obvious ones. We
sketch rece nt work that connects the Fermat equation to the far more elementary unit equation\, and explain how this surprising connection can be exploited to prove AFC for several infinite families
of number fields. This talk is bas ed on joint work with Nuno Freitas\, Alain Kraus and Haluk Sengun.
For further information about the seminar\, please visit this webpage.
DTEND;TZID=Europe/Zurich:20211208T180000 END:VEVENT BEGIN:VEVENT UID:news1268@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20211125T141706 DTSTART;TZID=Europe/Zurich:20211202T170000 SUMMARY:Number Theory
Web Seminar: Kiran Kedlaya (University of California San Diego) DESCRIPTION:We describe several recent results on orders of abelian varieti es over $\\mathbb{F}_2$: every positive integer occurs as
the order of an ordinary abelian variety over $\\mathbb{F}_2$ (joint with E. Howe)\; every positive integer occurs infinitely often as the order of a simple abelian variety over $\\mathbb{F}_2$\; the
geometric decomposition of the simple abelian varieties over $\\mathbb{F}_2$ can be described explicitly (joint with T. D'Nelly-Warady)\; and the relative class number one problem for fu nction
fields is reduced to a finite computation (work in progress). All o f these results rely on the relationship between isogeny classes of abelia n varieties over finite fields and Weil polynomials
given by the work of W eil and Honda-Tate. With these results in hand\, most of the work is to co nstruct algebraic integers satisfying suitable archimedean constraints.\\r \\nFor further information
about the seminar\, please visit this webpage [ https://www.ntwebseminar.org/]. X-ALT-DESC:
We describe several recent results on orders of abelian varie ties over $\\mathbb{F}_2$: every positive integer occurs as the order of a n ordinary abelian variety over $\\mathbb{F}_2$ (joint with E.
Howe)\; eve ry positive integer occurs infinitely often as the order of a simple abeli an variety over $\\mathbb{F}_2$\; the geometric decomposition of the simpl e abelian varieties over $\\mathbb{F}
_2$ can be described explicitly (join t with T. D'Nelly-Warady)\; and the relative class number one problem for function fields is reduced to a finite computation (work in progress). All of these
results rely on the relationship between isogeny classes of abel ian varieties over finite fields and Weil polynomials given by the work of Weil and Honda-Tate. With these results in hand\, most of
the work is to construct algebraic integers satisfying suitable archimedean constraints.< /p>\n
For further information about the seminar\, please visit this webpage.
DTEND;TZID=Europe/Zurich:20211202T180000 END:VEVENT BEGIN:VEVENT UID:news1267@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20211125T141908 DTSTART;TZID=Europe/Zurich:20211125T170000 SUMMARY:Number Theory
Web Seminar: Alexei Skorobogatov (Imperial College Lo ndon) DESCRIPTION:I will discuss logical links among uniformity conjectures conce rning K3 surfaces and abelian varieties of bounded dimension
defined over number fields of bounded degree. The conjectures concern the endomorphism algebra of an abelian variety\, the Néron–Severi lattice of a K3 surfac e\, and the Galois invariant subgroup of
the geometric Brauer group. The t alk is based on a joint work with Martin Orr and Yuri Zarhin.\\r\\nFor fur ther information about the seminar\, please visit this webpage [https://ww
w.ntwebseminar.org/]. X-ALT-DESC:
I will discuss logical links among uniformity conjectures con cerning K3 surfaces and abelian varieties of bounded dimension defined ove r number fields of bounded degree. The conjectures concern the
endomorphis m algebra of an abelian variety\, the Néron–Severi lattice of a K3 surf ace\, and the Galois invariant subgroup of the geometric Brauer group. The talk is based on a joint work with
Martin Orr and Yuri Zarhin.
Fo r further information about the seminar\, please visit this webpage.
DTEND;TZID=Europe/Zurich:20211125T180000 END:VEVENT BEGIN:VEVENT UID:news1265@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20211116T092154 DTSTART;TZID=Europe/Zurich:20211118T170000 SUMMARY:Number Theory
Web Seminar: Myrto Mavraki (Harvard University) DESCRIPTION:Inspired by an analogy between torsion and preperiodic points\, Zhang has proposed a dynamical generalization of the classical Manin-Mumf
ord and Bogomolov conjectures. A special case of these conjectures\, for ` split' maps\, has recently been established by Nguyen\, Ghioca and Ye. In particular\, they show that two rational maps have
at most finitely many c ommon preperiodic points\, unless they are `related'. Recent breakthroughs by Dimitrov\, Gao\, Habegger and Kühne have established that the classic al Bogomolov conjecture
holds uniformly across curves of given genus. In t his talk we discuss uniform versions of the dynamical Bogomolov conjecture across 1-parameter families of certain split maps. To this end\, we estab
lish an instance of a 'relative dynamical Bogomolov'. This is work in prog ress joint with Harry Schmidt (University of Basel).\\r\\nFor further info rmation about the seminar\, please visit this
webpage [https://www.ntwebse minar.org/]. X-ALT-DESC:
Inspired by an analogy between torsion and preperiodic points \, Zhang has proposed a dynamical generalization of the classical Manin-Mu mford and Bogomolov conjectures. A special case of these
conjectures\, for `split' maps\, has recently been established by Nguyen\, Ghioca and Ye. I n particular\, they show that two rational maps have at most finitely many common preperiodic points\,
unless they are `related'. Recent breakthroug hs by Dimitrov\, Gao\, Habegger and Kühne have established that the class ical Bogomolov conjecture holds uniformly across curves of given genus.
In this talk we discuss uniform versions of the dynamical Bogomolov co njecture across 1-parameter families of certain split maps. To this end\, we establish an instance of a 'relative dynamical
Bogomolov'. This is work in progress joint with Harry Schmidt (University of Basel).
For f urther information about the seminar\, please visit this webpage.
DTEND;TZID=Europe/Zurich:20211118T180000 END:VEVENT BEGIN:VEVENT UID:news1261@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20211104T143247 DTSTART;TZID=Europe/Zurich:20211111T170000 SUMMARY:Number Theory
Web Seminar: Avi Wigderson (Institute for Advanced St udy) DESCRIPTION:Is the universe inherently deterministic or probabilistic? Perh aps more importantly - can we tell the difference between the
two?\\r\\nHu manity has pondered the meaning and utility of randomness for millennia.\\ r\\nThere is a remarkable variety of ways in which we utilize perfect coin tosses to our advantage: in
statistics\, cryptography\, game theory\, alg orithms\, gambling... Indeed\, randomness seems indispensable! Which of th ese applications survive if the universe had no (accessible) randomness in it
at all? Which of them survive if only poor quality randomness is avail able\, e.g. that arises from somewhat "unpredictable" phenomena like the w eather or the stock market?\\r\\nA computational
theory of randomness\, de veloped in the past several decades\, reveals (perhaps counter-intuitively ) that very little is lost in such deterministic or weakly random worlds. In the talk I'll explain
the main ideas and results of this theory\, notio ns of pseudo-randomness\, and connections to computational intractability. \\r\\nIt is interesting that Number Theory played an important role throug
hout this development. It supplied problems whose algorithmic solution mak e randomness seem powerful\, problems for which randomness can be eliminat ed from such solutions\, and problems where the
power of randomness remain s a major challenge for computational complexity theorists and mathematici ans. I will use these problems (and others) to demonstrate aspects of this theory.\\r\\nFor
further information about the seminar\, please visit thi s webpage [https://www.ntwebseminar.org/]. X-ALT-DESC:
Is the universe inherently deterministic or probabilistic? Pe rhaps more importantly - can we tell the difference between the two?
Humanity has pondered the meaning and utility of randomness for millenn ia.
There is a remarkable variety of ways in which we utilize perf ect coin tosses to our advantage: in statistics\, cryptography\, game theo ry\, algorithms\, gambling... Indeed\, randomness seems
indispensable! Whi ch of these applications survive if the universe had no (accessible) rando mness in it at all? Which of them survive if only poor quality randomness is available\, e.g. that arises
from somewhat "unpredictable" phenomena li ke the weather or the stock market?
A computational theory of rand omness\, developed in the past several decades\, reveals (perhaps counter- intuitively) that very little is lost in such deterministic or weakly rand om worlds. In the
talk I'll explain the main ideas and results of this the ory\, notions of pseudo-randomness\, and connections to computational intr actability.
It is interesting that Number Theory played an importa nt role throughout this development. It supplied problems whose algorithmi c solution make randomness seem powerful\, problems for which
randomness c an be eliminated from such solutions\, and problems where the power of ran domness remains a major challenge for computational complexity theorists a nd mathematicians. I will use these
problems (and others) to demonstrate a spects of this theory.
For further information about the seminar\, please visit this webpage.
DTEND;TZID=Europe/Zurich:20211111T180000 END:VEVENT BEGIN:VEVENT UID:news825@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20211104T143247 DTSTART;TZID=Europe/Zurich:20190228T141500 SUMMARY:Number Theory
Seminar: Yuri Bilu (Université de Bordeaux) DESCRIPTION:The celebrated André-Oort conjecture about special point on Sh imura varieties is now proved conditionally to the GRH in full generality and
unconditionally in many important special cases. In particular\, Pi la (2011) proved it for products of modular curves\, adapting a method p reviously developed by Pila and Zannier in the context of
the Manin-Mumfo rd conjecture. Unfortunately\, Pila's argument is non-effective\, using t he Siegel-Brauer inequality. Since 2012 various special cases of the Andr é-Oort conjecture has been proved
effectively\, most notably in the work of Lars Kühne. In my talk I will restrict to the case of the "Shimura v ariety" C^n and will try to explain on some simple examples how the effec tive approach
of Kühne works. No previous knowledge about André-Oort co njecture is required\, I will give all the necessary background. X-ALT-DESC: The celebrated André-Oort conjecture about special point on Sh
imura varieties is now proved conditionally to the GRH in full generality and unconditionally in many important special cases. In particular\, Pi la (2011) proved it for products of modular curves\,
adapting a method p reviously developed by Pila and Zannier in the context of the Manin-Mumfo rd conjecture. Unfortunately\, Pila's argument is non-effective\, using t he Siegel-Brauer inequality.
Since 2012 various special cases of the André-Oort conjecture has been proved effectively\, most notably in the work of Lars Kühne. In my talk I will restrict to the case of the "\;Shimura
variety"\; C^n and will try to explain on some simple examples how the effective approach of Kühne works.
No prev ious knowledge about André-Oort conjecture is required\, I will give all the necessary background. DTEND;TZID=Europe/Zurich:20190228T151500 END:VEVENT BEGIN:VEVENT UID:news321@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20211104T143247 DTSTART;TZID=Europe/Zurich:20181129T141500 SUMMARY:Number Theory Seminar: Ana Maria Botero (Univ. of Regensburg) DTEND;TZID=Europe/Zurich:20181129T151500
END:VEVENT BEGIN:VEVENT UID:news320@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20211104T143247 DTSTART;TZID=Europe/Zurich:20181122T141500 SUMMARY:Number Theory Seminar: Amador Martin-Pizarro (Univ. of
Freiburg) DESCRIPTION:Lascar showed that the group of automorphisms of the complex fi eld which fix the algebraic closure of the prime field is simple. For this \, he first showed that there are no
non-trivial bounded automorphisms. An automorphism is bounded if there is a finite set A such that the image of every element b is algebraic over A together with b. The same result hold s for a
"universal" differentially closed field of characteristic zero\, w here we replace algebraic by differentially algebraic. Together with T. Bl ossier and C. Hardouin\, we provided in https://arxiv.org
/abs/1505.03669 [ https://arxiv.org/abs/1505.03669] a complete classification of bounded aut omorphisms in various fields equipped with operators\, among others\, for generic difference fields in all
characteristics or for Hasse-Schmidt diff erential fields in positive characteristic. X-ALT-DESC: Lascar showed that the group of automorphisms of the complex fi eld which fix the algebraic closure
of the prime field is simple. For this \, he first showed that there are no non-trivial bounded automorphisms. An automorphism is bounded if there is a finite set A such that the image of every
element b is algebraic over A together with b. The same result hold s for a "\;universal"\; differentially closed field of characteris tic zero\, where we replace algebraic by differentially
algebraic. Togethe r with T. Blossier and C. Hardouin\, we provided in https://arxiv.org/abs/1505.03669 a complete class ification of bounded automorphisms in various fields equipped with operato rs
\, among others\, for generic difference fields in all characteristics o r for Hasse-Schmidt differential fields in positive characteristic. DTEND;TZID=Europe/Zurich:20181122T151500 END:VEVENT
BEGIN:VEVENT UID:news319@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20211104T143247 DTSTART;TZID=Europe/Zurich:20181108T141500 SUMMARY:Number Theory Seminar: David Masser (Univ. of Basel)
DESCRIPTION:Inspired by Schanuel's Conjecture\, Boris Zilber has proposed a ``Nullstellensatz'' (also conjectural) asserting which sorts of polynomia l-exponential equations in several variables have
a complex solution. Last year Dale Brownawell and I published a proof in the situation which can b e regarded as ``typical''. But it does not cover all situations for two va riables\, some of which
involve simply stated problems in one variable lik e finding complex $z \\neq 0$ with $e^z+e^{1/z}=1$. Recently Vincenzo Mant ova and I have settled the general case of two variables. We describe our
methods -- for example\, to solve $$e^z+e^{\\root 9 \\of {1-z^9}}=1$$ one approach uses theta functions on ${\\bf C}^{28}$. X-ALT-DESC: Inspired by Schanuel's Conjecture\, Boris Zilber has proposed a
``Nullstellensatz'' (also conjectural) asserting which sorts of polynomia l-exponential equations in several variables have a complex solution. Last year Dale Brownawell and I published a proof in
the situation which can b e regarded as ``typical''. But it does not cover all situations for two va riables\, some of which involve simply stated problems in one variable lik e finding complex $z \\
neq 0$ with $e^z+e^{1/z}=1$. Recently Vincenzo Mant ova and I have settled the general case of two variables. We describe our methods -- for example\, to solve
$$e^z+e^{\\root 9 \\of {1-z^9}}=1$ $
one approach uses theta functions on ${\\bf C}^{28}$. DTEND;TZID=Europe/Zurich:20181108T151500 END:VEVENT BEGIN:VEVENT UID:news318@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20211104T143247 DTSTART;
TZID=Europe/Zurich:20181101T141500 SUMMARY:Number Theory Seminar: Shabnam Akhtari (Univ. of Oregon / MPIM Bonn ) DTEND;TZID=Europe/Zurich:20181101T151500 END:VEVENT BEGIN:VEVENT
UID:news317@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20211104T143247 DTSTART;TZID=Europe/Zurich:20181004T141500 SUMMARY:Number Theory Seminar: Dijana Kreso (TU Graz) DESCRIPTION:In my talk I will
present results that come from a joint work w ith M. Bennett and A. Gherga from The University of British Columbia. We s tudied Goormaghtigh's equation:\\begin{equation}\\label{eq}\\frac{x^m-1}{x -1}
= \\frac{y^n-1}{y-1}\, \\\; \\\; y>x>1\, \\\; m > n > 2. \\end{equatio n}There are two known solutions $(x\, y\,m\, n)=(2\, 5\, 5\, 3)\, (2\, 90\ , 13\, 3)$ and it is believed that these are the only
solutions. It is not known if this equation has finitely or infinitely many solutions\, and no t even if that is the case if we fix one of the variables. It is known tha t there are finitely many
solutions if we fix any two variables. Moreover\ , there are effective results in all cases\, except when the two fixed var iables are the exponents $m$ and $n$. If the fixed $m$ and $n$ additionall
y satisfy $\\gcd(m-1\, n-1)>1$\, then there is an effective finiteness r esult. My co-authors and me showed that if $n \\geq 3$ is a fixed integer\ , then there exists an effectively computable
constant $c (n)$ such that $ \\max \\{ x\, y\, m \\} < c (n)$ for all $x\, y$ and $m$ that satisfy Goor maghtigh's equation with $\\gcd(m-1\,n-1)>1$. In case $n \\in \\{ 3\, 4\ , 5 \\}$\, we solved
the equation completely\, subject to this non-coprima lity condition. X-ALT-DESC:In my talk I will present results that come from a joint work wi th M. Bennett and A. Gherga from The University of
British Columbia. We st udied Goormaghtigh's equation:
\\fr ac{x^m-1}{x-1} = \\frac{y^n-1}{y-1}\, \\\; \\\; y>\;x>\;1\, \\\; m > \; n >\; 2.
There are two known solutions $(x \, y\,m\, n)=(2\, 5\, 5\, 3)\, (2\, 90\, 13\, 3)$ and it is believed that these are the only solutions. It is not known if this equation has finitel y or infinitely
many solutions\, and not even if that is the case if we fi x one of the variables. It is known that there are finitely many solutions if we fix any two variables. Moreover\, there are effective
results in al l cases\, except when the two fixed variables are the exponents $m$ and $n $. If the fixed $m$ and $n$ additionally satisfy \; $\\gcd(m-1\, n-1)& gt\;1$\, then there is an effective
finiteness result. My co-authors and m e showed that if $n \\geq 3$ is a fixed integer\, then there exists an eff ectively computable constant $c (n)$ such that $\\max \\{ x\, y\, m \\} &l t\; c (n)$
for all $x\, y$ and $m$ that satisfy Goormaghtigh's equation wi th $\\gcd(m-1\,n-1)>\;1$. \; In case $n \\in \\{ 3\, 4\, 5 \\}$\, we solved the equation completely\, subject to this
non-coprimality conditio n. DTEND;TZID=Europe/Zurich:20181004T151500 END:VEVENT BEGIN:VEVENT UID:news316@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20211104T143247 DTSTART;TZID=Europe/
Zurich:20180927T141500 SUMMARY:Number Theory Seminar: David Belius (Univ. of Basel) DESCRIPTION:
I will describe how the Riemann Zeta function on t
he critical line can be viewed as a pseudo-random Gaussian field with a co
rrelation function with logarithmic growth. Such log-correlated random fie
lds have recently attracted considerable interest in probability theory. F
yodorv\, Hiary and Keating conjectured several striking results about the
extreme values of the Riemann Zeta function based on this connection. In t
his talk I will explain how a certain approximate tree structure in Dirich
let polynomials can be used to prove one of their conjectures\, giving the
asymptotics of the maximum of the magnitude of the function in a typical
interval of length O(1).
X-ALT-DESC: <\;pre wrap="\;"\;>\;I will describe how the Rieman n Zeta function on the critical line can be viewed as a pseudo-random Gaus sian field with a correlation function with
logarithmic growth. Such log-c orrelated random fields have recently attracted considerable interest in p robability theory. Fyodorv\, Hiary and Keating conjectured several strikin g results about
the extreme values of the Riemann Zeta function based on t his connection. In this talk I will explain how a certain approximate tree structure in Dirichlet polynomials can be used to prove one of
their conj ectures\, giving the asymptotics of the maximum of the magnitude of the fu nction in a typical interval of length O(1).<\;/pre>\; DTEND;TZID=Europe/Zurich:20180927T151500 END:VEVENT
BEGIN:VEVENT UID:news209@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20211104T143247 DTSTART;VALUE=DATE:20180713 SUMMARY:Donau–Rhein Modelltheorie und Anwendungen\, 3rd meeting DESCRIPTION:Link to
schedule. [https://sites.google.com/site/drmta3/] X-ALT-DESC:Link to sch edule. \; END:VEVENT END:VCALENDAR | {"url":"https://dmi.unibas.ch/de/news-events/vergangene-events/vergangene-events-mathematik/4348.ics","timestamp":"2024-11-13T09:40:59Z","content_type":"text/calendar","content_length":"69178","record_id":"<urn:uuid:9775e3ba-8507-40ff-b15b-a9dead8da674>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00300.warc.gz"} |
Basis point
Basis point facts for kids
Quick facts for kids
per ten thousand sign
A per ten thousand sign or basis point (often denoted as bp, often pronounced as "bip" or "beep") is (a difference of) one hundredth of a percent or equivalently one ten thousandth. The related
concept of a permyriad is literally one part per ten thousand. Figures are commonly quoted in basis points in finance, especially in fixed income markets.
1 basis point = (a difference of) 1 permyriad or one-hundredth of one percent.
1 bp = (a difference of) 1‱ or 0.01% or 0.1‰ or 10^−4 or 110,000 or 0.0001.
100 bp = (a difference of) 1% or 10‰ of 100‱.
Basis points are used as a convenient unit of measurement in contexts where percentage differences of less than 1% are discussed. The most common example is interest rates, where differences in
interest rates of less than 1% per year are usually meaningful to talk about. For example, a difference of 0.10 percentage points is equivalent to a change of 10 basis points (e.g., a 4.67% rate
increases by 10 basis points to 4.77%). In other words, an increase of 100 basis points means a rise by 1 percentage point.
Like percentage points, basis points avoid the ambiguity between relative and absolute discussions about interest rates by dealing only with the absolute change in numeric value of a rate. For
example, if a report says there has been a "1% increase" from a 10% interest rate, this could refer to an increase either from 10% to 10.1% (relative, 1% of 10%), or from 10% to 11% (absolute, 1%
plus 10%). However, if the report says there has been a "100 basis point increase" from a 10% interest rate, then the interest rate of 10% has increased by 1.00% (the absolute change) to an 11% rate.
It is common practice in the financial industry to use basis points to denote a rate change in a financial instrument, or the difference (spread) between two interest rates, including the yields of
fixed-income securities.
Since certain loans and bonds may commonly be quoted in relation to some index or underlying security, they will often be quoted as a spread over (or under) the index. For example, a loan that bears
interest of 0.50% per annum above the London Interbank Offered Rate (LIBOR) is said to be 50 basis points over LIBOR, which is commonly expressed as "L+50bps" or simply "L+50".
The term "basis point" has its origins in trading the "basis" or the spread between two interest rates. Since the basis is usually small, these are quoted multiplied up by 10,000, and hence a "full
point" movement in the "basis" is a basis point. Contrast with pips in FX forward markets. En lieu of referencing individual basis points for larger percentages, the below terms have been gaining
traction and use in the financial industry.
1 "MegaBip" = 10 bps = 00.1%
1 "UltraBip" = 100 bps = 01.0%
1 "GigaBip" = 1000 bps = 10.0%
Expense ratios of investment funds are often quoted in basis points.
See also | {"url":"https://kids.kiddle.co/Basis_point","timestamp":"2024-11-08T10:35:36Z","content_type":"text/html","content_length":"37726","record_id":"<urn:uuid:23391f3e-c86a-443d-b18c-749caae6f62a>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00714.warc.gz"} |
what does minimum odds mean
How to Read Vegas Odds | Learn How to Place Smart Bets
Minimum deposit £20. Similar to odds with a "+", they are based on a bet of $100. The larger the number, the bigger the favorite to win the match. For example, a -200 money line means you would win
$100 if you bet $200 and won. +200 Means that when you bet 100, you'll earn 200 or is the decimal odds 3.00. Odds per leg is the bit the baffles me as one football match is sure counted as one leg,
as no. -200 Should bet 200 to win 100, which corresponds to 1.50 decimal odds. Fixed-Odds: A bet where you get the odds advertised by the better operator at the time you place your bet. Interestingly
enough, the odds for NFL moneylines are closely correlated to the lines of NFL point spreads. If the figure displayed is negative (-), the money line odds are stating how much money must be wagered
to win $100.
Sports Betting Explained
The benefit of gambling on the internet using real money is .
Caps Table Limits - 2021 - CrapsPit.org Betting Odds Explained - Understand What Fraction/Decimal Mean
I.e. If your first £20 bet wins, you have to bet £20 again at odds of 2.0 (1/1) or higher to turn the £20 bonus money into cash. A negative money line represents the amount that you would have to bet
to win $100 if you were correct. That's also equivalent to fractional odds of 2/1 and decimal odds of 3.
Horse Racing Odds: How to Read Odds & Calculate Payouts
Quite simply, for every value of B that you bet, you will win A, plus the return of your stake. Examples: Below is an example of NFL betting odds taken from an online betting site. American Odds. The
odds for each game would appear the same as they would if you were making an individual bet. That's also equivalent to fractional odds of 2/1 and decimal odds of 3. perform best when the person is
tested in the early stages of infection with COVID-19. A fractional listing of 6/1 (six-to-one) odds would mean that you win $6 against every $1 you wager, in addition to receiving your dollar back
(i.e., the amount you wagered). How does the strike rate calculator work? Let's see an example to understand . UNDERSTANDING how odds work is fundamental to being successful at betting on football,
horse racing or any sport.. Seasoned punters will find calculating odds to be second nature but for beginners the prospect of getting to grips with what odds mean and in particular grappling with
fractional odds can be a daunting one. Odds Ratio is a measure of the strength of association with an exposure and an outcome. -200 or +200). Or mean that you can only bet as much as $100. For
instance, the Texans are a -3 point favorite against the Colts. But for those wondering, it is a way to describe odds, . Only make sure you have at least 5 events that have minimum odds of 1.2 on
your ticket and you qualify for a bonus on your potential win. For instance - let's say that a team is -145 to win a game. Place any £20 sportsbook bet at minimum odds of 2.0 (1/1) to receive your
free £20 bet. The minus and plus signs are really important to pay attention to. Free Bet Series Comment. odds: [noun, plural in form but singular or plural in construction] inequalities. Texas
Hold'em is What Does Blackjack Mean In Dream the most popular poker game in the world, but three card poker is one of the quickest to learn. • Antigen tests . That means that if you risk $100 on San
. You'll also commonly find the minimum odds required to be at least 1.50 and often 2.00. sportsbooks set the spread hoping to get the same action on both sides of the match. OR > 1 means greater
odds of association with the exposure and outcome. If the odds are minus (-), then that amount of money must be wagered to win $100. Low odds. People are familiar with decimal odds more than the
others. If the figure displayed is positive (+), the odds are denoting how much money will be won on a $100 wager. But for those wondering, it is a way to describe odds, . So odds of 7-2 mean that
for every $2 invested, the punter gets $7 profit in return. (More on the Free Odds bet.) Manchester United 1/5. Odds-On: A term used for a strong favorite to win, when to have to actually spend more
to win. Odds expressed in terms of money, with $100 being the standard. The minus in front of the New England Patriots odds means they are favourites and the calculation is different. Our strike rate
calculator allows you to enter your strike rate and be told the minimum odds you should be placing a bet at. You Win $100 (20 x 5=100) You also get your $20 bet back. If you are like most men and
women and love playing games at casinos, then you are probably seeking the top online casino bonuses that are real money. So, odds of 9/1 would give an implied probability of 10 per cent. As a
result, it becomes hard to calculate them, and some of us can't understand what +110 means. Some bookmakers offer US odds and don't provide its converted type into decimal. (e.g. It took me about 15
minutes to lose, just playing the pass line bet with odds. It's a question of risk (variance), not expected profit (mean). Mush: A bettor or gambler who is considered to be bad luck. Here we have a
collection of 10 wagers that are all going off at odds shorter than 2 to 1. A $10 bet on +120 odds would pay out $12 in profits. Nickel: Jargon for a $500 bet. You will usually find the minimum stake
to be matched in the range of £5 and the maximum £25. It is also equivalent to fractional odds of 1/2 and decimal odds of 1.5. What Does Implied Odds Mean In Poker Malta. It indicates how much you
will win based on the odds and total wagered. Minimum. Sample size example: We can also use the formulae to calculate the sample size (n) we need, if we pick our • Antigen tests . Minimum deposit
£20. In this case, the -3 point is the spread. The short answer to the question of "what do the plus and minus signs before the odds number mean" is: a minus sign indicates a favourite to win, while
a plus sign indicates an underdog. Alternatively, you could also enter odds of a bet (either in Sport or Horse Racing) and be told how many times you need to find a winner to breakeven and ultimately
to make a profit (that is, we can tell you what your strike rate . Your £20 free bet must be wagered at odds of 2.0 (1/1) or higher before it becomes cash. Here we try to examine the calculation of
each conversion. detect the presence of a specific viral protein in a collected sample. The tote board does not show decimals, therefore, 5/2 odds means that the odds on a horse are 5 divided by 2,
or 2.5-1. Below is an explanation on how to bet on sports by using our betting odds calculator to get all the . The calculate probability from fractional odds, we need to divide the number on the
right-hand side of the fraction by the sum of both numbers. What do decimal odds mean? Place any £20 sportsbook bet at minimum odds of 2.0 (1/1) to receive your free £20 bet. What Does The Spread
Mean in Sports Betting? When it comes to fractional odds, an even bet is expressed by 1/1 ("one to one"). It is feasible that you have been betting for a long time and not really paid attention to
this term. What Does Mean In Gambling Odds million UK residents. So, for a favorite, the odds will begin with a minus (-) sign. The minimum and maximum amounts that may be wagered per bet, as well as
the odds allowed factors, are posted on a small placard at the side of the table near each dealer. Here is how to read odds: "five to one" for 5/1 and "seven to one" for 7/1. Particularly, you're
probably looking for ways to to play all the casino games online you love , like poker, blackjack, slots, roulette and craps, without needing to leave your home. Free bets are often offered along the
lines of "bet £10, get a £10 free bet". How about the 99% confidence interval? A horse priced at 1 . The bonus has a min odds per leg of 1.4. Minimum and maximum odds (usually between -300 and
+10,000) Minimum deposit (usually $10 to $20) Time limits (you may have to meet playthrough within 90 days or fewer) In short - if you see a "minus" symbol before a set of odds, this means that the
team (or person in an individual sport) is a favourite to win. What Does 3 4 5 Odds Mean In Craps, 320 No Deposit Bonus At Crazy Luck Casino March, Soboba Bingo Schedule, All Slott In Falsh By
submitting my registration I accept the terms & conditions of this agreement & certify that I am over the age of 21. Poker. Strictly 18+ begambleaware.org. My point is be prepared to lose a little
money while you get comfortable with the game. What is an antigen test? What does "odds of 1/2 or greater" mean? So far, so good. If you see a "plus" symbol before a set of odds, this means that the
team (or person) is an underdog to win. 9/1 for every £1 you bet, you will win £9. In simple terms longer odds mean that an outcome is less likely to happen, but it's possible to a dive a little
deeper. Usually used with bookies; if you bet "a nickel," that means a $500 . Odds such as +800 stand for the underdogs or highly difficult bets. if you have odds. Odds like -200 refer to betting on
the favorite in the match or a fairly predictable or easy bet. 2. The limits usually apply to all craps bets except the . This is the amount you will receive if you were to bet $100. A negative money
line represents the amount that you would have to bet to win $100 if you were correct. New customers only. Checking the terms of an offer may seem a bit dull, but it's an important task, and you'll
soon get used to picking out the key criteria. The odds ratio is used when one of two possible events or outcomes are measured, and there is a supposed causative factor. Decimal odds are always
presented in decimal format and may have no, one, or two decimal places. American odds, also known as moneyline odds, are primarily used by sites that cater to US sports bettors. Late Money: When a
horse gets a lot of money wagered right before a race. Decimal odds. Players can find many types of poker games at online casinos, and all of them require skill, strategy, and a bit of luck. What
does 9 2 odds mean in horse racing? In sports betting, a negative money line (represented as -200, -300, -400, etc.) Decimal odds are always presented in decimal format and may have no, one, or two
decimal places. This odds expression indicates a bettor's return relative to a base figure of 100 units. The tote board does not show decimals, therefore, 5/2 odds means that the odds on a horse are
5 divided by 2, or 2.5-1. What Does -110 Mean in Sports Betting? Example #1: A horse that wins at 5-1 will return $5.00 for every $1.00 wagered. For example, it can calculate the odds of an event
happening given a particular treatment intervention (1). So, odds of 9/1 would give an implied probability of 10 per cent. If you bet $100 on the entire parlay bet, you'd get a parlay payout of $700
- your original $100 plus . Also known as the line, the spread is often used to even the odds between the 2 unevenly matched teams. Odds Ratio is a measure of the strength of association with an
exposure and an outcome. On the other hand, a gambler backing Manchester United, who have odds of 1/5, will see a payout of just $1 for every $5 bet. That way, you'll know the exact amount you would
win prior to placing any picks at your favorite sportsbook. For example, you may see decimal odds of 2, 2.0, or 2.00. You need to make sure they meet any minimum odds requirement. However, the
football parlay odds for the entire bet would be +600 (6-1) since you have a 50-50 chance of winning each bet (3 bets x 2 = 6). A minus sign indicates a bookie's favorite to win while a plus symbol
indicates an underdog. You have probably heard the terms Long Odds and Short Odds in betting circles, but never paid attention to what they mean. An OR of 0.2 means there is an 80% decrease in the
odds of an outcome with a given exposure. When horse racing odds are shown in the form of 7-2, 5-1, etc, it expresses the amount of profit to the amount invested. Usually when the spread moves before
a game, the moneyline does as well. Your £20 free bet must be wagered at odds of 2.0 (1/1) or higher before it becomes cash. Summary. An OR of 0.2 means there is an 80% decrease in the odds of an
outcome with a given exposure. Examples with -200 и +275: In simple terms longer odds mean that an outcome is less likely to happen, but it's possible to a dive a little deeper. Your £20 free bet
must be wagered one time at odds of 2.0 (1/1) or higher before it becomes cash. This does not look so complicated yet. In terms of craps table limits, a typical craps table might have a $5 minimum,
$1000 maximum, and double odds allowed. This means that for every $5 bet you win, the dealer will pay you $9. Strictly 18+ begambleaware.org. Hey guys,So I have placed a bet on the World Cup which
has odds of 1.45. What do decimal odds mean? OR = 1 means there is no association between exposure and outcome. It is also equivalent to fractional odds of 1/2 and decimal odds of 1.5. OR > 1 means
greater odds of association with the exposure and outcome. Along with the number, they tell you tons of information about the bet and the match. Please note that AmWager does not used fixed-odds. New
customers only. degree of unlikeness. A horse priced at 1 . American odds are probably the easiest to understand as odds are displayed with plus (+) and minus (-) symbols to either indicate the
amount one needs to wager to win $100 or the amount one would win for every $100 staked. Decimal is one of many odds formats used by sports betting companies to present the likelihood of something
happening or not happening. The -145 means you will win £100 for every £145 you stake. American odds are expressed as whole numbers with a minus (-) or plus (+) sign placed in front of the odds. Some
sportsbooks credit you your risk amount for losing bets, and the lesser of the risk or win amount for winning bets. If the odds is positive, it shows how much you could win if you bet 100. -150 means
you must bet $150 to win $100.) Every region of the world has their own interpretation of what is considered legal real money online gambling. Place any £20 sportsbook bet at minimum odds of 2.0 (1/
1) to receive your free £20 bet. The same goes for 7/1 (in decimal odds: 8), which means that you can get three units for each unit you stake. represents the amount of money that you need to bet in
order to win $100 if your bet is correct. Minimum odds of 1.50 (1/2). Example #1: A horse that wins at 5-1 will return $5.00 for every $1.00 wagered. 4/1 becomes A/B. Our odds calculator is perfect
for showing you how to calculate potential winnings for all types of sport wagers. The Minimum means the minimum bet for a Pass Line, Don't Pass, Field, or Big 6 and/or8 bet. OR = 1 means there is no
association between exposure and outcome. For example, if you have odds. This refers to the odds of your chosen selection. For example, a -200 money line means you would win $100 if you bet $200 and
won. Looking at a craps table, the payout odds for landing a 4 are 9:5. Fixed-Odds: A bet where you get the odds advertised by the better operator at the time you place your bet. This means when you
bet $2, the total return if the bet is successful is $9. If you have up to 40 events on your ticket with odds that are at least 1.2 then you can get 225% extra on top of your potential winning. And,
we are 95% confident that the true population mean is 164 ± (1.96) x (4.3) minutes, or between 155.6 and 172.4 minutes of viewing. Odds-On: A term used for a strong favorite to win, when to have to
actually spend more to win. If the odds on a tennis player said +150, that means that for a $100 bet, you would . What does -200 mean in sports betting? The calculate probability from fractional
odds, we need to divide the number on the right-hand side of the fraction by the sum of both numbers. So, if you were to . American odds start with either a positive or negative sign (e.g. If you're
betting the . Yet, the combined odds of all 10 events is a staggering 853.79 to 1! Let's say you're at a craps table with a $10 minimum bet though. Sportsbooks May Place Odds Restrictions and Time
Limits on Rollover. Craps is one of those games that can win a lot or lose a lot. Bournemouth 5/1. You still want to bet on 4. This means that if you bet $10 on the Patriots at -110, you earn $10
towards your rollover requirement if they lose, but only $9.09 if they win. This is the amount you need to bet in order to win $100. Decimal is one of many odds formats used by sports betting
companies to present the likelihood of something happening or not happening. That means that in order to win $100 betting Kansas City's moneyline odds, you would need to risk $130. Win payoffs are
calculated based on a $2.00 wager because at most tracks this is the minimum bet. Here are some negative money line examples: The New England Patriots are-500 against the Buffalo Bills. But I still
have not unlocked my bet so I am trying to work out why its not happy still. Odds like -200 can get confusing though. Again this can easily be converted into smaller or larger size bets. If the odds
are less than 1/2 (1.5 in decimals), your bet will not qualify for a free bet. What Does a Positive Antigen Test Mean? Youmay even have to ask the dealers what table minimums and maximums are. This
does not mean you have to bet $100. For example, you may see decimal odds of 2, 2.0, or 2.00. If a team is a three-point favorite, for example, their moneyline odds would pay more than a team that is
a ten-point favorite. It is feasible that you have been betting for a long time and not really paid attention to this term. Also, the first time I played craps, I had $200 and played on a table with
a $5 minimum. What Does 3 4 5 Odds Mean In Craps, Best Vegas Casino For Blackjack, Poker Call Meaning, Jugar Al Poker Por Dinero Argentina If I can run the free bet through on very short odds, then
that's pretty close to a zero risk investment as the "non-free" bet is then pure profit - and at that point the professional gamblers will be on it like a hawk (at least until they trigger the . The
point spread is replaced by odds. Using the +120 odds, it shows us that a $100 bet on that outcome would pay out $120 in profits. Please note that AmWager does not used fixed-odds. Using the payout
odds of 9:5, you know your minimum bet of $10 equates to two $5 'units'. You Bet $20 on Bournemouth. This makes real money gambling very safe and gives players the best payment options to deposit and
withdraw their money What Does Mean In Gambling Odds with ease and speed. It's best to place your qualifying bets at low odds to minimise your qualifying losses. Let's use the same examples as
before, with the same replacement of numbers for letters, i.e. Summary. San Francisco is a +110 moneyline underdog. Late Money: When a horse gets a lot of money wagered right before a race. Betting
odds allow you to calculate how much money you will win if you make a bet. An easy way to test if your bet will qualify for a free bet is to check the potential returns for a £1 stake. If there's a
positive sign next to the odds, that indicates the amount of money you would win if you bet $100. Win payoffs are calculated based on a $2.00 wager because at most tracks this is the minimum bet. You
have probably heard the terms Long Odds and Short Odds in betting circles, but never paid attention to what they mean. This protein is known as an antigen and is on the surface of the COVID-19 virus.
The odds ratio is a versatile and robust statistic. So, let's break down some examples for you: At each ofthese casinos Table Odds are 3X on 4 and 10, 4X on 5 and 9 and 5X on 6 and 8 ineach of these
casinos. The typical restrictions for this type of free bet, are both the maximum and minimum amount you can bet. Keep reading for more detailed explanations of what the plus and minus signs mean and
how to read them. That means a total payout of $6 for someone betting $5. For customers accessing the services from Great Britain ("GB") MT SecureTrade Limited is licensed and regulated by the
Gambling Commission and holds a Remote Casino Operating License What Does Implied Odds Mean In Poker number 39575. If you had placed the minimum bet of $2 on that . Here is an example of how outsized
both the odds and payouts for a parlay can get. On the other hand, the underdog's odds will begin with a plus (+) sign. Overall odds: +85379.
Tripp Lite Cat5e Patch Panel
Okc Disc Golf Tournaments
Marcus Haislip Tennessee
Arusha Technical College
Weather Underground Old Fort, Nc
Marriott Fountain Hills, Az
Meat Rabbits For Sale Shipping
Weekend Forecast Golf With A Chance Of Drinking Polo
, , | {"url":"https://blog.gourmandisesdecamille.com/obftrxjs/what-does-minimum-odds-mean.html","timestamp":"2024-11-03T23:25:43Z","content_type":"text/html","content_length":"32586","record_id":"<urn:uuid:d5b80c34-34a2-4bbc-9c2a-2a19f4d976d6>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00843.warc.gz"} |
Excitation spectra of many-body systems by linear response: General theory and applications to trapped condensates
We derive a general linear-response many-body theory capable of computing excitation spectra of trapped interacting bosonic systems, e.g., depleted and fragmented Bose-Einstein condensates (BECs). To
obtain the linear-response equations we linearize the multiconfigurational time-dependent Hartree for bosons (MCTDHB) method, which provides a self-consistent description of many-boson systems in
terms of orbitals and a state vector (configurations), and is in principle numerically exact. The derived linear-response many-body theory, which we term LR-MCTDHB, is applicable to systems with
interaction potentials of general form. For the special case of a δ interaction potential we show explicitly that the response matrix has a very appealing bilinear form, composed of separate blocks
of submatrices originating from contributions of the orbitals, the state vector (configurations), and off-diagonal mixing terms. We further give expressions for the response weights and density
response. We introduce the notion of the type of excitations, useful in the study of the physical properties of the equations. From the numerical implementation of the LR-MCTDHB equations and
solution of the underlying eigenvalue problem, we obtain excitations beyond available theories of excitation spectra, such as the Bogoliubov-de Gennes (BdG) equations. The derived theory is first
applied to study BECs in a one-dimensional harmonic potential. The LR-MCTDHB method contains the BdG excitations and, also, predicts a plethora of additional many-body excitations which are out of
the realm of standard linear response. In particular, our theory describes the exact energy of the higher harmonic of the first (dipole) excitation not contained in the BdG theory. We next study a
BEC in a very shallow one-dimensional double-well potential. We find with LR-MCTDHB low-lying excitations which are not accounted for by BdG, even though the BEC has only little fragmentation and,
hence, the BdG theory is expected to be valid. The convergence of the LR-MCTDHB theory is assessed by systematically comparing the excitation spectra computed at several different levels of theory.
ASJC Scopus subject areas
• Atomic and Molecular Physics, and Optics
Dive into the research topics of 'Excitation spectra of many-body systems by linear response: General theory and applications to trapped condensates'. Together they form a unique fingerprint. | {"url":"https://cris.haifa.ac.il/en/publications/excitation-spectra-of-many-body-systems-by-linear-response-genera","timestamp":"2024-11-07T10:35:46Z","content_type":"text/html","content_length":"59474","record_id":"<urn:uuid:52f816c3-14e2-495b-933f-df243d9a399b>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00832.warc.gz"} |
UMass Lowell Center for Atmospheric Research/ Digisonde DPSnavigationUntitled Document
The temporal and spatial variation in ionospheric structures have often frustrated the efforts of communications and radar system operators who base their frequency management decisions on monthly
mean predictions of radio propagation in the high frequency (short-wave) band. The University of Massachusetts Lowell s Center for Atmospheric Research (UMLCAR) has produced a low power miniature
version of its Digisonde^TM sounders, the Digisonde^TM Portable Sounder (DPS), capable of making measurements of the overhead ionosphere and providing real-time on-site processing and analysis to
characterize radio signal propagation to support communications or surveillance operations.
The system compensates for a low power transmitter (300 W vs. 10 kW for previous systems) by employing intrapulse phase coding, digital pulse compression and Doppler integration. The data
acquisition, control, signal processing, display, storage and automatic data analysis functions have been condensed into a single multi-tasking, multiple processor computer system, while the analog
circuitry has been condensed and simplified by the use of reduced transmitter power, wide bandwidth devices, and commercially available PC expansion boards. The DPS is shown in the composite Figure
1-1 (with the integrated transceiver package shown in Figure 1-1A, and one of the four crossed magnetic dipole receive antennas in Figure 1-1B).
│ │ │
│ │ │
│ │ Figure 1-1B Magnetic Loop Turnstile Antenna │
│ Figure 1-1A Digisonde^TM Portable Sounder │ │
│ │ │
Noteworthy new technology involved in this system includes:
□ Electronically switched active crossed loop receiving antenna
□ Commercially sourced 10 MIPS TMS 320C25 digital signal processor (DSP)
□ 4 million sample DSP buffer memory
□ 71 to 110 MHz digital synthesizer on a 4"x5" card
□ Compact DC-DC converters allowing operation on one battery
□ Four-channel high speed (1 million 12-bit samples/sec) digitizer board
□ A 160 Mbits/sec parallel data bus between the digitizer and the DSP
□ A proprietary multi-tasking operating system for remote interaction via a modem connection without suspending system operation
□ Direct digital synthesized coherent oscillators
□ 21 dB signal processing gain from phase coded pulse compression
□ 21 dB additional signal processing gain from coherent Doppler integration
□ Automatic ionospheric layer identification and parameter scaling by an embedded expert system
The availability of a small low power ionosonde that could be operated on-site wherever a high frequency (HF) radio or radar was in use, would greatly increase the value of the information produced
by the instrument since it would become available to the end user immediately.
One of the chief applications for the real-time data currently provided by digital ionospheric sounders is to manage the operation of HF radio channels and networks. Since many HF radios are operated
at remote locations (i.e., aircraft, boats, land vehicles of all sorts, and remote sites where telephone service is unreliable) the major obstacle to making practical use of the ionospheric sounder
data and associated computed propagation information is the dissemination of this data to a data processing and analysis site. Since HF is often used where no alternative communications link exists,
or is held in reserve in case primary communication is lost, it is not practical to assume that a communications link exists to make centrally tabulated real-time ionospheric data available to the
user. Furthermore, local measurements are superior to measurements at sites of opportunity in the user s general region of the globe since extreme variations in ionospheric properties are possible
even over short distances, especially at high latitudes [Buchau et al., 1985; Buchau and Reinisch, 1991] or near the sunset or sunrise terminator.
However, for most applications, the size, weight, power consumption and cost of a conventional ionospheric sounder have made local measurements impractical. Therefore the availability of a small, low
cost sounder is a major improvement in the usefulness of ionospheric sounder data. Shrinking the conventional 1 to 50 kW pulse sounders to a portable, battery operated 100 to 500 W system requires
the application of substantial signal processing gain to compensate for the 20 dB reduction in transmitter power. Furthermore, a compact portable package requires the use of highly integrated
control, data acquisition, timing, data processing, display and storage hardware.
The objective of the DPS development project was to develop a small vertical incidence (i.e., monostatic) ionospheric sounder which could automatically collect and analyze ionospheric measurements at
remote operating sites for the purpose of selecting optimum operating frequencies for obliquely propagated communication or radar propagation paths. Intermediate objectives assumed to be necessary to
produce such a capability were the development of optimally efficient waveforms and of functionally dense signal generation, processing and ancillary circuitry. Since the need for an embedded general
purpose computer was a given imperative, real-time control software was developed to incorporate as many functions as was feasible into this computer rather than having to provide additional
circuitry and components to perform these functions. The DPS duplicates all of the functions of its predecessor the Digisonde^TM 256 [Bibl et al., 1981] and [Reinisch, 1987] in a much smaller, low
power package. These include the simultaneous measurement of seven observable parameters of reflected (or in oblique incidence, refracted) signals received from the ionosphere:
1) Frequency
2) Range (or height for vertical incidence measurements)
3) Amplitude
4) Phase
5) Doppler Shift and Spread
6) Angle of Arrival
7) Wave Polarization
Because the physical parameters of the ionospheric plasma affect the way radio waves reflect from or pass through the ionosphere, it is possible by measuring all of these observable parameters at a
number of discrete heights and discrete frequencies to map out and characterize the structure of the plasma in the ionosphere. Both the height and frequency dimensions of this measurement require
hundreds of individual measurements to approximate the underlying continuous functions. The resulting measurement is called an ionogram and comprises a seven dimensional measurement of signal
amplitude vs. frequency and vs. height as shown in Figure 1-2 (due to the limitations of current software only five may be displayed at a time). Figure 1-2 is a five-dimensional display, with
sounding frequency as the abscissa, virtual reflection height (simple conversion of time delay to range assuming propagation at 3x10^8 m/sec) as the ordinate, signal amplitude as the spot (or pixel)
intensity, Doppler shift as the color shade and wave polarization as the color group (the blue-green-grey scale or "cool" colors showing extraordinary polarization, the red-yellow-white scale or
"hot" colors showing ordinary polarization).
Figure 1-2 Five-Dimensional Ionogram
Another objective of the DPS development was to store the data created by the system in an easily accessible format (e.g., DOS formatted personal computer files), while maintaining compatibility with
the existing base of Digisonde^TM sounder analysis software in use at the UMLCAR and at over 40 research institutes around the world. This objective often competed with the additional objective of
providing an easily accessible and simply understood standard data format to facilitate the development of novel post-processing analysis and display programs.
Ionospheric Propagation of Electromagnetic Waves back to top
An ionospheric sounder uses basic radar techniques to detect the electron density (equal to the ion density since the bulk plasma is neutral) of ionospheric plasma as a function of height. The
ionospheric plasma is created by energy from the sun transferred by particles in the solar wind as well as direct radiation (especially ultra-violet and x-rays). Each component of the solar emissions
tends to be deposited at a particular altitude or range of altitudes and therefore creates a horizontally stratified medium where each layer has a peak density and to some degree, a definable width,
or profile. The shape of the ionized layer is often referred to as a Chapman function [Davies, 1989] which is a roughly parabolic shape somewhat elongated on the top side. The peaks of these layers
usually form between 70 and 300 km altitude and are identified by the letters D, E, F1 and F2, in order of their altitude.
By scanning the transmitted frequency from 1 MHz to as high as 40 MHz and measuring the time delay of any echoes (i.e., apparent or virtual height of the reflecting medium) a vertically transmitting
sounder can provide a profile of electron density vs. height. This is possible because the relative refractive index of the ionospheric plasma is dependent on the density of the free electrons (N
[e]), as shown in Equation 1-1 (neglecting the geomagnetic field):
m^2(h)= 1 k (N[e]/f^2) (1 1)
where k = 80.5, N[e] is electrons/m^3, and f is in Hz [Davies, 1989; Chen, 1987].
The behavior of the plasma changes significantly in the presence of the Earth s magnetic field. An exhaustive derivation of m [Davies, 1989] results in the Appleton Equation for the refractive index,
which is one of the fundamental equations used in the field of ionospheric propagation. This equation clearly shows that there are two values for refractive index, resulting in the splitting of a
linearly polarized wave incident upon the ionosphere, into two components, known as the ordinary and extraordinary waves. These propagate with a different wave velocity and therefore appear as two
distinct echoes. They also exhibit two distinct polarizations, approximately right hand circular and left hand circular, which aid in distinguishing the two waves.
When the transmitted frequency is sufficient to drive the plasma at its resonant frequency there is a total internal reflection. The plasma resonance frequency (f[p]) is defined by several constants,
e the charge of an electron, m the mass of an electron, e[o] the permittivity of free space, but only one variable, N[e] electron density in electrons/m^3 [Chen, 1987]:
fp^2 = (N[e] e^2/4pe[o]m) = kN[e] (1-2)
A typical number for the F-region (200 to 400 km altitude) is 10^12 electrons/m^3, so the plasma resonance frequency would be 9 MHz. The value of m in Equation 1 1 approaches 0 as the operating
frequency, f, approaches the plasma frequency. The group velocity of a propagating wave is proportional to m, so m = 0 implies that the wave slows down to zero which is obviously required at some
point in the process of reflection since the propagation velocity reverses.
The total internal reflection from the ionosphere is similar to reflection of radio frequency (RF) energy from a metal surface in that the re-radiation of the incident energy is caused by the free
electrons in the medium. In both cases the wave penetrates to some depth. In a plasma the skin depth (the depth into the medium at which the electric field is 36.8% of its incident amplitude) is
defined by:
d = ---------- (1-3)
where l[0 ]is the free space wavelength.
The major difference between ionospheric reflection and reflection from a metallic surface is that the latter has a uniform electron density while the ionospheric density increases roughly
parabolically with altitude, with densities starting at essentially zero at stratospheric altitudes and rising to a peak at about 200 to 400 km. In the case of a metal there is no region where the
wave propagates below the resonance frequency, while in the ionosphere the refractive index and therefore the wave velocity change with altitude until the plasma resonance frequency is reached. Of
course if the RF frequency is above the maximum plasma resonance frequency the wave is never reflected and can penetrate the ionosphere and propagate into outer space. Otherwise what happens on a
microscopic scale at the surface of a metal and on a macroscopic scale at the plasma resonance in the ionosphere is very similar in that energy is re-radiated by electrons which are responding to the
incident electric field.
Coherent Integration back to top
During the 1960 s and 1970 s several variations in sounding techniques started moving significantly beyond the basic pulse techniques developed in the 1930 s. First was the coherent integration of
several pulses transmitted at the same frequency. Two signals are coherent if, having a phase and amplitude, they are able to be added together (e.g., one radar pulse echo received from a target
added to the next pulse echo received from the same target, thousandths of a second later) in such a way that the sum may be zero (if the two signals are exactly out of phase with each other) or
double the amplitude (if they are exactly in phase). Coherent integration of N signals can provide a factor of N improvement in power. This technique was first used in the Digisonde^TM 128 [Bibl and
Reinisch, 1975].
In ionospheric sounding, the motion of the ionosphere often makes it impossible to integrate by simple coherent summation for longer that a fraction of a second, although it is not rare to receive
coherent echoes for tens of seconds. However, with the application of spectral integration (which is a byproduct of the Fourier transform used to create a Doppler spectrum) it is possible to
coherently integrate pulse echoes for tens of seconds under nearly all ionospheric conditions [Bibl and Reinisch, 1978]. The integration may progress for as long a time as the rate of change of phase
remains constant (i.e., there is a constant Doppler shift, Df). The Digisonde^TM 128PS, and all subsequent versions perform this spectral integration.
Additional detail on this topic is contained in Chapter 2 in this section.
Coded Pulses to Facilitate Pulse Compression Radar Techniques back to top
A third general technique to improve on the simple pulse sounder is to stretch out the pulse by a factor of N, thus increasing the duty cycle so the pulse contains more energy without requiring a
higher power transmitter (power x time = energy). However, to maintain the higher range resolution of the simple short pulse the pulse can be bi-phase, or phase reversal modulated with a phase code
to enable the receiver to create a synthetic pulse with the original (i.e., that of the short pulse) range resolution. A network of sounders using a 13-bit Barker Code were operated by the U.S. Navy
in the 1960 s.
The critical factor in the use of pulse compression waveforms for any radar type measurement is the correlation properties of the internal phase code. Phase codes proposed and experimented with
included the Barker Code [Barker, 1953], Huffman Sequences [Huffman 1962], Convoluted Codes [Coll, 1961], Maximal Length Sequence Shift Register Codes (M-codes) [Sarwate and Pursley, 1980], or Golay
s Complementary Sequences [Golay, 1961], which have been implemented in the VHF mesospheric sounding radar at Ohio State University [Schmidt et al., 1979] and in the DPS. The internal phase code
alternative has just recently become economically feasible with the availability of very fast microprocessor and signal processor IC s. Barker Coded pulses have been implemented in several
ionospheric sounders to date, but until the DPS was developed there have been no other successful implementations of Complementary Series phase codes in ionospheric sounders.
The European Incoherent Scatter radar in Tromso, Norway (VanEiken, 1991 and 1993) and an over-the-horizon (OTH) HF radar used the Complementary Series codes. However most major radar systems
including all currently active OTH radars opted for the FM/CW chirp technique, due to its resistance to Doppler induced leakage and its compatibility with analog pulse compression processing
techniques. Basically, the chirp waveform avoids the need for extremely fast digital processing capabilities, since only the final stage is performed digitally, while the pulse compression is best
performed entirely digitally. Even at the modest bandwidths used for ionospheric sounding, this digital capability was until recently, much more expensive and cumbersome than the special synthesizers
required for chirpsounding.
Another new development in the 1970 s was the coherent multiple receiver array [Bibl and Reinisch, 1978] which allows angle of arrival (incidence angle) to be deduced from phase differences between
antennas by standard interferometer techniques. Given a known operating frequency, and known antenna spacing, by measuring the phase or phase difference on a number of antennas, the angle of arrival
of a plane wave can be deduced. This interferometry solution is invalid, however, if there are multiple sources contributing to the received signal (i.e., the received wave therefore does not have a
planar phase front). This problem can be overcome in over 90% of the cases as was first shown with the Digisonde^TM 256 [Reinisch et al., 1987] by first isolating or discriminating the multiple
sources in range, then in the Doppler domain (i.e., isolating a plane wavefront) before applying the interferometry relationships.
Except for the FM/CW chirpsounder which operates well on transmitter power levels of 10 to 100 W (peak power) the above techniques and cited references typically employ a 2 to 30 kW peak power pulse
transmitter. This power is needed to get sufficient signal strength to overcome an atmospheric noise environment which is typically 20 to 50 dB (CCIR Noise Tables) above thermal noise (defined as
kTB, the theoretical minimum noise due to thermal motion, where k = Boltzman s constant, T = temperature in ° K, and B = system bandwidth in Hz). More importantly, however, since ionogram
measurements require scanning of the entire propagating band of frequencies in the 0.5 to 20 MHz RF band (up to 45 MHz for oblique measurements), the sounder receiver will encounter broadcast
stations, ground-to-air communications channels, HF radars, ship-to-shore radio channels and several very active radio amateur bands which can add as much as 60 dB more background interference.
Therefore, the sounder signal must be strong enough to be detectable in the presence of these large interfering signals.
To make matters worse, a pulse sounder signal must have a broad bandwidth to provide the capability to accurately measure the reflection height, therefore the receiver must have a wide bandwidth,
which means more unwanted noise is received along with the signal. The noise is distributed quite evenly over bandwidth (i.e., white), while interfering signals occur almost randomly (except for
predictably larger probabilities in the broadcast bands and amateur radio bands) over the bandwidth. Thus a wider-bandwidth receiver receives proportionally more uniformly distributed noise and the
probability of receiving a strong interfering signal also goes up proportionally with increased bandwidth.
The DPS transmits only 300 W of pulsed RF power but compensates for this low power by digital pulse compression and coherent spectral (Doppler) integration. The two techniques together provide about
30 dB of signal processing gain (up to 42 dB for the bi-static oblique waveforms) thus for vertical incidence measurements the system performs equivalently with a simple pulse sounder of 1000 times
greater power (i.e., 300 kW).
Additional detail on this topic is contained in Chapter 2 in this section.
Current Applications of Ionospheric Sounding back to top
Current applications of ionospheric sounders fall into two categories:
a. Support of operational systems, including shortwave radio communications and OTH radar systems. This support can be in the form of predictions of propagating frequencies at given times and
locations in the future (e.g., over the ensuing month) or the provision of real-time updates (updated as frequently as every 15 minutes) to detect current conditions such that system operating
parameters can be optimized.
b. Scientific research to enable better prediction of ionospheric conditions and to understand the plasma physics of the solar-terrestrial interaction of the Earth s atmosphere and magnetic field
with the solar wind.
There has been considerable effort in producing global models of ionospheric densities, temperature, chemical constitution, etc, such that a few sounder measurements could calibrate the models and
improve the reliability of global predictions. It has been shown that if measurements are made within a few hundred kilometers of each other, the correlation of the measured parameters is very high
[Rush, 1978]. Therefore a network of sounders spaced by less than 500 km can provide reliable estimates of the ionosphere over a 250 km radius around them.
The areas of research pursued by users of the more sophisticated features of the Digisonde^TM sounders include polar cap plasma drift, auroral phenomena, equatorial spread-F and plasma irregularity
phenomena, and sporadic E-layer composition [Buchau et al., 1985; Reinisch 1987; and Buchau and Reinisch 1991]. There may be some driving technological needs (e.g., commercial or military uses) in
some of these efforts, but many are simply basic research efforts aimed at better understanding the manifestations of plasma physics provided by nature.
Requirements for a Small Flexible Sounding System back to top
The detailed design and synthesis of a RF measurement system (or any electronic system) must be based on several criteria:
a. The performance requirements necessary to provide the needed functions, in this case scientific measurements of electron densities and motions in the ionosphere.
b. The availability of technology to implement such a capability.
c. The cost of purchasing or developing such technology.
d. The risk involved in depending on certain technologies, especially if some of the technology needs to be developed.
e. The capabilities of the intended user of the system, and its expected willingness to learn to use and maintain it; i.e., how complicated can the operation be before the user will give up and
not try to learn it.
The question of what technology can be brought to bear on the realization of a new ionospheric sounder was answered in a survey of existing technology in 1989, when the portable sounder development
started in earnest. This survey showed the following available components, which showed promise in creating a smaller, less costly, more powerful instrument. Many of these components were not
available when the last generation of Digisondes^TM (circa 1980) was being developed:
Solid-state 300 W MOSFET RF power transistors
High-speed high precision (12, 14 and 16 bit) analog to digital (A D) converters
High-speed high precision (12 and 16 bit) digital to analog (D A) converters
Single chip Direct Digital Synthesizers (DDS)
Wideband (up to 200 MHz) solid state op amps for linear feedback amplifiers
Wideband (4 octaves, 2 32 MHz) 90° phase shifters
Proven Digisonde^TM 256 measurement techniques
Very fast programmable DSP (RISC) IC s
Fast, single board, microcomputer systems and supporting programming languages
Many of these components are inexpensive and well developed because they feed a mass market industry. The MOSFET transistors are used in Nuclear Magnetic Resonance medical imaging systems to provide
the RF power to excite the resonances. The high speed D A converters are used in high resolution graphic video display systems such as those used for high performance workstations. The DDS chips are
used in cellular telephone technology, in which the chip manufacturer, Qualcomm, is an industry leader. The DSP chips are widely used in speech processing, voice recognition, image processing
(including medical instrumentation). And of course, fast microcomputer boards are used by many small systems integrators which end up in a huge array of end user applications ranging from cash
registers to scientific computing to industrial process controllers.
The performance parameters were well known at the beginning of the DPS development, since several models of ionospheric pulse sounders had preceded it. The frequency range of 1 to 20 MHz for vertical
sounding was an accepted standard, and 2 to 30 MHz was accepted as a reasonable range for oblique incidence measurements. It was well known that radio waves of greater than 30 MHz often do propagate
via skywave paths, however, most systems relying on skywave propagation don t support these frequencies, so interest in this frequency band would only be limited to scientific investigations. A
required power level in the 5 to 10 kW range for pulse transmitters had provided good results in the past. The measurement objectives were to simultaneously measure all seven observable parameters
outlined at Paragraph 107 above in order to characterize the following physical features:
The height profile of electron density vs. altitude
Position and spatial extent of irregularity structures, gradients and waves
Motion vectors of structures and waves
As mentioned in the section above dealing with Current Applications of Ionospheric Sounding (Paragraph 127 et seq. above), the accurate measurement of all of the parameters, except frequency (it
being precisely set by the system and need not be measured) depends heavily on the signal to noise ratio of the received signal. Therefore vertical incidence ionospheric sounders capable of acquiring
high quality scientific data have historically utilized powerful pulse transmitters in the 2 to 30 kW range. The necessity for an extremely good signal to noise ratio is demanded by the sensitivity
of the phase measurements to the random noise component added to the signal level. For instance, to measure phase to 1 degree accuracy requires a signal to noise ratio better than 40 dB (assuming a
Gaussian noise distribution which is actually a best case), and measurement of amplitude to 10% accuracy requires over 20 dB signal to noise ratio. Of course, is it desirable that these measurements
be immune to degradation from noise and interference and maintain their high quality over a large frequency band. This requires that at the lower end of the HF band the system s design has to
overcome absorption, noise and interference, and poor antenna performance and still provide at least a 20 to 40 dB signal to noise ratio.
METHODOLOGY, THEORETICAL BASIS AND IMPLEMENTATION back to top
The VIS/DPS borrows several of the well proven measurement techniques used by the Digisonde^TM 256 sounder described in [Bibl, et al, 1981; Reinisch et al., 1989] and [Reinisch, 1987], which has been
produced for the past 12 years by the UMLCAR. The addition of digital pulse compression in the DPS makes the use of low power feasible, the implementation in software of processes that were
previously implemented in hardware results in a much smaller physical package, and the high level language control software and standard PC-DOS (i.e., IBM/PC) data file formats provide a new level of
flexibility in system operation and data processing.
A technical description of the DPS (sounder unit and receive antennas sub-systems) are contained in Section 2 of this manual.
Coherent Phase Modulation and Pulse Compression back to top
The DPS is able to be miniaturized by lengthening the transmitted pulse beyond the pulse width required to achieve the desired range resolution where the radar range resolution is defined as,
DR=c/2b where b is the system bandwidth, or (1-4)
DR=cT/2 for a simple rectangular pulse
waveform, with T being the width
of the rectangular pulse
The longer pulse allows a small low voltage solid state amplifier to transmit an amount of energy equal to that transmitted by a high power pulse transmitter (energy = power x time, and power = V^2/
R) without having to provide components to handle the high voltages required for tens of kilowatt power levels. The time resolution of the short pulse is provided by intrapulse phase modulation using
programmable phase codes (user selectable and firmware expandable), the Complementary Codes, and M-codes are standard. The use of a Complementary Code pulse compression technique is described in this
chapter, which shows that at 300 W of transmitter power the expected measurement quality is the same as that of a conventional sounder of about 500 kW peak pulse power.
The transmitted spread spectrum signal s(t) is a biphase (180° phase reversal) modulated pulse. As illustrated in Figure 1 3, bi-phase modulation is a linear multiplication of the binary spreading
code p(t) (a.k.a. a chipping sequence, where each code bit is a "chip") with a carrier signal sin(2pf[0]t) or in complex form, exp[j2pf[0]t], to create a transmitted signal,
s(t)=p(t)exp[j2pf[0]t] (1-5)
Figure 1-3 Generation of a Bi-phase Modulated Spread Spectrum Waveform
Notation throughout this chapter will use s(t) as the transmitted signal, r(t) the received signal and p(t) as the chip sequence. Functions r[1](t) and r[2](t) will be developed to describe the
signal after various stages of processing in the receiver.
The term chip is used rather than bit because for spread spectrum communications many chips are required to transmit one bit of message information, so a distinct term had to be developed. Figure 1-4
on the following page depicts the modulation of a sinusoidal RF carrier signal by a binary code (notice that the code is a zero mean signal, i.e., centred around 0 volts amplitude). Since the mixer
in Figure 1-3 can be thought of as a mathematical multiplier, the code creates a 180^o (p radians) phase shift in the sinusoidal carrier whenever p(t) is negative, since sin(wt) = sin(wt+p).
The binary spreading code is identical to a stream of data bits except that it is designed such that it forms a pattern with uniquely desirable autocorrelation function characteristics as described
later in this chapter. The 16-bit Complementary Code pair used in the DPS is 1-1-0-1-1-1-1-0-1-0-0-0-1-0-1-1 modulated onto the odd-numbered pulses and 1-1-0-1-1-1-1-0-0-1-1-1-0-1-0-0 modulated onto
the even-numbered pulses. This pattern of phase modulation chips is such that the frequency spectrum of such a signal (as shown in Figure 1-4) is uniformly spread over the signal bandwidth, thus the
term "spread spectrum". In fact, it is interesting to note that the frequency spectrum content of the spread spectrum signal used by the DPS is identical to that of the higher peak power, simple
short pulse used by the Digisonde^TM 256, even though the physical pulse is 8 times longer. Since they have the same bandwidth, Equation 1 4 would suggest that they have the same range resolution. It
will be shown later in this chapter, that the ability of the Digisonde^TM 256 and the DPS to determine range (i.e., time delay), phase, Doppler shift and angle of arrival is also identical between
the two systems, even though the transmitted waveforms appear to be vastly different.
Figure 1 4 Spectral Content of a Spread-Spectrum Waveform
Since the transmitted signal would obscure the detection of the much weaker echo in a monostatic system the transmitted pulse must be turned off before the first E-region echoes arrive at the
receiver which, as shown in Figure 1-5, is about T[E] = 600 m sec after the beginning of the pulse. Also, since the receiver is saturated when the transmitter pulse comes on again, the pulse
repetition frequency is limited by the longest time delay (listening interval) of interest, which is at least 5 msec, corresponding to reflections from 750 km altitude. To meet these constraints, a
533 m sec pulse made up of eight 66.67 m sec phase code chips (15 000 chips/sec) is selected which allows detection of ionospheric echoes starting at 80 km altitude. To avoid excessive range
ambiguity, a highest pulse repetition frequency of 200 pps is chosen, which allows reception of the entire pulse from a virtual height of 670 km (the pulse itself is 80 km long) altitude before the
next pulse is transmitted. This timing captures all but the highest multihop F-region echoes which are of little interest. Under conditions where higher unambiguous ranges, and therefore longer
receiver listening intervals, are desired 100 pps or 50 pps can be selected under software control.
Figure 1-5 Natural Timing Limitations for Monostatic Vertical Incidence Sounding
The key to the pulse compression technique lies in the selection of a spreading function, p(t), which possesses an autocorrelation function appropriate for the application. The ideal autocorrelation
function for any remote sensing application is a Dirac delta function (or instantaneous impulse, d (t) since this would provide perfect range accuracy and infinite resolution. However, since the
Dirac delta function has infinite instantaneous power and infinite bandwidth, the engineering tradeoffs in the design of any remote sensing system mainly involve how far one can afford to deviate
from this ideal (or how much one can afford to spend in more closely approximating this ideal) and still achieve the accuracy and resolution required. More to the point, for a discussion of a
discrete time digital system such as the DPS, the ideal signal is a complex unit impulse function, with the phase of the impulse conveying the RF phase of the received signal. The many different
pulse compression codes all represent some compromise in achieving this ideal, although each code has its own advantages, limitations, and trade-offs. The autocorrelation function as applied to code
compression in the VIS/DPS is defined as:
R(k)=S p(n) p(n+k) (1-6)
Therefore the ideal as described above is R(k) = d(k). (Several examples of autocorrelation functions of the codes described in this Section can be seen in Figures 1-9 through 1-13.)
For ionospheric applications, the received spread-spectrum coded signal, r(t), may be a superposition of several multipath echoes (i.e., echoes which have traveled over various propagation paths
between the transmitter and receiver) reflected at various ranges from various irregular features in the ionosphere. The algorithm used to perform the code compression operates on this received
multipath signal, r(t), which is an attenuated and time delayed (possibly multiple time delays) replica of the transmitted signal s(t) (from Equation 1 5), which can be represented as:
r(t)=S a[i] s(t-t[i]) or (1-7)
r(t)=S a[i] p(t-t[i])exp[j2pf[0]t - fi]
where S shows that the P multipath signals sum linearly at the receive antenna, a[i] is the amplitude of the ith multipath component of the signal, and t[i] is the propagation delay associated with
multipath i. The carrier phase f[i] of each multipath could be expressed in terms of the carrier frequency and the time delay t [i] ; however, since the multiple carriers (from the various multipath
components) cannot be resolved, while the delays in the complex code modulation envelope can be, a separate term, f [i], is used. Next, when the carrier is stripped off of the signal, this RF phase
term will be represented by a complex amplitude coefficient a[i] rather than a[i].
Figure 1-6 Conversion to Baseband by Undersampling
By down-converting to a baseband signal (a digital technique is shown in Figure 1-6), the carrier signal can be stripped away, leaving only the superposed code envelopes delayed by P multiple
propagation paths. Figure 1-6 presents one way to strip the carrier off a phase modulated signal. This is the screen display on a digital storage oscilloscope looking at the RF output from the DPS
system operating at 3.5 MHz. Notice that the horizontal scan spans 2 msec, which if the oscilloscope was capable of presenting more than 14 000 resolvable points, would display 7 000 cycles of RF.
The sample clock in the digital storage scope is not synchronized to the DPS, however, the digital sampling remains coherent with the RF for periods of several milliseconds. The analog signal is
digitized at a rate such that each sample is made an integer number of cycles apart (i.e., at the same phase point) and therefore looks like a DC level until the phase modulation creates a sudden
shift in the sampled phase point. Therefore the 180º phase reversals made on the RF carrier show up as DC level shifts, replicating the original modulating code exactly. The more hardware intensive
method of quadrature demodulation with hardware components (mixers, power splitters and phase shifters) can be found in any communications systems textbook, such as [Peebles, 1979]. After removing
the carrier, the modified r(t), now represented by r[1](t) becomes:
r[1](t)=S a[i] p(t-t[i]) (1-8)
where the carrier phase of each of the multipath components is now represented by a complex amplitude a i which carries along the RF phase term, originally defined by f [i] in Equation 1 7, for each
multipath. Since the pulse compression is a linear process and contributes no phase shift, the real and imaginary (i.e., in-phase and quadrature) components of this signal can be pulse compressed
independently by cross-correlating them with the known spreading code p(t). The complex components can be processed separately because the pulse compression (Equation 1 9B) is linear and the code
function, p(n), is all real. Therefore the phase of the cross-correlation function will be the same as the phase of r[1](t).
The classical derivation of matched filter theory [e.g., Thomas, 1964] creates a matched filter by first reversing the time axis of the function p(t) to create a matched filter impulse response h(t)
= p( t). Implementing the pulse compression as a linear system block (i.e., a "black box" with impulse response h(t)) will again reverse the time axis of the impulse response function by convolving h
(t) with the input signal. If neither reversal is performed (they effectively cancel each other) the process may be considered to be a cross-correlation of the received signal, r(t) with the known
code function, p(t). Either way, the received signal, r[2](n) after matched filter processing becomes:
r[2](n)=r[1](n)*h(n)=r[1](n)*p(-n) (1-9A)
or by substituting Equation 1 8 and writing out the discrete convolution, we obtain the cross-correlation approach,
[P M P]
r[2](n)=S a[i ]S p(k-ti)p(k-n)=S Ma[i ]d(n-t[i]) (1-9B)
^i=1 ^k=1 ^i=1
where n is the time domain index (as in the sample number, n, which occurs at time t = nT where T is the sampling interval), P is the number of multipaths, k is the auxiliary index used to perform
the convolution, and M is the number of phase code chips. The last expression in Equation 1 9B, the d(n), is only true if the autocorrelation function of the selected code, p(t), is an ideal unit
impulse or "thumbtack" function (i.e., it has a value of M at correlation lag zero, while it has a value of zero for all other correlation lags). So, if the selected code has this property, then the
function r[2](n), in Equation 1 9 is the impulse response of the propagation path, which has a value a[i], (the complex amplitude of multipath signal i) at each time n = t [i] (the propagation delay
attributable to multipath I).
Figure 1-7 Illustration of Complementary Code Pulse Compression
Figure 1-7 illustrates the unique implementation of Equation 1 9 employed for compression of Complementary Sequence waveforms. A 4-bit code is used in this figure for ease of illustration but
arbitrarily long sequences can be synthesized (the DPS s Complementary Code is 8-chips long). It is necessary to transmit two encoded pulses sequentially, since the Complementary Codes exist in
pairs, and only the pairs together have the desired autocorrelation properties. Equation 1 8 (the received signal without its sinusoidal carrier) is represented by the input signal shown in the upper
left of Figure 1-7. The time delay shifts (indexed by n in Equation 1 9 are illustrated by shifting the input signal by one sample period at a time into the matched filter. The convolution shifts
(indexed by k in Equation 1 9 sequence through a multiply-and-accumulate operation with the four ± 1 tap coefficients. The accumulated value becomes the output function r[2](n) for the current value
of n. The two resulting expressions for Equation 1 9 (an r[2](n) expression for each of the two Complementary Codes) are shown on the right with the amplitude M=4 clearly expressed. The non-ideal
approximation of a delta function, d(n ti), is apparent from the spurious a and a amplitudes. However, by summing the two r[2](n) expressions resulting from the two Complementary Codes, the spurious
terms are cancelled, leaving a perfect delta function of amplitude 2M.
The amplitude coefficient M in Equation 1 9 is tremendously significant! It is what makes spread-spectrum techniques practical and useful. The M means that a signal received at a level of 1 mv would
result in a compressed pulse of amplitude M mv, a gain of 20 log[10](M) dB. Unfortunately, the benefits of all of that gain are not actually realized because the RMS amplitude of the random noise
(which is incoherently summed by Equation 1 9B) which is received with the signal goes up by a factor of \/M. However, this still represents a power gain (since power = amplitude^2) equal to M, or
10log[10](M) dB. The \/M coefficient for the incoherent summation of multiple independent noise samples is developed more thoroughly in the following section on Coherent Spectral Integration, but the
factor of M-increase for the coherent summation of the signal is clearly illustrated in Figure 1-7.
The next concern is that the pulse compression process is still valid when multiple signals are superimposed on each other as occurs when multipath echoes are received. It seems likely that multiple
overlapping signals would be resolved since Equation 1 9 and the free space propagation phenomenon are linear processes, so the output of the process for multiple inputs should be the same as the sum
of the outputs for each input signal treated independently. This linearity property is illustrated in Figure 1-8. Two 4-chip input signals, one three times the amplitude of the other, are overlapped
by two chips at the upper left of the illustration. After pulse compression, as seen in the lower right, the two resolved components, still display a 3:1 amplitude ratio and are separated by two chip
Figure 1-8 Resolution of Overlapping Complementary Coded Pulses
The phase of the received signal is detected by quadrature sampling; but, how is the complex quantity, a i, or ai exp[f[i]], related to the RF phase (f[i]) of each individual multipath component? It
can be shown that this phase represents the phase of the original RF signal components exactly. As shown in Equations 1 10 and 1 11, the down-converting (frequency translation) of r(t) by an
oscillator, exp[j2pf[0]t] results in:
[P P]
^i=0 i=0
r[1](t)=Sa[i]p(t-t[i]) where a[i]=a[i]exp[jf[i]] is a complex amplitude (1-11)
This signal maintains the parameter f[i] which is the original phase of each RF multipath component. Note that the oscillator is defined as having zero phase (exp[j2pf[0]t]).
Alternative Pulse Compression Codes back to top
Due to many possible mechanisms the pulse compression process will have imperfections, which may cause energy reflected from any given height to leak or spill into other heights to some degree. This
leakage is the result of channel induced Doppler, mathematical imperfection of the phase code (except in the Complementary Codes which are mathematically perfect) and/or imperfection in the phase and
amplitude response of the transmitter or receiver. Several codes were simulated and analyzed for leakage from one height to another and for tolerance to signal distortion caused by band-limiting
filters. All of the pulse compression algorithms used are cross-correlations of the received signal with a replica of the unit amplitude code known to have been sent. Therefore, since Equation 1 9B
represents a "cross-correlation" (the unit amplitude function p(t) is cross-correlated with the complex amplitude weighted version) of p(k) with itself, it is the leakage properties of the
autocorrelation functions which are of interest.
The autocorrelation functions of several codes were computed either on a PC or a VAX computer for several different codes and are shown in the following figures:
a. Complementary Series (Figure 1-9)
b. Periodic M-codes (Figure 1-10)
c. Non-periodic M-codes (Figure 1-11)
d. Barker Codes (Figure 1-12)
e. Kasami Sequence Codes (Figure 1-13)
Figure 1-9 Autocorrelation Function of the Complementary Series
Figure 1-10 Autocorrelation Function of a Periodic Maximal Length Sequence
Figure 1-11 Autocorrelation Function of a Non-Periodic Maximal Length Sequence
Figure 1-12 Autocorrelation Function of the Barker Code
Figure 1-13 Autocorrelation Function of the Kasami Sequence
Since the Complementary Series pairs do not leak energy into any other height bin this phase code scheme seemed optimum and was chosen for the DPS s vertical incidence measurement mode in order to
provide the maximum possible dynamic range in the measurement. If there is too much leakage (for instance at a 20 dB level) then stronger echoes would create a "leakage noise floor" in which weaker
echoes could not be detectable. The autocorrelation function of the Maximal Length Sequence (M-code) is particularly good since for M = 127, the leakage level is over 40 dB lower than the correlation
peak and the correlation peak provides over 20 dB of SNR enhancement. However, since these must be implemented as a continuous transmission (100% duty cycle) they are not suitable for vertical
incidence monostatic sounding. Therefore the M-Code is the code of choice for oblique incidence bi-static sounding, where the transmitter need not be shut off to provide a listening interval.
The M-codes which provide the basic structure of the oblique waveform, all have a length of M = (2^N 1). The attractive property of the M-codes is their autocorrelation function, shown in Figure
1-10. This type of function is often referred to as a "thumbtack". As long as the code is repeated at least a second time, the value of the cross correlation function at lag values other than zero is
1 while the value at zero is M. However, if the M-Code is not repeated a second time, i.e., if it is a pulsed signal with zero amplitude before and after the pulse, the correlation function looks
more like Figure 1-11. The characteristics of Figure 1-11 also apply if the second repetition is modulated in phase, frequency, amplitude, code # or time shift (i.e., starting chip). So to achieve
the "clean" correlation function with M-Codes (depicted in Figure 1-10), the identical waveform must be cyclically repeated (i.e., periodic).
The problem that occurs using the M-codes is if any of the multipath signal components starts or ends during the acquisition of one code record, then there are zero amplitude samples (for that
multipath component) in the matched filter as the code is being pulse compressed. If this happens then the imperfect cancellation of code amplitude (which is illustrated by Figure 1-11) at
correlation lag values other than zero will occur. In order to obtain the thumbtack pulse compression, the matched filter must always be filled with samples from either the last code repetition, the
current code repetition or the next code repetition (with no significant change), since these sample values are necessary to make the code compression work. "Priming" the channel with 5 msec of
signal before acquiring samples at the receiver ensures that all of the multipath components will have preceding samples to keep the matched filter loaded. Similarly after the end of the last code
repetition an extra code repetition makes the synchronization less critical.
This "priming" becomes costly however, for when it is desired to switch frequencies, antennas, polarizations etc., the propagation path(s) have to be primed again. The 75% duty cycle waveform (X = 3)
allows these multiplexed operations to occur, but as a result, only 8.5 msec out of each 20 msec of measurement time is spent actually sampling received signals. The 100% duty cycle waveform (X = 4)
does not allow multiplexed operation, except that it will perform an O polarization coherent integration time (CIT) immediately after an X polarization CIT has been completed. Since the simultaneity
of the O/X multiplexed measurement is not so critical (the amplitude of these two modes fade independently anyway), this is essentially still a simultaneous measurement. Because the 100% mode
performs an entire CIT without changing any parameters, it can continuously repeat the code sequence and therefore the channel need only be primed before sampling the very first sample of each CIT.
After this subsequent code repetitions are primed by the previous repetition.
Even though the Complementary Code pairs are theoretically perfect, the physical realization of this signal may not be perfect. The Complementary Code pairs achieve zero leakage by producing two
compressed pulses (one from each of the two codes) which have the same absolute amplitude spurious correlation peaks (or leakage) at each height, but all except the main correlation peak are inverted
in phase between the two codes. Therefore, simply by adding the two pulse compression outputs, the leakage components disappear. Since the technique relies on the phase distance of the propagation
path remaining constant between the sequential transmission of the two coded pulses, the phase change vs. time caused by any movement in the channel geometry (i.e., Doppler shift imposed on the
signal) can cause imperfect cancellation of the two complex amplitude height profile records. Therefore, the Complementary Code is particularly sensitive to Doppler shifts since channel induced phase
changes which occur between pulses will cause the two pulse compressions to cancel imperfectly, while with most other codes we are only concerned with channel induced phase changes within the
duration of one pulse. However, if given the parameters of the propagation environment, we can calculate the maximum probable Doppler shift, and determine if this yields acceptable results for
vertical incidence sounding.
With 200 pps, the time interval between one pulse and the next is 5 msec. If one pulse is phase modulated with the first of the Complementary Codes, while the next pulse has the second phase code,
the interval over which motions on the channel can cause phase changes is only 5 msec. The degradation in leakage cancellation is not significant (i.e., less than 15 dB) until the phase has changed
by about 10 degrees between the two pulses. The Doppler induced phase shift is:
Df=2pTf[D] radians (1-12)
where f[D ]is the Doppler shift in Hz and T is the time between pulses.
The Doppler shift can be calculated as:
f[D]=(f[0]v[r])/c< (or for a 2-way radar propagation path)
f[D]=(2f[0]v[r])/c (1-13)
where f[0] is the operating frequency and v[r] is the radial velocity of the reflecting surface toward or away from the sounder transceiver. The radial velocity is defined as the projection of the
velocity of motion (v) on the unit amplitude radial vector (r) between the radar location and the moving object or surface, which in the ionosphere is an isodensity surface. This is the scalar
product of the two vectors:
v[r]=v.r=|v|cos(q) (1-14)
A phase change of 10° in 5 msec would require a Doppler shift of about 5.5 Hz, or 160 m/sec radial velocity (roughly half the speed of sound), which seldom occurs in the ionospheric except in the
polar cap region. The 8-chip complementary phase code pulse compression and coherent summation of the two echo profiles provides a 16-fold increase in signal amplitude, and a 4-fold increase in noise
amplitude for a net signal processing gain of 12 dB. The 127-chip Maximal Length Sequence provides a 127-fold increase in amplitude and a net signal processing gain of 21 dB. The Doppler integration,
as described later can provide another 21 dB of SNR enhancement, for a total signal processing gain of 42 dB, as shown by the following discussion.
Coherent Doppler (Spectral or Fourier) Integration back to top
The pulse compression described above occurs with each pulse transmitted, so the 12 to 21 dB SNR improvement (for 8-bit complementary phase codes or 127-bit M-codes respectively) is achieved without
even sending another pulse. However, if the measurement can be repeated phase coherently, the multiple returns can be coherently integrated to achieve an even more detectable or "cleaner" signal.
This process is essentially the same as averaging, but since complex signals are used, signals of the same phase are required if the summation is going to increase the signal amplitude. If the phase
changes by more than 90° during the coherent integration then continued summation will start to decrease the integrated amplitude rather than increase it. However, if transmitted pulses are being
reflected from a stationary object at a fixed distance, and the frequency and phase of the transmitted pulses remain the same, then the phase and amplitude of the received echoes will stay the same
The coherent summation of N echo signals causes the signal amplitude, to increase by N, while the incoherent summation of the noise amplitude in the signal results in an increase in the noise
amplitude of only \/N. Therefore with each N pulses integrated, the SNR increases by a factor of \/N in amplitude which is a factor of N in power. This improvement is called signal processing gain
and can be defined best in decibels (to avoid the confusion of whether it is an amplitude ratio or a power ratio) as:
Processing Gain = 20 log[10] {(S[p]/Q[p])/ (S[i]/Q[i])} (1-15)
where S[i] is the input signal amplitude, Q[i] the input noise amplitude, S[p] the processed signal amplitude, and Q[p] the processed noise amplitude. Q is chosen for the random variable to represent
the noise amplitude, since N would be confusing in this discussion. This coherent summation is similar to the pulse compression processing described in the preceding section, where N, the number of
pulses integrated is replaced by M, the number of code chips integrated.
Another perspective on this process is achieved if the signal is normalized during integration, as is often done in an FFT algorithm to avoid numeric overflow. In this case S[p] is nearly equal to S
[i], but the noise amplitude has been averaged. Thus by invoking the central limit theorem [Freund, 1967 or any basic text on probability], we would expect that as long as the input noise is a zero
mean (i.e., no DC offset) Gaussian process, the averaged RMS noise amplitude, s[np] (p for processed) will approach zero as the integration progresses, such that after N repetitions:
s[np]^2=s[ni]^2/N (the variance represents power) (1-16)
Since the SNR can be improved by a variable factor of N, one would think, we could use arbitrarily weak transmitters for almost any remote sensing task and just continue integrating until the desired
signal to noise ratio (SNR) is achieved. In practical applications the integration time limit occurs when the signal undergoes (or may undergo, in a statistical sense) a phase change of 90°. However,
if the signal is changing phase linearly with time (i.e., has a frequency shift, Dw ), the integration time may be extended by Doppler integration (also known as, spectral integration, Fourier
integration, or frequency domain integration). Since the Fourier transform applies the whole range of possible phase shifts needed to keep the phase of a frequency shifted signal constant, a coherent
summation of successive samples is achieved even though the phase of the signal is changing. The unity amplitude phase shift factor, e^ j^w^t, in the Fourier Integral (shown as Equation 1 17) varies
the phase of the signal r(t) as a function of time during integration. At the frequency (w) which stabilizes the phase of the component of r(t) with frequency w over the interval of integration
(i.e., makes r(t) e^ j^w^t coherent) the value of the integral increases with time rather than averaging to zero, thus creating an amplitude peak in the Doppler spectrum at the Doppler line which
corresponds to w:
F[r(t)]=R(w)=òr(t)e^-j^w^tdt (1-17)
Does this imply that an arbitrarily small transmitter can be used for any remote sensing application, since we can just integrate long enough to clearly see the echo signal? To some extent this is
true. There is no violation of conservation of energy in this concept since the measurement simply takes longer at a lower power; however, in most real world applications, the medium or environment
will change or the reflecting surface will move such that a discontinuous phase change will occur. Therefore a system must be able to detect the received signal before a significant movement (e.g., a
quarter to a half of a wavelength) has taken place. This limits the practical length of integration that will be effective.
The discrete time (sampled data) processing looks very similar (as shown in Equation 1 18). For a signal with a constant frequency offset (i.e., phase is changing linearly with time) the integration
time can be extended very significantly, by applying unity amplitude complex coefficients before the coherent summation is performed. This stabilizes the phase of a signal which would otherwise drift
constantly in phase in one direction or the other (a positive or negative frequency shift), by adding or subtracting increasingly larger phase angles from the signal as time progresses. Then when the
phase shifted complex signal vectors are added, they will be in phase as long as that set of "stabilizing" coefficients progress negatively in phase at the same rate as the signal vector is
progressing positively. The Fourier transform coefficients serve this purpose since they are unity amplitude complex exponentials (or phasors), whose only function is to shift the phase of the
signal, r(n), being analyzed.
Since the Digisonde^TM sounders have always done this spectral integration digitally, the following presentation will cover only discrete time (sampled data rather than continuous signal notation)
Fourier analysis.
F[r(t)]=R[k]=S r[n]exp[-jnk2p/N] (1-18)
where r[n] is the sampled data record of the received signal at one certain range bin, n is the pulse number upon which the sample r[n] was taken, T is the time period between pulses, N is the number
of pulses integrated (number of samples r[n] taken), and k is the Doppler bin number or frequency index. Since a Doppler spectrum is computed for each range sampled, we can think of the Fourier
transforms as F[56][w] or F[192][w] where the subscripts signify with which range bin the resulting Doppler spectra are associated.
By processing every range bin first by pulse compression (12 to 21 dB of signal processing gain) then by coherent integration, all echoes from each range have gained 21 to 42 dB of processing gain
(depending on the waveform used and the length of integration) before any attempt is made to detect them.
Further explanation of Equation 1 18 which can be gathered from any good reference on the Discrete Fourier Transformation, such as [Openheim & Schaefer, Prentice Hall, 1975], follows. The total
integration time is NT, where T is the sampling period (in the DPS, the time period between transmitted pulses). The frequency spacing between Doppler lines, i.e., the Doppler resolution, is 2p/
NT rads/sec (or 1/NT Hz) and the entire Doppler spectrum covers 2p/T rad/sec (with complex input samples this is ± p/T, but with real input samples the positive and negative halves of the spectra
are mirror image replicas of each other, so only p/T rad/sec are represented).
What is coherently integrated by the Fourier transformation in the DPS (as in any pulse-Doppler radar) is the time sequence of complex echo amplitudes received at the same range (or height) that is,
at the same time delay after each pulse is transmitted. Figure 1-14 shows memory buffers with range or time delay vertically and pulse number (typically 32 to 128 pulses are transmitted) horizontally
which hold the received samples as they are acquired by the digitizer. After each pulse is transmitted, one column is filled from the bottom up at regular sampling intervals, as the echoes from
progressively higher heights are received (33.3 msec/5 km). These columns of samples are referred to as height profiles, which are not to be confused with electron density profiles, but rather mirror
the radar terminology of a "slant range profile" (range becomes height for vertical incidence sounding) which is simply the time record of echoes resulting from a transmitted pulse. A height profile
is simply a column of numeric samples which may or may not represent any reflected energy (i.e., they may contain only noise)
Figure 1-14 Eight Coherent Parallel Buffers for Simultaneous Integration of Spectra
Complex Windowing Function back to top
With T, the sampling period between subsequent samples of the same coherent process, i.e., the same hardware parameters) defined by the measurement program, the first element of the Discrete Fourier
Transform (i.e., the amplitude of the DC component) will have a spectral width of 1/NT. This spectral resolution may be so wide that all Doppler shifts received from the ionosphere fall into this one
line. For instance, in the mid-latitudes it is very rare to see Doppler shifts of more that 3 Hz, yet with a ± 50 Hz spectrum of 16 lines, the Doppler resolution is 6.25 Hz, so a 3 Hz Doppler shift
would still appear to show "no movement". For sounding, it would be much more interesting if instead of a DC Doppler line, a +3.25 Hz and a 3.25 Hz line were produced, such that even very fine
Doppler shifts would indicate whether the motion was up or down. The DC line is a seemingly unalterable characteristic of the FFT method of computing the Discrete Fourier Transform, yet with a true
DFT algorithm the Fourier transform coefficients can be chosen such that, the centre of the Doppler lines analyzed can be placed wherever the designer desires them to be. Since the DSP could no
longer keep up with the real-time operation if the DFT algorithm were used another solution had to be found. What was needed was a ½ Doppler line shift which would be correct for any value of N or
Because the end samples in the sampled time domain function are random, a tapering window had to be used to control the spurious response of the Doppler spectrum to below 40 dB (to keep the SNR high
enough to not degrade the phase measurement beyond 1°). Therefore a Hanning function, H(n), which is a real function, was chosen and implemented early in the DPS development. The reader is referred
to [Oppenheim and Schafer, 1975] for the definition and applications of the Hanning function. The solution to achieving the ½ Doppler line shift was to make the Hanning function amplitudes complex
with a phase rotation of 180° during the entire time domain sampling period NT. The new complex Hanning weighting function is applied simply by performing complex rather than real multiplications.
This implements a single-sideband frequency conversion of ½ Doppler line before the FFT is performed. In the following equation, each received multipath signal has only one spectral component (k = D
[i]) such that it can be represented as, a[i] exp[j2pnDi]:
r(n) = {Sa[i]exp[-j2p(nD[i])} |H(n)| exp[-j2p(n/2NT)]=
=|H(n)| S a[i] exp[-j2p(nD[i]+n/2NT) (1-19)
Multiplexing back to top
When sending the next pulse, it need not be transmitted at the same frequency, or received on the same antenna with the same polarization. With the DPS it is possible to "go off" and measure
something else, then come back later and transmit the same frequency, antenna and polarization combination and fill the second column of the coherent integration buffer, as long as the data from each
coherent measurement is not intermingled (all samples integrated together must be from the same coherent statistical process). In this way, several coherent processes can be integrated at the same
time. Figure 1-14 shows eight coherent buffers, independently collecting the samples for two different polarizations and four antennas. This can be accomplished by transmitting one pulse for each
combination of antenna and polarization while maintaining the same frequency setting (to also integrate a second frequency would require eight more buffers), in which case, each subsequent column in
each array will be filled after each eight pulses are transmitted and received. This multiplexing continues until all of the buffers are filled with the desired number of pulse echo records. The DPS
can keep track of 64 separate buffers, and each buffer may contain up to 32 768 complex samples. The term "pulse" is used generically here. For Complementary Coded waveforms a pulse actually requires
two pulses to be sent, and for 127 chip M-codes the pulse becomes a 100% duty cycle, or CW, waveform. However, in both cases, after each pulse compression, one complex amplitude synthesized pulse, r2
(n) in Equation 1 9 which is equivalent to a 67 msec rectangular pulse exists which can be placed into the coherent buffer.
The full buffers now contain a record of the complex amplitude received from each range sampled. Most of these ranges have no echo energy; only externally generated manmade and natural noise or
interference from radio transmitters. If a particular ionospheric layer is providing an echo, each height profile will have significant amplitude at the height corresponding to that layer. By Fourier
transforming each row of the coherent buffer a Doppler spectrum describing the radial velocity of that layer will be produced. Notice that the sampling frequency at that layer is less than or equal
to the pulse repetition frequency (on the order of 100 Hz).
After the sequence of N pulses is processed, the pulse compression and Doppler integration have resulted in a Doppler spectrum stored in memory on the DSP card for each range bin, each antenna, each
polarization, and each frequency measured (maximum of 4 MILLION simultaneously integrated samples). The program now scans through each spectrum and selects the largest one amplitude per height. This
amplitude is converted to a logarithmic magnitude (dB units) and placed into a new one-dimensional array representing a height profile containing only the maximum amplitude echoes. This technique of
selecting the maximum Doppler amplitude at each height is called the modified maximum method, or MMM. If the MMM height profile array is plotted for each frequency step made, this results in an
ionogram display, such as the one shown in Figure 1-15.
Figure 1-15 VI Ionogram Consisting of Amplitudes of Maximum Doppler Lines
Angle of Arrival Measurement Techniques back to top
Figure 1-16 Angle of Arrival Interferometry
The DPS system uses two distinct techniques for determining the angle of arrival of signals received on the four antenna receiver array, an aperture resolution technique using digital beamforming
(implemented as an on-site real-time capability) and a super-resolution technique which is accomplished when the measurement data is being analyzed, in post-processing. Both techniques utilize the
basic principle of interferometry, which is illustrated in Figure 1-16. This phenomenon is based on the free space path length difference between a distant source and each of some number of receiving
antennas. The phase difference (Df) between antennas is proportional to this free space path difference (Dl) based on the fraction of a wavelength represented by Dl.
Dl=dsinq and
Df=(2pDl)/l=(2p d sinq)/l (1-20)
where q is the zenith angle, d is the separation between antennas in the direction of the incident signal (i.e., in the same plane as q is measured), and l is the free space wavelength of the RF
signal. This relationship is used to compute the phase shifts required to coherently combine the four antennas for signals arriving in a given beam direction, and this relationship (solved for q) is
also the basis of determining angle of arrival directly from the independent phase measurements made on each antenna.
Figure 1-17 shows the physical layout of the four receiving antennas. The various separation distances of 17.3, 34.6, 30 and 60 m are repeated in six different azimuthal planes (i.e., there is six
way symmetry in this array) and therefore, the Df s computed for one direction also apply to five other directions. This six-way symmetry is exploited by defining the six azimuthal beam directions
along the six axes of symmetry of the array, making the beamforming computations very efficient. Section 3 of this manual contains detailed information for the installation of receive antenna arrays.
Figure 1-17 Antenna Layout for 4-Element Receiver Antenna Array
Digital Beamforming back to top
At the end of the previous section it was shown that after completing a multiplexed coherent integration there is an entire Doppler spectrum stored for each height, each antenna, each frequency and
each polarization measured. All of these Doppler lines are available to the beamforming algorithm. In addition, the DSP software stores the complex amplitudes of the maximum Doppler line at each
height (i.e., the height profile in an MMM format, is an array of 128 or 256 heights) separately for each antenna. By setting a threshold (typically 6 dB above the noise floor), the heights
containing significant echo amplitude can quickly be determined. These are the heights for which beam amplitudes will be computed and a beam direction (the beam which creates the largest amplitude at
that height) declared. Due to spatial decorrelation (an interference pattern across the ground) of the signals received at the four antennas, it is possible that the peak amplitude in each of the
four Doppler spectra will not appear in the same Doppler line. Therefore, to ensure that the same Doppler line is used for each antenna (using different Doppler lines would negate the significance of
any phase difference seen between antennas) only Antenna #1 s spectra are used to determine which Doppler line position will be used for beamforming at each height processed.
At each height where an echo is strong enough to be detected, the four complex amplitudes are passed to a C function (beam_form) where seven beams are formed by phase shifting the four complex
samples to compensate for the additional path length in the direction of each selected beam. If a signal has actually arrived from near the centre of one of the beams formed, then after the phase
shifting, all four signals can be summed coherently, since they now have nearly the same phase, so that the beam amplitude of the sum is roughly four times each individual amplitude. The farther the
true beam direction is away from a given beam centre the farther the phase of the four signals drift apart and the smaller the summed amplitude. However, in the DPS system the beams are so wide that
even at the higher frequencies the signal azimuth may deviate more than 30° from the beam centres and the four amplitudes will still sum constructively [Murali, 1993].
The technique for finding the angle of arrival is then simply to compare the amplitude of the signal on each beam and declare the direction as the beam centre of the strongest beam. Therefore the
accuracy of this technique is limited to 30° in azimuth and 15° in elevation angle (the six azimuth beams are separated by 60° and the oblique beams are normally set 30° away from the vertical beam);
as opposed to the Drift angle of arrival technique described in the next section which obtains accuracies approaching 1°. There may be some question about the amplitude of the sidelobes of these
beams, but it is really immaterial (computation of the array pattern for 10 MHz is shown in [Murali, 1993]). The fundamental principle of this technique is that there is no direction which can create
a larger amplitude in a given beam than the direction of the centre of that beam. Therefore, detecting the direction by selecting the beam with the largest amplitude can never be an incorrect thing
to do. One has to avoid thinking of the beam as excluding echoes from other directions and realize that all that is needed is that a beam favours echoes more as their angle of arrival becomes closer
to the centre of that beam. In fact with a four element array the summed amplitude in a wrong direction may be nearly as strong as it is in the correct beam, however, given that the same four complex
amplitudes are used as input it cannot be stronger.
The DPS forms seven beams, one overhead (0° zenith angle) and six oblique beams (the nominal 30° zenith angle can be changed by the operator) centred at North and South directions and each 60° in
between. Using the same four complex samples (at one reflection height at a time) seven overlapping beams are formed, one overhead (for which the phase shifting required on each antenna is 0°) and
six beams each separated by 60° in azimuth and tipped 30° from vertical. If one of the off-vertical beams is found to produce the largest amplitude, the displayed echo on the ionogram is color coded
as an oblique reception.
The phase shifts required to sum echoes into each of the seven beams depend on four variables:
a. the signal wavelength,
b. the antenna geometry (separation distance and orientation),
c. the azimuth angle of arrival, and
d. the zenith angle of arrival.
The antenna weighting coefficients are unity amplitude with a phase which is the negative of the extra phase delay caused by the propagation delay, thereby removing the extra phase delay. The phase
delays for antenna is resulting from arrival angle spherical coordinates (q[j], f[j]) which corresponds to the direction of beam j, are described (using Equation 1 20) by the following:
DF[ij]=(2p sinq[j]/l)d'[ij] (1-21)
where DF [ij] is the phase difference between antenna i s signal and antenna 1 s signal, q[j] is the zenith angle (0 for overhead), and d'[ij] is the projection of the antenna separation distance
(from antenna i to antenna 1) upon the wave propagation direction. The parameter d' is dependent on the antenna positions which can be placed on a Cartesian coordinate system with the central
antenna, antenna 1, at the origin and the X axis toward the North and the Y axis toward the West. With this definition the azimuth angle f is 0° for signals arriving from the North and:
d'[ij]=(x[i] cos f[j]+y[i]sinf[j]) (1-22)
Since antenna 1 is defined as the origin, x1 and y1 are always zero, so Df [i] has to be zero. This makes antenna 1 the phase reference point which defines the phase of signals on the other antennas.
The correction coefficients b[i] are unit amplitude phase conjugates of the propagation induced phase delays:
b[ij]=1.0 Ð DFi(f,x[i],y[i],q[j],f[j])=1 Ð -DF[ij] (1-23)
Because they are frequency dependent, these correction factors must be computed at the beginning of each CIT when the beamforming mode of operation has been selected. A full description as well as
some modeling and testing results were reported by [Murali, 1993].
│ Example A.: │
│ │
│ Given the antenna geometry shown in Figure 1-17, at an operating frequency of 4.33 MHz (l = 69.28 m), a beam in the eastward direction and 30° off vertical would, according to Equation 1 20, │
│ require a phase shift of 90° on antenna 4, 45° on antennas 2 and 3, and 0° on antenna 1. If an echo is received from that direction it would be received on the four antennas as four complex │
│ amplitudes at the height corresponding to the height (or more precisely, the range, since there may be a horizontal component to this distance) of the reflecting source feature. Therefore, a │
│ single number per antenna can be analyzed by treating one echo height at a time, and by selecting only one (the maximum) complex Doppler line at that height and that antenna. Assume that the │
│ following four complex amplitudes have been receive on a DPS system at, for instance, a height of 250 km. This is represented (in polar notation) as: │
│ │
│ Antenna 1: 830 Ð 135° │
│ │
│ Antenna 2: 838 Ð 42° │
│ │
│ Antenna 3: 832 Ð 182° │
│ │
│ Antenna 4: 827 Ð 179° │
│ │
│ To these sampled values add the +90° and 45° phase corrections mentioned above producing: │
│ │
│ Antenna 1: 830 Ð 135° or 586 + j586 │
│ │
│ Antenna 2: 838 Ð 132° or 561 + j623 │
│ │
│ Antenna 3: 832 Ð 137° or 608 + j567 │
│ │
│ Antenna 4: 827 Ð 134° or 574 + j594 │
│ │
│ East Beam (sum of above) = 2329+j2370 (3329Ð 134.5° in polar form) │
│ │
│ Since the sum is roughly four times the signal amplitude on each antenna there has been a coherent signal enhancement for this received echo because it arrived from the direction of the beam. │
│ It is interesting to note here, that these same four amplitudes could have been phase shifted corresponding to another beam direction in which case they would not add up in-phase. The DPS │
│ does this seven times at each height, using the same four samples, then detects which beam results in the greatest amplitude at that height. Of course at a different height another source may │
│ appear in a different beam, so the beamforming must be computed independently at each height. │
Although the received signal is resolved in range/height before beamforming, the beamforming technique is not dependent on isolating a signal source before performing the angle of arrival
calculations. If two sources exist in a single Doppler line then these components (the amplitude of the Doppler line can be thought of as a linear superposition of the two signal components) then
some of each of them will contribute to an enhanced amplitude in their corresponding beam direction. Conversely, the Drift technique assumes that the incident radio wave is a plane wave (thus
requiring isolation of any multiple sources).
Drift Mode Super-Resolution Direction Finding back to top
By analyzing the spatial variation of phase across the receiver aperture, using Equation 1 20, the two-dimensional angle of arrival (zenith angle and azimuth angle) of a plane wave can be determined
precisely using only three antennas. The term super-resolution applies to the ability to resolve distinct closely spaced points when the physical dimensions (in this case, the 60 m length of one side
of the triangular array) of the aperture used is insufficient to resolve them (from a geometric optics standpoint). Therefore, the use of interferometry provides super resolution. This is required
for the Drift measurements because the beam resolution achievable with a 60 m aperture at 5 MHz is about 60° , while 5° or better is required to measure plasma velocities accurately. Using
beamforming to achieve a 5° angular resolution at 5 MHz would require an aperture dimension of 600 m, which would have to be filled with on the order of 100 receiving antenna elements. Therefore the
Drift technique described here is a tremendous savings in system complexity. The Drift mode concept appears at first glance to be similar to the beamforming technique, but it is a fundamentally
different process.
The Drift mode depends on a single echo source being isolated such that its phase is not contaminated by another echo (from a different direction but possibly arriving with the same time delay). This
technique works amazingly well because at a given time, the overhead ionosphere tends to drift uniformly in the same direction with the same velocity. This means that each off-vertical echo will have
a Doppler shift proportional to the radial velocity of the reflecting plasma and to cos a where a is the angle between the position vector (radial vector from the observation site to the plasma
structure) and velocity vector of the plasma structure, as presented in Equation 1 14. Therefore, for a uniform Drift velocity the sky can be segmented into narrow bands (e.g., 10 s of bands) based
on the value of cos a which correspond to particular ranges of Doppler shifts [Reinisch et al, 1992]. These bands are shown in Figure 1-18 as the hyperbolic dashed lines [Scali, 1993] which indicate
at what angle of arrival the Doppler line number should change if the whole sky is drifting at the one velocity just calculated by the DDA program. In other words, the agreement of the Doppler
transitions with the boundaries specified by the uniform drift assumption is a test of the validity of the assumption for the particular data being analyzed.
Both isolating the sources of different radial velocities and resolving echoes having different ranges (into 10 km height bins), results in very effective isolation of multiple sources into separate
range/Doppler bins. If multiple sources exist at the same height they are usually resolved in the Doppler spectrum computed for that height, because of the sorting effect which the uniform motion has
on the radial velocities. If the resolution is sufficient that a range/Doppler bin holds signal energy from only one source, the phase information in this Doppler line can be treated as a sample of
the phase front of a plane wave. Even though many coherent echoes have
Figure 1-18 Radial Velocity Bands as Defined by Doppler Resolution
been received from different points in the sky, the energy from these other points is not represented in the complex amplitude of the Doppler line being processed. This is important because the angle
of arrival calculation is accomplished with standard interferometry (i.e., solving Equation 1 20 for q ), which assumes no multiple wave interference (i.e., a perfect plane wave).
A fundamental distinction between the Drift mode and beamforming mode is that in the Drift mode the angle of arrival calculation is applied for each Doppler line in each spectrum at each height
sampled, not just at the maximum amplitude Doppler line. A data dependent threshold is applied to try to avoid solving for locations represented by Doppler lines that contain only noise, but even
with the threshold applied the resulting angle of arrival map may be filled with echo locations which result from echoes much weaker than the peak Doppler line amplitudes. In beamforming, only the
echoes representing the dominant source at each height are stored on tape, therefore no other source echoes are recoverable from the recorded data.
It has been found that vertical velocities are roughly 1/10th the magnitude of horizontal velocities [Reinisch et al, 1991]. Since the horizontal velocities from echoes directly overhead result in
zero radial velocity to the station, the Drift technique works best in a very rough, or non-uniform ionosphere, such as that found in the polar cap regions or the equatorial regions, because they
provide many off-vertical echoes.
For a smooth spherically concentric (with the surface of the earth) ionosphere all the echoes will arrive from directly overhead and the resulting Drift skymaps will show a single source location at
zenith angle = 0°. For horizontal gradients or tilts within that spherically concentric uniform ionosphere however, the single source point would move in the direction of the DN/N (N as in Equation 1
1) gradient (the local electron density gradient), one degree per degree of tilt, so the Drift measurement can provide a straightforward measurement of ionospheric tilt.
Resolution of source components by first isolating multiple echoes in range then in Doppler spread (velocity distribution) combined with interferometer principles is a powerful technique in
determining the angle of arrival of superimposed multipath signals.
High Range Resolution (HRR) Stepped Frequency Mode back to top
The phase of an echo from a target, or the phase of a signal after passing through a propagation medium is dependent on three things:
1. the absolute phase of the transmitted signal;
2. the transmitted frequency (or free space wavelength); and
3. the phase distance, d, where:
]d = ò m(f,x,y,z)dl (1-24)
is the line integral over the propagation path, scaled by the refractive index if the medium is not free space. If the first two factors, the transmitted phase and frequency, can be controlled very
precisely, then measuring the received phase at two different frequencies makes it possible to solve for the propagation distance with an accuracy proportional to the accuracy of the phase
measurement, which in turn is proportional to the received SNR. This is often referred to as the df>/df technique. The two measurements form a set of linear equations with two equations and two
unknowns, the absolute transmitted phase and the phase distance. If there are several "propagation path distances" as is the case in a multipath environment, then measurement at several wavelengths
can provide a measure of each separate distance. However, instead of using a large set of linear equations, the phase of the echoes have chosen to be analyzed as a function of frequency, which can be
done very efficiently with a Fast Fourier Transform. The basic relations describing the phase of an echo signal are:
f(f)=-2pft[p]=-2pd/l=-2p(f/c)d (1-25)
where d is the propagation path length in metres (the phase path described in Equation 1 24, f in Hz, f in radians, l in metres and t[p] is the propagation delay in seconds. Note that the first
expression casts the propagation delay in terms of time delay (# of cycles of RF), the second in terms of distance (# of wavelengths of RF), and the third relates frequency and distance using c.
For monostatic radar measurements the distance, d is twice the range, R, so Equation 1 25 becomes:
f(f)=-4pR/l = -4p(f/c)R (1-26)
If a series of N RF pulses is transmitted, each changed in frequency by D f, one can measure the phases of the echoes received from a reflecting surface at range R. It is clear from Equation 1 26
that the received phase will change linearly with frequency at a rate directly determined by the magnitude of R. Using Equation 1 26 one can express the received phase from each pulse (indexed by i)
in this stepped frequency pulse train:
f[i](f[i])=-4pf[i]t[p]=-4pf[i](R/c) (1-27)
where the transmitted frequency fi can be represented as:
f[i]=f[0] + iDf (1-28)
a start frequency plus some number of incremental steps.
Two Frequency Precision Ranging back to top
This measurement forms the basis of the DPS s Precision Group Height mode. By making use of the simultaneous (multiplexed) operation at multiple frequencies (i.e., multiplexing or interlacing the
frequency of operation during a coherent integration time ( CIT) it is possible to measure the phases of echoes from a particular height at two different frequencies. If these frequencies are close
enough that they are reflected at the same height then the phase difference between the two frequencies determines the height of the echo.
The following development of the two frequency ranging approach leads to a general theory (but not expoused here) covering FM/CW ranging and stepped frequency radar ranging. Using Equation 1 26 a two
frequency measurement of f allows the direct computation of R, by:
f[2]-f[1]=4pR(f[1]-f[2])/c=4pRDf/c (1-29)
R=c(f[2]-f[1])/4pDf (1-30)
It is easy to see from Equation 1 29 that if the range is such that RDf/c is greater than 1/2 then the magnitude of f[2]-f[1] will exceed 2p which is usually not discernible in a phase measurement,
and therefore causes an ambiguity. This ambiguity interval (D for distance) is
R=DA=(1/2)c/Df=c/2Df (1-31)
│ Example B.: │
│ │
│ The measured phase is (f[2] - f[1]) = p/8 while Df = 1 kHz, then R = 9.375 km. │
│ │
│ In the example above with Df = 1 kHz, the ambiguous range D[A] is 150 km. Since a 0 km reflection height must certainly give the same phase for any two frequencies (i.e., 0° ), then given │
│ that the ambiguity interval is 150 km, then for this value of Df, the phase difference must again be zero at 150, 300, 450 km etc, since 0 km is one of the equal phase points, and all other │
│ ranges giving a phase difference of 0° are spaced from it by 150 km. If the phase measurements f[2] and f[1] were taken after successive pulses at a time delay corresponding to a range of 160 │
│ km (at least one sample of the received echo must be made during each pulse width, i.e., at a rate equal to or greater than the system bandwidth, see Equation 1 4), one would conclude that │
│ there is an extra 2p in the phase difference and that the true range is 159.375 km, not 9.375 km. Therefore, the measurement must be designed such that the raw range resolution of the │
│ transmitted pulse is sufficient to resolve the ambiguity in the df/df measurement. │
The validity of the two-frequency precision ranging technique is lost if there is more than one source of reflection within the resolution of the radar pulse. The phase of the received pulse will be
the complex vector sum of the multiple overlapping echoes, and therefore any phase changes (f[i]) will be partially influenced by each of the multiple sources and will not correctly represent the
range to any of them. Therefore, in the general propagation environment where there may be multiple echo sources (objects producing a reflection of RF energy back to the transmitter), or for
multipath propagation to and from one or more sources, many frequency steps are needed to resolve the different components influencing fi. This "many step" approach can be performed in discrete
frequency steps as in the DPS s HRR mode, or by a continuous linear sweep, as done in a chirpsounder described in [Haines, 1994].
Signal Flow Through the DPS Transmitter and Receiver back to top
Signal flow through the DPS Transmitter Exciter
The transmitted code is generated on the transmitter exciter card (XMT) by selecting and clocking out the phase code bits stored in a ROM on the XMT card (Section 5 (Hardware Description) describes
the functions of the various system components in detail). These bits are offset and balanced such that their positive and negative swings are equal. Then they are applied to a double balanced mixer
along with the 70.08 MHz signal from the oscillator (OSC) card. This multiplication process results in either a 0° or 180° phase inversion since multiplication of a sine wave by 1 is the same as
performing a phase inversion, since sin(t) = sin(t ± p ). This modulated 70.08 MHz signal is then filtered by a linear phase surface acoustic wave (SAW) filter, split into phase quadrature (to
enable selection of circular transmitter polarization), and mixed with the variable local oscillator from the Frequency Synthesizer (SYN) card. The mixing process (a passive diode double balanced
mixer is used) effectively multiplies the two input signals (along with some non-linear distortion products) which produces a sum and difference frequency at the output:
y(t)=sin(a)sin(b)=0.5[cos(a-b)-cos(a+b)] (1-32)
The variable local oscillator signal ranges from 71 MHz to 115 MHz, which mixed with 70.08 MHz creates a 1 to 45 MHz difference frequency (a 140 to 185 MHz sum frequency is also produced but is
low-pass filtered out of the final signal) which is amplified and sent to the RF power amplifier chassis. The RF amplifier boosts up the signal level to be applied to the antenna(as) for
Signal Flow Through the DPS Receiver Antennas back to top
The receive loop antennas (Figure 1-1B) are sensitive to the horizontal magnetic field component of the received signal, and can be phased to favour either the right hand circular or left hand
circular polarization. The two loop antennas are oriented at a 90° angle to each other and detect the same peak of the incident circularity polarized wave, separated by exactly a quarter of a RF
cycle. Therefore, if the phase of the signal on one antenna is shifted by 90° the sum of the two signals has either double the amplitude or zero amplitude depending on the sense of the circular
polarization. This is a linear process and therefore treats each of the multipath components independently. For instance if there is one O polarized echo at 250 km and an X polarized echo at 200 km,
the fact that the X polarized energy is rejected has no effect on the reception of the O polarized energy. The received signal which is applied to the receivers is the sum of the signals from the two
crossed antennas after shifting one by ± 90° with a broadband quadrature phase shifter. The 90° phase shift can be expressed in an equation using the phasor exp[± jp/2], so using the form of Equation
1 6:
r(t)=S{a[i]p(t-t[i]) exp[j2pf[0]t-jf[i]]+a[i] p(t-t[i])exp[j2pf[0]t-
-jf[i]-jp/2] exp[±jp/2]}=
=2Sa[i]p(t-t[i]) exp[j2pf[0]t-jf[i]] if the last term is exp[+jp/2] OR
=0 if the last term is exp[-jp/2] (1-33)
200 m sec before each waveform is transmitted, the DPS can shift the signal from one of the receive loops either ± 90° under control of the DPS software thus switching sensitivity from left circular
polarization to right circular polarization. In the DPS, the signals from the four crossed loop receive antennas are fed into the antenna switch box, which either selects one signal to feed to the
single receiver card or combines all four in phase. In the DPS-4 (four-channel receiver variant), one receiver is dedicated to each receive antenna (one receive antenna is the sum of the two crossed
elements, but since the two elements are combined in the field and fed to the system on a single coax there is only one signal from each crossed loop assembly). Therefore, in a DPS-4, four signals
from the antennas are simply passed through the antenna switch box to the four receivers in which case the only functions of the antenna switch box are to switch in a calibration signal from the
transmitter exciter card and to apply the DC power to the receiver antenna preamplifiers via coaxial cables).
Received Signal Flow through the DPS Receiver back to top
The received wideband RF signal from the antenna switch is fed to the receiver (RCV) card where it is first stepped up in voltage 2:1 in a transformer to increase the impedance from 50 to 200 W for a
better match to the high input impedance (about 1 kW ) preamplifier. Based on the level of one of the receiver gain control bits, which in turn responds to a manual setting in the DPS hardware setup
file (the Hi_Noise parameter) the gain through this amplifier is either 6 dB or 15 dB. Since the maximum achievable output swing from this amplifier is about 8 Vp-p the maximum allowable input
voltage is therefore 4 or 1.5 V (at the antenna preamplifier output) respectively for the two different gain settings. Considering the 2:1 step-up, this means that if the wideband input from the
receive antennas is over 0.7 Vp-p the lower gain setting must be used. The 8 Vp-p maximum output of the preamplifier is reduced to 5 Vp-p by a 33 W resistor which matches the highest allowed input to
the passive diode mixer (the 23 dBm LO level double balanced mixer allows a maximum of 20 dBm input). The remainder of the receiver applies successively more gain and filtering (the bandwidth narrows
down to 20 kHz after seven stages of tuning), and outputs the received signal at a fixed 225 kHz intermediate frequency (IF).
Signal Flow through the Digitizer back to top
The reason for selecting exactly 225 kHz as the last IF frequency is that there are an even number of cycles in the time period that corresponds to a 10 km height interval (66.667 m sec). This means
that, if spaced by 66.667 msec, samples of the IF signal (which has a period of 4.444 msec) will represent baseband samples of the received envelope amplitude, since:
15 cycles of 225 kHz=66.6667 msec=10 km radar range.
For instance, if a constant amplitude coherent sine wave carrier were received directly on the current receiver frequency, samples of the IF would have a constant amplitude. The only problem is that
without being synchronized to the peaks of this sine wave it is possible that all of the samples of the IF will occur at zero crossings of the received signal. This apparent problem is avoided by the
use of quadrature sampling.
The more standard quadrature sampling approach [Peebles, 1979] is to use a 90° phase shifter to produce a quadrature Local Oscillator and down-convert the IF to a complex (two channel) baseband.
However, in the DPS since very fast analog to digital (A/D) converters were available inexpensively, the signal was simply sampled as pairs at 90° (1.1111 msec) intervals. This pair of samples is
then repeated at the desired sampling interval, 16.6667 msec for 2.5 km delay intervals, 33.3333 msec for 5 km or 66.6667 msec for 10 km intervals [Bibl, K., 1988]. The samples at 2.5 km or 5 km
intervals are not equal in phase, since 3.75 and 7.5 cycles respectively have passed between the complex sample pairs. However, at the 10 km interval, exactly 15 cycles have passed. Adjacent 2.5 or 5
km samples within a received pulse should have the same phase since they are sampling the continuation of the coherent transmitted pulse. In order to correct the 90° and 180° phase errors made by the
3.75 or 7.5 cycle sampling interval, an efficient numeric correction brings these samples back into phase. The 90° and 180° phase correction is simply a matter of inverting the sign for 180° or
swapping the real and imaginary samples and inverting the real sample for the 90° shift. No complex multiplications are required but this does add another level of "bookkeeping" to the signal
processing algorithms.
Signal Flow through the DSP Card back to top
From here, the next step is to cross-correlate the received samples with the known phase code, as was described in the above section on Coherent Phase Modulation and Pulse Compression. The known
phase code is either ± 1 for each code chip, therefore the cross multiplication required in the correlation process is in reality only addition or subtraction. However, with a modern signal
processor, the pipelined multiplication process is faster than addition due to the on-chip hardware multiplier and automatic sequencing of address pointers, so as implemented, the multiplications by
± 1. Another interesting detail in this algorithm is that the real samples and the imaginary samples are pulse compressed independently of each other. The two resulting range profiles are then
combined into complex samples which represent the phase and amplitude of the original RF signal at the height/range corresponding to the correlation time lag of the cross-correlation function. As is
evident from Equation 1 9, this is a linear process and therefore superimposed signals at different time delays can be detected without distorting each other as was shown by Figure 1-8.
Another interesting feature of the DPS s pulse compression algorithm is a technique to avoid the M^2 processing load penalty inherent in the pulse compression operation when the phase code chips are
double sampled (5 km sample period, making the pulse duration 16 samples) or quadruple sampled (2.5 km intervals, making the pulse duration 32 samples). Since the phase transitions are always 66.667
msec apart, we can "decimate" the input record by taking every 2nd or 4th sample and then cross-correlating it with an 8-sample matched filter rather than a 16 or 32 sample matched filter. The full 4
times over-sampled resolution can be restored by successively taking each fourth sample but starting one sample higher each time. Then after performing the four cross-correlation functions,
interleave the four pulse compressed records back into a new 4 times over-sampled output record. A quantitative analysis of the savings in processing steps is presented next.
When the phase code chips are double sampled (5 km sample period) or quadruple sampled (2.5 km intervals) the M^2 increased processing load required for a cross correlation is avoided by
independently performing the pulse compression of the odd # d and even # d samples (for 5 km spacing, or each fourth sample for 2.5 km sample spacing, since the signal s range resolution is only 10
km) and reconstructing the finer resolution profile after compression. In addition the savings obtained by processing the real record and imaginary record simultaneously is analyzed. The number of
operations required to cross-correlate a 256 sample complex data record (e.g., a 256 sample height profile), using 5 km sampling intervals, and the 127 length maximal-length sequence code are as
1) Cross correlating the 2 times over-sampled record:
256-pt complex record convolved with 254-pt MF 260,096 multiplications
260,096 additions
Knowing that the real and imaginary samples are independent and that the phase code itself is all real, the complex multiplications (i.e., the cross-terms) can be done away with, resulting in:
2) Two 256-pt real records convolved with 254-pt MF 130,048 multiplications
130,048 additions
By pulse compressing only every other sample in a double over-sampled record then going back and compressing the every other sample skipped the first time:
3) Four 128-pt real records convolved with 127-pt MF 65,024 multiplications
65,024 additions
With the much shorter Complementary Codes, the pulse compression computational load is greatly reduced, since only an 8-pt MF is used. Using the same real pulse compression algorithm and skipping
every other sample, the Complementary Code processing load is:
4) Eight 128-pt real records (the 8 sub-records are: real and
imaginary samples, odd and even height numbers, then code 1
and code 2) convolved with an 16-pt filter
16,384 multiplications
16,384 additions
Implemented in the TMS320C25 16-bit fixed point processor, these pulse compression algorithms run at about 10 000 multiplications and additions (they are done in parallel) per millisecond, so these
pulse compressions with 20 msec between repetitions of the 127-length codes and 10 msec between Complementary Code pairs are easily performed in real time (e.g., one waveform is entirely processed
before the next waveform repetition is finished).
A faster way to perform the matched filter convolution is described by Oppenheim and Schaefer [Oppenheim & Schaefer, 1976] which uses Fourier transforms. This is based on the Fourier transform
S(w)=F(w)H(w) is an identical expression to:
F[s(t)]=F[f(t)*h(t)] (1-34)
This identity says that multiplication in the frequency domain accomplishes a convolution in the time domain, if the transformed function (the product of the two functions, S(w) in the Equation 1 34)
is transformed back to the time domain. This would reduce the compression of the 127 chip waveform (sampled twice per code chip) from 65 000 operations to about 4500 operations (Nlog[2](N) for N=512
points). This algorithm change has not been implemented. To incorporate this algorithm the samples must be doubled again, since the code repeats at an interval other than a power of two, to
accommodate the cyclic nature of the convolutional code compression algorithm. Furthermore, the sampling rate must always be 60 000 samples/sec (the 2.5 km resolution mode) to preclude aliasing from
Regardless of how it is performed the Complementary Code pulse compression provides 12 dB of SNR improvement and the M-codes (only useful in a bi-static measurement) provide 21 dB of SNR improvement.
In addition to that, the coherent Doppler integration described above provides another 9 to 21 dB of SNR improvement.
The pulse compression and Doppler integration have resulted in a Doppler spectrum stored in memory on the DSP card for each range bin. The program now scans through each spectrum and selects the
largest amplitude. This amplitude is converted to a logarithmic magnitude (dB units) and placed into a one-dimensional array representing a time-delay profile of any echoes. This one dimensional
array is called a height profile, or range profile, and if plotted for each frequency step made, results in an ionogram display, such as the one shown in Figure 1-17. The 11 520 amplitudes shown as
individual pixels on the height vs. frequency display are the amplitude of the maximum Doppler line from the spectrum at each height and frequency. Therefore, the ionogram shown, covering 9 MHz in
100 kHz steps is the result of 737 280 separate samples, and 23 040 separate Doppler spectra (11 520 O polarization and 11 520 X polarization).
back to top
Barker R.H., "Group Synchronizing of Binary Digital Systems", Communication Theory, London, pp. 273-287, 1953
Bibl, K. and Reinisch B.W., "Digisonde 128P, An Advanced Ionospheric Digital Sounder", University of Lowell Research Foundation, 1975.
Bibl, K and Reinisch B.W., "The Universal Digital Ionosonde", Radio Science, Vol. 13, No. 3, pp 519-530, 1978.
Bibl K., Reinisch B.W., Kitrosser D.F., "General Description of the Compact Digital Ionospheric Sounder, Digisonde 256", University of Lowell Center for Atmos Rsch, 1981.
Bibl K., Personal Communication, 1988.
Buchau, J. and Reinisch B.W., "Electron Density Structures in the Polar F Region", Advanced Space Research, 11, No. 10, pp 29-37, 1991.
Buchau, J., Weber E.J. , Anderson D.N., Carlson H.C. Jr, Moore J.G., Reinisch B.W. and Livingston R.C., "Ionospheric Structures in the Polar Cap: Their Origin and Relation to 250 MHz Scintillation",
Radio Science, 20, No. 3, pp 325-338, May-June 1985.
Bullett T., Doctoral Thesis, University of Massachusetts, Lowell, 1993.
Chen, F., "Plasma Physics and Nuclear Engineering", Prentice-Hall, 1987.
Coll D.C., "Convoluted Codes", Proc of IRE, Vol. 49, No 7, 1961.
Davies, K., "Ionospheric Radio", IEE Electromagnetic Wave Series 31, 1989.
Golay M.S., "Complementary Codes", IRE Trans. on Information Theory, April 1961.
Huffman D. A., "The Generation of Impulse-Equivalent Pulse Trains", IRE Trans. on Information Theory, IT-8, Sep 1962.
Haines, D.M., "A Portable Ionosonde Using Coherent Spread Spectrum Waveforms for Remote Sensing of the Ionosphere", UMLCAR, 1994.
Hayt, W. H., "Engineering Electromagnetics", McGraw-Hill, 1974.
Murali, M.R., "Digital Beamforming for an Ionospheric HF Sounder", University of Massachusetts, Lowell, Masters Thesis, August 1993.
Oppenheim, A. V., and R. W. Schafer, "Digital Signal Processing", Prentice Hall, 1976.
Peebles, P. Z., "Communication System Principles", Addison-Wesley, 1979.
Reinisch, B.W., "New Techniques in Ground-Based Ionospheric Sounding and Studies", Radio Science, 21, No. 3, May-June 1987.
Reinisch, B.W., Buchau, J. and Weber, E.J., "Digital Ionosonde Observations of the Polar Cap F Region Convection", Physica Scripta, 36, pp. 372-377, 1987.
Reinisch, B. W., et al., "The Digisonde 256 Ionospheric Sounder World Ionosphere/ Thermosphere Study, WITS Handbook, Vol. 2, Ed. by C. H. Liu, December 1989.
Reinisch, B.W., Haines, D.M. and Kuklinski, W.S., "The New Portable Digisonde for Vertical and Oblique Sounding," AGARD-CP-502, February 1992.
Rush, C.M., "An Ionospheric Observation Network for use in Short-term Propagation Predictions", Telecomm, J., 43, p 544, 1978.
Sarwate D.V. and Pursley M.B., "Crosscorrelation Properties of Pseudorandom and Related Sequences", Proc. of the IEEE, Vol 68, No 5, May 1980.
Scali, J.L., "Online Digisonde Drift Analysis", User s Manual, University of Massachusetts Lowell Center for Atmospheric Research, 1993.
Schmidt G., Ruster R. and Czechowsky, P., "Complementary Code and Digital Filtering for Detection of Weak VHF Radar Signals from the Mesosphere", IEEE Trans on Geoscience Electronics, May 1979.
Wright, J.W. and Pitteway M.L.V., "Data Processing for the Dynasonde", J. Geophys. Rsch, 87, p 1589, 1986.
Send mail to webmaster with questions or comments about this web site.
back to top | {"url":"https://ulcar.uml.edu/digisonde_dps.html","timestamp":"2024-11-03T01:03:25Z","content_type":"text/html","content_length":"230486","record_id":"<urn:uuid:22f63608-b950-4c73-be32-3a2fca51a502>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00685.warc.gz"} |
Exponentially small soundness for the direct product z-test
Given a function f : [N]k ! [M]k, the Z-test is a three query test for checking if a function f is a direct product, namely if there are functions g1, . . . gk : [N] ! [M] such that f(x1, . . . , xk)
= (g1(x1), . . . gk(xk)) for every input x 2 [N]k. This test was introduced by Impagliazzo et. al. (SICOMP 2012), who showed that if the test passes with probability ϵ > exp(- p k) then f is (ϵ)
close to a direct product function in some precise sense. It remained an open question whether the soundness of this test can be pushed all the way down to exp(-k) (which would be optimal). This is
our main result: we show that whenever f passes the Z test with probability ϵ > exp(-k), there must be a global reason for this: namely, f must be close to a product function on some (ϵ) fraction of
its domain. Towards proving our result we analyze the related (two-query) V-test, and prove a "restricted global structure" theorem for it. Such theorems were also proven in previous works on direct
product testing in the small soundness regime. The most recent work, by Dinur and Steurer (CCC 2014), analyzed the V test in the exponentially small soundness regime. We strengthen their conclusion
of that theorem by moving from an "in expectation" statement to a stronger "concentration of measure" type of statement, which we prove using hyper-contractivity. This stronger statement allows us to
proceed to analyze the Z test. We analyze two variants of direct product tests. One for functions on ordered tuples, as above, and another for functions on sets, f : [N] k ! [M]k. The work of
Impagliazzo et. al was actually focused only on functions of the latter type, i.e. on sets. We prove exponentially small soundness for the Z-test for both variants. Although the two appear very
similar, the analysis for tuples is more tricky and requires some additional ideas.
Publication series
Name Leibniz International Proceedings in Informatics, LIPIcs
Volume 79
ISSN (Print) 1868-8969
Conference 32nd Computational Complexity Conference, CCC 2017
Country/Territory Latvia
City Riga
Period 6/07/17 → 9/07/17
• Agreement
• Direct Product Testing
• Property Testing
All Science Journal Classification (ASJC) codes
Dive into the research topics of 'Exponentially small soundness for the direct product z-test'. Together they form a unique fingerprint. | {"url":"https://cris.iucc.ac.il/en/publications/exponentially-small-soundness-for-the-direct-product-z-test-5","timestamp":"2024-11-06T17:53:52Z","content_type":"text/html","content_length":"50030","record_id":"<urn:uuid:9d15fcc0-0500-4b0b-b915-d1363d30a081>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00524.warc.gz"} |
Python Excel Formula: Match in Column and Return Value
In this tutorial, we will learn how to write a Python code that emulates an Excel formula to find a match in a specific column and return the corresponding value. This can be achieved using the
VLOOKUP function, which is commonly used in Excel to perform such tasks. By understanding the step-by-step explanation and examples provided, you will be able to implement this functionality in your
Python code.
To find a match in a specific column and return the corresponding value, we will use the VLOOKUP function in Python. The VLOOKUP function takes four arguments: the value to search for, the range to
search in, the column index of the value to return, and the match type. By specifying the value to search for as the value in cell C2, the range to search in as column 5 (E:E), the column index of
the value to return as 4 (column D), and the match type as FALSE (exact match), we can achieve the desired functionality.
Let's consider an example to understand how this formula works. Suppose we have a dataset with values in columns C, D, and E. If the value in cell C2 is 3, the VLOOKUP formula would search for the
value 3 in column E and find a match in the third row. It would then return the corresponding value from column D, which is 30, and insert it into cell B2.
By following the step-by-step explanation and examples provided, you can easily implement this Excel formula in Python and achieve the desired functionality. This will allow you to find a match in a
specific column and return the corresponding value, similar to how it is done in Excel. Now, let's dive into the code implementation and explore how to achieve this in Python.
An Excel formula
=VLOOKUP(C2, E:E, 4, FALSE)
Formula Explanation
This formula uses the VLOOKUP function to find a match for the value in cell C2 in column 5. Once a match is found, it returns the corresponding value from column D and inserts it into cell B2.
Step-by-step explanation
1. The VLOOKUP function takes four arguments: the value to search for (C2), the range to search in (E:E), the column index of the value to return (4), and the match type (FALSE).
2. The value in cell C2 is used as the lookup value. The formula searches for this value in column 5 (range E:E).
3. When a match is found, the VLOOKUP function returns the corresponding value from the fourth column of the range (column D).
4. The FALSE argument is used for the match type, which means an exact match is required. This ensures that the formula only returns a value when an exact match is found.
For example, let's say we have the following data in columns C, D, and E:
| C | D | E |
| | | |
| 1 | A | 10 |
| 2 | B | 20 |
| 3 | C | 30 |
| 4 | D | 40 |
| 5 | E | 50 |
If the value in cell C2 is 3, the formula =VLOOKUP(C2, E:E, 4, FALSE) would return the value 30. This is because the formula searches for the value 3 in column E and finds a match in the third row.
It then returns the corresponding value from column D, which is 30, and inserts it into cell B2. | {"url":"https://codepal.ai/excel-formula-generator/query/4DAE1yP0/excel-formula-python-match-column","timestamp":"2024-11-14T05:34:26Z","content_type":"text/html","content_length":"92349","record_id":"<urn:uuid:fa485adf-a201-4fbe-8097-c645b2685032>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00190.warc.gz"} |
Brane Space
Go back to November 22, 1963 and find out who was really in that 6th floor Depository window?
Hmmmm....an interesting proposition!
In the science fiction novel '
Lest Darkness Fall'
by L. Sprague de Camp,, the central character (Martin Padway) undergoes a time slip to 6th century Rome. The interesting aspect of this is that it is the Rome for which a definite history is known,
since Padway used it to his advantage as he made his way in a strange environment. Hence, he knew what major political changes would occur, who was in charge and which factions were vying for
control, including barbarians.
In other words, the template for his experience was not the same as the ones I've considered in the two earlier blog posts: an interphasing of two distinct parallel universes. As I conjectured, the
experience of Charlotte Moberly and Eleanor Jourdain would be more a case of the women stepping into 1789 Versaille, but in an alternate universe. One briefly interphased with our own. The nature of
their descriptions supported this hypothesis.
In the case of the fictional Padway, his memory and the ability to forecast events to unfold in 6th century Rome, disclosed that he didn't step into an alternate universe with an alternate past, but
remained in this one, on the same Earth in fact. Otherwise the events that shaped ancient Rome would have been different and he'd not have been able to use his historical memory to any advantage
This brings up the questions, first, of whether time slips in this one universe (and Earth) are feasible, and second, if so, are there experiments one might perform to exploit them - say to return to
Dallas, in November, 1963 and interfere with the Kennedy assassination?
While Oxford physicist Michael Shallis, in his book 'On Time' doesn't elaborate at any length, he does provide a brief clue (p. 163-64) referring to "advanced waves moving back in time in
counterpoint to their progressive and retarded partners." He then goes on to write (ibid.):
"This interpenetration slips backward and forward in time simultaneously, seeming to defy the laws of matter and causality"
The notion of advanced and retarded waves, or potentials, is not new. In fact, every physics student taking his first Electricity and Magnetism course encounters them. Indeed, from Maxwell’s E & M
theory, we are fully enabled to make use of what are called “advanced potentials” defined in terms of:
V(r,t[a]) = f1(r, T[a]) and A(r, t[a]) = f2(r, t[a])
Where t[a] is the “advanced time”, t[a] = t + r/c
And the f1, f2 are functions of the electric potential and vector potential, respectively. In the advanced time, we ascertain conditions for the future potentials V(r,t[a]) based on the past, and are
able to use them in appropriate calculations in the past. An evident violation of causality, though admittedly the sort of applications where these may be used are limited What might be more directly
relevant to time slips for the same universe involve the "offer" and "echo"waves proposed by John G. Cramer. The two interacting, interpenetrating wave forms might be expressed:
y(O) + y(E) = Ö(2/π) e ^ωτ + Ö(2/π) e^- ωτ
The use of such waves, say in a putative role for time slips, would demand using something called Minkowski space-time, Minkowski envisioned a kind of hyperspace in which events do not just 'happen'.
Rather, they already are embedded in the space-time metric (geometry) and one comes across them, like towns along a highway (cf. Whittrow, G.J. 1972, The Nature of Time, Pelican Books, Great Britain,
p. 103.) For example, imagine the Minkowski temporal scale:
Past[-τ] <-------e1------e2--------e3-----> Future [+τ] -------e1------e2--------e3----->
where e1, e2 and e3 are three events, say: e1 = Explosion of the Hindenburg dirigible, e2 = John F. Kennedy's assassination, and e3 = some future asteroid impact in the 21st century. In the Minkowski
hyperspace these have always been on the timeline, which is traversed in the same way one would traverse a space. Thus, one encounters the various events on the timeline as s/he might encounter towns
or villages along a highway.
Movement can occur in time or in space, and have a complementary (space or time) equivalent. For example, stay where you are and let one minute elapse on your watch. You have performed a 'movement in
time' without a corresponding movement in space. We say you have traversed imaginary space. This imaginary space can easily be computed:
Im(x) = i(300,000 km/s x 60 s) = 18,000,000(i) km
That is, you have traversed 18 million imaginary kilometers or 11.25 million miles in imaginary space. (Im(x) is the symbolic representation for an imaginary space (x) transition). Now, think of a
movement in real space, but none in time. Is this possible? Well, I can get out my telescope and observe the Moon instantly - bearing my consciousness upon it - without taking the time to travel
there. For all intents and purposes I am there. In this case, an imaginary time interval is the result, and again can be computed:
Im(τ) = (i) 384,000 km/ 300,000 km/s = 1.28i sec
That is, 1.28 imaginary seconds to get there. I note here that this imaginary time interval is equivalent to a real space interval: 384,000 km (space distance to Moon) = 1.28 i seconds. Thus,
imaginary time and real space are interchangeable. This has prompted at least one observer of the situation (to do with Minkowski spacetime) to observe (Whittrow, op. cit., p. 104.):
“In other words, the passage of time is merely to be regarded as a feature of consciousness that has no objective counterpart.”
This is important! It suggests that although we might formulate an apparently real "temporal highway" - i.e. the Minkowksi timeline, that the reality is we can't use it in any objective fashion, say
analogous to changing locations on an actual physical highway. In this case, the only "time travel" one would be able to do is a limited form based on displaced consciousness. Since quantum mechanics
fuses the role of observer with that observed, then a quantum displacement affecting consciousness could conceivably "transport" one to another time, say perhaps the day of Kennedy's assassination.
The problem is one would only access it as a conscious observer, as if watching passively from a TV screen, not as a participant. Moreover, to make it work it seems likely the inception would have to
commence in the brain itself - say at a synapse- for which we already know the dimension of the synaptic cleft (200- 300 nm) is arond the scale for the Heisenberg Uncertainty principle to operate.
From an initial energy emergence, say based on the energy-time uncerainty relation:
DE Dt £ ħ
One might then arrive at adequate energy, i.e.
DE » ħ/ Dt
to initiate a temporal cascade or 'push' to displace one's conscious in time. Following this in a more comprehensive way would require using some kind of operator to generate variation in time as
experienced by consciousness. David Finkelstein (
Quantum Sets and Clifford Algebras, in International Journal of Theoretical Physics, Vol. 21, Nos. 6/7, p. 489
) has
created an operator explicitly to vary time via ‘bracing’. The operator is called ‘the brace operator’, Br. To see how it works on an elementary level, select a quantum unit set (say of cardinality
1) over some sub-module S (1) of the Clifford algebra S, with basis B(1). Then it follows from application of Br, and its conjugate Br*: Br* Br = 1 Br Br* = [unit] Br* Br - Br Br* = [non-unit]
A more graphic way to see this is as follows: After 1 τ, Br = { }
After 2 τ, Br = { { } }
After 3 τ, Br = { { { } } }
where we define: 10^-43 s < τ < 10^-23 s And we see that for the smaller values on the left side, for which Dt = D τ
enormous energy would be released.
Notice that the brace creation increases arithmetically as the unit tau increases. The operation Br, equivalent to C(b) in Grassman space, generates an elemental tau (τ) each time, starting from some
initial fluctuation.
Might this fluctuation be willed? Perhaps, but more than likely it would be spontaneous.
All of this is of course highly speculative, but one thing which isn't is the clear impossibility of actual physical time travel, by time slip or otherwise. Yes, I'd originally planned to travel to
Dachau sometime in the next 2 months, to attempt a time slip experiment. But I don't believe it would be wise to try it, even slipping into a parallel universe with Dachau on another Earth! But it
might yet be feasible to attempt a quantum -based displacement of consciousness - say back to Nov. 22, 1963.
Many readers may recall, or perhaps have seen, the film "Reefer Madness" - a 1936 propaganda exploitation effort revolving around the melodramatic events that ensue when high school students are
lured by pushers to try marijuana. Their travails extend from a hit and run accident, to manslaughter, suicide, attempted rape, and descent into madness.
The obvious purpose was to scare the living bejeezus out of any kid to not even think of trying "demon weed". The message was it would wreck young lives leaving them broken husks - much like modern
Xtian fundamentalism has wrought on too many minds these days.
Anyway, the film was directed by Louis Gasnier and starred a cast composed of mostly unknown bit actors.
Originally it had been financed by a church group (Wouldn't ya know?) under the title "Tell Your Children". Its primary mandate was circulation and screening to parents as a putative morality tale,
attempting to teach them about the dangers of any cannabis use by their kiddies. Perhaps two decades later, any viewing of this dreck became so laughable that it emerged as a cult film - shown to
audiences primarily as joke material. Which is rightly the niche to which it belongs.
Flash forward to today, and we still behold would-be propagandizing clowns - like a certain under-educated goober- who don't even bother to do minimal reearch before shooting from the hip concerning
another state's MJ laws. In this case, it seems like my dumb turd wannabe Rebel bro didn’t take long to take umbrage at my post about his MJ bloviations 3 days ago. True to his bellicose nature he
came out firing…..but alas…..all scattershot, ending up hitting himself in his own fat ass.
I am not about to reference all his assorted BS, but focus in particular on two aspects: 1) His citation of lengthy recycled bollocks from a known anti-MJ crusader link about the “ill effects” of MJ
on youth, and (2) His claim that (in yesterday's blog post) I was "comparing apples and oranges" in highlighting the ill-effects, fatalities for DUI in FLA, over MJ -induced auto fatalities in
Regarding (1), it doesn't take much Google searching even by a lamebrain to dredge up multiple anti-MJ sites (e.g. 'Smart Colorado'), then recycling their hogwash into a blog. That was essentially
how Mikey consumed over two thirds of his last blog, by parroting one site and its "warnings" and how MJ will "tarnish" Colorado in multiple ways. All of these are exaggerated fear- mongering
Agitprop -much like "Reefer Madness"- and all have been shot down by MJ legalization backers and groups. Multiple times. In Colo. we know such anti-MJ groups existed even before Amendment 64 became
law, and I even referenced the efforts of Patrick Kennedy to form one of his own for a national campaign to halt any further state legislation to allow MJ, see: http://brane-space.blogspot.com/2013/
I further noted that in taking this route Kennedy effectively became a useful idiot for Big PhrMA- much like Mike has become (albeit unconsciously) with his laughable anti-MJ, anti-Amendment 64
blogs. As far as the “risks” to teens, young people I cited a letter in the Denver Post which nailed such a red herring:
“There are many freedoms adults often enjoy that are illegal for kids, including gambling, drinking, smoking, investing, driving, getting piercings and tattoos, getting married, staying out all
night, going to many concerts, working a double shift, etc. Granted, many of these freedoms could be considered bad for adults, too, but the “bad for kids” trope is nothing more than a cudgel
designed to stifle honest debate. An unregulated black market is most assuredly more harmful to kids than a regulated honest market, and Colorado enjoys many economic advantages from the tax revenue
these freedoms bring when adults enjoy them responsibly”
Of course, such points are way too subtle for a hammerhead like Mike! This stubborn tool- or more like a half tool and half fool, will always twist semantics to what suits his specious fundie agenda,
and bring in irrelevancies and red herrings since he lacks any argumentative ballast.
This brings up his second issue for which I insert here his nutso response from his blog, for reference:
"The FACT is that any would-be traveler is much more likely to be killed by a drunk in Florida than even sideswiped by an MJ user here in Colorado."
Hey, DUMBASS! That statement is true in any state! Why? DUHHHH....you idiot! Because alcohol has been legalized where MJ has NOT! When prohibition was in effect, deaths via drunk drivers were
miniscule! Once it was repealed and made "legal," as time went by, alcohol-related motor vehicle crashes and deaths SOARED! As did overall crime (e.g., domestic violence, robberies, murders, etc!).
Y'all wanna see MORE of something negative from an abusive substance? JUST LEGALIZE IT!
Hence, if your Libtard Guv and other politicians in CO decide to keep MJ "legal," keep an eye on your state's impaired driving deaths and injury stats THEN! Okay? Then let me know what ya find! (oh
yeah, toss in the stats on the overall crime rate as well)
Well, leave it to a brain damaged (at Parris Island) fucktard to restate the point I’d already made! I.e.
“Yes, the basis analog for the argument is different, but then he brought it on himself by harping on all the “ills” of pot use in Colorado – while neatly overlooking that marijuana is not the
culprit in ANY state, rather alcohol is. “
But missing the boat as to the reason why! At the risk of getting even more subtle beyond his comprehension level, let me make this finer point: The WHOLE basis for Amendment 64 was the regulation of
marijuana LIKE alcohol. The reason that the amendment surpassed (by a long way) the requisite number of petition signatures to get on last November's ballot - was because intelligent people,
prospective voters saw the value in this equivalent regulation, despite the fact MJ has not caused one CO fatality (all Mike’s speculations aside or taking biased factoids from his anti-MJ sites).
Indeed, the virtues of pot, in NOT creating analogous DUI-type havoc on the roads, or other crimes, were largely what drove young voter turnout in the state and 2 to 1 votes for the Amendment! In
other words, DOH!!! - So long as alcohol consumption is legal in Colorado (and other states), criminalizing marijuana is fucking absurd!
In addition, people saw the economic benefits! If MJ is indeed regulated like alcohol then taxes would provide additional revenues! In a state drowning in debt because of too low state taxes, this is
a godsend. In the case of Colorado Springs, for example, our medical marijuana businesses brought in nearly $1m in extra local tax revenues last year – enough to keep assorted gov’t functions going,
including upkeep of parks and trails cleaned, street lights on and a few more schools open- as well as maintained. Does this matter? Ask the people who live here! One thing we DO know is that
bringing in more military - as based at Ft. Carson- hasn't made a significant difference to state coffers! The drain on our schools, highways, hospitals has more than countered any tax revenue
Thus, the point this terminal idiot doesn’t grasp is that OF COURSE one is more likely to be killed in any state via DUI from excess alcohol BUT THAT IS EXACTLY WHY EQUIVALENT REGULATION OF BOTH – AS
DRUGS- SHOWS THERE IS LESS REASON TO BAN MJ THAN ALCOHOL! In other words, when both (legal) alcohol and MJ are forced under the same regulatory standards, then MJ wins the benefits column by a mile!
(And I won't even belabor the proven benefits of cannabis for cancer patients, i.e. in finding their appetities after chemotherapy!)
But trying to explain this to a dumb, Bars 'n stars- toting wannabe Confederate (he was actually born in Milwaukee- a fact he can never change) is like trying to explain differential calculus to
‘Sparky’ – wifey's and my favorite backyard squirrel.
He also commits the logical fallacy of "slippery slope" when he claims if MJ is legalized across the nation, like alcohol after prohibition – then we will all be on the highway to Hell with even more
"evils", "abuses" etc.. We will have crazy MJ dopers running amuck just like drunks. But the stats again don’t support his fear mongering. We have had medical MJ for over five years now and no one is
going nuts on the streets, despite the fact many more citizens likely avail themselves of it than really need it, i.e. for cancer or severe pain. But so WHAT?
As for the federal war on drugs and their prohibitions of a ‘schedule 1 substance’ even the most avid right wingers agree that all it has done is filled our prisons and at great cost, which we can no
longer afford. This is also why a consistent majority of Americans support legalization for the nation. (By almost 55% to 45%)
The last irksome element of his endless gibberish is the nonsensical one that I am not entitled to be taken seriously if I cite links or info from state DMV urls to do with drunk driving stats,
crimes, arrests! The reason? I never worked in law enforcement, or was a cop (like he was- though he spent most of his time beating in the heads of poor black sugar cane cutters in South Bay). But
what does that have to do with the price of tea?
In fact, the argument is as fucking stupid and deranged as arguing that I have no right to blog on the Vietnam War, the wrongful way it was started or the atrocities committed, because I never served
in the military. In like manner- though Mike is too dumb and blind to see it- his own pseudo-logic and arguments militate against him citing links to MJ from Colorado despite the fact he's: a) never
been a lobbyist in the state, or b) has never been a legislator and doesn't know beans about the basis of Amendment 64. (Though again, he could learn and justify his blogs! But as in the case of
evolution, the Big Bang, etc. he never does.)
In the end it's useless to try to argue or debate this character because he is totally ignorant of the basic parameters that apply to the content of any worthwhile argument. In this case, it's the
Amendment 64 basis and legalization framework - to regulate MJ like alcohol has been. This being the case, there'll be no further engagements until he can show he can pass a basic test in logic, for
which I provide a link here:
My bet is that, like the biblical exegesis test, he will punk out. It's much easier, after all, to spout endless rubbish, ignorance and bullshit than it is to show he can truly engage on the same
intellectual "battle field". Perhaps he ought to stick to the battlefields to which he's accustomed, i.e. bat and bottle fights in bars and .....with rogue gators in Lake Panosoffkee, FL.
We now look at the solutions to the last group of math problems:
1) Compute the Bessel functions for J[o](x) and J[1](x) with x = 1 and then compare with the values obtained from the graph shown at top.
Use the truncated series:
J[o](x) = 1 - x^2/ 2^2 (1!)^2 + x^4/ 2^4 (2!)^2 - x^6/ 2^6 (3!)^2 +…..
Then for x = 1:
J[o](1) = 1 – (1)^2/ 2^2 (1!)^2 + (1)^4/ 2^4 (2!)^2 – (1)^6/ 2^6 (3!)^2 +…
J[o](1) = 1 – ¼ + 1/ 64 - 1/2304 = 0.765
Compare to graphical value: J[o](1) = 0.8
J[1](x) = x/ 2 - x^3/ 2^3 ·1! 2! + x^5/ 2^5 ·2!3! - x^7/ 2^7 ·3!4! - .....
Then for x = 1:
J[1](1) = 1/ 2 – (1)^3/ 2^3 ·1! 2! + (1)^5/ 2^5 ·2!3! – (1)^7/ 2^7 ·3!4! - .....
J[1](1) = ½ - 1/16 + 1/ 384 - 1/ 18432 = 0.44
Compare to graphical value: J[1](1) = 0.45
2) Find the twist in a solar loop (take it to be a magnetic tube) if: B [q] (r)= 0. 1T and B [z] (r) = 0.2T. Take the radius of the tube to be r = 10^4 km and the length L =10^8 m. Is the tube kink
unstable or not? (Kink instability is said to obtain when: T(r) >2 p
Solution: the “twist” is defined:
T(r) = (L B [q](r))/ (r B [z] (r))
Where: B [q](r) = 0.1 Tesla and B [z] (r) = 0.2 Tesla
Also: r = 10^4 km = 10^7 m and L = 10^8 m
Therefore: B [q](r))/ (B [z] (r)) = (0.1)/ (0.2) = 0.5 and L/r = (10^8)/ (10^7) = 10
T(r) = (L/r) (0.5) = 10 (0.5) = 2.0
The tube is not kink unstable since that requires: T(r) > 2p = 6.28
3) Compute the intensity for the azimuthal magnetic field component (i.e. B [q] (r) ) of a large sunspot, if its equilibrium magnetic field B[o] = 0.01 T and the value of J[1](ar) conforms to a = 0.4
and r = 40.
Solution: By definition: B [q] (r) = B[o] J[1](ar)
If a = 0.4 and r = 40 then ar = (0.4)(40) = 16
From the graph: J[1](ar) = J[1](16)) » 0.17
B [q] (r) = B[o] J[1](ar) = 0.01T (0.17) = 0.0017 Tesla
Well, let's get it out into the open: Fact-based reality is a challenge for most repukes and Southern Tea party types in the best of times. In times of political upheaval especially with the
North-South divide increasing due to hatred of Obama, it goes over the top. Especially for a certain blogger who yaps a lot but is unable to even pass a relatively simple test in his self-proclaimed
area of expertise.
Enter then a recent, semi-serious blog in which he holds Florida up as an exemplar “vacation spot” (along with other formerly Rebel enclaves like MS, AL) while dissing all the “Yankee” states- i.e.
north of the Mason-Dixon line- as well as Colorado. In the latter case he bloviates about the prospective traveler’s likelihood of getting killed – maybe run down or whatever- by a pot-crazed
lunatic, given our state has passed Amendment 64 to allow recreational use of pot.
Of course, this is as much codswallop as his obsession over a non-existent "salvation" and its schizoid driving forces: “Satan” and “Hell”. So one must wonder if those puerile, phantasmagorical
ideations formed some kind of embolism in his deteriorating brain leading him to write so much crap about Colorado and marijuana! The FACT is that any would-be traveler is much more likely to be
killed by a drunk in Florida than even sideswiped by an MJ user here in Colorado. But don’t believe me, check out the state of Florida’s own statistics from a state site: http://www.dmvflorida.org/
Therein we learn:
"According to Florida DMV records there were 33,625 DUI convictions in Florida in 2011. Of the 55,722 DUI tickets issued in Florida in 2011 - 9,328 were issued by the FHP, 23,649 were issued by
police departments in Florida, and 21,868 were issued by Florida Sheriffs departments.”
Comparable statistics in all categories for MJ users in COLORADO : O
We also learn the following DUI stats for assorted arrests in Florida counties:
· Hillsborough County (Tampa) - 3,256
· Miami-Dade - (Miami) - 2,274
· Duval County - (Jacksonvile Area) - 2,222
· Pinellas County (St Petersburg) - 1,824
· Palm Beach County (West Palm Beach) - 1,561
· Orange County (Orlando) - 1,383
· Brevard County (Melbourne) - 1,072
· Broward County (Fort Lauderdale) - 985
Comparable driving-related stats for MJ users in COLO, for all counties: Zero
We also learn about fatalities from this site: http://www.dui-usa.drinkdriving.org/Florida_dui_drunkdriving_statistics.php
For which we find:
717 fatal accidents in Florida where at least one driver had a BAC (Blood alcohol content) of 0.08% or above
803 people were killed in Florida in accidents where at least one driver had a BAC of 0.08% or above
154 people were killed in Florida in accidents where at least one driver had a BAC between 0.01% and 0.07%
These are startling stats! Look at them! That is a total of 957 deaths from drunken drivers! There are also likely hundreds of shooting incidents, including accidents, engendered by too high a BAC
but the state (fortuitously) keeps no records on those. Meanwhile, the comparable stats for marijuana users in Colorado? Zero!
The hard truth, which this bozo seems not to process, is that you’re much more likely to lose your life from a drunken driver in FLA than from an MJ user in Colorado. Yes, the basis analog for the
argument is different, but then he brought it on himself by harping on all the “ills” of pot use in Colorado – while neatly overlooking that marijuana is not the culprit in ANY state, rather alcohol
is. It leads to more DUI deaths, more accidents – including by use of weapons- than MJ does in any parallel universe.
The guy’s fact base is so distorted that he actually impugned Gov. John Hickenlooper of CO despite the fact he was never a fan of Amendment 64. Indeed, in the wake of its passage, he was the one that
snarkily warned voters they’d best “not get high on Cheetos”.
Hickenlooper was also the one that proposed, after 64’s passage, a way to get MORE Federal oversight- working with the U.S. Attorney Gen. ! This so outraged many that Denver Post columnist Vincent
Carroll was driven to write a column ('Come On, Governor, Defend 64!' , Nov. 8, 2012, p. 21A) to steer the Guv to the side of the angels. (Alas, he's gone more to the side of the 'devils'- as in
recently drinking a glass of allegedly "fracked" water and declaring it "tastes fine, has no ill effects".)
Sadly, what all of this shows is that “Mr. Johnny Reb Blogger” has no clue what goes on in this state, nor the comparable damage done by alcohol relative to MJ. But then why be surprised when he
earlier shot his yap off about Chicago being the the No. 1 city for homicides in the USA, when FBI uniform crime stats disclosed Detroit! Or, being totally unaware that the Vietnam conflict was
started on a pretext, or that the ‘Amazing Race’ episode he calumniated was not about “forcing contestants to learn a communist song” but rather matching different symbols on posterboards.
But after all, why be surprised? This is the same character that wouldn’t even attempt a test in his own proclaimed specialty area, biblical exegesis, i.e. http://brane-space.blogspot.com/2013/04/
This despite being given unlimited time to take it!
Perhaps, before Mr. 'Goober' next blogs on MJ or Colorado, he might learn a bit more from other subjects. Say like: Sociology, crime statistics, American history......oh, and basic English. Maybe at
his FLA Smokehouse Bible College, assuming there are any offerings besides "bible study" and watching Woody Woodpecker cartoons!
Plot of Bessel function of the first kind, J[α](x), for integer orders α = 0, 1, 2.
Among the most important special functions is the Bessel function. In the field of solar physics, for example, it's of inestimable importance in the analysis of solar magnetic fields and their
evolution. One very key equation is the Helmholtz, viz.
1/ r [¶/ ¶r ( r ¶/ ¶r)] B + (a)^2 B = 0
where r is the radial coordinate, B the magnetic field intensity, and a a quantity called the "force free parameter". Then the axially symmetric (i.e.- in cylindrical coordinates r, z, q) Bessel
function solutions are
B [z] (r) = B[o] J[o](a r)
B [q] (r) = B[o] J[1](ar)
where the axial (top) and azimuthal magnetic field components are given, respectively, and J[o](a r) is a Bessel function of the first kind, order zero and J[1](ar) is a Bessel function of the first
kind, order unity. (See graphs at the top for Bessel functions of the orders 0, 1 and 2).
The Bessel functions are mathematically defined (cf. Menzel, 'Mathematical Physics', 1961, p. 204):
J[m] (x) = (1/ 2^m m!) x^m [1 - x ^2/ 2^2 1! (m + 1) + x^4/ 2^42! (m + 1) (m + 2) - ….(-1)^j x^2j / 2 ^2j j! (m + 1) (m + 2)…(m + j) + …]
which we terminate with second order terms.
For m = 0 and m = 1 forms one gets:
J[o](x) = 1 - x^2/ 2^2 (1!)^2 + x^4/ 2^4 (2!)^2 - x^6/ 2^6 (3!)^2 + ......
J[1](x) = x/ 2 - x^3/ 2^3 ·1! 2! + x^5/ 2^5 ·2!3! - x^7/ 2^7 ·3!4! - .....
The equations in B [z] (r), B [q] (r), with the special Bessel functions at root, are critical in describing the respective magnetic fields for a magnetic tube. For a cylindrical magnetic flux tube
(such as a sunspot represents viewed in cross-section) the “twist” is defined:
T(r) = (L B [q](r))/ (r B [z] (r))
Where L denotes the length of the sunspot-flux tube dipole and r, the radius. If the twist exceeds 2p then the magnetic configuration may be approaching instability and a solar flare.
Problems for the Math Maven:
1) Compute the Bessel functions for J[o](x) and J[1](x) with x = 1 and then compare with the values obtained from the graph shown at top.
2) Find the twist in a solar loop (take it to be a magnetic tube) if: B [q](r) = 0. 1T and B [z] (r) = 0.2T. Take the radius of the tube to be r = 10 ^4 km and the length L = 10 ^8 m. Is the tube
kink unstable or not? (Kink instability is said to obtain when: T(r) > 2p)
3) Compute the intensity for the azimuthal magnetic field component (i.e. B [q] (r) ) of a large sunspot, if its equilibrium magnetic field B[o] = 0.01 T and the value of J[1](ar) conforms to a = 0.4
and r = 40.
In the diagram directly above there appears a two-slit diffraction pattern and also an associated intensity pattern showing the relative light intensity for the pattern -obviously with the central
portion amplitude above the rest.
David Deutsch's arguments on p. 44 of his monograph ('The Fabric of Reality") basically contend that:
"there is no intrinsic difference between tangible and shadow photons: each photon is tangible in one universe and intangible in all the other parallel universes"
In his representation on p. 41 (Fig. 2.7), we see a similar pattern to the one given above, along with an additional pattern in which the dark interference fringes are much wider. He infers that
"something must be coming through the second pair of slits (bear in mind two, 2-slit arrangements are set up in sequence) to prevent the light from the first pair reaching X? But what? We can find
out with further experiments."
After a lengthy bit of reasoning, including successive tweaking of thought experiments, Deutsch arrives at "shadow photons from a parallel universe". He infers (p. 44) for at least one arrangement
"at least a trillion shadow photons accompanying each tangible one."
He also is careful to distinguish the respective properties: (ibid.)
"Thus, we have inferred the existence of a seething, prodigiously complicated, hidden world of shadow photons. They travel at the speed of light, bounce off mirrors, are refracted by lenses, and are
stopped by opaque barriers or filters of the wrong color. Yet they do not trigger even the most sensitive detectors. The only thing in the universe that a shadow photon can be observed to affect is
the tangible photon it accompanies. That is the phenomenon of interference. Shadow photons would go entirely unnoticed were it not for this phenomenon."
In light of the above, we now return again to the experience of Charlotte Moberly and Eleanor Jourdain in experiencing a time slip transferring them from the year 1901 to 1789. (Previous blog).
Moberly described a flat and lifeless terrain in her report and most importantly declares: "There were no effects of light and shade, and no wind stirred the trees". What do we infer?
If there really were "no effects of light and shade" then it must mean that their surreal domain was in fact an inter-phased one for two parallel universes. In this domain we surmise that if they
could have performed the sequential two slit diffraction experiment cited by Deutsch they'd have found instead a central large dark area and bright fringes. As opposed to a central bright region and
dark fringes. In addition, we may infer this interphased domain was dominated by shadow photons over tangible ones but at different angles. This is why "everything suddenly looked unnatural,
therefore unpleasant; even the trees behind the building seemed to have become flat and lifeless ".
It is important to reiterate here, that according to the standard multiverse theory (based on cosmic inflation), there are basically an infinite number of parallel universes. In the hyper-toroid
geometry (graphic shown in the previous blog) these separate universes can be regarded as lines of longitude, since an infinite number of them can be fit around the circumference of the hyper-sphere.
If say, one somehow for some reason briefly overlapped another, one would expect an interphase condition. This then would permit a time slip to occur but it would not be between times in the same
universe (say for the Brit pair between the 1901 and 1789 Earth as we know it) but between TWO distinct parallel universes. That is, they'd have slipped from 1901 in this universe with the Earth as
it exists therin, to another (parallel) universe with the Earth as it existed in 1789.
Let's now explore the dynamics more closely, using the diagram shown at the very top, for two spaces in algebraic homology:
T = S1 X S1
The first (space) is the circle all the way around the middle of the 'donut's body. The second (time) is the circle around a section of the donut itself. Here (diagram), the respective spaces
(circles S1) define two dimensions for what we will call the global state space GL. Thus, we have:
GL = S1 X S1 = (SPACE) X (TIME)
The line marked 'Axis' defines the center of the toroidal space we are looking at. The important point is that the time cycle is mapped all along the (single) S1 cycle of space. The space cycle
therefore defines all hyperdimensional cosmic time cycles that ever have, or will, exist. Evidently, there are an infinite number of such cycles, since an infinite number of points can be mapped onto
the space cycle as well.
All cycles are identical in the infinite series (Σi Θi) but also different. Identical, since each cycle goes from 0 through 360 degrees, folding back on itself so that a particular beginning (Big
Bang = 0) and ending (Big Crunch = 360°) for each universe occurs at one and the same point (0° = 360°). Hence the same initial and final coordinates apply to all cosmic cycles. Θ is thus a fixed
dimension of the Young-model ‑hypertorus, much like time in the Minkowski universe. Indeed, in the hypertorus overview, any position can be fixed by two coordinates (φ, Θ) where the φ is used for
space and Θ for time. In fact, since both are circles, it makes sense to assign them angles: one (φ) for space, the other (Θ) for time.
Consider now, the mapping for the phase space volume we called the global state space. It is the (topological) space of all possible times and all possible spaces, for all possible universes. Thus,
it is an imaginary (in the mathematical sense!) space. Two possible events in this global state space might be designated:
Ξ = (φ, Θ) AND Ξ' = (φ',Θ')
By way of example, Ξ could be the assassination of Hitler, and Ξ' the event wherein Hitler escapes assassination. Either one is possible but not both simultaneously in the same cosmic cycle or
universe! We already know we inhabit the cycle for which Ξ' holds, so the other (Ξ) cannot. The "pasts" are thus mutually exclusive, which further reinforces Hawking's "temporal censorship" postulate
that one cannot go back and change the past.. The topological space of the hypertorus cosmos can therefore be represented by the global state space, a product of absolute hypertorus coordinate time
(Θ) and 'all-space'(φ):
GL = Θ X φ
Now, the set of specific times t_i C Θ_i, and the set of specific 4-dimensional spaces R_i C φ_i, so the space of all local states, L is the product space of four dimensional spaces and specific
L = R X t
We now need to look at is how L and GL are related, and the space and time sets within them: L Ì GL That is, the local state space L is contained within the global state space GL, but can never be
equal to it The same applies to the subsidiary spaces: R in relation to φ, and the specific times: t in relation to Θ.
For example, for Θ, there is some spatio-temporal matrix M with generalized dimensional indices {x0, x1, x2, x3}. No one of these is 'time' specifically and uniquely. Rather, "time" arises when the
three space indices have been assigned (i.e. if φ= {x1, x2, x3}, then Θ = {x0}. In effect, as S. Auyang observes: "the structure M is too primitive to confer special meaning on the time dimension. M
is not in time, it is all times." (How is Quantum Field Theory Possible?, Oxford University Press, 1995, p. 169). By contrast, 't carries the load of temporal significance'. (Note here that when the
4 dimensional indices for each parallel universe are defined, then it becomes possible to more rigorously separate the parallel universes, i.e. in terms of their respective physical constants,
specific times of origin, and duration.)
"Time" then is by no means as straightforward as often assumed, especially if one can plausibly have both time and space uncertainty - i.e. the Heisenberg Indeterminacy principle applicable to each.
If this be so, then one can conjecture unusual conditions in which both space and time indeterminacy for two adjacent parallel universes allow for brief interphasing. If a sentient person, human,
happens to be at such a location when this indeterminacy of space and time occurs, he can experience a "time slip".
Again, we cannot say the person or persons are really "going back in time". Instead, only that they have transitioned from a current coordinate Θ1 in one universe to Θ2 in other. While Θ2 appears to
be "backward" in time by the reckoning of the person at Θ1 it is not really in relation to their own universe.
Part 3: Can I do a time slip to take me to the Kennedy assassination in any universe? | {"url":"https://brane-space.blogspot.com/2013/04/","timestamp":"2024-11-09T07:48:28Z","content_type":"text/html","content_length":"233371","record_id":"<urn:uuid:68565f17-8cc8-44fc-9581-21bb74705220>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00411.warc.gz"} |
Karen Uhlenbeck
Karen Uhlenbeck biography
Date of birth : 1942-10-24
Date of death : -
Birthplace : Cleveland, Ohio, U.S.
Nationality : American
Category : Science and Technology
Last modified : 2011-08-30
Credited as : Mathematical researcher, professor,
The mathematical research conducted by Karen Uhlenbeck has applications in theoretical physics and has contributed to the study of instantons. For her work in geometry and partial differential
equations, she was awarded a MacArthur Fellowship.
Karen Uhlenbeck is engaged in mathematical research that has applications in theoretical physics and has contributed to the study of instantons, models for the behavior of surfaces in four
dimensions. In recognition of her work in geometry and partial differential equations, she was awarded a prestigious MacArthur Fellowship in 1983.
Karen Keskulla Uhlenbeck was born in Cleveland, Ohio, on August 24, 1942, to Arnold Edward Keskulla, an engineer, and Carolyn Windeler Keskulla, an artist. When Uhlenbeck was in third grade, the
family moved to New Jersey. Everything interested her as a child, but she felt that girls were discouraged from exploring many activities. In high school, she read American physicist George Gamow's
books on physics and English astronomer Fred Hoyle's books on cosmology, which her father brought home from the public library. When Uhlenbeck entered the University of Michigan, she found
mathematics a broad and intellectually stimulating subject. After earning her B.S. degree in 1964, she became a National Science Foundation Graduate Fellow, pursuing graduate study in mathematics at
Brandeis University. In 1965, she married Olke Cornelis Uhlenbeck, a biophysicist; they later divorced.
Uhlenbeck received her Ph.D. in mathematics from Brandeis in 1968 with a thesis on the calculus of variations. Her first teaching position was at the Massachusetts Institute of Technology in 1968.
The following year she moved to Berkeley, California, where she was a lecturer in mathematics at the University of California. There she studied general relativity and the geometry of space-time, and
worked on elliptic regularity in systems of partial differential equations.
In 1971, Uhlenbeck became an assistant professor at the University of Illinois at Urbana-Champaign. In 1974, she was awarded a fellowship from the Sloan Foundation that lasted until 1976, and she
then went to Northwestern University as a visiting associate professor. She taught at the University of Illinois in Chicago from 1977 to 1983, first as associate professor and then professor, and in
1979 she was the Chancellor's Distinguished Visiting Professor at the University of California, Berkeley. An Albert Einstein Fellowship enabled her to pursue her research as a member of the Institute
for Advanced Studies at Princeton University from 1979 to 1980. She published more than a dozen articles in mathematics journals during the 1970s and was named to the editorial board of the Journal
of Differential Geometry in 1979 and the Illinois Journal of Mathematics in 1980.
In 1983, Uhlenbeck was selected by the John D. and Catherine T. MacArthur Foundation of Chicago to receive one of its five-year fellowship grants. Given annually, the MacArthur fellowships enable
scientists, scholars, and artists to pursue research or creative activity. For Uhlenbeck, winning the fellowship inspired her to begin serious studies in physics. She believes that the
mathematician's task is to abstract ideas from fields such as physics and streamline them so they can be used in other fields. For instance, physicists studying quantum mechanics had predicted the
existence of particle-like elements called instantons. Uhlenbeck and other researchers viewed instantons as somewhat analogous to soap films. Seeking a better understanding of these particles, they
studied soap films to learn about the properties of surfaces. As soap films provide a model for the behavior of surfaces in three-dimensions, instantons provide analogous models for the behavior of
surfaces in four-dimensional space-time. Uhlenbeck cowrote a book on this subject, Instantons and 4-Manifold Topology, which was published in 1984.
After a year spent as a visiting professor at Harvard, Uhlenbeck became a professor at the University of Chicago in 1983. Her mathematical interests at this time included nonlinear partial
differential equations, differential geometry, gauge theory, topological quantum field theory, and integrable systems. She gave guest lectures at several universities and served as the vice president
of the American Mathematical Society. The Alumni Association of the University of Michigan named her Alumna of the Year in 1984. She was elected to the American Academy of Arts and Sciences in 1985
and to the National Academy of Sciences in 1986. In 1988, she received the Alumni Achievement award from Brandeis University, an honorary doctor of science degree from Knox College, and was named one
of America's 100 most important women by Ladies' Home Journal.
In 1987, Uhlenbeck went to the University of Texas at Austin, where she broadened her understanding of physics in studies with American physicist Steven Weinberg. In 1988, she accepted the Sid W.
Richardson Foundation Regents' Chair in mathematics at the University of Texas. She also gave the plenary address at the International Congress of Mathematics in Japan in 1990.
Concerned that potential scientists were being discouraged unnecessarily because of their sex or race, Uhlenbeck joined a National Research Council planning group to investigate the representation of
women in science and engineering. She believes that mathematics is always challenging and never boring, and she has expressed the hope that one of her accomplishments as a teacher has been
communicating this to her students. "I sometimes feel the need to apologize for being a mathematician, but no apology is needed, " she told The Alcalde Magazine. "Whenever I get a free week and start
doing mathematics, I can't believe how much fun it is. I'm like a 12-year-old boy with a new train set."
Read more | {"url":"http://www.browsebiography.com/bio-karen_uhlenbeck.html","timestamp":"2024-11-04T04:46:37Z","content_type":"application/xhtml+xml","content_length":"42820","record_id":"<urn:uuid:32bd4936-72c3-450f-97dc-e49090169e85>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00166.warc.gz"} |
XGBoost for Regression
XGBoost is a powerful tool for regression tasks.
Regression involves predicting continuous output values. XGBoost can perform various types of regression tasks (linear, non-linear) depending on the loss function used (like squared loss for linear
Here’s a quick guide on how to fit an XGBoost model for regression using the scikit-learn API.
# xgboosting.com
# Fit an XGBoost Model for Regression using scikit-learn API
from sklearn.datasets import make_regression
from xgboost import XGBRegressor
# Generate a synthetic dataset with 5 features
X, y = make_regression(n_samples=1000, n_features=5, noise=0.1, random_state=42)
# Initialize XGBRegressor
model = XGBRegressor(objective='reg:squarederror', random_state=42)
# Fit the model to training data
model.fit(X, y)
# Make predictions with the fit model
predictions = model.predict(X)
In just a few lines of code, you can have a working XGBoost model for regression:
1. Initialize an XGBRegressor with the appropriate objective (here, 'reg:squarederror' for regression).
2. Fit the model to your training data using fit(). | {"url":"https://xgboosting.com/xgboost-for-regression/","timestamp":"2024-11-13T04:33:30Z","content_type":"text/html","content_length":"6312","record_id":"<urn:uuid:eac91bf4-2eff-4967-82db-b9b41efea244>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00423.warc.gz"} |
Hydrodynamics Class Reference
This class provides hydrodynamic behaviour for underwater vehicles It is shamelessly based off Brian Bingham's plugin for VRX. which in turn is based of Fossen's equations described in "Guidance and
Control of Ocean Vehicles" [1]. The class should be used together with the buoyancy plugin to help simulate behaviour of maritime vehicles. Hydrodynamics refers to the behaviour of bodies in water.
It includes forces like linear and quadratic drag, buoyancy (not provided by this plugin), etc. More...
#include <Hydrodynamics.hh>
Detailed Description
This class provides hydrodynamic behaviour for underwater vehicles It is shamelessly based off Brian Bingham's plugin for VRX. which in turn is based of Fossen's equations described in "Guidance and
Control of Ocean Vehicles" [1]. The class should be used together with the buoyancy plugin to help simulate behaviour of maritime vehicles. Hydrodynamics refers to the behaviour of bodies in water.
It includes forces like linear and quadratic drag, buoyancy (not provided by this plugin), etc.
System Parameters
The exact description of these parameters can be found on p. 37 and p. 43 of Fossen's book. They are used to calculate added mass, linear and quadratic drag and coriolis force.
Diagonal terms:
• <xDotU> - Added mass in x direction [kg]
• <yDotV> - Added mass in y direction [kg]
• <zDotW> - Added mass in z direction [kg]
• <kDotP> - Added mass in roll direction [kgm^2]
• <mDotQ> - Added mass in pitch direction [kgm^2]
• <nDotR> - Added mass in yaw direction [kgm^2]
• <xUabsU> - Quadratic damping, 2nd order, x component [kg/m]
• <xU> - Linear damping, 1st order, x component [kg]
• <yVabsV> - Quadratic damping, 2nd order, y component [kg/m]
• <yV> - Linear damping, 1st order, y component [kg]
• <zWabsW> - Quadratic damping, 2nd order, z component [kg/m]
• <zW> - Linear damping, 1st order, z component [kg]
• <kPabsP> - Quadratic damping, 2nd order, roll component [kg/m^2]
• <kP> - Linear damping, 1st order, roll component [kg/m]
• <mQabsQ> - Quadratic damping, 2nd order, pitch component [kg/m^2]
• <mQ> - Linear damping, 1st order, pitch component [kg/m]
• <nRabsR> - Quadratic damping, 2nd order, yaw component [kg/m^2]
• <nR> - Linear damping, 1st order, yaw component [kg/m]
Cross terms
In general we support cross terms as well. These are terms which act on non-diagonal sides. We use the SNAMe convention of naming search terms. (x, y, z) correspond to the respective axis. (k, m, n)
correspond to roll, pitch and yaw. Similarly U, V, W represent velocity vectors in X, Y and Z axis while P, Q, R representangular velocity in roll, pitch and yaw axis respectively.
• Added Mass: <{x|y|z|k|m|n}Dot{U|V|W|P|Q|R}> e.g. <xDotR> Units are either kg or kgm^2 depending on the choice of terms.
• Quadratic Damping With abs term (this is probably what you want): <{x|y|z|k|m|n}{U|V|W|P|Q|R}abs{U|V|W|P|Q|R}> e.g. <xRabsQ> Units are either kg/m or kg/m^2.
• Quadratic Damping (could lead to unwanted oscillations): <{x|y|z|k|m|n}{U|V|W|P|Q|R}{U|V|W|P|Q|R}> e.g. <xRQ> Units are either kg/m or kg/m^2.
• Linear Damping: <{x|y|z|k|m|n}{U|V|W|P|Q|R}>. e.g. <xR> Units are either kg or kg or kg/m. Additionally the system also supports the following parameters:
• <water_density> - The density of the fluid its moving in. Defaults to 998kgm^-3. [kgm^-3]
• <link_name> - The link of the model that is being subject to hydrodynamic forces. [Required]
• <namespace> - This allows the robot to have an individual namespace for current. This is useful when you have multiple vehicles in different locations and you wish to set the currents of each
vehicle separately. If no namespace is given then the plugin listens on the /ocean_current topic for a Vector3d message. Otherwise it listens on /model/{namespace name}/ocean_current.[String,
• <default_current> - A generic current. [vector3d m/s, optional, default = [0,0,0]m/s]
• <disable_coriolis> - Disable Coriolis force [Boolean, Default: false]
• <disable_added_mass> - Disable Added Mass [Boolean, Default: false]. To be deprecated in Garden.
Loading external currents
One can use the EnvironmentPreload system to preload currents into the plugin using data files. To use the data you may give CSV column names by using lookup_current_* tags listed below:
• <lookup_current_x> - X axis to use for lookup current
• <lookup_current_y> - Y axis to use for lookup current
• <lookup_current_z> - Z axis to use for lookup current If any one of the fields is present, it is assumed current is to be loaded by a data file and the topic will be ignored. If one or two fields
are present, the missing fields are assumed to default to zero.
An example configuration is provided in the examples folder. The example uses the LiftDrag plugin to apply steering controls. It also uses the thruster plugin to propel the craft and the buoyancy
plugin for buoyant force. To run the example run.
To control the rudder of the craft run the following
topic -t /model/tethys/joint/vertical_fins_joint/0/cmd_pos
To apply a thrust you may run the following command
topic -t /model/tethys/joint/propeller_joint/cmd_pos
The vehicle should move in a circle.
Ocean Currents
When underwater, vehicles are often subject to ocean currents. The hydrodynamics plugin allows simulation of such currents. We can add a current simply by publishing the following:
You should observe your vehicle slowly drift to the side.
[1] Fossen, Thor I. Guidance and Control of Ocean Vehicles. United Kingdom: Wiley, 1994.
Constructor & Destructor Documentation
◆ Hydrodynamics()
◆ ~Hydrodynamics()
Member Function Documentation
◆ Configure()
void Configure ( const Entity & _entity,
const std::shared_ptr< const sdf::Element > & _sdf,
EntityComponentManager & _ecm, overridevirtual
EventManager &
◆ PostUpdate()
void PostUpdate ( const UpdateInfo & _info,
const EntityComponentManager & _ecm overridevirtual
◆ PreUpdate()
void PreUpdate ( const UpdateInfo & _info,
EntityComponentManager & _ecm overridevirtual
The documentation for this class was generated from the following file: | {"url":"https://gazebosim.org/api/sim/7/classgz_1_1sim_1_1systems_1_1Hydrodynamics.html","timestamp":"2024-11-07T13:48:01Z","content_type":"text/html","content_length":"29419","record_id":"<urn:uuid:be14d0d7-7975-4b8b-a575-baf2a3136f38>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00772.warc.gz"} |
Simulating the Atomic World
Image Source : Shutterstock
Computer simulation, in the present day scientific scenario, has become one of the primary tools in both basic and applied sciences. With the escalating advancement of science in the last few decades
experiments have become more complex with the use of sophisticated scientific instruments. And understanding such complex scientific phenomena in a realistic and lucid manner needs more than just pen
and paper. Moreover co-relation of experiment with its proposed theory is another bigger task. And this is where simulation comes into existence. It acts as a bridge between mathematical theories and
practical experiments. The world of scientific simulation is far too immense to sum up in an article. However a humble attempt has been made to give a glimpse of its usefulness in the physical
Science in general and physics in particular has been trying to study the heart of matter, our magnificent universe, the complex interactions between atoms and sub atomic particles, how various
physical systems behave etc. from centuries now. The great minds like Galileo and Newton, using only pure intuition and available math then, tried to explain our physical surroundings in the most
justifiable manner. Then with the development of new and innovative mathematics, theories became more precise with approximations giving to nearly accurate results with experiments. It was with the
advent of Quantum Mechanics in early 20^th century that physics became more intricate with complex mathematics explaining the different phenomena occurring in the sub atomic regime. But experimenting
with such minute systems was rather impossible at that time. Nevertheless, this did not restrain the theorists from exploiting deeper into nature’s delicate arrangement with mathematical elegance.
The air was heated up with new theories explaining newer phenomena in all branches of physics. Some of these predictions are still being tested today for better accuracy with experiments.
Experiments can never lie, it shows results of whatever one wants to uncover provided the right instrumental approach has been applied. But the underlying mechanism about how the results have been
achieved through various atomic and sub atomic interactions in the system with time cannot be shown by experiments, urging the need of simulation for a complete understanding.
Simulation involves construction of detailed algorithms in compliance with proposed mathematical theories. The algorithms are made as such so to have minimum error factor for better accuracy. These
algorithms are then introduced into powerful supercomputers and calculations are allowed to run. Depending on the algorithm and its complexity in calculation the amount of run time varies. In
simulating an experiment the system is introduced virtually into the computer and different boundary conditions are given as input for the system to follow accordingly. There are mainly two broad
classes of simulation for the atomic world i.e. Classical which includes Molecular Dynamics simulation, Monte Carlo etc., and Quantum simulations using Density Functional calculations. Through these
simulations one can study the behaviour of different thermodynamic systems, their varied interactions at every step, stable atomic configurations and predicting the results accordingly.
Classical atomic physics simulations mainly deal with the interaction potentials, kinetic energy and potential energy of atoms by creating a system virtually and observing the interplay of these
potentials within the atoms virtually which converges to some end results which are more or less crude. It can be used to study simple systems two body or three body problems, statistical mechanics
problems involving probability as in case of atomic systems taking the smallest indivisible entity to be the atom . On the other hand the more recently developed Quantum Simulations is more complex
involving a range of different potentials (known as functionals) to the minutest details involving perturbations to the level of different atomic orbitals of a single atom. Calculations involving two
interacting atoms with so many electrons with each one having different atomic orbitals so huge in itself that one can hardly complete it manually in a lifetime!!. Then think of calculating real
systems with billions of atoms! These complex calculations require massive computation impossible in our simple personal computers. This is where supercomputers get to work. With very high processing
speed these computers can work 24×7 and give us the results making an impossible feat possible.
Today a good quality and complete research finding must be accompanied by theory, experiment and simulation as well. It is a growing and completely different branch of scientific study in all the
sciences. One of the most important facets of simulation is to develop our research methodology in the future to a level so as to divest ourselves of the trial and error of performing different
experiments and their associated costs by finding the perfect set up for experiment computationally which would give sure shot results. The future of computational physics is very extensive and
developments are going on in this direction with much more needed to be done. | {"url":"https://gonitsora.com/simulating-atomic-world/","timestamp":"2024-11-12T20:01:21Z","content_type":"text/html","content_length":"30560","record_id":"<urn:uuid:8857ff69-6f90-4490-9b63-4206a2c37eb7>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00339.warc.gz"} |
ECE 515 - Control System Theory & Design
Homework 1 - Due: 01/25
Problem 1
Which of the following are vector spaces over \(\mathbb{R}\) (with respect to standard addition and scalar multiplication). Justify your answers.
a. The set of real valued \(n \times n\) matrices with nonnegative entries where \(n\) is a given positive integer.
b. The set of rational functions of the form \(\dfrac{p(s)}{q(s)}\) where \(p\) and \(q\) are polynomials in the complex variable \(s\) and the degree of \(q\) does not exceed a given fixed positive
integer \(k\).
c. The space \(L^2\left(\mathbb{R}, \mathbb{R}\right)\) of square-integrable functions, i.e., functions \(f : \mathbb{R} \to \mathbb{R}\) with the property that
\[ \int \limits _{-\infty} ^{\infty} f^2 (t) dt < \infty \]
Problem 2
Let \(A\) be the linear operator in the plane corresponding to the counter-clockwise rotation around the origin by some given angle \(\theta\). Compute the matrix of \(A\) relative to the standard
basis in \(\mathbb{R}^2\).
Problem 3
Let \(A: X \to Y\) be a linear transformation.
a. Prove that \(\dim N (A) + \dim R(A) = \dim X\) (the sum of the dimension of the nullspace of \(A\) and the dimension of the range of \(A\) equals the dimension of \(X\)).
b. Now assume that \(X = Y\). It is not always true that \(X\) is a direct sum of \(N(A)\) and \(R(A)\). Find a counterexample demonstrating this. Also, describe a class of linear transformations
(as general as you can think of) for which this statement is true.
Problem 4
Consider the standard RLC circuit, except now allow its characteristics \(R, L\) and \(C\) to vary with time. Starting with the same non-dynamic physical laws as in class (\(q = CV_c\) for the
capacitor charge, \(\varphi = LI\) for the inductor flux), derive a dynamical model of this circuit. It should take the form:
\[ \dot{x} = A(t) x + B(t) u \]
Problem 5
Three employees — let’s call them Alice, Bob, and Cheng — received their end-of-the-year bonuses which their boss calculated as a linear combination of three performance scores: leadership,
communication, and work quality. The coefficients (weights) in this linear combination are the same for all three employees, but the boss doesn’t disclose them. Alice knows that she got the score of
4 for leadership, 4 for communication, and 5 for work quality. Bob’s scores for the same categories were 3, 5, and 4, and Cheng’s scores were 5, 3, and 3. The bonus amounts are $18000 for Alice,
$16000 for Bob, and $14000 for Cheng. The employees are now curious to determine the unknown coefficients (weights).
a. Set up this problem as solving a linear equation of the form \(Ax = b\) for the unknown vector \(x\).
b. Calculate the unknown weights. It’s up to you whether you use part (a) for this or do it another way.
c. Are the weights that you computed unique? Explain why or why not. | {"url":"https://courses.grainger.illinois.edu/ece515/sp2024/homework/hw01.html","timestamp":"2024-11-13T22:07:04Z","content_type":"application/xhtml+xml","content_length":"34424","record_id":"<urn:uuid:a2f69914-ea90-45a5-b5c9-e2e04e701429>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00012.warc.gz"} |
Iverson's floor replaced Gauss's bracket
Floor, ceiling, bracket
Mathematics notation changes slowly over time, generally for the better. I can’t think of an instance that I think was a step backward.
Gauss introduced the notation [x] for the greatest integer less than or equal to x in 1808. The notation was standard until relatively recently, though some authors used the same notation to mean the
integer part of x. The two definitions agree if x is positive, but not if x is negative.
Not only is there an ambiguity between the two meanings of [x], it’s not immediately obvious that there is an ambiguity since we naturally think first of positive numbers. This leads to latent
errors, such as software that works fine until the first person gives something a negative input.
In 1962 Kenneth Iverson introduced the notation ⌊x⌋ (“floor of x“) and ⌈x⌉ (“ceiling of x“) in his book A Programming Language, the book that introduced APL. According to Concrete Mathematics,
found that typesetters could handle the symbols by shaving off the tops and bottoms of ‘[‘ and ‘]’.
This slight modification of the existing notation made things much clearer. The notation [x] is not mnemonic, but clearly ⌊x⌋ means to move down and ⌈x⌉ means to move up.
Before Iverson introduced his ceiling function, there wasn’t a standard notation for the smallest integer greater than or equal to x. If you did need to refer to what we now call the ceiling
function, it was awkward to do so. And if there was a symmetry in some operation between rounding down and rounding up, the symmetry was obscured by asymmetric notation.
My impression is that ⌊x⌋ became more common than [x] somewhere around 1990, maybe earlier in computer science and later in mathematics.
Iverson and APL
Iverson’s introduction of the floor and ceiling functions was brilliant. The notation is mnemonic, and it filled what in retrospect was a gaping hole. In hindsight, it’s obvious that if you have a
notation for what we now call floor, you should also have a notation for what we now call ceiling.
Iverson also introduced the indicator function notation, putting a Boolean expression in brackets to denote the function that is 1 when the expression is true and 0 when the expression is false. Like
his floor and ceiling notation, the indicator function notation is brilliant. I give an example of this notation in action here.
I had a small consulting project once where my main contribution was to introduce indicator function notation. That simple change in notation made it clear how to untangle a complicated calculation.
Since two of Iverson’s notations were so simple and useful, might there be more? He introduced a lot of new notations in his programming language APL, and so it makes sense to mine APL for more
notations that might be useful. But at least in my experience, that hasn’t paid off.
I’ve tried to read Iverson’s lecture Notation as a Tool of Thought several times, and every time I’ve given up in frustration. Judging by which notations have been widely adopted, the consensus seems
to be that the floor, ceiling, and indicator function notations were the only ones worth stealing from APL.
15 thoughts on “Floor, ceiling, bracket”
1. There is a slight ambiguity in “the smallest integer”: At first, I read that as meaning smallest-magnitude (truncate towards zero), but the usual convention for the floor function is
least-positive (truncate towards minus infinity).
One might generalize the square-bracket notation by affixing negative infinity, zero, or positive infinity to the brackets, indicating truncation to the nearest integer in the direction of that
value. Similar notations for rounding, rather than truncation, could follow. I don’t know that anyone does that, though.
2. I attended a talk by Iverson once, and he explained that one of the advantages of APL was that it was readable. I guess it’s a subjective thing…. :)
3. And now you can write the most beautiful equation in mathematics, ⌈e⌉ = ⌊π⌋.
(I don’t remember where I heard this one.)
4. There is some APL legacy living on in function naming, if not in the notation/symbols. I’m talking about `iota` for range generation (IIRC Go and C++ have that) and `reduce` (many functional
programming languages have that).
5. Feynman once wrote about having invented new symbols for trigonometric functions. He wasn’t happy with seeing the letters “s i n” smashed together, as if they were variables to be multiplied.
IIRC, his symbols had a long tail over the top (like square roots), and at the left side it turned into the shape of the letter “S”/”C”/”T”. I don’t remember what the inverses or hyperbolics
I can’t point to any specific case where notation made a step backwards, but it certainly has made many steps to the side. Notation often seems to change for no apparent reason other than
fashion. Newton’s notation isn’t any worse than Lagrange’s, IMHO, but I haven’t seen it used in any books in the past 50 years. Fortunately, calculus is still not lacking for redundant notations.
And of course physicists make up their own notation just for fun all the time. Which probably explains Feynman!
6. Ah, APL, my first programming language (maybe that’s my problem :-), back in 1971 (yikes! 50 years ago).
I was obsessed with concatenating functions together, so that the only required leftmost operator on any line was a go-to (right arrow). I seem to recall that the concatenation operator is: x0i
(multiply zero iota). This allowed a lot of functionality with minimal lines of code.
I was proud to write an unbeatable tic-tac-toe program in 28 lines of code (including I/O). Back then I could read it like a cheap novel. I just now looked again, and it’s totally alphabet soup
(with a lot of funny looking noodles :-).
7. Just to be clear, Iverson didn’t invent the concept of the indicator function, or the idea of using a concise notation for it. Wikipedia has an example from the 1930s, and I’ve seen plenty of
mathematics books use them which, while admittedly later than APL, I highly doubt trace their roots back to that.
I can believe that he invented the invented that particular notation. Unlike the floor and ceiling notations (which I agree are a big improvement on what came before), I’m afraid that indicator
function notation is a lot worse than the standard mathematical notation in my opinion. Having the Boolean condition at full height gives the wrong visual weight to something that isn’t directly
used as a value in the expression. The more common notation is χ (Greek letter chi) or boldface numeral 1, with the condition in a subscript. This works a lot better, and fingers crossed that it
remains more common than Iverson’s.
8. Wikipedia seems to think that the bracket parts of the floor symbol should be facing inward, not outward: https://en.wikipedia.org/wiki/Floor_and_ceiling_functions
Is there a debate on this subject?
9. Hey, David W. I remember telling John Cook once that my dad taught me APL as my first programming language in the early 70s, and his immediate reaction was, “That’s child abuse!” I later told my
dad, and he laughed so hard. But I do remember how elegantly mathematical APL was and thinking as I learned other languages later how clumsy it was to write out functions as WORDS instead of
concise symbols and how you had to laboriously write out loops to calculate element by element across data. What I could do in one very concise line of APL code took a big, chunky block of code
in other languages.
I don’t remember x0ι. It’s been so long now. I think I remember xι (times iota) after a goto expression in APL, which basically meant “if”. I also remember trying to avoid goto and loops, since
that’s what the language was designed to do internally.
My big project was designing a Mastermind game, but I don’t have any of my old code anymore!
10. I like the idea of a bigger symbolic vocabulary in principle. And I don’t think it’s too much of a strike against a language if it looks unfamiliar to people who don’t use it; easy of getting
started is nice, but it’s not everything. You usually spend a whole lot more time using a language than learning it.
In practice, software is still almost exclusively ASCII text. Experiments to store code in binary formats or to introduce non-ASCII symbols haven’t worked out so well, or at least have not been
widely adopted.
11. > My impression is that ⌊x⌋ became more common than [x] somewhere around 1990, maybe earlier in computer science and later in mathematics.
I wonder how significant the 1989 publication of the book *Concrete Mathematics* by Graham, Knuth, and Patashnik was. This book enthusiastically promoted this ⌊x⌋ and ⌈x⌉ notation, with an entire
chapter mostly about floors and ceilings. This book started as lecture notes for a course based on the first section (“Mathematical Preliminaries”) of Knuth’s The Art of Computer Programming, and
even its first (1968) edition used this ⌊x⌋ and ⌈x⌉ notation crediting Iverson (1962).
As for the indicator function notation, those are also used heavily in the book, and Knuth later wrote a paper called “Two Notes on Notation” (https://arxiv.org/abs/math/9205211), the first half
of which extols the indicator function notation. It appears that using square brackets rather than parentheses was an innovation of (the second printing of) this book. Incidentally (related to
the comment above by Jim Q), the paper says that both Knuth’s and Adriano Garsia’s usage of the indicator function trace back to Iverson (and says other mathematicians followed Garsia), and the
example given on the Wikipedia article “Iverson bracket” from the 1830s (not 1930s) seems to be more historical research from Knuth’s paper (showing there was “craving” for this notation) than
anything directly influential. So it seems not inconceivable that mathematical usage of indicator function notation indeed traces back in a large way to Iverson.
Also, the manipulations in the CM book serve (IMO) as a convincing argument for putting the boolean condition at normal height and not in subscripts or superscripts (for one thing, we may want to
use them in superscripts, see e.g. the equation following (1.15) in the paper).
The CM book also introduced/popularized a lot of elegant notation, only some of which has caught on: falling factorial / rising factorial powers (instead of the “Pochhammer symbol”), the notation
and terminology for “Stirling cycle numbers” and “Stirling subset numbers”, two dots for intervals — writing (a..b) instead of the overused (a, b) — and “m⊥n” for gcd(m,n)=1.
There’s a (poor-quality, unfortunately) video of a Knuth talk from 2003 on Notation, where he discusses all this: https://www.youtube.com/watch?v=KjbuyB4dQa0
12. Notation Notes:
1. The “rising power” and “falling power’, which I think are due to Knuth, are useful (particularly if mucking about with hypergeometric functions) and should be more widely known and used.
2. In discussing how many ways you can choose k things from a population of n, two issues arise: Does order matter, and, are repetitions allowed, giving four possibilities.
For typesetting simplicity, I will write the standard combinatoric functions “combinations” and “permutations” as C(n,k) and P(n,k). When I teach this stuff, I also use S(n,k) (Strings) for
“order matters, reps allowed”, and R(n,k) for order does not matter, reps allowed. Then you can put all four in a 2×2 chart and derive the mathematical formulas for the four functions.
13. @Ralph: I agree that the rising and falling powers notation is handy. It’s symmetric and mnemonic, like floor and ceiling, while some of its alternatives are not.
Some books use (a) for falling powers. Is there any symbol more overloaded than parentheses? And what would the opposite of parentheses be if you wanted a notation for rising powers. (Or maybe
the books use (a) for rising powers. I can’t remember. Which is kinda the point.)
I like the C(n,k) notation for convenience, but I hesitate to use it because I’m not sure how many readers would understand it. It’s tedious to have to create a displayed equation rather than
inline text just to display a binomial coefficient.
14. Notation issue that bugs me.
I will write the LE operator as <= for simplicity.
(start of rant)
The <= operation is central in math. If we go up to vectors a,b, everyone knows what a <= b means. So obviously with matrices A,B, A <= B means "element wise" = 0 to mean that A is nonnegative
definite, when they could just say “A is p.d.”, or use the LaTeX prec operator. In some applications, both meanings are in play at the same time.
As far as I am concerned, this is just wrong.
Now consider bumping math operations up to sets and particularly intervals and boxes. For intervals, [a,b] + [c,d] is well understood as “element-wise +”, as started by Minkowski, and the same
should be the case for all operators.
What about [a,b] <= [c,d] ? The Minkowski interpretation would be that this is true iff b <= c. However, many Interval Analysis works define it to be "a <= c AND b <= d". While this is a useful
concept (as are many other types of <=), it should not be expressed as <=, which has an obvious meaning.
(end of rant)
15. above comment garbled my “A greater than or equal to 0”. Kindly fix that if you can. | {"url":"https://www.johndcook.com/blog/2021/04/15/floor-ceiling-bracket/","timestamp":"2024-11-13T21:02:24Z","content_type":"text/html","content_length":"82369","record_id":"<urn:uuid:80951baa-78bd-486a-80bf-de890cf9d4ec>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00699.warc.gz"} |
Formation of Linear Equations Videos - CBSE Class 9 Maths Linear Equations in Two Variables - TopperLearning - 100
CBSE Class 9: Formation of Linear Equations Videos | Forming linear equations and two variables and solving
Forming linear equations and two variables and solving
Forming of linear equations in two variables from real life situations, also solve problems based on them. | {"url":"https://www.topperlearning.com/cbse-class-9-videos/maths/linear-equations-in-two-variables/formation-of-linear-equations/100","timestamp":"2024-11-08T14:02:45Z","content_type":"text/html","content_length":"524584","record_id":"<urn:uuid:7304a7de-a963-47f9-83d6-7dac15c9ae26>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00539.warc.gz"} |
Control Systems/Open source tools/Julia - Wikibooks, open books for an open world
It is necessary to install Julia and afterwards the ControlSystems.jl package. It is recommended to follow the official Julia Documentation. Julia can be executed in a terminal but it is quite
practical to use an IDE like Juno/Atom or Visual Studio Code.
The ControlSystems.jl package has to be loaded with
using ControlSystems
before the function can be evaluated.
Throughout this course it is assumed that the source code is typed in the Julia REPL to print the results instantaneously. Otherwise, results can be printed with
Consider the transfer function
${\displaystyle G(s)={\frac {s+2}{3s^{2}+4s+5}}}$
The transfer function is created similar to other numerical toolboxes with numerator and denominator as
num = [1, 2] # Numerator
den = [3, 4, 5] # Denominator
G = tf(num, den) # Transfer function
The REPL responses an overview of the created transfer function object
s + 2
3*s^2 + 4*s + 5
Continuous-time transfer function model
The poles of transfer function ${\displaystyle G(s)}$ are computed with
and the REPL responses
2-element Array{Complex{Float64},1}:
-0.6666666666666665 + 1.1055415967851332im
-0.6666666666666665 - 1.1055415967851332im
The zeros of transfer function ${\displaystyle G(s)}$ are computed with
and resulting in
1-element Array{Float64,1}:
The function
will response the zeros, poles and the gain.
The Pole-Zero Plot is created with
Impulse and Step Response
It is handy to define the simulation time and a label for both plots with
Tf = 20 # Final simulation time in seconds
impulse_lbl = "y(t) = g(t)" # Label for impulse response g(t)
step_lbl = "y(t) = h(t)" # Label for step response h(t)
The impulse response is created with
impulseplot(G, Tf, label=impulse_lbl) # Impulse response
and the step response is built with
stepplot(G, Tf, label=impulse_lbl) # Step response
Bode and Nyquist Plot
The Bode plot is printed with
bodeplot(G) # Bode plot
and the Nyquist plot (without gain circles) is printed with
nyquistplot(G, gaincircles=false) # Nyquist plot
The gain circles can be toggled with the boolean flag.
If only the numerical results of the Bode/Nyquist plot are of interest and not their visualization, then one can use
State-Space Representation | {"url":"https://en.m.wikibooks.org/wiki/Control_Systems/Open_source_tools/Julia","timestamp":"2024-11-08T02:47:02Z","content_type":"text/html","content_length":"37520","record_id":"<urn:uuid:eef1f7e7-05a2-4b6a-9491-435129d010a7>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00291.warc.gz"} |
Gap Opening by Extremely Low-mass Planets in a Viscous Disk
By numerically integrating the compressible Navier-Stokes equations in two dimensions, we calculate the criterion for gap formation by a very low mass (q ~ 10^-4) protoplanet on a fixed orbit in a
thin viscous disk. In contrast with some previously proposed gap-opening criteria, we find that a planet can open a gap even if the Hill radius is smaller than the disk scale height. Moreover, in the
low-viscosity limit, we find no minimum mass necessary to open a gap for a planet held on a fixed orbit. In particular, a Neptune-mass planet will open a gap in a minimum mass solar nebula with
suitably low viscosity (α <~ 10^-4). We find that the mass threshold scales as the square root of viscosity in the low mass regime. This is because the gap width for critical planet masses in this
regime is a fixed multiple of the scale height, not of the Hill radius of the planet.
The Astrophysical Journal
Pub Date:
May 2013
□ hydrodynamics;
□ methods: numerical;
□ planet-disk interactions;
□ planets and satellites: formation;
□ protoplanetary disks;
□ Astrophysics - Earth and Planetary Astrophysics;
□ Physics - Computational Physics;
□ Physics - Fluid Dynamics
ApJ accepted | {"url":"https://ui.adsabs.harvard.edu/abs/2013ApJ...769...41D","timestamp":"2024-11-11T18:02:56Z","content_type":"text/html","content_length":"41355","record_id":"<urn:uuid:bfe7ce95-01db-44ec-9021-bda653f87d1d>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00576.warc.gz"} |
day08 of java data structure and algorithm
1. As in the ArrayList class, LinkedList also has several ideas to pay attention to
It is a double linked list. Note that it is a double linked list. The double linked list is to keep the cost of time spent every operation
Node class, as a very important existence in LinkedList class, can not realize double chain without it. This class is used to create nodes and record the location of nodes. A node contains data, the
chain of the previous node and the chain of the next node.
The LinkedListIterator class abstracts the concept of location, provides a private class, and implements the Iterator class.
Next, we will only show the key code in the source code
2. Create node
Node prev is defined as the previous node, commonly known as precursor
E element, which defines the data of the node
Node < E > next is defined as the next node, commonly known as the back drive
private static class Node<E> {
E item;
Node<E> next;
Node<E> prev;
Node(Node<E> prev, E element, Node<E> next) {
this.item = element;
this.next = next;
this.prev = prev;
3. Query the first node and the last node
The first node is characterized by no precursor
In the LinkedList class, node < E > first is defined as a constant;
public E getFirst() {
final Node<E> f = first;
if (f == null)
throw new NoSuchElementException();
return f.item;
The last node is characterized by no rear drive
In the LinkedList class, node < E > last is defined as a constant;
public E getLast() {
final Node<E> l = last;
if (l == null)
throw new NoSuchElementException();
return l.item;
4. Specify the index query node
In the source code, there is a particularly interesting idea, size > > 1, which allows us to reduce the retrieval time. For example, if the index number is relatively late, we will not search the
data from the front, but from the back; Conversely, if the index number is higher, the search will start from the front.
Also note that when looking at the source code, you should pay attention to its meaning!
When the index number is near the front, in node < E > x = first, X represents the first node, and x.next represents the successor of X.
When the index number is near the back, in node < E > x = last, X represents the last node, and x.prev represents the precursor of X.
Node<E> node(int index) {
// assert isElementIndex(index);
if (index < (size >> 1)) {
Node<E> x = first;
for (int i = 0; i < index; i++)
x = x.next;
return x;
} else {
Node<E> x = last;
for (int i = size - 1; i > index; i--)
x = x.prev;
return x;
5. Insert the first node and the last node
f = first, the original first node.
Newnode = new node < > (null, e, f). For the new first node, we will find the new first node. It is stipulated that the next node is the original first node F, and there is no precursor.
first = newNode, which updates the information of the first constant node.
f.prev = newNode. After the original node is updated, its precursor must be formulated.
private void linkFirst(E e) {
final Node<E> f = first;
final Node<E> newNode = new Node<>(null, e, f);
first = newNode;
if (f == null)
last = newNode;
f.prev = newNode;
l = last, the original first node.
Newnode = new node < > (L, e, null). For the new first node, we will find the new first node. It is stipulated that the next node is the original first node f, and there is no precursor.
last = newNode,
l.next = newNode. After the original node is updated, its precursor must be formulated.
private void linkLast(E e) {
final Node<E> l = last;
final Node<E> newNode = new Node<>(l, e, null);
last = newNode;
if (l == null)
first = newNode;
l.next = newNode;
6. Clean up all nodes
public void clear() {
for (Node<E> x = first; x != null; ) {
Node<E> next = x.next;
x.item = null;
x.next = null;
x.prev = null;
x = next;
first = last = null;
size = 0;
7. Delete the specified element
public boolean remove(Object o) {
if (o == null) {
for (Node<E> x = first; x != null; x = x.next) {
if (x.item == null) {
return true;
} else {
for (Node<E> x = first; x != null; x = x.next) {
if (o.equals(x.item)) {
return true;
return false;
E unlink(Node<E> x) {
// assert x != null;
final E element = x.item;
final Node<E> next = x.next;
final Node<E> prev = x.prev;
if (prev == null) {
first = next;
} else {
prev.next = next;
x.prev = null;
if (next == null) {
last = prev;
} else {
next.prev = prev;
x.next = null;
x.item = null;
return element; | {"url":"https://www.fatalerrors.org/a/day08-of-java-data-structure-and-algorithm.html","timestamp":"2024-11-07T09:18:52Z","content_type":"text/html","content_length":"16266","record_id":"<urn:uuid:6381bb87-4a8c-4cb6-a6db-4cd245be5da6>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00732.warc.gz"} |
Equipotential Surfaces In Electromagnetism
UY1: Equipotential Surfaces
An equipotential surface is a three-dimensional surface in which the electric potential V is the same at every point.
Using the example of a single positive charge q, the expression for V is:
$$V = \frac{1}{4 \pi \epsilon_{0}} \frac{q}{r}$$
If a test charge $q_{0}$ is moved from point to point on an equipotential surface, the electric potential energy $q_{0}V$ will remain constant. In equation form, this means that the work done is 0:
$$\begin{aligned} W &= \, – \Delta U \\ &= \, – q_{0} \, \Delta V \\ &= 0 \end{aligned}$$
It follows that $\vec{E}$ must be perpendicular to the equipotential surface at every point. How do you reach this conclusion? Recall that:
$$\begin{aligned} dV &= \frac{\partial V}{\partial x} \, dx + \frac{\partial V}{\partial y} \, dy + \frac{\partial V}{\partial z} \, dz \\ &= \vec{\nabla} V . d\vec{l} \\ &= \, – \vec{E}.d\vec{l} \
Since dV = 0, $\vec{E}.d\vec{l}$ is 0 $\rightarrow$ perpendicular.
At each point, the direction of $\vec{E}$ is the direction in which V decreases most rapidly.
$$\begin{aligned} dV &= \vec{\nabla}V . d\vec{l} \\ &= \, – \vec{E} . d\vec{l} \\ &= \, – E \, dl \, \text{cos} \, \phi \end{aligned}$$
where $\phi$ is the angle between electric field and displacement vector (direction a point charge move).
In a region where an electric field is present, we can construct an equipotential surface through any point.
Note: Equipontial surfaces for different potentials can never touch or interact.
When all the charges are at rest (equilibrium), the electric field just outside a conductor must be perpendicular to the surface at every point. If the electric field contains a non-zero parallel
component, there will be a force on the charges at the surface which will cause the charges to move and distribute themselves.
When all charges are at rest, the surface of a conductor is always an equipotential surface.
Next: Gauss’s Law (Simple Version)
Previous: Electric Potential Of An Infinite Line Charge
Leave a Comment
This site uses Akismet to reduce spam. Learn how your comment data is processed.
Back To University Year 1 Physics Notes | {"url":"https://www.miniphysics.com/uy1-equipotential-surfaces.html","timestamp":"2024-11-13T10:58:30Z","content_type":"text/html","content_length":"76317","record_id":"<urn:uuid:4ab186af-7165-4a6e-b4a4-e4af27f88248>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00217.warc.gz"} |
Week 3 Bonus - Nested List Weight Sum
The Problem
You are given a nested list of integers nestedList. Each element is either an integer or a list whose elements may also be integers or other lists.
The depth of an integer is the number of lists that it is inside of. For example, the nested list [1,[2,2],[[3],2],1] has each integer's value set to its depth.
Return the sum of each integer in nestedList multiplied by its depth.
Example 1:
Input: nestedList = [[1,1],2,[1,1]]
Output: 10
Explanation: Four 1's at depth 2, one 2 at depth 1. 1*2 + 1*2 + 2*1 + 1*2 + 1*2 = 10.
Example 2:
Input: nestedList = [1,[4,[6]]]
Output: 27
Explanation: One 1 at depth 1, one 4 at depth 2, and one 6 at depth 3. 1*1 + 4*2 + 6*3 = 27.
Example 3:
Input: nestedList = [0]
Output: 0
• 1 <= nestedList.length <= 50
• The values of the integers in the nested list is in the range [-100, 100].
• The maximum depth of any integer is less than or equal to 50.
import pytest
from typing import List
from .Week3Bonus_NestedListWeightSum import Solution
from .util import NestedInteger
s = Solution()
([[1, 1], 2, [1, 1]], 10),
([1, [4, [6]]], 27),
([0], 0),
def test_depth_sum(num_list, expected):
x = build_nested_list(num_list)
for n in x:
assert s.depthSum(x) == expected
def build_nested_list(num_list) -> List[NestedInteger]:
nestedList = []
for n in num_list:
if isinstance(n, int):
return nestedList
def list_to_nested_int(nums):
ni = NestedInteger()
for n in nums:
if isinstance(n, int):
return ni
from typing import List
from .util import NestedInteger
class Solution:
def depthSumWithDepth(self, nestedList: List[NestedInteger], depth: int):
sum = 0
# Iterate over the list. If we hit an int, add it to the sum multiplying by depth
for n in nestedList:
if n.isInteger():
sum += n.getInteger() * depth
# Recursively sump up all the sub lists incrementing the depth as needed
sum += self.depthSumWithDepth(n.getList(), depth + 1)
return sum
def depthSum(self, nestedList: List[NestedInteger]) -> int:
# Start at depth 1
return self.depthSumWithDepth(nestedList, 1)
This solution didn't perform very well relatively. It does run in O(n) with O(n) space. I am not sure we can get much better than that in terms of big O but there's probably some optimisation I could
have done. Recursion is generally not a good idea in python I think since it will add each function call to the call stack. I am happy enough with the solution here for now since it's easy to
understand but an easy one to improve on.
Top comments (0)
For further actions, you may consider blocking this person and/or reporting abuse | {"url":"https://dev.to/ruarfff/week-3-bonus-nested-list-weight-sum-12mn","timestamp":"2024-11-10T21:45:55Z","content_type":"text/html","content_length":"105857","record_id":"<urn:uuid:310cd344-b5e1-48c4-b992-4a32171fc578>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00167.warc.gz"} |
prove that a^2 + b^2 +c^2 = (a^2)(b^2) is not true for any a,b,c wher
Algebra> proof...
2 Answers
SAGAR SINGH - IIT DELHI
Last Activity: 13 Years ago
Dear student,
a^2 + b^2 +c^2 = (a^2)(b^2)
This is not going to be zero provided a,b,c are all positive integers...
Please feel free to ask your queries here. We are all IITians and here to help you in your IIT JEE preparation.
All the best.
Win exciting gifts by answering the questions on Discussion Forum. So help discuss any query on askiitians forum and become an Elite Expert League askiitian.
Now you score 5+15 POINTS by uploading your Pic and Downloading the Askiitians Toolbar respectively : Click here to download the toolbar..
Askiitians Expert
Sagar Singh
B.Tech, IIT Delhi
sanjana rajendran
Last Activity: 13 Years ago
but how can we say that a^2(1-b^2)+b^2+c^2=0
Provide a better Answer & Earn Cool Goodies
Enter text here...
Ask a Doubt
Get your questions answered by the expert for free
Enter text here... | {"url":"https://www.askiitians.com/forums/Algebra/22/24645/proof.htm","timestamp":"2024-11-13T06:22:41Z","content_type":"text/html","content_length":"189726","record_id":"<urn:uuid:8e85f2a5-1b4a-4a7d-8308-ab734c9a0c27>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00589.warc.gz"} |
Maths Education - Online Education by ThePoemStory
Maths Education
Understanding of arithmetic series is essential for solving problems involving sequences of numbers. Whether you are calculating the future value […]
A Comprehensive Understanding of Arithmetic Series | Understanding Arithmetic Series Read Post »
Understanding Number Series | Exploring Different Types and Question Patterns
Maths Education
Understanding Number Series like Arithmetic Series, Geometric Series or Fibonacci Series is crucial to decode mathematics patterns. Learn about different
Understanding Number Series | Exploring Different Types and Question Patterns Read Post » | {"url":"https://education.thepoemstory.com/academic-subjects/maths-education/","timestamp":"2024-11-06T04:36:52Z","content_type":"text/html","content_length":"253352","record_id":"<urn:uuid:ff171248-8689-47f9-a9b9-d5d5c61546b5>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00304.warc.gz"} |
Understanding Decimals: Working through Decimal Basics
Decimals are an essential part of our everyday lives, whether we’re balancing our budget, measuring ingredients for a recipe, or calculating distances on a map. In this blog, we’ll explore the basics
of decimals, including what they are, how they work, and why they’re important.
What are Decimals?
Decimals are a way of expressing parts of a whole. They are numbers that have a value between two whole numbers. For example, between 0 and 1, 0.5 would be a decimal between these two numbers. It is
more than 0 but less than 1. You can also represent decimals as mixed fractions. 0.5 would be ½. This can be taken one step further and the mixed fraction can be represented as a percentage. 0.5 or ½
can be changed into a percentage of 50%.
Decimals are a mix of whole numbers and fractions, all bundled up with a dot called a decimal point. The numbers to the left of the decimal point are whole numbers – like 1, 2, 3, and so on,
including units, tens, hundreds, and even thousands. But wait, there’s more! On the right side of that dot, you’ve got the fractions– tenths, hundredths, and thousandths. It’s like having a
mini-fraction party right there in your number!
Now, why does all this matter? Understanding where those numbers belong is key to solving all sorts of math puzzles. Whether you’re calculating how much pizza each friend gets or figuring out how
fast your car is going, mastering the place value of decimals is your secret weapon.
Types of Decimals
There are different types of decimals:
Terminating decimals
Non-terminating decimals
Recurring decimals
Non-recurring decimals
To understand decimals better, let’s break down some key concepts:
Place Value: Each digit in a decimal number has a place value determined by closer the number is to the decimal point. Moving from left to right, the place values are powers of 10: units, tenths,
hundredths, thousandths, and so on. Here’s a visual example to make it easier to understand!
Hundreds – Tens – Ones – POINT – Tenths – Hundredths – Thousandths
Let’s take a look at an example: 32.14
3: 3 Tens or 30
2: 2 Ones or 2
1: 1 Tenth or 0.1
4: 1 Hundredth or 0.01
Operations with Decimals: Just like with whole numbers, you can do addition, subtraction, multiplication, and division with decimals. Remember to align the decimal points when adding or subtracting,
and to multiply and divide as if the decimal point isn’t there. Take a look at our other blogs to learn how to multiply and divide decimals!
Decimal fractions: These can compared to a bridge between whole numbers and fractions, offering a precise way to represent parts of a whole using the familiar decimal notation. Picture this: you’ve
got your whole numbers on the left side of the decimal point, representing complete units. But on the right side, that’s where the magic happens – each digit after the decimal point represents a
fraction of that whole, whether it’s tenths, hundredths, or even tinier fractions like thousandths. Decimal fractions are everywhere, from dividing up a pizza into equal slices to measuring the exact
amount of ingredients for your favorite recipe.
Why are Decimals Important?
Decimals are so important in different real-life situations:
Spending Money: From calculating taxes to managing budgets, decimals help us deal with money accurately.
Measurements: Whether it’s measuring length, weight, or volume, decimals provide precise measurements.
Science and Engineering: Fields like physics, chemistry, and engineering rely heavily on decimal notation for calculations and measurements.
Getting a good grip on decimals is like unlocking a superpower for tackling everyday challenges. Once you’ve got the hang of decimal notation, place value, and how to work with them, you’ll find
yourself breezing through all sorts of numerical tasks, both in your day-to-day life and in more complex situations.
So, next time you encounter a decimal, remember: it’s just a way of expressing a part of a whole, and with a little practice, you’ll master the art of decimals in no time!
Enhance your math skills with professional math tutors
Boost your math knowledge by making use of the learning resources of Step Up Academy Tutoring Center. Our math tutors will give you individual lessons in a single specialty and you will understand in
a more personalized manner the math concepts and your grades will therefore increase. Besides that, we can assist in the subject of examination preparation, languages, and science as well. It’s up to
you whether it be one-on-one or online tutoring, we have the perfect schedule options available to meet any of your needs. We take great care to give you constructive one-to-one help with homework or
for example, in regard to the upcoming exams. Let us together make your academic ambitions come true!
Our Summer Programs
Summer Programs
Our Subjects
Widget Subjects
Our Programs
Our Programs
Click for Tutoring Directions. | {"url":"https://stepupacademy.ca/understanding-decimals/","timestamp":"2024-11-05T19:29:04Z","content_type":"text/html","content_length":"87915","record_id":"<urn:uuid:52cd5fa7-9a3e-4eac-b26d-e9ffa86f9e8c>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00236.warc.gz"} |
Bioequivalence - PKANALIX
After having run the “NCA” task to calculate the NCA PK parameters, the “Bioequivalence” task can be launched to compare the calculated PK parameters between two groups, usually one receiving the
“test” formulation and one the “reference” formulation.
Bioequivalence task
The task “Bioequivalence” is available if at least one categorical covariate column has been tagged in the “Data” tab. The “Bioequivalence” task can only be run after having run the “NCA” task. It
can be run by clicking on the “Bioequivalence” task button (pink highlight below) or by selecting it as part of the scenario (tickbox in the upper right corner of the “Bioequivalence” button) and
clicking the “Run” button. The parameters to be included in the bioequivalence analysis are selected in the box “Parameters to compute / BE” (orange highlight), where the user can also indicate if
the parameter should be log-transformed or not. The fixed effect included in the linear model can be chosen in the box “Bioequivalence design” (blue highlight).
Bioequivalence results
The results of the bioequivalence analysis are presented in the “Results” tab, in the “BE” subtab. Three tables are proposed.
Confidence intervals
The table of confidence intervals is the key bioequivalence table that allows to conclude if the two formulations are equivalent or not. For each parameter selected for bioequivalence analysis, the
following information is given:
• the adjusted means (i.e least square means) for each formulation,
• the number of individuals for each formulation,
• the formulation difference (see calculation rules) and the corresponding confidence interval,
• the formulation ratio (see calculation rules) and the corresponding confidence interval.
If N formulations are present in the dataset, N-1 tables are shown.
Coefficient of variation
This table gives for each parameter selected for bioequivalence analysis the standard deviation of the residuals and the intra-subject coefficient of variation (CV).
This table presents the analysis of variance (ANOVA) for the factors included in the linear model, for each parameter selected for the bioequivalence analysis. For each factor, the degrees of freedom
(DF), sum of squares (SUMSQ), mean squares (MEANSQ), F-value (FVALUE) and the p-value (PR(>F)) are given. A small p-value indicates a significant effect of the corresponding factor.
For the residuals, only the degrees of freedom (DF), sum of squares (SUMSQ), and mean squares (MEANSQ) are given.
Bioequivalence plots
In the “Plots” tab, several plots are displayed:
Bioequivalence outputs
After running the Bioequivalence task, the following files are available in the result folder <result folder>/PKanalix/IndividualParameters/be:
• anova_XXX.txt: these is one such file for each NCA parameter included in the bioequivalence analysis. It contains the ANOVA table with columns ‘Factor’ (factors included in the linear model),
‘Df’ (degrees of freedom), ‘SumSq’ (sum of squares), ‘MeanSq’ (mean squares), ‘FValue’ (F-value), ‘Pr(>F)’ (p-value). For the residuals (last line), the two last columns are empty.
• confidenceInterval_XXX.txt: these is one such file per non-reference formulation. It contains the confidence interval table with columns ‘Parameter’ (parameter name), ‘AdjustedMeanTest’ (adjusted
mean for the test formulation), ‘NTest’ (number of individuals for the test formulation), ‘AdjustedMeanRef’ (adjusted mean for the reference formulation), ‘NRef’ (number of individuals for the
ref formulation), ‘Difference’ (formulation difference – see calculation rules), ‘CIRawLower’ (lower confidence interval bound for the difference), ‘CIRawUpper’ (upper confidence interval bound
for the difference), ‘Ratio’ (formulation ratio – see calculation rules), ‘CILower’ (lower confidence interval bound for the ratio), ‘CIUpper’ (upper confidence interval bound for the ratio),
‘Bioequivalence’ (1 if the CI for the ratio falls within the BE limits, 0 otherwise)
• estimatedCoefficients_XXX.txt: these is one such file for each NCA parameter included in the bioequivalence analysis. It contains the estimated coefficient for each category of each factor
included in the linear model, as well as the intercept. This information is only available as an output table and is not displayed in the GUI. The table columns are ‘name’ (factor followed by the
category), ‘estimate’ (estimated coefficient), ‘se’ (standard error), ‘tValue’ (estimate divided by the SE), and ‘Pr(>|t|)’ (p-value).
• variationCoef.txt: This file contains the standard deviation and coefficient of variation from each NCA parameter. Columns are ‘Parameter’, ‘SD’ and ‘CV(%)’ | {"url":"https://pkanalix.lixoft.com/bioequivalence/","timestamp":"2024-11-06T16:46:03Z","content_type":"text/html","content_length":"79650","record_id":"<urn:uuid:02f135f2-a982-4a63-bf48-f8985d08ca97>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00257.warc.gz"} |
Ameer Hamza - MATLAB Central
Ameer Hamza
Last seen: 4 years ago |  Active since 2018
Followers: 0 Following: 0
of 295,068
8 Questions
2 Answers
of 20,171
0 Files
of 153,155
0 Problems
0 Solutions
Answered Question about plotting matrix
We need to get all the values of matrix A in every iteration and plot them not the final values. close all clear ...
6 years ago | 0
Question about plotting matrix
Hello I have problem in plotting (2*2) loop matrix. My matrice code is as follow : theta=0:pi/6:2*pi; A=[cos(theta), 0...
6 years ago | 1 answer | 0
How to plot circle by one single equation?
I need code which plot the circle in one single equation (variable). I have the code but i need code of single equation, the cod...
6 years ago | 4 answers | 0
Matlab Questions Simulation Problem
Please i have set of question in matlab: 1)How to make point in plane moving in desired trajectory? 2) How to make set of ...
6 years ago | 0 answers | 0
Tracking a moving target trajectory
Hello, I have problem in matlab. I am trying to simulate code of tracking a moving target trajectory (3D). The target will mov...
6 years ago | 2 answers | 0
Answered Moving a fixed point ?
Thank you for your remarks. A moving path is a set of points moving randomly in space to form trajectory ( rectangular path ). F...
6 years ago | 0
Moving a fixed point ?
How to make the fixed point in moving path move before the other points?
6 years ago | 1 answer | 0 | {"url":"https://uk.mathworks.com/matlabcentral/profile/authors/6352467","timestamp":"2024-11-06T08:03:24Z","content_type":"text/html","content_length":"78789","record_id":"<urn:uuid:6e62fde5-61d3-4394-8ac8-48c08d3a07a0>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00089.warc.gz"} |
A triangle has sides with lengths of 3, 8, and 2. What is the radius of the triangles inscribed circle? | HIX Tutor
A triangle has sides with lengths of 3, 8, and 2. What is the radius of the triangles inscribed circle?
Answer 1
Sign up to view the whole answer
By signing up, you agree to our Terms of Service and Privacy Policy
Answer 2
The radius ( r ) of the inscribed circle in a triangle can be found using the formula:
[ r = \frac{{\text{Area of the triangle}}}{{\text{Semiperimeter of the triangle}}} ]
where the semiperimeter ( s ) of the triangle is calculated as:
[ s = \frac{{a + b + c}}{2} ]
and ( a ), ( b ), and ( c ) are the lengths of the sides of the triangle.
Given the lengths of the sides of the triangle as ( a = 3 ), ( b = 8 ), and ( c = 2 ), we first need to calculate the semiperimeter ( s ). Then, we use the formula for the area of a triangle, which
can be calculated using Heron's formula:
[ \text{Area} = \sqrt{s(s - a)(s - b)(s - c)} ]
Finally, we substitute the values of the area and semiperimeter into the formula for the radius of the inscribed circle.
After computing these values, we find the radius of the inscribed circle of the triangle.
Sign up to view the whole answer
By signing up, you agree to our Terms of Service and Privacy Policy
Answer from HIX Tutor
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
Not the question you need?
HIX Tutor
Solve ANY homework problem with a smart AI
• 98% accuracy study help
• Covers math, physics, chemistry, biology, and more
• Step-by-step, in-depth guides
• Readily available 24/7 | {"url":"https://tutor.hix.ai/question/a-triangle-has-sides-with-lengths-of-3-8-and-2-what-is-the-radius-of-the-triangl-8f9afa35c5","timestamp":"2024-11-03T07:00:23Z","content_type":"text/html","content_length":"571490","record_id":"<urn:uuid:ee9e7e07-8fc6-43ef-9739-e2141c80e1e5>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00717.warc.gz"} |
Note: This document is for an older version of GRASS GIS that will be discontinued soon. You should upgrade, and read the current manual page.
- Apply temporal and spatial operations on space time raster datasets using temporal raster algebra.
t.rast.algebra --help
t.rast.algebra [-sngd] expression=string basename=string [suffix=string] [nprocs=integer] [--help] [--verbose] [--quiet] [--ui]
Check the spatial topology of temporally related maps and process only spatially related maps
Register Null maps
Use granularity sampling instead of the temporal topology approach
Perform a dry run, compute all dependencies and module calls but don't run them
Print usage summary
Verbose module output
Quiet module output
Force launching GUI dialog
expression=string [required]
r.mapcalc expression for temporal and spatial analysis of space time raster datasets
basename=string [required]
Basename of the new generated output maps
A numerical suffix separated by an underscore will be attached to create a unique identifier
Suffix to add at basename: set 'gran' for granularity, 'time' for the full time format, 'num' for numerical suffix with a specific number of digits (default %05)
Default: num
Number of r.mapcalc processes to run in parallel
Default: 1
performs temporal and spatial map algebra operations on space time raster datasets (STRDS) using the temporal raster algebra.
The module expects an expression as input parameter in the following form:
"result = expression"
The statement structure is similar to that of r.mapcalc. In this statement, result represents the name of the space time raster dataset (STRDS) that will contain the result of the calculation that is
given as expression on the right side of the equality sign. These expressions can be any valid or nested combination of temporal operations and spatial overlay or buffer functions that are provided
by the temporal algebra.
The temporal raster algebra works only with space time raster datasets (STRDS). The algebra provides methods for map selection based on their temporal relations. It is also possible to temporally
shift maps, to create temporal buffer and to snap time instances to create a valid temporal topology. Furthermore, expressions can be nested and evaluated in conditional statements (if, else
statements). Within if-statements, the algebra provides temporal variables like start time, end time, day of year, time differences or number of maps per time interval to build up conditions.
In addition the algebra provides a subset of the spatial operations from r.mapcalc. All these operations can be assigned to STRDS or to the map lists resulting of operations between STRDS.
By default, only temporal topological relations among space time datasets (STDS) are evaluated. The -s flag can be used to additionally activate the evaluation of the spatial topology based on the
spatial extent of maps.
The expression option must be passed as quoted expression, for example:
t.rast.algebra expression="C = A + B" basename=result
is the new space time raster dataset that will contain maps with the basename "result" and a numerical suffix separated by an underscore that represent the sum of maps from the STRDS
and temporally equal maps (i.e., maps with equal temporal topology relation) from the STRDS
The map basename for the result STRDS must always be specified.
The temporal algebra provides a wide range of temporal operators and functions that will be presented in the following section.
Several temporal topology relations are supported between maps registered in space time datasets:
equals A ------
B ------
during A ----
B ------
contains A ------
B ----
starts A ----
B ------
started A ------
B ----
finishes A ----
B ------
finished A ------
B ----
precedes A ----
B ----
follows A ----
B ----
overlapped A ------
B ------
overlaps A ------
B ------
over both overlaps and overlapped
The relations must be read as: A is related to B, like - A equals B - A is during B - A contains B.
Topological relations must be specified with curly brackets {}.
The temporal algebra defines temporal operators that can be combined with other operators to perform spatio-temporal operations. The temporal operators process the time instances and intervals of two
temporally related maps and calculate the resulting temporal extent in five possible different ways.
LEFT REFERENCE l Use the time stamp of the left space time dataset
INTERSECTION i Intersection
DISJOINT UNION d Disjoint union
UNION u Union
RIGHT REFERENCE r Use the time stamp of the right space time dataset
The temporal selection simply selects parts of a space time dataset without processing any raster or vector data. The algebra provides a selection operator
that by default selects parts of a space time dataset that are temporally equal to parts of a second space time dataset. The following expression
means: select all parts of space time dataset A that are equal to B and store them in space time dataset C. These parts are time stamped maps.
In addition, the inverse selection operator !: is defined as the complement of the selection operator, hence the following expression
means: select all parts of space time time dataset A that are not equal to B and store them in space time dataset C.
To select parts of a STRDS using different topological relations regarding to other STRDS, the temporal topology selection operator can be used. This operator consists of the temporal selection
operator, the topological relations that must be separated by the logical OR operator | and, the temporal extent operator. All three parts are separated by comma and surrounded by curly brackets as
follows: {"temporal selection operator", "topological relations", "temporal operator"}.
C = A {:,equals} B
C = A {!:,equals} B
We can now define arbitrary topological relations using the OR operator "|" to connect them:
C = A {:,equals|during|overlaps} B
Select all parts of A that are equal to B, during B or overlaps B.
In addition, we can define the temporal extent of the resulting STRDS by adding the temporal operator.
Select all parts of A that are during B and use the temporal extents from B for C.
The selection operator is implicitly contained in the temporal topology selection operator, so that the following statements are exactly the same:
C = A : B
C = A {:} B
C = A {:,equal} B
C = A {:,equal,l} B
Same for the complementary selection:
C = A !: B
C = A {!:} B
C = A {!:,equal} B
C = A {!:,equal,l} B
Selection operations can be evaluated within conditional statements as showed below. Note that A and B can be either space time datasets or expressions. The temporal relationship between the
conditions and the conclusions can be defined at the beginning of the if statement (third and fourth examples below). The relationship between then and else conclusion must be always equal.
if statement decision option temporal relations
if(if, then, else)
if(conditions, A) A if conditions are True; temporal topological relation between if and then is equal.
if(conditions, A, B) A if conditions are True, B otherwise; temporal topological relation between if, then and else is equal.
if(topologies, conditions, A) A if conditions are True; temporal topological relation between if and then is explicitly specified by topologies.
if(topologies, conditions, A, B) A if conditions are True, B otherwise; temporal topological relation between if, then and else is explicitly specified by topologies.
The conditions are comparison expressions that are used to evaluate space time datasets. Specific values of temporal variables are compared by logical operators and evaluated for each map of the
The conditions are evaluated from left to right.
Logical operators
Symbol description
== equal
!= not equal
> greater than
>= greater than or equal
< less than
<= less than or equal
&& and
|| or
Temporal functions
The following temporal functions are evaluated only for the STDS that must be given in parenthesis.
td(A) Returns a list of time intervals of STDS A
start_time(A) Start time as HH::MM:SS
start_date(A) Start date as yyyy-mm-DD
start_datetime(A) Start datetime as yyyy-mm-DD HH:MM:SS
end_time(A) End time as HH:MM:SS
end_date(A) End date as yyyy-mm-DD
end_datetime(A) End datetime as yyyy-mm-DD HH:MM
start_doy(A) Day of year (doy) from the start time [1 - 366]
start_dow(A) Day of week (dow) from the start time [1 - 7], the start of the week is Monday == 1
start_year(A) The year of the start time [0 - 9999]
start_month(A) The month of the start time [1 - 12]
start_week(A) Week of year of the start time [1 - 54]
start_day(A) Day of month from the start time [1 - 31]
start_hour(A) The hour of the start time [0 - 23]
start_minute(A) The minute of the start time [0 - 59]
start_second(A) The second of the start time [0 - 59]
end_doy(A) Day of year (doy) from the end time [1 - 366]
end_dow(A) Day of week (dow) from the end time [1 - 7], the start of the week is Monday == 1
end_year(A) The year of the end time [0 - 9999]
end_month(A) The month of the end time [1 - 12]
end_week(A) Week of year of the end time [1 - 54]
end_day(A) Day of month from the start time [1 - 31]
end_hour(A) The hour of the end time [0 - 23]
end_minute(A) The minute of the end time [0 - 59]
end_second(A) The second of the end time [0 - 59]
In order to use the numbers returned by the functions in the last block above, an offset value needs to be added. For example, start_doy(A, 0) would return the DOY of the current map in STDS A.
end_hour(A, -1) would return the end hour of the previous map in STDS A.
Comparison operator
As mentioned above, the conditions are comparison expressions that are used to evaluate space time datasets. Specific values of temporal variables are compared by logical operators and evaluated for
each map of the STDS and (optionally) related maps. For complex relations, the comparison operator can be used to combine conditions.
The structure is similar to the select operator with the addition of an aggregation operator: {"comparison operator", "topological relations", aggregation operator, "temporal operator"}
This aggregation operator (| or &) defines the behaviour when a map is related to more than one map, e.g. for the topological relation 'contains'. Should all (&) conditions for the related maps be
true or is it sufficient to have any (|) condition that is true. The resulting boolean value is then compared to the first condition by the comparison operator (|| or &&). By default, the aggregation
operator is related to the comparison operator:
comparison operator -> aggregation operator:
Condition 1 {||, equal, r} Condition 2
Condition 1 {&&, equal|during, l} Condition 2
Condition 1 {&&, equal|contains, |, l} Condition 2
Condition 1 {&&, equal|during, l} Condition 2 && Condition 3
Condition 1 {&&, equal|during, l} Condition 2 {&&,contains, |, r} Condition 3
Hash operator
Additionally, the number of maps in intervals can be computed and used in conditional statements with the hash (#) operator.
This expression computes the number of maps from space time dataset B which are during the time intervals of maps from space time dataset A.
A list of integers (scalars) corresponding to the maps of A that contain maps from B will be returned.
C = if({equal}, A {#, contains} B > 2, A {:, contains} B)
This expression selects all maps from A that temporally contain at least 2 maps from B and stores them in space time dataset C. The leading equal statement in the if condition specifies the temporal
relation between the if and then part of the if expression. This is very important, so we do not need to specify a global time reference (a space time dataset) for temporal processing.
Furthermore, the temporal algebra allows temporal buffering, shifting and snapping with the functions buff_t(), tshift() and tsnap(), respectively.
buff_t(A, size) Buffer STDS A with granule ("1 month" or 5)
tshift(A, size) Shift STDS A with granule ("1 month" or 5)
tsnap(A) Snap time instances and intervals of STDS A
Single map with temporal extent
The temporal algebra can also handle single maps with time stamps in the tmap() function.
For example:
C = A {:, during} tmap(event)
This statement selects all maps from space time data set A that are during the temporal extent of the single map 'event'
The module supports the following raster operations:
Symbol description precedence
% modulus 1
/ division 1
* multiplication 1
+ addition 2
- subtraction 2
And raster functions:
abs(x) return absolute value of x
float(x) convert x to foating point
int(x) convert x to integer [ truncates ]
log(x) natural log of x
sqrt(x) square root of x
tan(x) tangent of x (x is in degrees)
round(x) round x to nearest integer
sin(x) sine of x (x is in degrees)
isnull(x) check if x = NULL
isntnull(x) check if x is not NULL
null set null value
exist(x) Check if x is in the current mapset
Single raster map
The temporal raster algebra features also a function to integrate single raster maps without time stamps into the expressions.
For example:
C = A * map(constant_value)
This statement multiplies all raster maps from space time raster data set A with the raster map 'constant_value'
The user can combine the temporal topology relations, the temporal operators and the spatial/select operators to create spatio-temporal operators as follows:
{"spatial or select operator", "list of temporal relations", "temporal operator"}
For multiple topological relations or several related maps the spatio-temporal operators feature implicit aggregation. The algebra evaluates the stated STDS by their temporal topologies and apply the
given spatio-temporal operators in a aggregated form. If we have two STDS A and B, B has three maps: b1, b2, b3 that are all during the temporal extent of the single map a1 of A, then the following
arithmetic calculations would implicitly aggregate all maps of B into one result map for a1 of A:
C = A {+, contains} B --> c1 = a1 + b1 + b2 + b3
Important: the aggregation behaviour is not symmetric
C = B {+, during} A --> c1 = b1 + a1
c2 = b2 + a1
c3 = b3 + a1
The neighbourhood modifier of
is extended for the temporal raster algebra with the temporal dimension. The format is strds[t,r,c], where t is the temporal offset, r is the row offset and c is the column offset. A single
neighborhood modifier is interpreted as temporal offset [t], while two neighborhood modifiers are interpreted as row and column offsets [r,c].
refers to the second successor of the current map.
refers to the cell one row below and two columns to the right of the current cell in the current map.
refers to the cell two rows above and one column to the left of the current cell of the first successor map.
refers to the cell one column to the right of the current cell in the second predecessor map.
# Sentinel-2 bands are stored separately in two STDRS "S2_b4" and "S2_b8"
g.region raster=sentinel2_B04_10m -p
t.rast.list S2_b4
t.rast.list S2_b8
t.rast.algebra basename=ndvi expression="ndvi = float(S2_b8 - S2_b4) / ( S2_b8 + S2_b4 )"
t.rast.colors input=ndvi color=ndvi
Sum maps from STRDS A with maps from STRDS B which have equal time stamps and are temporally before Jan. 1. 2005 and store them in STRDS D:
D = if(start_date(A) < "2005-01-01", A + B)
Create the sum of all maps from STRDS A and B that have equal time stamps and store the new maps in STRDS C:
Same expression with explicit definition of the temporal topology relation and temporal operators:
Select all cells from STRDS B with equal temporal relations to STRDS A, if the cells of A are in the range [100.0, 1600] of time intervals that have more than 30 days (Jan, Mar, May, Jul, Aug, Oct,
C = if(A > 100 && A < 1600 && td(A) > 30, B)
Same expression with explicit definition of the temporal topology relation and temporal operators:
C = if({equal}, A > 100 && A < 1600 {&&,equal} td(A) > 30, B)
Compute the recharge in meters per second for all cells of precipitation STRDS "Prec" if the mean temperature specified in STRDS "Temp" is higher than 10 degrees. Computation is performed if STRDS
"Prec" and "Temp" have equal time stamps. The number of days or fraction of days per interval is computed using the td() function that has as argument the STRDS "Prec":
C = if(Temp > 10.0, Prec / 3600.0 / 24.0 / td(Prec))
Same expression with explicit definition of the temporal topology relation and temporal operators:
C = if({equal}, Temp > 10.0, Prec / 3600.0 / 24.0 {/,equal,l} td(Prec))
Compute the mean value of all maps from STRDS A that are located during time intervals of STRDS B if more than one map of A is contained in an interval of B, use A otherwise. The resulting time
intervals are either from B or A:
C = if(B {#,contain} A > 1, (B {+,contain,l} A - B) / (B {#,contain} A), A)
Same expression with explicit definition of the temporal topology relation and temporal operators:
C = if({equal}, B {#,contain} A > 1, (B {+,contain,l} A {-,equal,l} B) {equal,=/} (B {#,contain} A), A)
Compute the DOY for all maps from STRDS A where conditions are met at three consecutive time intervals (e.g. temperature > 0):
B = if(A > 0.0 && A[-1] > 0.0 && A[-2] > 0.0, start_doy(A, -1), 0)"
r.mapcalc, t.vect.algebra, t.rast3d.algebra, t.select, t.rast3d.mapcalc, t.rast.mapcalc
The use of this module requires the following software to be installed:
# Ubuntu/Debian
sudo apt-get install python3-ply
# Fedora
sudo dnf install python3-ply
# MS-Windows (OSGeo4W: requires "python3-pip" package to be installed)
python3-pip install ply
Related publications:
• Gebbert, S., Pebesma, E. 2014. TGRASS: A temporal GIS for field based environmental modeling. Environmental Modelling & Software 53, 1-12 (DOI) - preprint PDF
• Gebbert, S., Pebesma, E. 2017. The GRASS GIS temporal framework. International Journal of Geographical Information Science 31, 1273-1292 (DOI)
• Gebbert, S., Leppelt, T., Pebesma, E., 2019. A topology based spatio-temporal map algebra for big data analysis. Data 4, 86. (DOI)
v.overlay, v.buffer, v.patch, r.mapcalc
Thomas Leppelt, Sören Gebbert, Thünen Institute of Climate-Smart Agriculture
Available at: t.rast.algebra source code (history)
Latest change: Saturday Jun 03 14:34:17 2023 in commit: c35d40f62a5907dca17e0fb7baae6051c01588fb
Main index | Temporal index | Topics index | Keywords index | Graphical index | Full index
© 2003-2024 GRASS Development Team, GRASS GIS 8.3.3dev Reference Manual | {"url":"https://mirrors.ibiblio.org/grass/code_and_data/grass83/manuals/t.rast.algebra.html","timestamp":"2024-11-14T14:40:56Z","content_type":"text/html","content_length":"33494","record_id":"<urn:uuid:da0e1197-8609-4d76-84bc-d194bdd30c1f>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00478.warc.gz"} |
What is Book Value per Share?
Book Value per Share
Book value per share (BVPS) is a financial metric that represents the proportion of a company’s book value of equity allocated to each outstanding share of common stock. It is used by investors and
analysts to evaluate a company’s financial health, intrinsic value, and overall performance. The book value per share can be compared with the market price of the stock to determine whether a stock
is overvalued, undervalued, or fairly valued.
To calculate book value per share, you need to divide the total book value of equity (also known as shareholders’ equity) by the number of outstanding shares of common stock:
Book Value per Share (BVPS) = Total Book Value of Equity / Number of Outstanding Shares
The book value of equity can be found on a company’s balance sheet and represents the residual interest in the company’s assets after deducting its liabilities. It is essentially the accounting
measure of a company’s net worth attributable to its shareholders.
Keep in mind that the book value per share may not always accurately represent the true economic value of a company’s stock, as it is based on historical costs and does not take into account factors
such as future growth prospects, market conditions, or competitive environment. Therefore, investors should consider other valuation methods and market factors when evaluating a company’s worth and
making investment decisions.
Example of Book Value per Share
Let’s consider a hypothetical example to illustrate the concept of book value per share for a company.
Imagine that Company XYZ has the following financial information on its balance sheet:
Total Assets: $15,000,000
• Cash: $2,000,000
• Accounts Receivable: $3,000,000
• Inventory: $4,000,000
• Property, Plant, and Equipment: $6,000,000
Total Liabilities: $10,000,000
To calculate the book value of equity, you would subtract the company’s total liabilities from its total assets:
Book Value of Equity = Total Assets – Total Liabilities
Book Value of Equity = $15,000,000 (Total Assets) – $10,000,000 (Total Liabilities) = $5,000,000
Now, let’s assume that Company XYZ has 500,000 outstanding shares of common stock. To calculate the book value per share, you would divide the total book value of equity by the number of outstanding
Book Value per Share (BVPS) = Total Book Value of Equity / Number of Outstanding Shares
Book Value per Share (BVPS) = $5,000,000 / 500,000 = $10.00
In this example, the book value per share for Company XYZ is $10.00.
Investors can use the book value per share as a valuation metric to compare with the current market price of the stock. For instance, if Company XYZ’s stock is currently trading at $15 per share, the
stock is trading at a price-to-book (P/B) ratio of:
P/B Ratio = Market Price per Share / Book Value per Share
P/B Ratio = $15 / $10 = 1.5
A P/B ratio above 1 indicates that the market price is higher than the book value, suggesting that the market believes the company has growth potential or other factors not captured by the book value
alone. Conversely, a P/B ratio below 1 may indicate that the stock is undervalued or that the market has a more pessimistic view of the company’s prospects.
It’s important to note that the book value per share has its limitations and may not accurately reflect the true value of a company in all cases. Investors should consider additional valuation
methods and market factors when making investment decisions. | {"url":"https://www.superfastcpa.com/what-is-book-value-per-share/","timestamp":"2024-11-02T11:34:38Z","content_type":"text/html","content_length":"397702","record_id":"<urn:uuid:1c04377b-296c-4a94-9893-46dc05f39bbd>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00670.warc.gz"} |
1. KDD
Relaxing Continuous Constraints of Equivariant Graph Neural Networks for Broad Physical Dynamics Learning
Zinan Zheng, Yang Liu, Jia Li, Jianhua Yao, and Yu Rong
In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2024
Incorporating Euclidean symmetries (e.g. rotation equivariance) as inductive biases into graph neural networks has improved their generalization ability and data efficiency in unbounded physical
dynamics modeling. However, in various scientific and engineering applications, the symmetries of dynamics are frequently discrete due to the boundary conditions. Thus, existing GNNs either
over-look necessary symmetry, resulting in suboptimal representation ability, or impose excessive equivariance, which fails to generalize to unobserved symmetric dynamics. In this work, we
propose a general Discrete Equivariant Graph Neural Network (DEGNN) that guarantees equivariance to a given discrete point group. Specifically, we show that such discrete equivariant message
passing could be constructed by transforming geometric features into permutation-invariant embeddings. Through relaxing continuous equivariant constraints, DEGNN can employ more geometric feature
combinations to approximate unobserved physical object interaction functions. Two implementation approaches of DEGNN are proposed based on ranking or pooling permutation-invariant functions. We
apply DEGNN to various physical dynamics, ranging from particle, molecular, crowd to vehicle dynamics. In twenty scenarios, DEGNN significantly outperforms existing state-of-the-art approaches.
Moreover, we show that DEGNN is data efficient, learning with less data, and can generalize across scenarios such as unobserved orientation.
title = {Relaxing Continuous Constraints of Equivariant Graph Neural Networks for Broad Physical Dynamics Learning},
author = {Zheng, Zinan and Liu, Yang and Li, Jia and Yao, Jianhua and Rong, Yu},
year = {2024},
booktitle = {Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining},
location = {Barcelona, Spain},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
series = {KDD '24},
pages = {4548–4558},
doi = {10.1145/3637528.3671957},
isbn = {9798400704901},
url = {https://doi.org/10.1145/3637528.3671957},
numpages = {11},
keywords = {equivariant graph neural network, physical dynamics},
2. ICLR
SEGNO: Generalizing Equivariant Graph Neural Networks with Physical Inductive Biases
Yang Liu, Jiashun Cheng, Haihong Zhao, Tingyang Xu, Peilin Zhao, Fugee Tsung, and 2 more authors
In The Twelfth International Conference on Learning Representations, 2024
title = {{SEGNO}: Generalizing Equivariant Graph Neural Networks with Physical Inductive Biases},
author = {Liu, Yang and Cheng, Jiashun and Zhao, Haihong and Xu, Tingyang and Zhao, Peilin and Tsung, Fugee and Li, Jia and Rong, Yu},
year = {2024},
booktitle = {The Twelfth International Conference on Learning Representations},
url = {https://openreview.net/forum?id=3oTPsORaDH},
3. Neural Networks
Solving the non-submodular network collapse problems via Decision Transformer
Kaili Ma, Han Yang, Shanchao Yang, Kangfei Zhao, Lanqing Li, Yongqiang Chen, and 3 more authors
Neural Networks, 2024
Given a graph G, the network collapse problem (NCP) selects a vertex subset S of minimum cardinality from G such that the difference in the values of a given measure function f(G)−f(G∖S) is
greater than a predefined collapse threshold. Many graph analytic applications can be formulated as NCPs with different measure functions, which often pose a significant challenge due to their
NP-hard nature. As a result, traditional greedy algorithms, which select the vertex with the highest reward at each step, may not effectively find the optimal solution. In addition, existing
learning-based algorithms do not have the ability to model the sequence of actions taken during the decision-making process, making it difficult to capture the combinatorial effect of selected
vertices on the final solution. This limits the performance of learning-based approaches in non-submodular NCPs. To address these limitations, we propose a unified framework called DT-NC, which
adapts the Decision Transformer to the Network Collapse problems. DT-NC takes into account the historical actions taken during the decision-making process and effectively captures the
combinatorial effect of selected vertices. The ability of DT-NC to model the dependency among selected vertices allows it to address the difficulties caused by the non-submodular property of
measure functions in some NCPs effectively. Through extensive experiments on various NCPs and graphs of different sizes, we demonstrate that DT-NC outperforms the state-of-the-art methods and
exhibits excellent transferability and generalizability.
title = {Solving the non-submodular network collapse problems via Decision Transformer},
author = {Ma, Kaili and Yang, Han and Yang, Shanchao and Zhao, Kangfei and Li, Lanqing and Chen, Yongqiang and Huang, Junzhou and Cheng, James and Rong, Yu},
year = {2024},
journal = {Neural Networks},
volume = {176},
pages = {106328},
doi = {https://doi.org/10.1016/j.neunet.2024.106328},
issn = {0893-6080},
url = {https://www.sciencedirect.com/science/article/pii/S0893608024002521},
keywords = {Graph neural network, Decision Transformer, Network collapse, Network dismantling, Collapsed -core},
4. VLDB
Inductive Attributed Community Search: To Learn Communities Across Graphs
Shuheng Fang, Kangfei Zhao, Yu Rong, Zhixun Li, and Jeffrey Xu Yu
Proc. VLDB Endow., Aug 2024
Attributed community search (ACS) aims to identify subgraphs satisfying both structure cohesiveness and attribute homogeneity in attributed graphs, for a given query that contains query nodes and
query attributes. Previously, algorithmic approaches deal with ACS in a two-stage paradigm, which suffer from structural inflexibility and attribute irrelevance. To overcome this problem,
recently, learning-based approaches have been proposed to learn both structures and attributes simultaneously as a one-stage paradigm. However, these approaches train a transductive model which
assumes the graph to infer unseen queries is as same as the graph used for training. That limits the generalization and adaptation of these approaches to different heterogeneous graphs.In this
paper, we propose a new framework, Inductive Attributed Community Search, IACS, by inductive learning, which can be used to infer new queries for different communities/graphs. Specifically, IACS
employs an encoder-decoder neural architecture to handle an ACS task at a time, where a task consists of a graph with only a few queries and corresponding ground-truth. We design a three-phase
workflow, "training-adaptation-inference", which learns a shared model to absorb and induce prior effective common knowledge about ACS across different tasks. And the shared model can swiftly
adapt to a new task with small number of ground-truth. We conduct substantial experiments in 7 real-world datasets to verify the effectiveness of IACS for CS/ACS. Our approach IACS achieves
28.97% and 25.60% improvements in F1-score on average in CS and ACS, respectively.
title = {Inductive Attributed Community Search: To Learn Communities Across Graphs},
author = {Fang, Shuheng and Zhao, Kangfei and Rong, Yu and Li, Zhixun and Yu, Jeffrey Xu},
year = {2024},
month = aug,
journal = {Proc. VLDB Endow.},
publisher = {VLDB Endowment},
volume = {17},
number = {10},
pages = {2576–2589},
doi = {10.14778/3675034.3675048},
issn = {2150-8097},
url = {https://doi.org/10.14778/3675034.3675048},
issue_date = {June 2024},
numpages = {14},
5. J COMPUT BIOL
Toward Robust Self-Training Paradigm for Molecular Prediction Tasks
Hehuan Ma, Feng Jiang, Yu Rong, Yuzhi Guo, and Junzhou Huang
Journal of Computational Biology, 2024
Molecular prediction tasks normally demand a series of professional experiments to label the target molecule, which suffers from the limited labeled data problem. One of the semisupervised
learning paradigms, known as self-training, utilizes both labeled and unlabeled data. Specifically, a teacher model is trained using labeled data and produces pseudo labels for unlabeled data.
These labeled and pseudo-labeled data are then jointly used to train a student model. However, the pseudo labels generated from the teacher model are generally not sufficiently accurate. Thus, we
propose a robust self-training strategy by exploring robust loss function to handle such noisy labels in two paradigms, that is, generic and adaptive. We have conducted experiments on three
molecular biology prediction tasks with four backbone models to gradually evaluate the performance of the proposed robust self-training strategy. The results demonstrate that the proposed method
enhances prediction performance across all tasks, notably within molecular regression tasks, where there has been an average enhancement of 41.5%. Furthermore, the visualization analysis confirms
the superiority of our method. Our proposed robust self-training is a simple yet effective strategy that efficiently improves molecular biology prediction performance. It tackles the labeled data
insufficient issue in molecular biology by taking advantage of both labeled and unlabeled data. Moreover, it can be easily embedded with any prediction task, which serves as a universal approach
for the bioinformatics community.
title = {Toward Robust Self-Training Paradigm for Molecular Prediction Tasks},
author = {Ma, Hehuan and Jiang, Feng and Rong, Yu and Guo, Yuzhi and Huang, Junzhou},
year = {2024},
journal = {Journal of Computational Biology},
volume = {31},
number = {3},
pages = {213--228},
doi = {10.1089/cmb.2023.0187},
url = {https://doi.org/10.1089/cmb.2023.0187},
note = {PMID: 38531049},
eprint = {https://doi.org/10.1089/cmb.2023.0187},
6. ICLR
Neural Atoms: Propagating Long-range Interaction in Molecular Graphs through Efficient Communication Channel
Xuan Li, Zhanke Zhou, Jiangchao Yao, Yu Rong, Lu Zhang, and Bo Han
In The Twelfth International Conference on Learning Representations, 2024
title = {Neural Atoms: Propagating Long-range Interaction in Molecular Graphs through Efficient Communication Channel},
author = {Li, Xuan and Zhou, Zhanke and Yao, Jiangchao and Rong, Yu and Zhang, Lu and Han, Bo},
year = {2024},
booktitle = {The Twelfth International Conference on Learning Representations},
url = {https://openreview.net/forum?id=CUfSCwcgqm},
7. Nature Methods
scPROTEIN: a versatile deep graph contrastive learning framework for single-cell proteomics embedding
Wei Li, Fan Yang, Fang Wang, Yu Rong, Linjing Liu, Bingzhe Wu, and 2 more authors
Nature Methods, Apr 2024
Single-cell proteomics sequencing technology sheds light on protein–protein interactions, posttranslational modifications and proteoform dynamics in the cell. However, the uncertainty estimation
for peptide quantification, data missingness, batch effects and high noise hinder the analysis of single-cell proteomic data. It is important to solve this set of tangled problems together, but
the existing methods tailored for single-cell transcriptomes cannot fully address this task. Here we propose a versatile framework designed for single-cell proteomics data analysis called
scPROTEIN, which consists of peptide uncertainty estimation based on a multitask heteroscedastic regression model and cell embedding generation based on graph contrastive learning. scPROTEIN can
estimate the uncertainty of peptide quantification, denoise protein data, remove batch effects and encode single-cell proteomic-specific embeddings in a unified framework. We demonstrate that
scPROTEIN is efficient for cell clustering, batch correction, cell type annotation, clinical analysis and spatially resolved proteomic data exploration.
title = {scPROTEIN: a versatile deep graph contrastive learning framework for single-cell proteomics embedding},
author = {Li, Wei and Yang, Fan and Wang, Fang and Rong, Yu and Liu, Linjing and Wu, Bingzhe and Zhang, Han and Yao, Jianhua},
year = {2024},
month = apr,
day = {01},
journal = {Nature Methods},
volume = {21},
number = {4},
pages = {623--634},
doi = {10.1038/s41592-024-02214-9},
issn = {1548-7105},
url = {https://doi.org/10.1038/s41592-024-02214-9},
1. AAAI
DrugOOD: Out-of-Distribution Dataset Curator and Benchmark for AI-Aided Drug Discovery – a Focus on Affinity Prediction Problems with Noise Annotations
Yuanfeng Ji, Lu Zhang, Jiaxiang Wu, Bingzhe Wu, Lanqing Li, Long-Kai Huang, and 11 more authors
Proceedings of the AAAI Conference on Artificial Intelligence, Jun 2023
AI-aided drug discovery (AIDD) is gaining popularity due to its potential to make the search for new pharmaceuticals faster, less expensive, and more effective. Despite its extensive use in
numerous fields (e.g., ADMET prediction, virtual screening), little research has been conducted on the out-of-distribution (OOD) learning problem with noise. We present DrugOOD, a systematic OOD
dataset curator and benchmark for AIDD. Particularly, we focus on the drug-target binding affinity prediction problem, which involves both macromolecule (protein target) and small-molecule (drug
compound). DrugOOD offers an automated dataset curator with user-friendly customization scripts, rich domain annotations aligned with biochemistry knowledge, realistic noise level annotations,
and rigorous benchmarking of SOTA OOD algorithms, as opposed to only providing fixed datasets. Since the molecular data is often modeled as irregular graphs using graph neural network (GNN)
backbones, DrugOOD also serves as a valuable testbed for graph OOD learning problems. Extensive empirical studies have revealed a significant performance gap between in-distribution and
out-of-distribution experiments, emphasizing the need for the development of more effective schemes that permit OOD generalization under noise for AIDD.
title = {DrugOOD: Out-of-Distribution Dataset Curator and Benchmark for AI-Aided Drug Discovery – a Focus on Affinity Prediction Problems with Noise Annotations},
author = {Ji, Yuanfeng and Zhang, Lu and Wu, Jiaxiang and Wu, Bingzhe and Li, Lanqing and Huang, Long-Kai and Xu, Tingyang and Rong, Yu and Ren, Jie and Xue, Ding and Lai, Houtim and Liu, Wei and Huang, Junzhou and Zhou, Shuigeng and Luo, Ping and Zhao, Peilin and Bian, Yatao},
year = {2023},
month = jun,
journal = {Proceedings of the AAAI Conference on Artificial Intelligence},
volume = {37},
number = {7},
pages = {8023--8031},
doi = {10.1609/aaai.v37i7.25970},
url = {https://ojs.aaai.org/index.php/AAAI/article/view/25970},
2. TKDE
Adversarial Attack Framework on Graph Embedding Models With Limited Knowledge
Heng Chang, Yu Rong, Tingyang Xu, Wenbing Huang, Honglei Zhang, Peng Cui, and 3 more authors
IEEE Transactions on Knowledge and Data Engineering, 2023
With the success of the graph embedding model in both academic and industrial areas, the robustness of graph embeddings against adversarial attacks inevitably becomes a crucial problem in graph
learning. Existing works usually perform the attack in a white-box fashion: they need to access the predictions/labels to construct their adversarial losses. However, the inaccessibility of
predictions/labels makes the white-box attack impractical for a real graph learning system. This paper promotes current frameworks in a more general and flexible sense – we consider the ability
of various types of graph embedding models to remain resilient against black-box driven attacks. We investigate the theoretical connection between graph signal processing and graph embedding
models, and formulate the graph embedding model as a general graph signal process with a corresponding graph filter. Therefore, we design a generalized adversarial attack framework: GF-Attack .
Without accessing any labels and model predictions, GF-Attack can perform the attack directly on the graph filter in a black-box fashion. We further prove that GF-Attack can perform an effective
attack without assumption on the number of layers/window-size of graph embedding models. To validate the generalization of GF-Attack , we construct GF-Attack on five popular graph embedding
models. Extensive experiments validate the effectiveness of GF-Attack on several benchmark datasets.
title = {Adversarial Attack Framework on Graph Embedding Models With Limited Knowledge},
author = {Chang, Heng and Rong, Yu and Xu, Tingyang and Huang, Wenbing and Zhang, Honglei and Cui, Peng and Wang, Xin and Zhu, Wenwu and Huang, Junzhou},
year = {2023},
journal = {IEEE Transactions on Knowledge and Data Engineering},
volume = {35},
number = {5},
pages = {4499--4513},
doi = {10.1109/TKDE.2022.3153060},
3. Information Sciences
Exploiting node-feature bipartite graph in graph convolutional networks
Yuli Jiang, Huaijia Lin, Ye Li, Yu Rong, Hong Cheng, and Xin Huang
Information Sciences, 2023
In recent years, Graph Convolutional Networks (GCNs), which extend convolutional neural networks to graph structure, have achieved great success on many graph learning tasks by fusing structure
and feature information, such as node classification. However, the graph structure is constructed from real-world data and usually contains noise or redundancy. In addition, this structural
information is based on manually defined relations and is not potentially optimal for downstream tasks. In this paper, we utilize the knowledge from node features to enhance the expressive power
of GCN models in a plug-and-play fashion. Specifically, we build a node-feature bipartite graph and exploit the bipartite graph convolutional network to model node-feature relations. By aligning
results from the original graph structure and node-feature relations, we can make a more accurate prediction for each node in an end-to-end manner. Extensive experiments demonstrate that the
proposed model can extract knowledge from two branches and improve the performance of various GCN models on typical graph data sets and 3D point cloud data.
title = {Exploiting node-feature bipartite graph in graph convolutional networks},
author = {Jiang, Yuli and Lin, Huaijia and Li, Ye and Rong, Yu and Cheng, Hong and Huang, Xin},
year = {2023},
journal = {Information Sciences},
volume = {628},
pages = {409--423},
doi = {https://doi.org/10.1016/j.ins.2023.01.107},
issn = {0020-0255},
url = {https://www.sciencedirect.com/science/article/pii/S0020025523001196},
keywords = {Graph convolutional networks, Bipartite graph, Bipartite graph convolutional networks, Semi-supervised learning, Node classification},
4. VLDBJ
Learned sketch for subgraph counting: a holistic approach
Kangfei Zhao, Jeffrey Xu Yu, Qiyan Li, Hao Zhang, and Yu Rong
The VLDB Journal, 2023
Subgraph counting, as a fundamental problem in network analysis, is to count the number of subgraphs in a data graph that match a given query graph by either homomorphism or subgraph isomorphism.
The importance of subgraph counting derives from the fact that it provides insights of a large graph, in particular a labeled graph, when a collection of query graphs with different sizes and
labels are issued. The problem of counting is challenging. On the one hand, exact counting by enumerating subgraphs is NP-hard. On the other hand, approximate counting by subgraph isomorphism can
only support small query graphs over unlabeled graphs. Another way for subgraph counting is to specify it as an SQL query and estimate the cardinality of the query in RDBMS. Existing approaches
for cardinality estimation can only support subgraph counting by homomorphism up to some extent, as it is difficult to deal with sampling failure when a query graph becomes large. A question that
arises is how we support subgraph counting by machine learning (ML) and deep learning (DL). To devise an ML/DL solution, apart from the query graphs, another issue is to deal with large data
graphs by ML/DL, as the existing DL approach for subgraph isomorphism counting can only support small data graphs. In addition, the ML/DL approaches proposed in RDBMS context for approximate
query processing and cardinality estimation cannot be used, as subgraph counting is to do complex self-joins over one relation, whereas existing approaches focus on multiple relations. In this
work, we propose an active learned sketch for subgraph counting (𝖠𝖫𝖲𝖲 ) with two main components: a learned sketch for subgraph counting and an active learner. The sketch is constructed by a
neural network regression model, and the active learner is to perform model updates based on new arrival test query graphs. Our holistic learning framework supports both undirected graphs and
directed graphs, whose nodes and/or edges are associated zero to multiple labels. We conduct extensive experimental studies to confirm the effectiveness and efficiency of 𝖠𝖫𝖲𝖲 using large real
labeled graphs. Moreover, we show that 𝖠𝖫𝖲𝖲 can assist query optimizers in finding a better query plan for complex multi-way self-joins.
title = {Learned sketch for subgraph counting: a holistic approach},
author = {Zhao, Kangfei and Yu, Jeffrey Xu and Li, Qiyan and Zhang, Hao and Rong, Yu},
year = {2023},
journal = {The VLDB Journal},
publisher = {Springer},
pages = {1--26},
5. TMLR
Noise-robust Graph Learning by Estimating and Leveraging Pairwise Interactions
Xuefeng Du, Tian Bian, Yu Rong, Bo Han, Tongliang Liu, Tingyang Xu, and 3 more authors
Transactions on Machine Learning Research, 2023
Teaching Graph Neural Networks (GNNs) to accurately classify nodes under severely noisy labels is an important problem in real-world graph learning applications, but is currently underexplored.
Although pairwise training methods have demonstrated promise in supervised metric learning and unsupervised contrastive learning, they remain less studied on noisy graphs, where the structural
pairwise interactions (PI) between nodes are abundant and thus might benefit label noise learning rather than the pointwise methods. This paper bridges the gap by proposing a pairwise framework
for noisy node classification on graphs, which relies on the PI as a primary learning proxy in addition to the pointwise learning from the noisy node class labels. Our proposed framework PI-GNN
contributes two novel components: (1) a confidence-aware PI estimation model that adaptively estimates the PI labels, which are defined as whether the two nodes share the same node labels, and
(2) a decoupled training approach that leverages the estimated PI labels to regularize a node classification model for robust node classification. Extensive experiments on different datasets and
GNN architectures demonstrate the effectiveness of PI-GNN, yielding a promising improvement over the state-of-the-art methods. Code is publicly available at https://github.com/TianBian95/pi-gnn.
title = {Noise-robust Graph Learning by Estimating and Leveraging Pairwise Interactions},
author = {Du, Xuefeng and Bian, Tian and Rong, Yu and Han, Bo and Liu, Tongliang and Xu, Tingyang and Huang, Wenbing and Li, Yixuan and Huang, Junzhou},
year = {2023},
journal = {Transactions on Machine Learning Research},
issn = {2835-8856},
url = {https://openreview.net/forum?id=r7imkFEAQb},
note = {},
6. AAAI
Human Mobility Modeling during the COVID-19 Pandemic via Deep Graph Diffusion Infomax
Yang Liu, Yu Rong, Zhuoning Guo, Nuo Chen, Tingyang Xu, Fugee Tsung, and 1 more author
Proceedings of the AAAI Conference on Artificial Intelligence, Jun 2023
Non-Pharmaceutical Interventions (NPIs), such as social gathering restrictions, have shown effectiveness to slow the transmission of COVID-19 by reducing the contact of people. To support
policy-makers, multiple studies have first modelled human mobility via macro indicators (e.g., average daily travel distance) and then study the effectiveness of NPIs. In this work, we focus on
mobility modelling and, from a micro perspective, aim to predict locations that will be visited by COVID-19 cases. Since NPIs generally cause economic and societal loss, such a prediction
benefits governments when they design and evaluate them. However, in real-world situations, strict privacy data protection regulations result in severe data sparsity problems (i.e., limited case
and location information). To address these challenges and jointly model variables including a geometric graph, a set of diffusions and a set of locations, we propose a model named Deep Graph
Diffusion Infomax (DGDI). We show the maximization of DGDI can be bounded by two tractable components: a univariate Mutual Information (MI) between geometric graph and diffusion representation,
and a univariate MI between diffusion representation and location representation. To facilitate the research of COVID-19 prediction, we present two benchmarks that contain geometric graphs and
location histories of COVID-19 cases. Extensive experiments on the two benchmarks show that DGDI significantly outperforms other competing methods.
title = {Human Mobility Modeling during the COVID-19 Pandemic via Deep Graph Diffusion Infomax},
author = {Liu, Yang and Rong, Yu and Guo, Zhuoning and Chen, Nuo and Xu, Tingyang and Tsung, Fugee and Li, Jia},
year = {2023},
month = jun,
journal = {Proceedings of the AAAI Conference on Artificial Intelligence},
volume = {37},
number = {12},
pages = {14347--14355},
doi = {10.1609/aaai.v37i12.26678},
url = {https://ojs.aaai.org/index.php/AAAI/article/view/26678},
7. DASFAA
Learning With Small Data: Subgraph Counting Queries
Kangfei Zhao, Jeffrey Xu Yu, Zongyan He, and Yu Rong
In Database Systems for Advanced Applications: 28th International Conference, 2023
Deep Learning (DL) has been widely used in many applications, and its success is achieved with large training data. A key issue is how to provide a DL solution when there is no efficient training
data to learn initially. In this paper, we explore a meta learning approach for a specific problem, subgraph isomorphism counting, which is a fundamental problem in graph analysis to count the
number of a given pattern graph, p, in a data graph, g, that matches p. This problem is NP-hard, and needs large training data to learn by DL in nature. To solve this problem, we design a
Gaussian Process (GP) model which combines graph neural network with Bayesian nonparametric, and we train the GP by a meta learning algorithm on a small set of training data. By meta learning, we
obtain a generalized meta-model to better encode the information of data and pattern graphs and capture the prior of small tasks. We handle a collection of pairs (g, p), as a task, where some
pairs may be associated with the ground-truth, and some pairs are the queries to answer. There are two cases. One is there are some with ground-truth (few-shot), and one is there is none with
ground-truth (zero-shot). We provide our solutions for both. We conduct substantial experiments to confirm that our approach is robust to model degeneration on small training data, and our meta
model can fast adapt to new queries by few/zero-shot learning.
title = {Learning With Small Data: Subgraph Counting Queries},
author = {Zhao, Kangfei and Yu, Jeffrey Xu and He, Zongyan and Rong, Yu},
year = {2023},
booktitle = {Database Systems for Advanced Applications: 28th International Conference},
location = {Tianjin, China},
publisher = {Springer-Verlag},
address = {Berlin, Heidelberg},
pages = {308–319},
doi = {10.1007/978-3-031-30675-4_21},
isbn = {978-3-031-30674-7},
url = {https://doi.org/10.1007/978-3-031-30675-4_21},
numpages = {12},
8. TPAMI
Semi-Supervised Hierarchical Graph Classification
Jia Li, Yongfeng Huang, Heng Chang, and Yu Rong
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023
Node classification and graph classification are two graph learning problems that predict the class label of a node and the class label of a graph respectively. A node of a graph usually
represents a real-world entity, e.g., a user in a social network, or a document in a document citation network. In this work, we consider a more challenging but practically useful setting, in
which a node itself is a graph instance. This leads to a hierarchical graph perspective which arises in many domains such as social network, biological network and document collection. We study
the node classification problem in the hierarchical graph where a “node” is a graph instance. As labels are usually limited, we design a novel semi-supervised solution named SEAL-CI. SEAL-CI
adopts an iterative framework that takes turns to update two modules, one working at the graph instance level and the other at the hierarchical graph level. To enforce a consistency among
different levels of hierarchical graph, we propose the Hierarchical Graph Mutual Information (HGMI) and further present a way to compute HGMI with theoretical guarantee. We demonstrate the
effectiveness of this hierarchical graph modeling and the proposed SEAL-CI method on text and social network data.
title = {Semi-Supervised Hierarchical Graph Classification},
author = {Li, Jia and Huang, Yongfeng and Chang, Heng and Rong, Yu},
year = {2023},
journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
volume = {45},
number = {5},
pages = {6265--6276},
doi = {10.1109/TPAMI.2022.3203703},
9. TNNLS
Structure-Aware DropEdge Toward Deep Graph Convolutional Networks
Jiaqi Han, Wenbing Huang, Yu Rong, Tingyang Xu, Fuchun Sun, and Junzhou Huang
IEEE Transactions on Neural Networks and Learning Systems, 2023
It has been discovered that graph convolutional networks (GCNs) encounter a remarkable drop in performance when multiple layers are piled up. The main factor that accounts for why deep GCNs fail
lies in oversmoothing, which isolates the network output from the input with the increase of network depth, weakening expressivity and trainability. In this article, we start by investigating
refined measures upon DropEdge—an existing simple yet effective technique to relieve oversmoothing. We term our method as DropEdge ++ for its two structure-aware samplers in contrast to DropEdge:
layer-dependent (LD) sampler and feature-dependent (FD) sampler. Regarding the LD sampler, we interestingly find that increasingly sampling edges from the bottom layer yields superior performance
than the decreasing counterpart as well as DropEdge. We theoretically reveal this phenomenon with mean-edge-number (MEN), a metric closely related to oversmoothing. For the FD sampler, we
associate the edge sampling probability with the feature similarity of node pairs and prove that it further correlates the convergence subspace of the output layer with the input features.
Extensive experiments on several node classification benchmarks, including both full-and semi-supervised tasks, illustrate the efficacy of DropEdge ++ and its compatibility with a variety of
backbones by achieving generally better performance over DropEdge and the no-drop version.
title = {Structure-Aware DropEdge Toward Deep Graph Convolutional Networks},
author = {Han, Jiaqi and Huang, Wenbing and Rong, Yu and Xu, Tingyang and Sun, Fuchun and Huang, Junzhou},
year = {2023},
journal = {IEEE Transactions on Neural Networks and Learning Systems},
volume = {},
number = {},
pages = {1--13},
doi = {10.1109/TNNLS.2023.3288484},
10. VLDB
Computing Graph Edit Distance via Neural Graph Matching
Chengzhi Piao, Tingyang Xu, Xiangguo Sun, Yu Rong, Kangfei Zhao, and Hong Cheng
Proc. VLDB Endow., Jun 2023
Graph edit distance (GED) computation is a fundamental NP-hard problem in graph theory. Given a graph pair (G1, G2), GED is defined as the minimum number of primitive operations converting G1 to
G2. Early studies focus on search-based inexact algorithms such as A*-beam search, and greedy algorithms using bipartite matching due to its NP-hardness. They can obtain a sub-optimal solution by
constructing an edit path (the sequence of operations that converts G1 to G2). Recent studies convert the GED between a given graph pair (G1, G2) into a similarity score in the range (0, 1) by a
well designed function. Then machine learning models (mostly based on graph neural networks) are applied to predict the similarity score. They achieve a much higher numerical precision than the
sub-optimal solutions found by classical algorithms. However, a major limitation is that these machine learning models cannot generate an edit path. They treat the GED computation as a pure
regression task to bypass its intrinsic complexity, but ignore the essential task of converting G1 to G2. This severely limits the interpretability and usability of the solution.In this paper, we
propose a novel deep learning framework that solves the GED problem in a two-step manner: 1) The proposed graph neural network GEDGNN is in charge of predicting the GED value and a matching
matrix; and 2) A post-processing algorithm based on k-best matching is used to derive k possible node matchings from the matching matrix generated by GEDGNN. The best matching will finally lead
to a high-quality edit path. Extensive experiments are conducted on three real graph data sets and synthetic power-law graphs to demonstrate the effectiveness of our framework. Compared to the
best result of existing GNN-based models, the mean absolute error (MAE) on GED value prediction decreases by 4.9% 74.3%. Compared to the state-of-the-art searching algorithm Noah, the MAE on
GED value based on edit path reduces by 53.6% 88.1%.
title = {Computing Graph Edit Distance via Neural Graph Matching},
author = {Piao, Chengzhi and Xu, Tingyang and Sun, Xiangguo and Rong, Yu and Zhao, Kangfei and Cheng, Hong},
year = {2023},
month = jun,
journal = {Proc. VLDB Endow.},
publisher = {VLDB Endowment},
volume = {16},
number = {8},
pages = {1817–1829},
doi = {10.14778/3594512.3594514},
issn = {2150-8097},
url = {https://github.com/ChengzhiPiao/GEDGNN},
issue_date = {April 2023},
numpages = {13},
11. ICDE
Decision Support System for Chronic Diseases Based on Drug-Drug Interactions
T. Bian, Y. Jiang, J. Li, T. Xu, Y. Rong, Y. Su, and 3 more authors
In 2023 IEEE 39th International Conference on Data Engineering (ICDE), Apr 2023
Many patients with chronic diseases resort to multiple medications to relieve various symptoms, which raises concerns about the safety of multiple medication use, as severe drug-drug antagonism
can lead to serious adverse effects or even death. This paper presents a Decision Support System, called DSSDDI, based on drug-drug interactions to support doctors prescribing decisions. DSSDDI
contains three modules, Drug-Drug Interaction (DDI) module, Medical Decision (MD) module and Medical Support (MS) module. The DDI module learns safer and more effective drug representations from
the drug-drug interactions. To capture the potential causal relationship between DDI and medication use, the MD module considers the representations of patients and drugs as context, DDI and
patients’ similarity as treatment, and medication use as outcome to construct counterfactual links for the representation learning. Furthermore, the MS module provides drug candidates to doctors
with explanations. Experiments on the chronic data collected from the Hong Kong Chronic Disease Study Project and a public diagnostic data MIMIC-III demonstrate that DSSDDI can be a reliable
reference for doctors in terms of safety and efficiency of clinical diagnosis, with significant improvements compared to baseline methods. Source code of the proposed DSSDDI is publicly available
at https://github.com/TianBian95/DSSDDI.
title = {Decision Support System for Chronic Diseases Based on Drug-Drug Interactions},
author = {Bian, T. and Jiang, Y. and Li, J. and Xu, T. and Rong, Y. and Su, Y. and Kwok, T. and Meng, H. and Cheng, H.},
year = {2023},
month = apr,
booktitle = {2023 IEEE 39th International Conference on Data Engineering (ICDE)},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
volume = {},
pages = {3467--3480},
doi = {10.1109/ICDE55515.2023.00266},
issn = {},
url = {https://doi.ieeecomputersociety.org/10.1109/ICDE55515.2023.00266},
keywords = {decision support systems;drugs;representation learning;source coding;mimics;data engineering;safety},
12. Nature Comm.
Collaborative and privacy-preserving retired battery sorting for profitable direct recycling via federated machine learning
Shengyu Tao, Haizhou Liu, Chongbo Sun, Haocheng Ji, Guanjun Ji, Zhiyuan Han, and 11 more authors
Nature Communications, Dec 2023
Unsorted retired batteries with varied cathode materials hinder the adoption of direct recycling due to their cathode-specific nature. The surge in retired batteries necessitates precise sorting
for effective direct recycling, but challenges arise from varying operational histories, diverse manufacturers, and data privacy concerns of recycling collaborators (data owners). Here we show,
from a unique dataset of 130 lithium-ion batteries spanning 5 cathode materials and 7 manufacturers, a federated machine learning approach can classify these retired batteries without relying on
past operational data, safeguarding the data privacy of recycling collaborators. By utilizing the features extracted from the end-of-life charge-discharge cycle, our model exhibits 1% and 3%
cathode sorting errors under homogeneous and heterogeneous battery recycling settings respectively, attributed to our innovative Wasserstein-distance voting strategy. Economically, the proposed
method underscores the value of precise battery sorting for a prosperous and sustainable recycling industry. This study heralds a new paradigm of using privacy-sensitive data from diverse
sources, facilitating collaborative and privacy-respecting decision-making for distributed systems.
title = {Collaborative and privacy-preserving retired battery sorting for profitable direct recycling via federated machine learning},
author = {Tao, Shengyu and Liu, Haizhou and Sun, Chongbo and Ji, Haocheng and Ji, Guanjun and Han, Zhiyuan and Gao, Runhua and Ma, Jun and Ma, Ruifei and Chen, Yuou and Fu, Shiyi and Wang, Yu and Sun, Yaojie and Rong, Yu and Zhang, Xuan and Zhou, Guangmin and Sun, Hongbin},
year = {2023},
month = dec,
day = {05},
journal = {Nature Communications},
volume = {14},
number = {1},
pages = {8032},
doi = {10.1038/s41467-023-43883-y},
issn = {2041-1723},
url = {https://doi.org/10.1038/s41467-023-43883-y},
13. BRIEF BIOINFORM
scMHNN: a novel hypergraph neural network for integrative analysis of single-cell epigenomic, transcriptomic and proteomic data
Wei Li, Bin Xiang, Fan Yang, Yu Rong, Yanbin Yin, Jianhua Yao, and 1 more author
Briefings in Bioinformatics, Nov 2023
Technological advances have now made it possible to simultaneously profile the changes of epigenomic, transcriptomic and proteomic at the single cell level, allowing a more unified view of
cellular phenotypes and heterogeneities. However, current computational tools for single-cell multi-omics data integration are mainly tailored for bi-modality data, so new tools are urgently
needed to integrate tri-modality data with complex associations. To this end, we develop scMHNN to integrate single-cell multi-omics data based on hypergraph neural network. After modeling the
complex data associations among various modalities, scMHNN performs message passing process on the multi-omics hypergraph, which can capture the high-order data relationships and integrate the
multiple heterogeneous features. Followingly, scMHNN learns discriminative cell representation via a dual-contrastive loss in self-supervised manner. Based on the pretrained hypergraph encoder,
we further introduce the pre-training and fine-tuning paradigm, which allows more accurate cell-type annotation with only a small number of labeled cells as reference. Benchmarking results on
real and simulated single-cell tri-modality datasets indicate that scMHNN outperforms other competing methods on both cell clustering and cell-type annotation tasks. In addition, we also
demonstrate scMHNN facilitates various downstream tasks, such as cell marker detection and enrichment analysis.
title = {{scMHNN: a novel hypergraph neural network for integrative analysis of single-cell epigenomic, transcriptomic and proteomic data}},
author = {Li, Wei and Xiang, Bin and Yang, Fan and Rong, Yu and Yin, Yanbin and Yao, Jianhua and Zhang, Han},
year = {2023},
month = nov,
journal = {Briefings in Bioinformatics},
volume = {24},
number = {6},
pages = {bbad391},
doi = {10.1093/bib/bbad391},
issn = {1477-4054},
url = {https://doi.org/10.1093/bib/bbad391},
eprint = {https://academic.oup.com/bib/article-pdf/24/6/bbad391/52778081/bbad391.pdf},
14. KDD
Privacy Matters: Vertical Federated Linear Contextual Bandits for Privacy Protected Recommendation
Zeyu Cao, Zhipeng Liang, Bingzhe Wu, Shu Zhang, Hangyu Li, Ouyang Wen, and 2 more authors
In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2023
Recent awareness of privacy protection and compliance requirement resulted in a controversial view of recommendation system due to personal data usage. Therefore, privacy-protected recommendation
emerges as a novel research direction. In this paper, we first formulate this problem as a vertical federated learning problem, i.e., features are vertically distributed over different
departments. We study a contextual bandit learning problem for recommendation in the vertical federated setting. To this end, we carefully design a customized encryption scheme named orthogonal
matrix-based mask mechanism (O3M). O3M mechanism, a tailored component for contextual bandits by carefully exploiting their shared structure, can ensure privacy protection while avoiding
expensive conventional cryptographic techniques. We further apply the mechanism to two commonly-used bandit algorithms, LinUCB and LinTS, and instantiate two practical protocols for online
recommendation. The proposed protocols can perfectly recover the service quality of centralized bandit algorithms while achieving a satisfactory runtime efficiency, which is theoretically proved
and analysed in this paper. By conducting extensive experiments on both synthetic and real-world datasets, we show the superiority of the proposed method in terms of privacy protection and
recommendation performance.
title = {Privacy Matters: Vertical Federated Linear Contextual Bandits for Privacy Protected Recommendation},
author = {Cao, Zeyu and Liang, Zhipeng and Wu, Bingzhe and Zhang, Shu and Li, Hangyu and Wen, Ouyang and Rong, Yu and Zhao, Peilin},
year = {2023},
booktitle = {Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining},
location = {Long Beach, CA, USA},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
series = {KDD '23},
pages = {154–166},
doi = {10.1145/3580305.3599475},
isbn = {9798400701030},
url = {https://doi.org/10.1145/3580305.3599475},
numpages = {13},
keywords = {vertical federated learning, privacy- preserving protocols, linear contextual bandits},
15. AAAI
Energy-Motivated Equivariant Pretraining for 3D Molecular Graphs
Rui Jiao, Jiaqi Han, Wenbing Huang, Yu Rong, and Yang Liu
Proceedings of the AAAI Conference on Artificial Intelligence, Jun 2023
Pretraining molecular representation models without labels is fundamental to various applications. Conventional methods mainly process 2D molecular graphs and focus solely on 2D tasks, making
their pretrained models incapable of characterizing 3D geometry and thus defective for downstream 3D tasks. In this work, we tackle 3D molecular pretraining in a complete and novel sense. In
particular, we first propose to adopt an equivariant energy-based model as the backbone for pretraining, which enjoys the merits of fulfilling the symmetry of 3D space. Then we develop a
node-level pretraining loss for force prediction, where we further exploit the Riemann-Gaussian distribution to ensure the loss to be E(3)-invariant, enabling more robustness. Moreover, a
graph-level noise scale prediction task is also leveraged to further promote the eventual performance. We evaluate our model pretrained from a large-scale 3D dataset GEOM-QM9 on two challenging
3D benchmarks: MD17 and QM9. Experimental results demonstrate the efficacy of our method against current state-of-the-art pretraining approaches, and verify the validity of our design for each
proposed component. Code is available at https://github.com/jiaor17/3D-EMGP.
title = {Energy-Motivated Equivariant Pretraining for 3D Molecular Graphs},
author = {Jiao, Rui and Han, Jiaqi and Huang, Wenbing and Rong, Yu and Liu, Yang},
year = {2023},
month = jun,
journal = {Proceedings of the AAAI Conference on Artificial Intelligence},
volume = {37},
number = {7},
pages = {8096--8104},
doi = {10.1609/aaai.v37i7.25978},
url = {https://ojs.aaai.org/index.php/AAAI/article/view/25978},
16. CIKM
Geometric Graph Learning for Protein Mutation Effect Prediction
Kangfei Zhao, Yu Rong, Biaobin Jiang, Jianheng Tang, Hengtong Zhang, Jeffrey Xu Yu, and 1 more author
In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, 2023
Proteins govern a wide range of biological systems. Evaluating the changes in protein properties upon protein mutation is a fundamental application of protein design, where modeling the 3D
protein structure is a principal task for AI-driven computational approaches. Existing deep learning (DL) approaches represent the protein structure as a 3D geometric graph and simplify the graph
modeling to different degrees, thereby failing to capture the low-level atom patterns and high-level amino acid patterns simultaneously. In addition, limited training samples with ground truth
labels and protein structures further restrict the effectiveness of DL approaches. In this paper, we propose a new graph learning framework, Hierarchical Graph Invariant Network (HGIN), a
fine-grained and data-efficient graph neural encoder for encoding protein structures and predicting the mutation effect on protein properties. For fine-grained modeling, HGIN hierarchically
models the low-level interactions of atoms and the high-level interactions of amino acid residues by Graph Neural Networks. For data efficiency, HGIN preserves the invariant encoding for atom
permutation and coordinate transformation, which is an intrinsic inductive bias of property prediction that bypasses data augmentations. We integrate HGIN into a Siamese network to predict the
quantitative effect on protein properties upon mutations. Our approach outperforms 9 state-of-the-art approaches on 3 protein datasets. More inspiringly, when predicting the neutralizing ability
of human antibodies against COVID-19 mutant viruses, HGIN achieves an absolute improvement of 0.23 regarding the Spearman coefficient.
title = {Geometric Graph Learning for Protein Mutation Effect Prediction},
author = {Zhao, Kangfei and Rong, Yu and Jiang, Biaobin and Tang, Jianheng and Zhang, Hengtong and Yu, Jeffrey Xu and Zhao, Peilin},
year = {2023},
booktitle = {Proceedings of the 32nd ACM International Conference on Information and Knowledge Management},
location = {Birmingham, United Kingdom},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
series = {CIKM '23},
pages = {3412–3422},
doi = {10.1145/3583780.3614893},
isbn = {9798400701245},
url = {https://doi.org/10.1145/3583780.3614893},
numpages = {11},
keywords = {graph neural network, geometric graph learning},
17. TKDE
Finding Critical Users in Social Communities via Graph Convolutions
Kangfei Zhao, Zhiwei Zhang, Yu Rong, Jeffrey Xu Yu, and Junzhou Huang
IEEE Transactions on Knowledge and Data Engineering, 2023
Finding critical users whose existence keeps a social community cohesive and large is an important issue in social networks. In the literature, such criticalness of a user is measured by the
number of followers who will leave the community together when the user leaves. By taking a social community as a k -core, which can be computed in linear time, the problem of finding critical
users is to find a set of nodes, U , with a user-given size b in a k -core community that maximizes the number of nodes (followers) to be deleted from the k -core when all nodes in U are deleted.
This problem is known to be NP-hard. In the literature, the state-of-the-art approach, a greedy algorithm is proposed with no guarantee on the set of nodes U found, since there does not exist a
submodular function the greedy algorithm can use to get a better answer iteratively. Furthermore, the greedy algorithm designed is to handle k -core in any social networks such that it does not
consider the structural complexity of a given single graph and cannot get the global optimal by the local optimal found in iterations. In this paper, we propose a novel learning-based approach.
Distinguished from traditional experience-based heuristics, we propose a neural network model, called Self-attentive Core Graph Convolution Network ( SCGCN ), to capture the hidden structure of
the criticalness among node combinations that break the engagement of a specific social community. Supervised by sampling node combinations, SCGCN has the ability to inference the criticalness of
unseen combinations of nodes. To further reduce the sampling and inference space, we propose a deterministic strategy to prune unpromising nodes on the graph. Our experiments conducted on many
real-world graphs show that SCGCN significantly improves the quality of the solution compared with the state-of-the-art greedy algorithm.
title = {Finding Critical Users in Social Communities via Graph Convolutions},
author = {Zhao, Kangfei and Zhang, Zhiwei and Rong, Yu and Yu, Jeffrey Xu and Huang, Junzhou},
year = {2023},
journal = {IEEE Transactions on Knowledge and Data Engineering},
volume = {35},
number = {1},
pages = {456--468},
doi = {10.1109/TKDE.2021.3089763},
18. NeurIPS
Equivariant Spatio-Temporal Attentive Graph Networks to Simulate Physical Dynamics
Liming Wu, Zhichao Hou, Jirui Yuan, Yu Rong, and Wenbing Huang
In Advances in Neural Information Processing Systems, 2023
title = {Equivariant Spatio-Temporal Attentive Graph Networks to Simulate Physical Dynamics},
author = {Wu, Liming and Hou, Zhichao and Yuan, Jirui and Rong, Yu and Huang, Wenbing},
year = {2023},
booktitle = {Advances in Neural Information Processing Systems},
publisher = {Curran Associates, Inc.},
volume = {36},
pages = {45360--45380},
url = {https://proceedings.neurips.cc/paper_files/paper/2023/file/8e2a75e0c7b579a6cf176dc0858cde55-Paper-Conference.pdf},
editor = {Oh, A. and Naumann, T. and Globerson, A. and Saenko, K. and Hardt, M. and Levine, S.},
1. Bioinformatics
Cross-dependent graph neural networks for molecular property prediction
Hehuan Ma, Yatao Bian, Yu Rong, Wenbing Huang, Tingyang Xu, Weiyang Xie, and 2 more authors
Bioinformatics, Jan 2022
The crux of molecular property prediction is to generate meaningful representations of the molecules. One promising route is to exploit the molecular graph structure through graph neural networks
(GNNs). Both atoms and bonds significantly affect the chemical properties of a molecule, so an expressive model ought to exploit both node (atom) and edge (bond) information simultaneously.
Inspired by this observation, we explore the multi-view modeling with GNN (MVGNN) to form a novel paralleled framework, which considers both atoms and bonds equally important when learning
molecular representations. In specific, one view is atom-central and the other view is bond-central, then the two views are circulated via specifically designed components to enable more accurate
predictions. To further enhance the expressive power of MVGNN, we propose a cross-dependent message-passing scheme to enhance information communication of different views. The overall framework
is termed as CD-MVGNN.We theoretically justify the expressiveness of the proposed model in terms of distinguishing non-isomorphism graphs. Extensive experiments demonstrate that CD-MVGNN achieves
remarkably superior performance over the state-of-the-art models on various challenging benchmarks. Meanwhile, visualization results of the node importance are consistent with prior knowledge,
which confirms the interpretability power of CD-MVGNN.The code and data underlying this work are available in GitHub at https://github.com/uta-smile/CD-MVGNN.Supplementary data are available at
Bioinformatics online.
title = {{Cross-dependent graph neural networks for molecular property prediction}},
author = {Ma, Hehuan and Bian, Yatao and Rong, Yu and Huang, Wenbing and Xu, Tingyang and Xie, Weiyang and Ye, Geyan and Huang, Junzhou},
year = {2022},
month = jan,
journal = {Bioinformatics},
volume = {38},
number = {7},
pages = {2003--2009},
doi = {10.1093/bioinformatics/btac039},
issn = {1367-4803},
url = {https://doi.org/10.1093/bioinformatics/btac039},
eprint = {https://academic.oup.com/bioinformatics/article-pdf/38/7/2003/49009479/btac039.pdf},
2. ICML
Local Augmentation for Graph Neural Networks
Songtao Liu, Rex Ying, Hanze Dong, Lanqing Li, Tingyang Xu, Yu Rong, and 3 more authors
In Proceedings of the 39th International Conference on Machine Learning, 17–23 jul 2022
Graph Neural Networks (GNNs) have achieved remarkable performance on graph-based tasks. The key idea for GNNs is to obtain informative representation through aggregating information from local
neighborhoods. However, it remains an open question whether the neighborhood information is adequately aggregated for learning representations of nodes with few neighbors. To address this, we
propose a simple and efficient data augmentation strategy, local augmentation, to learn the distribution of the node representations of the neighbors conditioned on the central node’s
representation and enhance GNN’s expressive power with generated features. Local augmentation is a general framework that can be applied to any GNN model in a plug-and-play manner. It samples
feature vectors associated with each node from the learned conditional distribution as additional input for the backbone model at each training iteration. Extensive experiments and analyses show
that local augmentation consistently yields performance improvement when applied to various GNN architectures across a diverse set of benchmarks. For example, experiments show that plugging in
local augmentation to GCN and GAT improves by an average of 3.4% and 1.6% in terms of test accuracy on Cora, Citeseer, and Pubmed. Besides, our experimental results on large graphs (OGB) show
that our model consistently improves performance over backbones. Code is available at https://github.com/SongtaoLiu0823/LAGNN.
title = {Local Augmentation for Graph Neural Networks},
author = {Liu, Songtao and Ying, Rex and Dong, Hanze and Li, Lanqing and Xu, Tingyang and Rong, Yu and Zhao, Peilin and Huang, Junzhou and Wu, Dinghao},
year = {2022},
month = {17--23 Jul},
booktitle = {Proceedings of the 39th International Conference on Machine Learning},
publisher = {PMLR},
series = {Proceedings of Machine Learning Research},
volume = {162},
pages = {14054--14072},
url = {https://proceedings.mlr.press/v162/liu22s.html},
editor = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
3. arXiv
Transformer for Graphs: An Overview from Architecture Perspective
Erxue Min, Runfa Chen, Yatao Bian, Tingyang Xu, Kangfei Zhao, Wenbing Huang, and 4 more authors
Recently, Transformer model, which has achieved great success in many artificial intelligence fields, has demonstrated its great potential in modeling graph-structured data. Till now, a great
variety of Transformers has been proposed to adapt to the graph-structured data. However, a comprehensive literature review and systematical evaluation of these Transformer variants for graphs
are still unavailable. It’s imperative to sort out the existing Transformer models for graphs and systematically investigate their effectiveness on various graph tasks. In this survey, we provide
a comprehensive review of various Graph Transformer models from the architectural design perspective. We first disassemble the existing models and conclude three typical ways to incorporate the
graph information into the vanilla Transformer: 1) GNNs as Auxiliary Modules, 2) Improved Positional Embedding from Graphs, and 3) Improved Attention Matrix from Graphs. Furthermore, we implement
the representative components in three groups and conduct a comprehensive comparison on various kinds of famous graph data benchmarks to investigate the real performance gain of each component.
Our experiments confirm the benefits of current graph-specific modules on Transformer and reveal their advantages on different kinds of graph tasks.
title = {Transformer for Graphs: An Overview from Architecture Perspective},
author = {Min, Erxue and Chen, Runfa and Bian, Yatao and Xu, Tingyang and Zhao, Kangfei and Huang, Wenbing and Zhao, Peilin and Huang, Junzhou and Ananiadou, Sophia and Rong, Yu},
year = {2022},
eprint = {2202.08455},
archiveprefix = {arXiv},
primaryclass = {cs.LG},
4. TPAMI
Graph Convolutional Module for Temporal Action Localization in Videos
Runhao Zeng, Wenbing Huang, Mingkui Tan, Yu Rong, Peilin Zhao, Junzhou Huang, and 1 more author
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022
Temporal action localization, which requires a machine to recognize the location as well as the category of action instances in videos, has long been researched in computer vision. The main
challenge of temporal action localization lies in that videos are usually long and untrimmed with diverse action contents involved. Existing state-of-the-art action localization methods divide
each video into multiple action units (i.e., proposals in two-stage methods and segments in one-stage methods) and then perform action recognition/regression on each of them individually, without
explicitly exploiting their relations during learning. In this paper, we claim that the relations between action units play an important role in action localization, and a more powerful action
detector should not only capture the local content of each action unit but also allow a wider field of view on the context related to it. To this end, we propose a general graph convolutional
module (GCM) that can be easily plugged into existing action localization methods, including two-stage and one-stage paradigms. To be specific, we first construct a graph, where each action unit
is represented as a node and their relations between two action units as an edge. Here, we use two types of relations, one for capturing the temporal connections between different action units,
and the other one for characterizing their semantic relationship. Particularly for the temporal connections in two-stage methods, we further explore two different kinds of edges, one connecting
the overlapping action units and the other one connecting surrounding but disjointed units. Upon the graph we built, we then apply graph convolutional networks (GCNs) to model the relations among
different action units, which is able to learn more informative representations to enhance action localization. Experimental results show that our GCM consistently improves the performance of
existing action localization methods, including two-stage methods (e.g., CBR [15] and R-C3D [47]) and one-stage methods (e.g., D-SSAD [22]), verifying the generality and effectiveness of our GCM.
Moreover, with the aid of GCM, our approach significantly outperforms the state-of-the-art on THUMOS14 (50.9 percent versus 42.8 percent). Augmentation experiments on ActivityNet also verify the
efficacy of modeling the relationships between action units. The source code and the pre-trained models are available at https://github.com/Alvin-Zeng/GCM .
title = {Graph Convolutional Module for Temporal Action Localization in Videos},
author = {Zeng, Runhao and Huang, Wenbing and Tan, Mingkui and Rong, Yu and Zhao, Peilin and Huang, Junzhou and Gan, Chuang},
year = {2022},
journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
volume = {44},
number = {10},
pages = {6209--6223},
doi = {10.1109/TPAMI.2021.3090167},
5. arXiv
Geometrically Equivariant Graph Neural Networks: A Survey
Jiaqi Han, Yu Rong, Tingyang Xu, and Wenbing Huang
Many scientific problems require to process data in the form of geometric graphs. Unlike generic graph data, geometric graphs exhibit symmetries of translations, rotations, and/or reflections.
Researchers have leveraged such inductive bias and developed geometrically equivariant Graph Neural Networks (GNNs) to better characterize the geometry and topology of geometric graphs. Despite
fruitful achievements, it still lacks a survey to depict how equivariant GNNs are progressed, which in turn hinders the further development of equivariant GNNs. To this end, based on the
necessary but concise mathematical preliminaries, we analyze and classify existing methods into three groups regarding how the message passing and aggregation in GNNs are represented. We also
summarize the benchmarks as well as the related datasets to facilitate later researches for methodology development and experimental evaluation. The prospect for future potential directions is
also provided.
title = {Geometrically Equivariant Graph Neural Networks: A Survey},
author = {Han, Jiaqi and Rong, Yu and Xu, Tingyang and Huang, Wenbing},
year = {2022},
eprint = {2202.07230},
archiveprefix = {arXiv},
primaryclass = {cs.LG},
6. ICLR
Equivariant Graph Mechanics Networks with Constraints
Wenbing Huang, Jiaqi Han, Yu Rong, Tingyang Xu, Fuchun Sun, and Junzhou Huang
In The Tenth International Conference on Learning Representations, ICLR 2022, 2022
Learning to reason about relations and dynamics over multiple interacting objects is a challenging topic in machine learning. The challenges mainly stem from that the interacting systems are
exponentially-compositional, symmetrical, and commonly geometrically-constrained. Current methods, particularly the ones based on equivariant Graph Neural Networks (GNNs), have targeted on the
first two challenges but remain immature for constrained systems. In this paper, we propose Graph Mechanics Network (GMN) which is combinatorially efficient, equivariant and constraint-aware. The
core of GMN is that it represents, by generalized coordinates, the forward kinematics information (positions and velocities) of a structural object. In this manner, the geometrical constraints
are implicitly and naturally encoded in the forward kinematics. Moreover, to allow equivariant message passing in GMN, we have developed a general form of orthogonality-equivariant functions,
given that the dynamics of constrained systems are more complicated than the unconstrained counterparts. Theoretically, the proposed equivariant formulation is proved to be universally expressive
under certain conditions. Extensive experiments support the advantages of GMN compared to the state-of-the-art GNNs in terms of prediction accuracy, constraint satisfaction and data efficiency on
the simulated systems consisting of particles, sticks and hinges, as well as two real-world datasets for molecular dynamics prediction and human motion capture.
title = {Equivariant Graph Mechanics Networks with Constraints},
author = {Huang, Wenbing and Han, Jiaqi and Rong, Yu and Xu, Tingyang and Sun, Fuchun and Huang, Junzhou},
year = {2022},
booktitle = {The Tenth International Conference on Learning Representations, {ICLR} 2022},
publisher = {OpenReview.net},
url = {https://openreview.net/forum?id=SHbhHHfePhP},
timestamp = {Sat, 20 Aug 2022 01:15:42 +0200},
biburl = {https://dblp.org/rec/conf/iclr/0001HRX0H22.bib},
bibsource = {dblp computer science bibliography, https://dblp.org},
7. TheWebConf
Divide-and-Conquer: Post-User Interaction Network for Fake News Detection on Social Media
Erxue Min, Yu Rong, Yatao Bian, Tingyang Xu, Peilin Zhao, Junzhou Huang, and 1 more author
In Proceedings of the ACM Web Conference 2022, 2022
Fake News detection has attracted much attention in recent years. Social context based detection methods attempt to model the spreading patterns of fake news by utilizing the collective wisdom
from users on social media. This task is challenging for three reasons: (1) There are multiple types of entities and relations in social context, requiring methods to effectively model the
heterogeneity. (2) The emergence of news in novel topics in social media causes distribution shifts, which can significantly degrade the performance of fake news detectors. (3) Existing fake news
datasets usually lack of great scale, topic diversity and user social relations, impeding the development of this field. To solve these problems, we formulate social context based fake news
detection as a heterogeneous graph classification problem, and propose a fake news detection model named Post-User Interaction Network (PSIN), which adopts a divide-and-conquer strategy to model
the post-post, user-user and post-user interactions in social context effectively while maintaining their intrinsic characteristics. Moreover,we adopt an adversarial topic discriminator for
topic-agnostic feature learning, in order to improve the generalizability of our method for new-emerging topics. Furthermore, we curate a new dataset for fake news detection, which contains over
27,155 news from 5 topics, 5 million posts, 2 million users and their induced social graph with 0.2 billion edges. It has been published on https://github.com/qwerfdsaplking/MC-Fake. Extensive
experiments illustrate that our method outperforms SOTA baselines in both in-topic and out-of-topic settings.
title = {Divide-and-Conquer: Post-User Interaction Network for Fake News Detection on Social Media},
author = {Min, Erxue and Rong, Yu and Bian, Yatao and Xu, Tingyang and Zhao, Peilin and Huang, Junzhou and Ananiadou, Sophia},
year = {2022},
booktitle = {Proceedings of the ACM Web Conference 2022},
location = {Virtual Event, Lyon, France},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
series = {WWW '22},
pages = {1148–1158},
doi = {10.1145/3485447.3512163},
isbn = {9781450390965},
url = {https://doi.org/10.1145/3485447.3512163},
numpages = {11},
keywords = {Graph Neural Network, Fake News Detection, Social Media},
8. ICML
Frustratingly Easy Transferability Estimation
Long-Kai Huang, Junzhou Huang, Yu Rong, Qiang Yang, and Ying Wei
In Proceedings of the 39th International Conference on Machine Learning, Jul 2022
Transferability estimation has been an essential tool in selecting a pre-trained model and the layers in it for transfer learning, to transfer, so as to maximize the performance on a target task
and prevent negative transfer. Existing estimation algorithms either require intensive training on target tasks or have difficulties in evaluating the transferability between layers. To this end,
we propose a simple, efficient, and effective transferability measure named TransRate. Through a single pass over examples of a target task, TransRate measures the transferability as the mutual
information between features of target examples extracted by a pre-trained model and their labels. We overcome the challenge of efficient mutual information estimation by resorting to coding rate
that serves as an effective alternative to entropy. From the perspective of feature representation, the resulting TransRate evaluates both completeness (whether features contain sufficient
information of a target task) and compactness (whether features of each class are compact enough for good generalization) of pre-trained features. Theoretically, we have analyzed the close
connection of TransRate to the performance after transfer learning. Despite its extraordinary simplicity in 10 lines of codes, TransRate performs remarkably well in extensive evaluations on 35
pre-trained models and 16 downstream tasks.
title = {Frustratingly Easy Transferability Estimation},
author = {Huang, Long-Kai and Huang, Junzhou and Rong, Yu and Yang, Qiang and Wei, Ying},
year = {2022},
month = jul,
booktitle = {Proceedings of the 39th International Conference on Machine Learning},
publisher = {PMLR},
series = {Proceedings of Machine Learning Research},
volume = {162},
pages = {9201--9225},
url = {https://proceedings.mlr.press/v162/huang22d.html},
editor = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
9. VLDB
Query Driven-Graph Neural Networks for Community Search: From Non-Attributed, Attributed, to Interactive Attributed
Yuli Jiang, Yu Rong, Hong Cheng, Xin Huang, Kangfei Zhao, and Junzhou Huang
Proc. VLDB Endow., Feb 2022
Given one or more query vertices, Community Search (CS) aims to find densely intra-connected and loosely inter-connected structures containing query vertices. Attributed Community Search (ACS), a
related problem, is more challenging since it finds communities with both cohesive structures and homogeneous vertex attributes. However, most methods for the CS task rely on inflexible
pre-defined structures and studies for ACS treat each attribute independently. Moreover, the most popular ACS strategies decompose ACS into two separate sub-problems, i.e., the CS task and
subsequent attribute filtering task. However, in real-world graphs, the community structure and the vertex attributes are closely correlated to each other. This correlation is vital for the ACS
problem. In this vein, we argue that the separation strategy cannot fully capture the correlation between structure and attributes simultaneously and it would compromise the final performance.In
this paper, we propose Graph Neural Network (GNN) models for both CS and ACS problems, i.e., Query Driven-GNN (QD-GNN) and Attributed Query Driven-GNN (AQD-GNN). In QD-GNN, we combine the local
query-dependent structure and global graph embedding. In order to extend QD-GNN to handle attributes, we model vertex attributes as a bipartite graph and capture the relation between attributes
by constructing GNNs on this bipartite graph. With a Feature Fusion operator, AQD-GNN processes the structure and attribute simultaneously and predicts communities according to each attributed
query. Experiments on real-world graphs with ground-truth communities demonstrate that the proposed models outperform existing CS and ACS algorithms in terms of both efficiency and effectiveness.
More recently, an interactive setting for CS is proposed that allows users to adjust the predicted communities. We further verify our approaches under the interactive setting and extend to the
attributed context. Our method achieves 2.37% and 6.29% improvements in F1-score than the state-of-the-art model without attributes and with attributes respectively.
title = {Query Driven-Graph Neural Networks for Community Search: From Non-Attributed, Attributed, to Interactive Attributed},
author = {Jiang, Yuli and Rong, Yu and Cheng, Hong and Huang, Xin and Zhao, Kangfei and Huang, Junzhou},
year = {2022},
month = feb,
journal = {Proc. VLDB Endow.},
publisher = {VLDB Endowment},
volume = {15},
number = {6},
pages = {1243–1255},
doi = {10.14778/3514061.3514070},
issn = {2150-8097},
url = {https://doi.org/10.14778/3514061.3514070},
issue_date = {February 2022},
numpages = {13},
10. IJCAI
Fine-Tuning Graph Neural Networks via Graph Topology Induced Optimal Transport
Jiying Zhang, Xi Xiao, Long-Kai Huang, Yu Rong, and Yatao Bian
In Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, Jul 2022
Recently, the pretrain-finetuning paradigm has attracted tons of attention in graph learning community due to its power of alleviating the lack of labels problem in many real-world applications.
Current studies use existing techniques, such as weight constraint, representation constraint, which are derived from images or text data, to transfer the invariant knowledge from the pre-train
stage to fine-tuning stage. However, these methods failed to preserve invariances from graph structure and Graph Neural Network (GNN) style models. In this paper, we present a novel optimal
transport-based fine-tuning framework called GTOT-Tuning, namely, Graph Topology induced Optimal Transport fine-Tuning, for GNN style backbones. GTOT-Tuning is required to utilize the property of
graph data to enhance the preservation of representation produced by fine-tuned networks. Toward this goal, we formulate graph local knowledge transfer as an Optimal Transport (OT) problem with a
structural prior and construct the GTOT regularizer to constrain the fine-tuned model behaviors. By using the adjacency relationship amongst nodes, the GTOT regularizer achieves node-level
optimal transport procedures and reduces redundant transport procedures, resulting in efficient knowledge transfer from the pre-trained models. We evaluate GTOT-Tuning on eight downstream tasks
with various GNN backbones and demonstrate that it achieves state-of-the-art fine-tuning performance for GNNs.
title = {Fine-Tuning Graph Neural Networks via Graph Topology Induced Optimal Transport},
author = {Zhang, Jiying and Xiao, Xi and Huang, Long-Kai and Rong, Yu and Bian, Yatao},
year = {2022},
month = jul,
booktitle = {Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence},
publisher = {International Joint Conferences on Artificial Intelligence Organization},
pages = {3730--3736},
doi = {10.24963/ijcai.2022/518},
url = {https://doi.org/10.24963/ijcai.2022/518},
note = {Main Track},
editor = {Raedt, Lud De},
11. SIGIR
Neighbour Interaction Based Click-Through Rate Prediction via Graph-Masked Transformer
Erxue Min, Yu Rong, Tingyang Xu, Yatao Bian, Da Luo, Kangyi Lin, and 3 more authors
In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2022
Click-Through Rate (CTR) prediction, which aims to estimate the probability that a user will click an item, is an essential component of online advertising. Existing methods mainly attempt to
mine user interests from users’ historical behaviours, which contain users’ directly interacted items. Although these methods have made great progress, they are often limited by the recommender
system’s direct exposure and inactive interactions, and thus fail to mine all potential user interests. To tackle these problems, we propose Neighbor-Interaction based CTR prediction (NI-CTR),
which considers this task under a Heterogeneous Information Network (HIN) setting. In short, Neighbor-Interaction based CTR prediction involves the local neighborhood of the target user-item pair
in the HIN to predict their linkage. In order to guide the representation learning of the local neighbourhood, we further consider different kinds of interactions among the local neighborhood
nodes from both explicit and implicit perspective, and propose a novel Graph-Masked Transformer (GMT) to effectively incorporates these kinds of interactions to produce highly representative
embeddings for the target user-item pair. Moreover, in order to improve model robustness against neighbour sampling, we enforce a consistency regularization loss over the neighbourhood embedding.
We conduct extensive experiments on two real-world datasets with millions of instances and the experimental results show that our proposed method outperforms state-of-the-art CTR models
significantly. Meanwhile, the comprehensive ablation studies verify the effectiveness of every component of our model. Furthermore, we have deployed this framework on the WeChat Official Account
Platform with billions of users. The online A/B tests demonstrate an average CTR improvement of 21.9% against all online baselines.
title = {Neighbour Interaction Based Click-Through Rate Prediction via Graph-Masked Transformer},
author = {Min, Erxue and Rong, Yu and Xu, Tingyang and Bian, Yatao and Luo, Da and Lin, Kangyi and Huang, Junzhou and Ananiadou, Sophia and Zhao, Peilin},
year = {2022},
booktitle = {Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval},
location = {Madrid, Spain},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
series = {SIGIR '22},
pages = {353–362},
doi = {10.1145/3477495.3532031},
isbn = {9781450387323},
url = {https://doi.org/10.1145/3477495.3532031},
numpages = {10},
keywords = {click-through rate prediction, trans-former, graph neural network, neighbourhood interaction},
12. Pattern Recognition
Structure-aware conditional variational auto-encoder for constrained molecule optimization
Junchi Yu, Tingyang Xu, Yu Rong, Junzhou Huang, and Ran He
Pattern Recognition, 2022
The goal of molecule optimization is to optimize molecular properties by modifying molecule structures. Conditional generative models provide a promising way to transfer the input molecules to
the ones with better property. However, molecular properties are highly sensitive to small changes in molecular structures. This leads to an interesting thought that we can improve the property
of molecules with limited modification in structure. In this paper, we propose a structure-aware conditional Variational Auto-Encoder, namely SCVAE, which exploits the topology of molecules as
structure condition and optimizes the molecular properties with constrained structural modification. SCVAE leverages graph alignment of two-level molecule structures in an unsupervised manner to
bind the structure conditions between two molecules. Then, this structure condition facilitates the molecule optimization with limited structural modification, namely, constrained molecule
optimization, under a novel variational auto-encoder framework. Extensive experimental evaluations demonstrate that structure-aware CVAE generates new molecules with high similarity to the
original ones and better molecular properties.
title = {Structure-aware conditional variational auto-encoder for constrained molecule optimization},
author = {Yu, Junchi and Xu, Tingyang and Rong, Yu and Huang, Junzhou and He, Ran},
year = {2022},
journal = {Pattern Recognition},
volume = {126},
pages = {108581},
doi = {https://doi.org/10.1016/j.patcog.2022.108581},
issn = {0031-3203},
url = {https://www.sciencedirect.com/science/article/pii/S0031320322000620},
keywords = {Molecule optimization, Conditional generation, Drug discovery},
Diversified Multiscale Graph Learning with Graph Self-Correction
Yuzhao Chen, Yatao Bian, Jiying Zhang, Xi Xiao, Tingyang Xv, and Yu Rong
In Proceedings of Topological, Algebraic, and Geometric Learning Workshops 2022, 2022
Though the multiscale graph learning techniques have enabled advanced feature extraction frameworks, we find that the classic ensemble strategy shows inferior performance while encountering the
high homogeneity of the learnt representation, which is caused by the nature of existing graph pooling methods. To cope with this issue, we propose a diversified multiscale graph learning model
equipped with two core ingredients: a graph self-correction mechanism to generate informative embedded graphs, and a diversity boosting regularizer to achieve a comprehensive characterization of
the input graph. The proposed mechanism compensates the pooled graph with the lost information during the graph pooling process by feeding back the estimated residual graph, which serves as a
plug-in component for popular graph pooling methods. Meanwhile, pooling methods enhanced with the self-correcting procedure encourage the discrepancy of node embeddings, and thus it contributes
to the success of ensemble learning strategy. The proposed regularizer instead enhances the ensemble diversity at the graph-level embeddings by leveraging the interaction among individual
classifiers. Extensive experiments on popular graph classification benchmarks show that the approaches lead to significant improvements over state-of-the-art graph pooling methods, and the
ensemble multiscale graph learning models achieve superior enhancement.
title = {Diversified Multiscale Graph Learning with Graph Self-Correction},
author = {Chen, Yuzhao and Bian, Yatao and Zhang, Jiying and Xiao, Xi and Xv, Tingyang and Rong, Yu},
year = {2022},
booktitle = {Proceedings of Topological, Algebraic, and Geometric Learning Workshops 2022},
publisher = {PMLR},
series = {Proceedings of Machine Learning Research},
volume = {196},
pages = {48--54},
url = {https://proceedings.mlr.press/v196/chen22a.html},
editor = {Cloninger, Alexander and Doster, Timothy and Emerson, Tegan and Kaul, Manohar and Ktena, Ira and Kvinge, Henry and Miolane, Nina and Rieck, Bastian and Tymochko, Sarah and Wolf, Guy},
14. ICLR
Energy-Based Learning for Cooperative Games, with Applications to Valuation Problems in Machine Learning
Yatao Bian, Yu Rong, Tingyang Xu, Jiaxiang Wu, Andreas Krause, and Junzhou Huang
In the Tenth International Conference on Learning Representations, ICLR, 2022
Valuation problems, such as feature interpretation, data valuation and model valuation for ensembles, become increasingly more important in many machine learning applications. Such problems are
commonly solved by well-known game-theoretic criteria, such as Shapley value or Banzhaf value. In this work, we present a novel energy-based treatment for cooperative games, with a theoretical
justification by the maximum entropy framework. Surprisingly, by conducting variational inference of the energy-based model, we recover various game-theoretic valuation criteria through
conducting one-step fixed point iteration for maximizing the mean-field ELBO objective. This observation also verifies the rationality of existing criteria, as they are all attempting to decouple
the correlations among the players through the mean-field approach. By running fixed point iteration for multiple steps, we achieve a trajectory of the valuations, among which we define the
valuation with the best conceivable decoupling error as the Variational Index. We prove that under uniform initializations, these variational valuations all satisfy a set of game-theoretic
axioms. We experimentally demonstrate that the proposed Variational Index enjoys lower decoupling error and better valuation performance on certain synthetic and real-world valuation problems.
title = {Energy-Based Learning for Cooperative Games, with Applications to Valuation Problems in Machine Learning},
author = {Bian, Yatao and Rong, Yu and Xu, Tingyang and Wu, Jiaxiang and Krause, Andreas and Huang, Junzhou},
year = {2022},
booktitle = {the Tenth International Conference on Learning Representations, {ICLR}},
publisher = {OpenReview.net},
url = {https://openreview.net/forum?id=xLfAgCroImw},
timestamp = {Fri, 11 Nov 2022 14:26:52 +0100},
biburl = {https://dblp.org/rec/conf/iclr/BianRXW0H22.bib},
bibsource = {dblp computer science bibliography, https://dblp.org},
15. BCE
Robust Self-Training Strategy for Various Molecular Biology Prediction Tasks
Hehuan Ma, Feng Jiang, Yu Rong, Yuzhi Guo, and Junzhou Huang
In Proceedings of the 13th ACM International Conference on Bioinformatics, Computational Biology and Health Informatics, 2022
Molecular biology prediction tasks suffer the limited labeled data problem since it normally demands a series of professional experiments to label the target molecule. Self-training is one of the
semi-supervised learning paradigms that utilizes both labeled and unlabeled data. It trains a teacher model on labeled data, and uses it to generate pseudo labels for unlabeled data. The labeled
and pseudo-labeled data are then combined to train a student model. However, the pseudo labels generated from the teacher model are not sufficiently accurate. Thus, we propose a robust
self-training strategy by exploring robust loss function to handle such noisy labels, which is model and task agnostic, and can be easily embedded with any prediction tasks. We have conducted
molecular biology prediction tasks to gradually evaluate the performance of proposed robust self-training strategy. The results demonstrate that the proposed method consistently boosts the
prediction performance, especially for molecular regression tasks, which have gained a 41.5% average improvement.
title = {Robust Self-Training Strategy for Various Molecular Biology Prediction Tasks},
author = {Ma, Hehuan and Jiang, Feng and Rong, Yu and Guo, Yuzhi and Huang, Junzhou},
year = {2022},
booktitle = {Proceedings of the 13th ACM International Conference on Bioinformatics, Computational Biology and Health Informatics},
location = {Northbrook, Illinois},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
series = {BCB '22},
doi = {10.1145/3535508.3545998},
isbn = {9781450393867},
url = {https://doi.org/10.1145/3535508.3545998},
articleno = {34},
numpages = {5},
keywords = {bioinformatics, semi-supervised learning, neural network, molecular biology, self-training, prediction tasks},
16. GNN Book
Graph Neural Networks: Scalability
Hehuan Ma, Yu Rong, and Junzhou Huang
In Graph Neural Networks: Foundations, Frontiers, and Applications, 2022
Over the past decade, Graph Neural Networks have achieved remarkable success in modeling complex graph data. Nowadays, graph data is increasing exponentially in both magnitude and volume, e.g., a
social network can be constituted by billions of users and relationships. Such circumstance leads to a crucial question, how to properly extend the scalability of Graph Neural Networks? There
remain two major challenges while scaling the original implementation of GNN to large graphs. First, most of the GNN models usually compute the entire adjacency matrix and node embeddings of the
graph, which demands a huge memory space. Second, training GNN requires recursively updating each node in the graph, which becomes infeasible and ineffective for large graphs. Current studies
propose to tackle these obstacles mainly from three sampling paradigms: node-wise sampling, which is executed based on the target nodes in the graph; layer-wise sampling, which is implemented on
the convolutional layers; and graph-wise sampling, which constructs sub-graphs for the model inference. In this chapter, we will introduce several representative research accordingly.
title = {Graph Neural Networks: Scalability},
author = {Ma, Hehuan and Rong, Yu and Huang, Junzhou},
year = {2022},
booktitle = {Graph Neural Networks: Foundations, Frontiers, and Applications},
publisher = {Springer Nature Singapore},
address = {Singapore},
pages = {99--119},
doi = {10.1007/978-981-16-6054-2_6},
isbn = {978-981-16-6054-2},
url = {https://doi.org/10.1007/978-981-16-6054-2_6},
editor = {Wu, Lingfei and Cui, Peng and Pei, Jian and Zhao, Liang},
17. BIBM
Integrating Prior Knowledge with Graph Encoder for Gene Regulatory Inference from Single-cell RNA-Seq Data
Jiawei Li, Fan Yang, Fang Wang, Yu Rong, Peilin Zhao, Shizhan Chen, and 3 more authors
In 2022 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), 2022
Inferring gene regulatory networks based on single-cell transcriptomes is critical for systematically understanding cell-specific regulatory networks and discovering drug targets in tumor cells.
Here we show that existing methods mainly perform co-expression analysis and apply the image-based model to deal with the non-euclidean scRNA-seq data, which may not reasonably handle the dropout
problem and not fully take advantage of the validated gene regulatory topology. We propose a graph-based end-to-end deep learning model for GRN inference (GRNInfer) with the help of known
regulatory relations through transductive learning. The robustness and superiority of the model are demonstrated by comparative experiments.
title = {Integrating Prior Knowledge with Graph Encoder for Gene Regulatory Inference from Single-cell RNA-Seq Data},
author = {Li, Jiawei and Yang, Fan and Wang, Fang and Rong, Yu and Zhao, Peilin and Chen, Shizhan and Yao, Jianhua and Tang, Jijun and Guo, Fei},
year = {2022},
booktitle = {2022 IEEE International Conference on Bioinformatics and Biomedicine (BIBM)},
volume = {},
number = {},
pages = {102--107},
doi = {10.1109/BIBM55620.2022.9995287},
18. NeurIPS
Equivariant Graph Hierarchy-Based Neural Networks
Jiaqi Han, Wenbing Huang, Tingyang Xu, and Yu Rong
In Advances in Neural Information Processing Systems, 2022
Equivariant Graph neural Networks (EGNs) are powerful in characterizing the dynamics of multi-body physical systems. Existing EGNs conduct flat message passing, which, yet, is unable to capture
the spatial/dynamical hierarchy for complex systems particularly, limiting substructure discovery and global information fusion. In this paper, we propose Equivariant Hierarchy-based Graph
Networks (EGHNs) which consist of the three key components: generalized Equivariant Matrix Message Passing (EMMP) , E-Pool and E-UnPool. In particular, EMMP is able to improve the expressivity of
conventional equivariant message passing, E-Pool assigns the quantities of the low-level nodes into high-level clusters, while E-UnPool leverages the high-level information to update the dynamics
of the low-level nodes. As their names imply, both E-Pool and E-UnPool are guaranteed to be equivariant to meet physic symmetry. Considerable experimental evaluations verify the effectiveness of
our EGHN on several applications including multi-object dynamics simulation, motion capture, and protein dynamics modeling.
title = {Equivariant Graph Hierarchy-Based Neural Networks},
author = {Han, Jiaqi and Huang, Wenbing and Xu, Tingyang and Rong, Yu},
year = {2022},
booktitle = {Advances in Neural Information Processing Systems},
publisher = {Curran Associates, Inc.},
volume = {35},
pages = {9176--9187},
url = {https://proceedings.neurips.cc/paper_files/paper/2022/file/3bdeb28a531f7af94b56bcdf8ee88f17-Paper-Conference.pdf},
editor = {Koyejo, S. and Mohamed, S. and Agarwal, A. and Belgrave, D. and Cho, K. and Oh, A.},
1. DPML
FedGraphNN: A Federated Learning System and Benchmark for Graph Neural Networks
Chaoyang He, Keshav Balasubramanian, Emir Ceyani, Yu Rong, Peilin Zhao, Junzhou Huang, and 2 more authors
ICLR 2021 Workshop on Distributed and Private Machine Learning (DPML), 2021
Graph Neural Network (GNN) research is rapidly growing thanks to the capacity of GNNs in learning distributed representations from graph-structured data. However, centralizing a massive amount of
real-world graph data for GNN training is prohibitive due to privacy concerns, regulation restrictions, and commercial competitions. Federated learning (FL), a trending distributed learning
paradigm, provides possibilities to solve this challenge while preserving data privacy. Despite recent advances in vision and language domains, there is no suitable platform for the FL of GNNs.
To this end, we introduce FedGraphNN, an open FL benchmark system that can facilitate research on federated GNNs. FedGraphNN is built on a unified formulation of graph FL and contains a wide
range of datasets from different domains, popular GNN models, and FL algorithms, with secure and efficient system support. Particularly for the datasets, we collect, preprocess, and partition 36
datasets from 7 domains, including both publicly available ones and specifically obtained ones such as hERG and Tencent. Our empirical analysis showcases the utility of our benchmark system,
while exposing significant challenges in graph FL: federated GNNs perform worse in most datasets with a non-IID split than centralized GNNs; the GNN model that attains the best result in the
centralized setting may not maintain its advantage in the FL setting. These results imply that more research efforts are needed to unravel the mystery behind federated GNNs. Moreover, our system
performance analysis demonstrates that the FedGraphNN system is computationally efficient and secure to large-scale graphs datasets. We maintain the source code at this https URL.
title = {FedGraphNN: {A} Federated Learning System and Benchmark for Graph Neural Networks},
author = {He, Chaoyang and Balasubramanian, Keshav and Ceyani, Emir and Rong, Yu and Zhao, Peilin and Huang, Junzhou and Annavaram, Murali and Avestimehr, Salman},
year = {2021},
journal = {ICLR 2021 Workshop on Distributed and Private Machine Learning (DPML)},
volume = {abs/2104.07145},
url = {https://arxiv.org/abs/2104.07145},
eprinttype = {arXiv},
eprint = {2104.07145},
timestamp = {Mon, 19 Apr 2021 16:45:47 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2104-07145.bib},
bibsource = {dblp computer science bibliography, https://dblp.org},
2. ICLR
Graph Information Bottleneck for Subgraph Recognition
Junchi Yu, Tingyang Xu, Yu Rong, Yatao Bian, Junzhou Huang, and Ran He
In 9th International Conference on Learning Representations, ICLR 2021, 2021
Given the input graph and its label/property, several key problems of graph learning, such as finding interpretable subgraphs, graph denoising and graph compression, can be attributed to the
fundamental problem of recognizing a subgraph of the original one. This subgraph shall be as informative as possible, yet contains less redundant and noisy structure. This problem setting is
closely related to the well-known information bottleneck (IB) principle, which, however, has less been studied for the irregular graph data and graph neural networks (GNNs). In this paper, we
propose a framework of Graph Information Bottleneck (GIB) for the subgraph recognition problem in deep graph learning. Under this framework, one can recognize the maximally informative yet
compressive subgraph, named IB-subgraph. However, the GIB objective is notoriously hard to optimize, mostly due to the intractability of the mutual information of irregular graph data and the
unstable optimization process. In order to tackle these challenges, we propose: i) a GIB objective based-on a mutual information estimator for the irregular graph data; ii) a bi-level
optimization scheme to maximize the GIB objective; iii) a connectivity loss to stabilize the optimization process. We evaluate the properties of the IB-subgraph in three application scenarios:
improvement of graph classification, graph interpretation and graph denoising. Extensive experiments demonstrate that the information-theoretic IB-subgraph enjoys superior graph properties.
title = {Graph Information Bottleneck for Subgraph Recognition},
author = {Yu, Junchi and Xu, Tingyang and Rong, Yu and Bian, Yatao and Huang, Junzhou and He, Ran},
year = {2021},
booktitle = {9th International Conference on Learning Representations, {ICLR} 2021},
publisher = {OpenReview.net},
url = {https://openreview.net/forum?id=bM4Iqfg8M2k},
timestamp = {Wed, 23 Jun 2021 17:36:39 +0200},
biburl = {https://dblp.org/rec/conf/iclr/YuXRBHH21.bib},
bibsource = {dblp computer science bibliography, https://dblp.org},
3. CIKM
Spectral Graph Attention Network with Fast Eigen-Approximation
Heng Chang, Yu Rong, Tingyang Xu, Wenbing Huang, Somayeh Sojoudi, Junzhou Huang, and 1 more author
In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, 2021
Variants of Graph Neural Networks (GNNs) for representation learning have been proposed recently and achieved fruitful results in various fields. Among them, Graph Attention Network (GAT) first
employs a self-attention strategy to learn attention weights for each edge in the spatial domain. However, learning the attentions over edges can only focus on the local information of graphs and
greatly increases the computational costs. In this paper, we first introduce the attention mechanism in the spectral domain of graphs and present Spectral Graph Attention Network (SpGAT) that
learns representations for different frequency components regarding weighted filters and graph wavelets bases. In this way, SpGAT can better capture global patterns of graphs in an efficient
manner with much fewer learned parameters than that of GAT. Further, to reduce the computational cost of SpGAT brought by the eigen-decomposition, we propose a fast approximation variant
SpGAT-Cheby. We thoroughly evaluate the performance of SpGAT and SpGAT-Cheby in semi-supervised node classification tasks and verify the effectiveness of the learned attentions in the spectral
title = {Spectral Graph Attention Network with Fast Eigen-Approximation},
author = {Chang, Heng and Rong, Yu and Xu, Tingyang and Huang, Wenbing and Sojoudi, Somayeh and Huang, Junzhou and Zhu, Wenwu},
year = {2021},
booktitle = {Proceedings of the 30th ACM International Conference on Information \& Knowledge Management},
location = {Virtual Event, Queensland, Australia},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
series = {CIKM '21},
pages = {2905–2909},
doi = {10.1145/3459637.3482187},
isbn = {9781450384469},
url = {https://doi.org/10.1145/3459637.3482187},
numpages = {5},
keywords = {graph spectral analysis, neural networks, graph representation learning, node classification},
4. Neurocomputing
Molecular graph enhanced transformer for retrosynthesis prediction
Kelong Mao, Xi Xiao, Tingyang Xu, Yu Rong, Junzhou Huang, and Peilin Zhao
Neurocomputing, 2021
With massive possible synthetic routes in chemistry, retrosynthesis prediction is still a challenge for researchers. Recently, retrosynthesis prediction is formulated as a Machine Translation
(MT) task. Namely, since each molecule can be represented as a Simplified Molecular-Input Line-Entry System (SMILES) string, the process of retrosynthesis is analogized to a process of language
translation from the product to reactants. However, the MT models that applied on SMILES data usually ignore the information of natural atomic connections and the topology of molecules. To make
more chemically plausible constrains on the atom representation learning for better performance, in this paper, we propose a Graph Enhanced Transformer (GET) framework, which adopts both the
sequential and graphical information of molecules. Four different GET designs are proposed, which fuse the SMILES representations with atom embeddings learned from our improved Graph Neural
Network (GNN). Empirical results show that our model significantly outperforms the vanilla Transformer model in test accuracy.
title = {Molecular graph enhanced transformer for retrosynthesis prediction},
author = {Mao, Kelong and Xiao, Xi and Xu, Tingyang and Rong, Yu and Huang, Junzhou and Zhao, Peilin},
year = {2021},
journal = {Neurocomputing},
volume = {457},
pages = {193--202},
doi = {https://doi.org/10.1016/j.neucom.2021.06.037},
issn = {0925-2312},
url = {https://www.sciencedirect.com/science/article/pii/S0925231221009413},
keywords = {Retrosynthesis, Molecular pattern, Graph neural network, Transformer},
5. IJCAI
On Self-Distilling Graph Neural Network
Yuzhao Chen, Yatao Bian, Xi Xiao, Yu Rong, Tingyang Xu, and Junzhou Huang
In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, Aug 2021
Recently, the teacher-student knowledge distillation framework has demonstrated its potential in training Graph Neural Networks (GNNs). However, due to the difficulty of training
over-parameterized GNN models, one may not easily obtain a satisfactory teacher model for distillation. Furthermore, the inefficient training process of teacher-student knowledge distillation
also impedes its applications in GNN models. In this paper, we propose the first teacher-free knowledge distillation method for GNNs, termed GNN Self-Distillation (GNN-SD), that serves as a
drop-in replacement of the standard training process. The method is built upon the proposed neighborhood discrepancy rate (NDR), which quantifies the non-smoothness of the embedded graph in an
efficient way. Based on this metric, we propose the adaptive discrepancy retaining (ADR) regularizer to empower the transferability of knowledge that maintains high neighborhood discrepancy
across GNN layers. We also summarize a generic GNN-SD framework that could be exploited to induce other distillation strategies. Experiments further prove the effectiveness and generalization of
our approach, as it brings: 1) state-of-the-art GNN distillation performance with less training cost, 2) consistent and considerable performance enhancement for various popular backbones.
title = {On Self-Distilling Graph Neural Network},
author = {Chen, Yuzhao and Bian, Yatao and Xiao, Xi and Rong, Yu and Xu, Tingyang and Huang, Junzhou},
year = {2021},
month = aug,
booktitle = {Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence},
publisher = {International Joint Conferences on Artificial Intelligence Organization},
pages = {2278--2284},
doi = {10.24963/ijcai.2021/314},
url = {https://doi.org/10.24963/ijcai.2021/314},
note = {Main Track},
editor = {Zhou, Zhi-Hua},
6. AAAI
Hierarchical Graph Capsule Network
Jinyu Yang, Peilin Zhao, Yu Rong, Chaochao Yan, Chunyuan Li, Hehuan Ma, and 1 more author
Proceedings of the AAAI Conference on Artificial Intelligence, May 2021
Graph Neural Networks (GNNs) draw their strength from explicitly modeling the topological information of structured data. However, existing GNNs suffer from limited capability in capturing the
hierarchical graph representation which plays an important role in graph classification. In this paper, we innovatively propose hierarchical graph capsule network (HGCN) that can jointly learn
node embeddings and extract graph hierarchies. Specifically, disentangled graph capsules are established by identifying heterogeneous factors underlying each node, such that their instantiation
parameters represent different properties of the same entity. To learn the hierarchical representation, HGCN characterizes the part-whole relationship between lower-level capsules (part) and
higher-level capsules (whole) by explicitly considering the structure information among the parts. Experimental studies demonstrate the effectiveness of HGCN and the contribution of each
component. Code: https://github.com/uta-smile/HGCN
title = {Hierarchical Graph Capsule Network},
author = {Yang, Jinyu and Zhao, Peilin and Rong, Yu and Yan, Chaochao and Li, Chunyuan and Ma, Hehuan and Huang, Junzhou},
year = {2021},
month = may,
journal = {Proceedings of the AAAI Conference on Artificial Intelligence},
volume = {35},
number = {12},
pages = {10603--10611},
doi = {10.1609/aaai.v35i12.17268},
url = {https://ojs.aaai.org/index.php/AAAI/article/view/17268},
7. TPAMI
Recognizing Predictive Substructures with Subgraph Information Bottleneck
Junchi Yu, Tingyang Xu, Yu Rong, Yatao Bian, Junzhou Huang, and Ran He
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021
The emergence of Graph Convolutional Network (GCN) has greatly boosted the progress of graph learning. However, two disturbing factors, noise and redundancy in graph data, and lack of
interpretation for prediction results, impede further development of GCN. One solution is to recognize a predictive yet compressed subgraph to get rid of the noise and redundancy and obtain the
interpretable part of the graph. This setting of subgraph is similar to the information bottleneck (IB) principle, which is less studied on graph-structured data and GCN. Inspired by the IB
principle, we propose a novel subgraph information bottleneck (SIB) framework to recognize such subgraphs, named IB-subgraph. However, the intractability of mutual information and the discrete
nature of graph data makes the objective of SIB notoriously hard to optimize. To this end, we introduce a bilevel optimization scheme coupled with a mutual information estimator for irregular
graphs. Moreover, we propose a continuous relaxation for subgraph selection with a connectivity loss for stabilization. We further theoretically prove the error bound of our estimation scheme for
mutual information and the noise-invariant nature of IB-subgraph. Extensive experiments on graph learning and large-scale point cloud tasks demonstrate the superior property of IB-subgraph.
title = {Recognizing Predictive Substructures with Subgraph Information Bottleneck},
author = {Yu, Junchi and Xu, Tingyang and Rong, Yu and Bian, Yatao and Huang, Junzhou and He, Ran},
year = {2021},
journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
volume = {},
number = {},
pages = {1--1},
doi = {10.1109/TPAMI.2021.3112205},
8. NeurIPS
Not All Low-Pass Filters are Robust in Graph Convolutional Networks
Heng Chang, Yu Rong, Tingyang Xu, Yatao Bian, Shiji Zhou, Xin Wang, and 2 more authors
In Advances in Neural Information Processing Systems, 2021
Graph Convolutional Networks (GCNs) are promising deep learning approaches in learning representations for graph-structured data. Despite the proliferation of such methods, it is well known that
they are vulnerable to carefully crafted adversarial attacks on the graph structure. In this paper, we first conduct an adversarial vulnerability analysis based on matrix perturbation theory. We
prove that the low- frequency components of the symmetric normalized Laplacian, which is usually used as the convolutional filter in GCNs, could be more robust against structural perturbations
when their eigenvalues fall into a certain robust interval. Our results indicate that not all low-frequency components are robust to adversarial attacks and provide a deeper understanding of the
relationship between graph spectrum and robustness of GCNs. Motivated by the theory, we present GCN-LFR, a general robust co-training paradigm for GCN-based models, that encourages transferring
the robustness of low-frequency components with an auxiliary neural network. To this end, GCN-LFR could enhance the robustness of various kinds of GCN-based models against poisoning structural
attacks in a plug-and-play manner. Extensive experiments across five benchmark datasets and five GCN-based models also confirm that GCN-LFR is resistant to the adversarial attacks without
compromising on performance in the benign situation.
title = {Not All Low-Pass Filters are Robust in Graph Convolutional Networks},
author = {Chang, Heng and Rong, Yu and Xu, Tingyang and Bian, Yatao and Zhou, Shiji and Wang, Xin and Huang, Junzhou and Zhu, Wenwu},
year = {2021},
booktitle = {Advances in Neural Information Processing Systems},
publisher = {Curran Associates, Inc.},
volume = {34},
pages = {25058--25071},
url = {https://proceedings.neurips.cc/paper_files/paper/2021/file/d30960ce77e83d896503d43ba249caf7-Paper.pdf},
editor = {Ranzato, M. and Beygelzimer, A. and Dauphin, Y. and Liang, P.S. and Vaughan, J. Wortman},
9. VLDB
Exploring Robustness of Unsupervised Domain Adaptation in Semantic Segmentation
Jinyu Yang, Chunyuan Li, Weizhi An, Hehuan Ma, Yuzhi Guo, Yu Rong, and 2 more authors
In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Oct 2021
Recent studies imply that deep neural networks are vulnerable to adversarial examples, i.e., inputs with a slight but intentional perturbation are incorrectly classified by the network. Such
vulnerability makes it risky for some security-related applications (e.g., semantic segmentation in autonomous cars) and triggers tremendous concerns on the model reliability. For the first time,
we comprehensively evaluate the robustness of existing UDA methods and propose a robust UDA approach. It is rooted in two observations: i) the robustness of UDA methods in semantic segmentation
remains unexplored, which poses a security concern in this field; and ii) although commonly used self-supervision (e.g., rotation and jigsaw) benefits model robustness in classification and
recognition tasks, they fail to provide the critical supervision signals that are essential in semantic segmentation. These observations motivate us to propose adversarial self-supervision UDA
(or ASSUDA) that maximizes the agreement between clean images and their adversarial examples by a contrastive loss in the output space. Extensive empirical studies on commonly used benchmarks
demonstrate that ASSUDA is resistant to adversarial attacks.
title = {Exploring Robustness of Unsupervised Domain Adaptation in Semantic Segmentation},
author = {Yang, Jinyu and Li, Chunyuan and An, Weizhi and Ma, Hehuan and Guo, Yuzhi and Rong, Yu and Zhao, Peilin and Huang, Junzhou},
year = {2021},
month = oct,
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
pages = {9194--9203},
10. ICML
Learning Diverse-Structured Networks for Adversarial Robustness
Xuefeng Du, Jingfeng Zhang, Bo Han, Tongliang Liu, Yu Rong, Gang Niu, and 2 more authors
In Proceedings of the 38th International Conference on Machine Learning, 18–24 jul 2021
In adversarial training (AT), the main focus has been the objective and optimizer while the model has been less studied, so that the models being used are still those classic ones in standard
training (ST). Classic network architectures (NAs) are generally worse than searched NA in ST, which should be the same in AT. In this paper, we argue that NA and AT cannot be handled
independently, since given a dataset, the optimal NA in ST would be no longer optimal in AT. That being said, AT is time-consuming itself; if we directly search NAs in AT over large search
spaces, the computation will be practically infeasible. Thus, we propose diverse-structured network (DS-Net), to significantly reduce the size of the search space: instead of low-level
operations, we only consider predefined atomic blocks, where an atomic block is a time-tested building block like the residual block. There are only a few atomic blocks and thus we can weight all
atomic blocks rather than find the best one in a searched block of DS-Net, which is an essential tradeoff between exploring diverse structures and exploiting the best structures. Empirical
results demonstrate the advantages of DS-Net, i.e., weighting the atomic blocks.
title = {Learning Diverse-Structured Networks for Adversarial Robustness},
author = {Du, Xuefeng and Zhang, Jingfeng and Han, Bo and Liu, Tongliang and Rong, Yu and Niu, Gang and Huang, Junzhou and Sugiyama, Masashi},
year = {2021},
month = {18--24 Jul},
booktitle = {Proceedings of the 38th International Conference on Machine Learning},
publisher = {PMLR},
series = {Proceedings of Machine Learning Research},
volume = {139},
pages = {2880--2891},
url = {https://proceedings.mlr.press/v139/du21f.html},
editor = {Meila, Marina and Zhang, Tong},
11. CIKM
Unsupervised Large-Scale Social Network Alignment via Cross Network Embedding
Zhehan Liang, Yu Rong, Chenxin Li, Yunlong Zhang, Yue Huang, Tingyang Xu, and 2 more authors
In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, 2021
Nowadays, it is common for a person to possess different identities on multiple social platforms. Social network alignment aims to match the identities that from different networks. Recently,
unsupervised network alignment methods have received significant attention since no identity anchor is required. However, to capture the relevance between identities, the existing unsupervised
methods generally rely heavily on user profiles, which is unobtainable and unreliable in real-world scenarios. In this paper, we propose an unsupervised alignment framework named Large-Scale
Network Alignment (LSNA) to integrate the network information and reduce the requirement on user profile. The embedding module of LSNA, named Cross Network Embedding Model (CNEM), aims to
integrate the topology information and the network correlation to simultaneously guide the embedding process. Moreover, in order to adapt LSNA to large-scale networks, we propose a network
disassembling strategy to divide the costly large-scale network alignment problem into multiple executable sub-problems. The proposed method is evaluated over multiple real-world social network
datasets, and the results demonstrate that the proposed method outperforms the state-of-the-art methods.
title = {Unsupervised Large-Scale Social Network Alignment via Cross Network Embedding},
author = {Liang, Zhehan and Rong, Yu and Li, Chenxin and Zhang, Yunlong and Huang, Yue and Xu, Tingyang and Ding, Xinghao and Huang, Junzhou},
year = {2021},
booktitle = {Proceedings of the 30th ACM International Conference on Information \& Knowledge Management},
location = {Virtual Event, Queensland, Australia},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
series = {CIKM '21},
pages = {1008–1017},
doi = {10.1145/3459637.3482310},
isbn = {9781450384469},
url = {https://doi.org/10.1145/3459637.3482310},
numpages = {10},
keywords = {unsupervised network alignment, large-scale social network, cross-network embedding., user profile},
12. ACS Omega
A Novel Scalarized Scaffold Hopping Algorithm with Graph-Based Variational Autoencoder for Discovery of JAK1 Inhibitors
Yang Yu, Tingyang Xu, Jiawen Li, Yaping Qiu, Yu Rong, Zhen Gong, and 6 more authors
ACS Omega, 2021
We have developed a graph-based Variational Autoencoder with Gaussian Mixture hidden space (GraphGMVAE), a deep learning approach for controllable magnitude of scaffold hopping in generative
chemistry. It can effectively and accurately generate molecules from a given reference compound, with excellent scaffold novelty against known molecules in the literature or patents (97.9% are
novel scaffolds). Moreover, a pipeline for prioritizing the generated compounds was also proposed to narrow down our validation focus. In this work, GraphGMVAE was validated by rapidly hopping
the scaffold from FDA-approved upadacitinib, which is an inhibitor of human Janus kinase 1 (JAK1), to generate more potent molecules with novel chemical scaffolds. Seven compounds were
synthesized and tested to be active in biochemical assays. The most potent molecule has 5.0 nM activity against JAK1 kinase, which shows that the GraphGMVAE model can design molecules like how a
human expert does but with high efficiency and accuracy.
title = {A Novel Scalarized Scaffold Hopping Algorithm with Graph-Based Variational Autoencoder for Discovery of JAK1 Inhibitors},
author = {Yu, Yang and Xu, Tingyang and Li, Jiawen and Qiu, Yaping and Rong, Yu and Gong, Zhen and Cheng, Xuemin and Dong, Liming and Liu, Wei and Li, Jin and Dou, Dengfeng and Huang, Junzhou},
year = {2021},
journal = {ACS Omega},
volume = {6},
number = {35},
pages = {22945--22954},
doi = {10.1021/acsomega.1c03613},
url = {https://doi.org/10.1021/acsomega.1c03613},
eprint = {https://doi.org/10.1021/acsomega.1c03613},
13. WISE
Graph Ordering: Towards the Optimal by Learning
Kangfei Zhao, Yu Rong, Jeffrey Xu Yu, Wenbing Huang, Junzhou Huang, and Hao Zhang
In Web Information Systems Engineering – WISE 2021, 2021
Graph ordering concentrates on optimizing graph layouts, which has a wide range of real applications. As an NP-hard problem, traditional approaches solve it via greedy algorithms. To overcome the
shortsightedness and inflexibility of the hand-crafted heuristics, we propose a learning-based framework: Deep Ordering Network with Reinforcement Learning (DON-RL) to capture the hidden
structure from partial vertex order sets over a specific large graph. In DON-RL, we propose a permutation invariant neural network DON to encode the information from partial vertex order.
Furthermore, to alleviate the combinatorial explosion for partial vertex order sets and make the efficient training data sampling, we propose RL-Sampler, a reinforcement learning-based sampler to
tune the vertex sampling probabilities adaptively during the training phase of DON. Comprehensive experiments on both synthetic and real graphs validate that our approach outperforms the
state-of-the-art heuristic algorithm consistently. The case study on graph compression demonstrates the potentials of DON-RL in real applications.
title = {Graph Ordering: Towards the Optimal by Learning},
author = {Zhao, Kangfei and Rong, Yu and Yu, Jeffrey Xu and Huang, Wenbing and Huang, Junzhou and Zhang, Hao},
year = {2021},
booktitle = {Web Information Systems Engineering -- WISE 2021},
publisher = {Springer International Publishing},
address = {Cham},
pages = {423--437},
isbn = {978-3-030-90888-1},
editor = {Zhang, Wenjie and Zou, Lei and Maamar, Zakaria and Chen, Lu},
14. BIBM
Gradient-Norm Based Attentive Loss for Molecular Property Prediction
Hehuan Ma, Yu Rong, Boyang Liu, Yuzhi Guo, Chaochao Yan, and Junzhou Huang
In 2021 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), 2021
Molecular property prediction is one fundamental yet challenging task for drug discovery. Many studies have addressed this problem by designing deep learning algorithms, e.g., sequence-based
models and graph-based models. However, the underlying data distribution is rarely explored. We discover that there exist easy samples and hard samples in the molecule datasets, and the overall
distribution is usually imbalanced. Current research mainly treats them equally during the model training, while we believe that they shall not share the same weights since neural networks
training is dominated by the majority class. Therefore, we propose to utilize a self-attention mechanism to generate a learnable weight for each data sample according to the associated gradient
norm. The learned attention value is then embedded into the prediction models to construct an attentive loss for the network updating and back-propagation. It is empirically demonstrated that our
proposed method can consistently boost the prediction performance for both classification and regression tasks.
title = {Gradient-Norm Based Attentive Loss for Molecular Property Prediction},
author = {Ma, Hehuan and Rong, Yu and Liu, Boyang and Guo, Yuzhi and Yan, Chaochao and Huang, Junzhou},
year = {2021},
booktitle = {2021 IEEE International Conference on Bioinformatics and Biomedicine (BIBM)},
volume = {},
number = {},
pages = {497--502},
doi = {10.1109/BIBM52615.2021.9669671},
15. IJCNN
Towards Feature-free TSP Solver Selection: A Deep Learning Approach
Kangfei Zhao, Shengcai Liu, Jeffrey Xu Yu, and Yu Rong
In 2021 International Joint Conference on Neural Networks (IJCNN), 2021
It is widely recognized that for the traveling salesman problem (TSP), there exists no universal best solver for all problem instances. This observation has greatly facilitated the research on
Algorithm Selection (AS), which seeks to identify the solver best suited for each TSP instance. Such segregation usually relies on a prior representation step, in which problem instances are
first represented by carefully established problem features. However, the creation of good features is non-trivial, typically requiring considerable domain knowledge and human effort. To
alleviate this issue, this paper proposes a deep learning framework, named CTAS, for TSP solver selection. Specifically, CTAS exploits deep convolutional neural networks (CNN) to automatically
extract informative features from TSP instances and utilizes data augmentation to handle the scarcity of labeled instances. Extensive experiments are conducted on a challenging TSP benchmark with
6,000 instances, which is the largest benchmark ever considered in this area. CTAS achieves over 2 × speedup of the average running time, compared with the single best solver. More importantly,
CTAS is the first feature-free approach that notably outperforms classical AS models, showing huge potential of applying deep learning to AS tasks.
title = {Towards Feature-free TSP Solver Selection: A Deep Learning Approach},
author = {Zhao, Kangfei and Liu, Shengcai and Yu, Jeffrey Xu and Rong, Yu},
year = {2021},
booktitle = {2021 International Joint Conference on Neural Networks (IJCNN)},
volume = {},
number = {},
pages = {1--8},
doi = {10.1109/IJCNN52387.2021.9533538},
16. DASFAA
Towards Expectation-Maximization by SQL in RDBMS
Kangfei Zhao, Jeffrey Xu Yu, Yu Rong, Ming Liao, and Junzhou Huang
In Database Systems for Advanced Applications, 2021
Integrating machine learning techniques into RDBMSs is an important task since many real applications require modeling (e.g., business intelligence, strategic analysis) as well as querying data
in RDBMSs. Without integration, it needs to export the data from RDBMSs to build a model using specialized ML toolkits and frameworks, and import the model trained back to RDBMSs for further
querying. Such a process is not desirable since it is time-consuming and needs to repeat when data is changed. In this paper, we provide an SQL solution that has the potential to support
different ML models in RDBMSs. We study how to support unsupervised probabilistic modeling, that has a wide range of applications in clustering, density estimation, and data summarization, and
focus on Expectation-Maximization (EM) algorithms, which is a general technique for finding maximum likelihood estimators. To train a model by EM, it needs to update the model parameters by an
E-step and an M-step in a while-loop iteratively until it converges to a level controlled by some thresholds or repeats a certain number of iterations. To support EM in RDBMSs, we show our
solutions to the matrix/vectors representations in RDBMSs, the relational algebra operations to support the linear algebra operations required by EM, parameters update by relational algebra, and
the support of a while-loop by SQL recursion. It is important to note that the SQL ’99 recursion cannot be used to handle such a while-loop since the M-step is non-monotonic. In addition, with a
model trained by an EM algorithm, we further design an automatic in-database model maintenance mechanism to maintain the model when the underlying training data changes. We have conducted
experimental studies and will report our findings in this paper.
title = {Towards Expectation-Maximization by SQL in RDBMS},
author = {Zhao, Kangfei and Yu, Jeffrey Xu and Rong, Yu and Liao, Ming and Huang, Junzhou},
year = {2021},
booktitle = {Database Systems for Advanced Applications},
publisher = {Springer International Publishing},
address = {Cham},
pages = {778--794},
isbn = {978-3-030-73197-7},
editor = {Jensen, Christian S. and Lim, Ee-Peng and Yang, De-Nian and Lee, Wang-Chien and Tseng, Vincent S. and Kalogeraki, Vana and Huang, Jen-Wei and Shen, Chih-Ya},
17. SIGMOD
A Learned Sketch for Subgraph Counting
Kangfei Zhao, Jeffrey Xu Yu, Hao Zhang, Qiyan Li, and Yu Rong
In Proceedings of the 2021 International Conference on Management of Data, 2021
Subgraph counting, as a fundamental problem in network analysis, is to count the number of subgraphs in a data graph that match a given query graph by either homomorphism or subgraph isomorphism.
The importance of subgraph counting derives from the fact that it provides insights of a large graph, in particular a labeled graph, when a collection of query graphs with different sizes and
labels are issued. The problem of counting is challenging. On one hand, exact counting by enumerating subgraphs is NP-hard. % On the other hand, approximate counting by subgraph isomorphism can
only support 3/5-node query graphs over unlabeled graphs. % Another way for subgraph counting is to specify it as an SQL query and estimate the cardinality of the query in rdbm. Existing
approaches for cardinality estimation can only support subgraph counting by homomorphism up to some extent, as it is difficult to deal with sampling failure when a query graph becomes large. A
question that arises is if subgraph counting can be supported by machine learning (ML) and deep learning (DL). The existing DL approach for subgraph isomorphism can only support small data
graphs. The ML/DL approaches proposed in rdbm context for approximate query processing and cardinality estimation cannot be used, as subgraph counting is to do complex self-joins over one
relation, whereas existing approaches focus on multiple relations. In this paper, we propose an Active Learned Sketch for Subgraph Counting (ALSS) with two main components: a sketch learned (ŁSS)
and an active learner (AL). The sketch is learned by a neural network regression model, and the active learner is to perform model updates based on new arrival test query graphs. % We conduct
extensive experimental studies to confirm the effectiveness and efficiency of ALSS using large real labeled graphs. Moreover, we show that ALSS can assist query optimizers to find a better query
plan for complex multi-way self-joins.
title = {A Learned Sketch for Subgraph Counting},
author = {Zhao, Kangfei and Yu, Jeffrey Xu and Zhang, Hao and Li, Qiyan and Rong, Yu},
year = {2021},
booktitle = {Proceedings of the 2021 International Conference on Management of Data},
location = {Virtual Event, China},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
series = {SIGMOD '21},
pages = {2142–2155},
doi = {10.1145/3448016.3457289},
isbn = {9781450383431},
url = {https://doi.org/10.1145/3448016.3457289},
numpages = {14},
keywords = {subgraph counting, deep learning},
1. TheWebConf
Graph Representation Learning via Graphical Mutual Information Maximization
Zhen Peng, Wenbing Huang, Minnan Luo, Qinghua Zheng, Yu Rong, Tingyang Xu, and 1 more author
In Proceedings of The Web Conference 2020, 2020
The richness in the content of various information networks such as social networks and communication networks provides the unprecedented potential for learning high-quality expressive
representations without external supervision. This paper investigates how to preserve and extract the abundant information from graph-structured data into embedding space in an unsupervised
manner. To this end, we propose a novel concept, Graphical Mutual Information (GMI), to measure the correlation between input graphs and high-level hidden representations. GMI generalizes the
idea of conventional mutual information computations from vector space to the graph domain where measuring mutual information from two aspects of node features and topological structure is
indispensable. GMI exhibits several benefits: First, it is invariant to the isomorphic transformation of input graphs—an inevitable constraint in many existing graph representation learning
algorithms; Besides, it can be efficiently estimated and maximized by current mutual information estimation methods such as MINE; Finally, our theoretical analysis confirms its correctness and
rationality. With the aid of GMI, we develop an unsupervised learning model trained by maximizing GMI between the input and output of a graph neural encoder. Considerable experiments on
transductive as well as inductive node classification and link prediction demonstrate that our method outperforms state-of-the-art unsupervised counterparts, and even sometimes exceeds the
performance of supervised ones.
title = {Graph Representation Learning via Graphical Mutual Information Maximization},
author = {Peng, Zhen and Huang, Wenbing and Luo, Minnan and Zheng, Qinghua and Rong, Yu and Xu, Tingyang and Huang, Junzhou},
year = {2020},
booktitle = {Proceedings of The Web Conference 2020},
location = {Taipei, Taiwan},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
series = {WWW '20},
pages = {259–270},
doi = {10.1145/3366423.3380112},
isbn = {9781450370233},
url = {https://doi.org/10.1145/3366423.3380112},
numpages = {12},
keywords = {Mutual information, InfoMax., Graph representation learning},
2. AAAI
Rumor Detection on Social Media with Bi-Directional Graph Convolutional Networks
Tian Bian, Xi Xiao, Tingyang Xu, Peilin Zhao, Wenbing Huang, Yu Rong, and 1 more author
Proceedings of the AAAI Conference on Artificial Intelligence, Apr 2020
Social media has been developing rapidly in public due to its nature of spreading new information, which leads to rumors being circulated. Meanwhile, detecting rumors from such massive
information in social media is becoming an arduous challenge. Therefore, some deep learning methods are applied to discover rumors through the way they spread, such as Recursive Neural Network
(RvNN) and so on. However, these deep learning methods only take into account the patterns of deep propagation but ignore the structures of wide dispersion in rumor detection. Actually,
propagation and dispersion are two crucial characteristics of rumors. In this paper, we propose a novel bi-directional graph model, named <em>Bi-Directional Graph Convolutional Networks</em>
(Bi-GCN), to explore both characteristics by operating on both top-down and bottom-up propagation of rumors. It leverages a GCN with a top-down directed graph of rumor spreading to learn the
patterns of rumor propagation; and a GCN with an opposite directed graph of rumor diffusion to capture the structures of rumor dispersion. Moreover, the information from source post is involved
in each layer of GCN to enhance the influences from the roots of rumors. Encouraging empirical results on several benchmarks confirm the superiority of the proposed method over the
state-of-the-art approaches.
title = {Rumor Detection on Social Media with Bi-Directional Graph Convolutional Networks},
author = {Bian, Tian and Xiao, Xi and Xu, Tingyang and Zhao, Peilin and Huang, Wenbing and Rong, Yu and Huang, Junzhou},
year = {2020},
month = apr,
journal = {Proceedings of the AAAI Conference on Artificial Intelligence},
volume = {34},
number = {01},
pages = {549--556},
doi = {10.1609/aaai.v34i01.5393},
url = {https://ojs.aaai.org/index.php/AAAI/article/view/5393},
3. NeurIPS
Deep Multimodal Fusion by Channel Exchanging
Yikai Wang, Wenbing Huang, Fuchun Sun, Tingyang Xu, Yu Rong, and Junzhou Huang
In Advances in Neural Information Processing Systems, 2020
Deep multimodal fusion by using multiple sources of data for classification or regression has exhibited a clear advantage over the unimodal counterpart on various applications. Yet, current
methods including aggregation-based and alignment-based fusion are still inadequate in balancing the trade-off between inter-modal fusion and intra-modal processing, incurring a bottleneck of
performance improvement. To this end, this paper proposes Channel-Exchanging-Network (CEN), a parameter-free multimodal fusion framework that dynamically exchanges channels between sub-networks
of different modalities. Specifically, the channel exchanging process is self-guided by individual channel importance that is measured by the magnitude of Batch-Normalization (BN) scaling factor
during training. The validity of such exchanging process is also guaranteed by sharing convolutional filters yet keeping separate BN layers across modalities, which, as an add-on benefit, allows
our multimodal architecture to be almost as compact as a unimodal network. Extensive experiments on semantic segmentation via RGB-D data and image translation through multi-domain input verify
the effectiveness of our CEN compared to current state-of-the-art methods. Detailed ablation studies have also been carried out, which provably affirm the advantage of each component we propose.
Our code is available at https://github.com/yikaiw/CEN.
title = {Deep Multimodal Fusion by Channel Exchanging},
author = {Wang, Yikai and Huang, Wenbing and Sun, Fuchun and Xu, Tingyang and Rong, Yu and Huang, Junzhou},
year = {2020},
booktitle = {Advances in Neural Information Processing Systems},
publisher = {Curran Associates, Inc.},
volume = {33},
pages = {4835--4845},
url = {https://proceedings.neurips.cc/paper_files/paper/2020/file/339a18def9898dd60a634b2ad8fbbd58-Paper.pdf},
editor = {Larochelle, H. and Ranzato, M. and Hadsell, R. and Balcan, M.F. and Lin, H.},
4. AAAI
A Restricted Black-Box Adversarial Framework Towards Attacking Graph Embedding Models
Heng Chang, Yu Rong, Tingyang Xu, Wenbing Huang, Honglei Zhang, Peng Cui, and 2 more authors
Proceedings of the AAAI Conference on Artificial Intelligence, Apr 2020
With the great success of graph embedding model on both academic and industry area, the robustness of graph embedding against adversarial attack inevitably becomes a central problem in graph
learning domain. Regardless of the fruitful progress, most of the current works perform the attack in a white-box fashion: they need to access the model predictions and labels to construct their
adversarial loss. However, the inaccessibility of model predictions in real systems makes the white-box attack impractical to real graph learning system. This paper promotes current frameworks in
a more general and flexible sense – we demand to attack various kinds of graph embedding model with black-box driven. To this end, we begin by investigating the theoretical connections between
graph signal processing and graph embedding models in a principled way and formulate the graph embedding model as a general graph signal process with corresponding graph filter. As such, a
generalized adversarial attacker: <em>GF-Attack</em> is constructed by the graph filter and feature matrix. Instead of accessing any knowledge of the target classifiers used in graph embedding,
<em>GF-Attack</em> performs the attack only on the graph filter in a black-box attack fashion. To validate the generalization of <em>GF-Attack</em>, we construct the attacker on four popular
graph embedding models. Extensive experimental results validate the effectiveness of our attacker on several benchmark datasets. Particularly by using our attack, even small graph perturbations
like one-edge flip is able to consistently make a strong attack in performance to different graph embedding models.
title = {A Restricted Black-Box Adversarial Framework Towards Attacking Graph Embedding Models},
author = {Chang, Heng and Rong, Yu and Xu, Tingyang and Huang, Wenbing and Zhang, Honglei and Cui, Peng and Zhu, Wenwu and Huang, Junzhou},
year = {2020},
month = apr,
journal = {Proceedings of the AAAI Conference on Artificial Intelligence},
volume = {34},
number = {04},
pages = {3389--3396},
doi = {10.1609/aaai.v34i04.5741},
url = {https://ojs.aaai.org/index.php/AAAI/article/view/5741},
5. TheWebConf
Adversarial Attack on Community Detection by Hiding Individuals
Jia Li, Honglei Zhang, Zhichao Han, Yu Rong, Hong Cheng, and Junzhou Huang
In Proceedings of The Web Conference 2020, 2020
It has been demonstrated that adversarial graphs, i.e., graphs with imperceptible perturbations added, can cause deep graph models to fail on node/graph classification tasks. In this paper, we
extend adversarial graphs to the problem of community detection which is much more difficult. We focus on black-box attack and aim to hide targeted individuals from the detection of deep graph
community detection models, which has many applications in real-world scenarios, for example, protecting personal privacy in social networks and understanding camouflage patterns in transaction
networks. We propose an iterative learning framework that takes turns to update two modules: one working as the constrained graph generator and the other as the surrogate community detection
model. We also find that the adversarial graphs generated by our method can be transferred to other learning based community detection models.
title = {Adversarial Attack on Community Detection by Hiding Individuals},
author = {Li, Jia and Zhang, Honglei and Han, Zhichao and Rong, Yu and Cheng, Hong and Huang, Junzhou},
year = {2020},
booktitle = {Proceedings of The Web Conference 2020},
location = {Taipei, Taiwan},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
series = {WWW '20},
pages = {917–927},
doi = {10.1145/3366423.3380171},
isbn = {9781450370233},
url = {https://doi.org/10.1145/3366423.3380171},
numpages = {11},
keywords = {graph generation, community detection, adversarial attack},
6. Arxiv
Tackling Over-Smoothing for General Graph Convolutional Networks
Wenbing Huang, Yu Rong, Tingyang Xu, Fuchun Sun, and Junzhou Huang
CoRR, 2020
Increasing the depth of GCN, which is expected to permit more expressivity, is shown to incur performance detriment especially on node classification. The main cause of this lies in
over-smoothing. The over-smoothing issue drives the output of GCN towards a space that contains limited distinguished information among nodes, leading to poor expressivity. Several works on
refining the architecture of deep GCN have been proposed, but it is still unknown in theory whether or not these refinements are able to relieve over-smoothing. In this paper, we first
theoretically analyze how general GCNs act with the increase in depth, including generic GCN, GCN with bias, ResGCN, and APPNP. We find that all these models are characterized by a universal
process: all nodes converging to a cuboid. Upon this theorem, we propose DropEdge to alleviate over-smoothing by randomly removing a certain number of edges at each training epoch. Theoretically,
DropEdge either reduces the convergence speed of over-smoothing or relieves the information loss caused by dimension collapse. Experimental evaluations on simulated dataset have visualized the
difference in over-smoothing between different GCNs. Moreover, extensive experiments on several real benchmarks support that DropEdge consistently improves the performance on a variety of both
shallow and deep GCNs.
title = {Tackling Over-Smoothing for General Graph Convolutional Networks},
author = {Huang, Wenbing and Rong, Yu and Xu, Tingyang and Sun, Fuchun and Huang, Junzhou},
year = {2020},
journal = {CoRR},
volume = {abs/2008.09864},
url = {https://arxiv.org/abs/2008.09864},
eprinttype = {arXiv},
eprint = {2008.09864},
timestamp = {Thu, 29 Jul 2021 17:20:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2008-09864.bib},
bibsource = {dblp computer science bibliography, https://dblp.org},
7. NeurIPS
Dirichlet Graph Variational Autoencoder
Jia Li, Jianwei Yu, Jiajin Li, Honglei Zhang, Kangfei Zhao, Yu Rong, and 2 more authors
In Advances in Neural Information Processing Systems, 2020
Graph Neural Networks (GNN) and Variational Autoencoders (VAEs) have been widely used in modeling and generating graphs with latent factors. However there is no clear explanation of what these
latent factors are and why they perform well. In this work, we present Dirichlet Graph Variational Autoencoder (DGVAE) with graph cluster memberships as latent factors. Our study connects VAEs
based graph generation and balanced graph cut, and provides a new way to understand and improve the internal mechanism of VAEs based graph generation. Specifically, we first interpret the
reconstruction term of DGVAE as balanced graph cut in a principled way. Furthermore, motivated by the low pass characteristics in balanced graph cut, we propose a new variant of GNN named Heatts
to encode the input graph into cluster memberships. Heatts utilizes the Taylor series for fast computation of Heat kernels and has better low pass characteristics than Graph Convolutional
Networks (GCN). Through experiments on graph generation and graph clustering, we demonstrate the effectiveness of our proposed framework.
title = {Dirichlet Graph Variational Autoencoder},
author = {Li, Jia and Yu, Jianwei and Li, Jiajin and Zhang, Honglei and Zhao, Kangfei and Rong, Yu and Cheng, Hong and Huang, Junzhou},
year = {2020},
booktitle = {Advances in Neural Information Processing Systems},
publisher = {Curran Associates, Inc.},
volume = {33},
pages = {5274--5283},
url = {https://proceedings.neurips.cc/paper_files/paper/2020/file/38a77aa456fc813af07bb428f2363c8d-Paper.pdf},
editor = {Larochelle, H. and Ranzato, M. and Hadsell, R. and Balcan, M.F. and Lin, H.},
8. KDD
Deep Graph Learning: Foundations, Advances and Applications
Yu Rong, Tingyang Xu, Junzhou Huang, Wenbing Huang, Hong Cheng, Yao Ma, and 4 more authors
In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2020
Many real data come in the form of non-grid objects, i.e. graphs, from social networks to molecules. Adaptation of deep learning from grid-alike data (e.g. images) to graphs has recently received
unprecedented attention from both machine learning and data mining communities, leading to a new cross-domain field—Deep Graph Learning (DGL). Instead of painstaking feature engineering, DGL aims
to learn informative representations of graphs in an end-to-end manner. It has exhibited remarkable success in various tasks, such as node/graph classification, link prediction, etc.In this
tutorial, we aim to provide a comprehensive introduction to deep graph learning. We first introduce the theoretical foundations on deep graph learning with a focus on describing various Graph
Neural Network Models (GNNs). We then cover the key achievements of DGL in recent years. Specifically, we discuss the four topics: 1) training deep GNNs; 2) robustness of GNNs; 3) scalability of
GNNs; and 4) self-supervised and unsupervised learning of GNNs. Finally, we will introduce the applications of DGL towards various domains, including but not limited to drug discovery, computer
vision, medical image analysis, social network analysis, natural language processing and recommendation.
title = {Deep Graph Learning: Foundations, Advances and Applications},
author = {Rong, Yu and Xu, Tingyang and Huang, Junzhou and Huang, Wenbing and Cheng, Hong and Ma, Yao and Wang, Yiqi and Derr, Tyler and Wu, Lingfei and Ma, Tengfei},
year = {2020},
booktitle = {Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery \& Data Mining},
location = {Virtual Event, CA, USA},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
series = {KDD '20},
pages = {3555–3556},
doi = {10.1145/3394486.3406474},
isbn = {9781450379984},
url = {https://doi.org/10.1145/3394486.3406474},
numpages = {2},
9. VLDB
Maximizing the Reduction Ability for Near-Maximum Independent Set Computation
Chengzhi Piao, Weiguo Zheng, Yu Rong, and Hong Cheng
Proc. VLDB Endow., Jul 2020
Finding the maximum independent set is a fundamental NP-hard problem in graph theory. Recent studies have paid much attention to designing efficient algorithms that find a maximal independent set
of good quality (the more vertices the better). Kernelization is a widely used technique that applies rich reduction rules to determine the vertices that definitely belong to the maximum
independent set. When no reduction rules can be applied anymore, greedy strategies including vertex addition or vertex deletion are employed to break the tie. It remains an open problem that how
to apply these reduction rules and determine the greedy strategy to optimize the overall performance including both solution quality and time efficiency. Thus we propose a scheduling framework
that dynamically determines the reduction rules and greedy strategies rather than applying them in a fixed order. As an important reduction rule, degree-two reduction exhibits powerful pruning
ability but suffers from high time complexity O(nm), where n and m denote the number of vertices and edges respectively. We propose a novel data structure called representative graph, based on
which the worst-case time complexity of degree-two reduction is reduced to O(m log n). Moreover, we enrich the naive vertex addition strategy by considering the graph topology and develop
efficient methods (active vertex index and lazy update mechanism) to improve the time efficiency. Extensive experiments are conducted on both large real networks and various types of synthetic
graphs to confirm the effectiveness, efficiency and robustness of our algorithms.
title = {Maximizing the Reduction Ability for Near-Maximum Independent Set Computation},
author = {Piao, Chengzhi and Zheng, Weiguo and Rong, Yu and Cheng, Hong},
year = {2020},
month = jul,
journal = {Proc. VLDB Endow.},
publisher = {VLDB Endowment},
volume = {13},
number = {12},
pages = {2466–2478},
doi = {10.14778/3407790.3407838},
issn = {2150-8097},
url = {https://doi.org/10.14778/3407790.3407838},
issue_date = {August 2020},
numpages = {13},
10. ICLR
DropEdge: Towards Deep Graph Convolutional Networks on Node Classification
Yu Rong, Wenbing Huang, Tingyang Xu, and Junzhou Huang
In 8th International Conference on Learning Representations, ICLR 2020, 2020
Over-fitting and over-smoothing are two main obstacles of developing deep Graph Convolutional Networks (GCNs) for node classification. In particular, over-fitting weakens the generalization
ability on small dataset, while over-smoothing impedes model training by isolating output representations from the input features with the increase in network depth. This paper proposes DropEdge,
a novel and flexible technique to alleviate both issues. At its core, DropEdge randomly removes a certain number of edges from the input graph at each training epoch, acting like a data augmenter
and also a message passing reducer. Furthermore, we theoretically demonstrate that DropEdge either reduces the convergence speed of over-smoothing or relieves the information loss caused by it.
More importantly, our DropEdge is a general skill that can be equipped with many other backbone models (e.g. GCN, ResGCN, GraphSAGE, and JKNet) for enhanced performance. Extensive experiments on
several benchmarks verify that DropEdge consistently improves the performance on a variety of both shallow and deep GCNs. The effect of DropEdge on preventing over-smoothing is empirically
visualized and validated as well. Codes are released on https://github.com/DropEdge/DropEdge.
title = {DropEdge: Towards Deep Graph Convolutional Networks on Node Classification},
author = {Rong, Yu and Huang, Wenbing and Xu, Tingyang and Huang, Junzhou},
year = {2020},
booktitle = {8th International Conference on Learning Representations, {ICLR} 2020},
publisher = {OpenReview.net},
url = {https://openreview.net/forum?id=Hkx1qkrKPr},
timestamp = {Thu, 29 Jul 2021 17:20:43 +0200},
biburl = {https://dblp.org/rec/conf/iclr/RongHXH20.bib},
bibsource = {dblp computer science bibliography, https://dblp.org},
11. NeurIPS
Self-Supervised Graph Transformer on Large-Scale Molecular Data
Yu Rong, Yatao Bian, Tingyang Xu, Weiyang Xie, Ying WEI, Wenbing Huang, and 1 more author
In Advances in Neural Information Processing Systems, 2020
How to obtain informative representations of molecules is a crucial prerequisite in AI-driven drug design and discovery. Recent researches abstract molecules as graphs and employ Graph Neural
Networks (GNNs) for molecular representation learning. Nevertheless, two issues impede the usage of GNNs in real scenarios: (1) insufficient labeled molecules for supervised training; (2) poor
generalization capability to new-synthesized molecules. To address them both, we propose a novel framework, GROVER, which stands for Graph Representation frOm self-superVised mEssage passing
tRansformer. With carefully designed self-supervised tasks in node-, edge- and graph-level, GROVER can learn rich structural and semantic information of molecules from enormous unlabelled
molecular data. Rather, to encode such complex information, GROVER integrates Message Passing Networks into the Transformer-style architecture to deliver a class of more expressive encoders of
molecules. The flexibility of GROVER allows it to be trained efficiently on large-scale molecular dataset without requiring any supervision, thus being immunized to the two issues mentioned
above. We pre-train GROVER with 100 million parameters on 10 million unlabelled molecules—the biggest GNN and the largest training dataset in molecular representation learning. We then leverage
the pre-trained GROVER for molecular property prediction followed by task-specific fine-tuning, where we observe a huge improvement (more than 6% on average) from current state-of-the-art methods
on 11 challenging benchmarks. The insights we gained are that well-designed self-supervision losses and largely-expressive pre-trained models enjoy the significant potential on performance
title = {Self-Supervised Graph Transformer on Large-Scale Molecular Data},
author = {Rong, Yu and Bian, Yatao and Xu, Tingyang and Xie, Weiyang and WEI, Ying and Huang, Wenbing and Huang, Junzhou},
year = {2020},
booktitle = {Advances in Neural Information Processing Systems},
publisher = {Curran Associates, Inc.},
volume = {33},
pages = {12559--12571},
url = {https://proceedings.neurips.cc/paper_files/paper/2020/file/94aef38441efa3380a3bed3faf1f9d5d-Paper.pdf},
editor = {Larochelle, H. and Ranzato, M. and Hadsell, R. and Balcan, M.F. and Lin, H.},
1. ICCV
Graph Convolutional Networks for Temporal Action Localization
Runhao Zeng, Wenbing Huang, Mingkui Tan, Yu Rong, Peilin Zhao, Junzhou Huang, and 1 more author
In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Oct 2019
Most state-of-the-art action localization systems process each action proposal individually, without explicitly exploiting their relations during learning. However, the relations between
proposals actually play an important role in action localization, since a meaningful action always consists of multiple proposals in a video. In this paper, we propose to exploit the
proposal-proposal relations using GraphConvolutional Networks (GCNs). First, we construct an action proposal graph, where each proposal is represented as a node and their relations between two
proposals as an edge. Here, we use two types of relations, one for capturing the context information for each proposal and the other one for characterizing the correlations between distinct
actions. Then we apply the GCNs over the graph to model the relations among different proposals and learn powerful representations for the action classification and localization. Experimental
results show that our approach significantly outperforms the state-of-the-art on THUMOS14(49.1% versus 42.8%). Moreover, augmentation experiments on ActivityNet also verify the efficacy of
modeling action proposal relationships.
title = {Graph Convolutional Networks for Temporal Action Localization},
author = {Zeng, Runhao and Huang, Wenbing and Tan, Mingkui and Rong, Yu and Zhao, Peilin and Huang, Junzhou and Gan, Chuang},
year = {2019},
month = oct,
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
2. CVPR
Progressive Feature Alignment for Unsupervised Domain Adaptation
Chaoqi Chen, Weiping Xie, Wenbing Huang, Yu Rong, Xinghao Ding, Yue Huang, and 2 more authors
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Jun 2019
Unsupervised domain adaptation (UDA) transfers knowledge from a label-rich source domain to a fully-unlabeled target domain. To tackle this task, recent approaches resort to discriminative domain
transfer in virtue of pseudo-labels to enforce the class-level distribution alignment across the source and target domains. These methods, however, are vulnerable to the error accumulation and
thus incapable of preserving cross-domain category consistency, as the pseudo-labeling accuracy is not guaranteed explicitly. In this paper, we propose the Progressive Feature Alignment Network
(PFAN) to align the discriminative features across domains progressively and effectively, via exploiting the intra-class variation in the target domain. To be specific, we first develop an
Easy-to-Hard Transfer Strategy (EHTS) and an Adaptive Prototype Alignment (APA) step to train our model iteratively and alternatively. Moreover, upon observing that a good domain adaptation
usually requires a non-saturated source classifier, we consider a simple yet efficient way to retard the convergence speed of the source classification loss by further involving a temperature
variate into the soft-max function. The extensive experimental results reveal that the proposed PFAN exceeds the state-of-the-art performance on three UDA datasets.
title = {Progressive Feature Alignment for Unsupervised Domain Adaptation},
author = {Chen, Chaoqi and Xie, Weiping and Huang, Wenbing and Rong, Yu and Ding, Xinghao and Huang, Yue and Xu, Tingyang and Huang, Junzhou},
year = {2019},
month = jun,
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
3. TheWebConf
Semi-Supervised Graph Classification: A Hierarchical Graph Perspective
Jia Li, Yu Rong, Hong Cheng, Helen Meng, Wenbing Huang, and Junzhou Huang
In The World Wide Web Conference, 2019
Node classification and graph classification are two graph learning problems that predict the class label of a node and the class label of a graph respectively. A node of a graph usually
represents a real-world entity, e.g., a user in a social network, or a protein in a protein-protein interaction network. In this work, we consider a more challenging but practically useful
setting, in which a node itself is a graph instance. This leads to a hierarchical graph perspective which arises in many domains such as social network, biological network and document
collection. For example, in a social network, a group of people with shared interests forms a user group, whereas a number of user groups are interconnected via interactions or common members. We
study the node classification problem in the hierarchical graph where a “node” is a graph instance, e.g., a user group in the above example. As labels are usually limited in real-world data, we
design two novel semi-supervised solutions named SEmi-supervised grAph cLassification via Cautious/Active Iteration (or SEAL-C/AI in short). SEAL-C/AI adopt an iterative framework that takes
turns to build or update two classifiers, one working at the graph instance level and the other at the hierarchical graph level. To simplify the representation of the hierarchical graph, we
propose a novel supervised, self-attentive graph embedding method called SAGE, which embeds graph instances of arbitrary size into fixed-length vectors. Through experiments on synthetic data and
Tencent QQ group data, we demonstrate that SEAL-C/AI not only outperform competing methods by a significant margin in terms of accuracy/Macro-F1, but also generate meaningful interpretations of
the learned representations.
title = {Semi-Supervised Graph Classification: A Hierarchical Graph Perspective},
author = {Li, Jia and Rong, Yu and Cheng, Hong and Meng, Helen and Huang, Wenbing and Huang, Junzhou},
year = {2019},
booktitle = {The World Wide Web Conference},
location = {San Francisco, CA, USA},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
series = {WWW '19},
pages = {972–982},
doi = {10.1145/3308558.3313461},
isbn = {9781450366748},
url = {https://doi.org/10.1145/3308558.3313461},
numpages = {11},
keywords = {graph embedding, hierarchical graph, semi-supervised learning, active learning},
1. NeurIPS
Adaptive Sampling Towards Fast Graph Representation Learning
Wenbing Huang, Tong Zhang, Yu Rong, and Junzhou Huang
In Advances in Neural Information Processing Systems, 2018
Graph Convolutional Networks (GCNs) have become a crucial tool on learning representations of graph vertices. The main challenge of adapting GCNs on large-scale graphs is the scalability issue
that it incurs heavy cost both in computation and memory due to the uncontrollable neighborhood expansion across layers. In this paper, we accelerate the training of GCNs through developing an
adaptive layer-wise sampling method. By constructing the network layer by layer in a top-down passway, we sample the lower layer conditioned on the top one, where the sampled neighborhoods are
shared by different parent nodes and the over expansion is avoided owing to the fixed-size sampling. More importantly, the proposed sampler is adaptive and applicable for explicit variance
reduction, which in turn enhances the training of our method. Furthermore, we propose a novel and economical approach to promote the message passing over distant nodes by applying skip
connections. Intensive experiments on several benchmarks verify the effectiveness of our method regarding the classification accuracy while enjoying faster convergence speed.
title = {Adaptive Sampling Towards Fast Graph Representation Learning},
author = {Huang, Wenbing and Zhang, Tong and Rong, Yu and Huang, Junzhou},
year = {2018},
booktitle = {Advances in Neural Information Processing Systems},
publisher = {Curran Associates, Inc.},
volume = {31},
pages = {},
url = {https://proceedings.neurips.cc/paper_files/paper/2018/file/01eee509ee2f68dc6014898c309e86bf-Paper.pdf},
editor = {Bengio, S. and Wallach, H. and Larochelle, H. and Grauman, K. and Cesa-Bianchi, N. and Garnett, R.},
2. KDD
TATC: Predicting Alzheimer’s Disease with Actigraphy Data
Jia Li, Yu Rong, Helen Meng, Zhihui Lu, Timothy Kwok, and Hong Cheng
In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2018
With the increase of elderly population, Alzheimer’s Disease (AD), as the most common cause of dementia among the elderly, is affecting more and more senior people. It is crucial for a patient to
receive accurate and timely diagnosis of AD. Current diagnosis relies on doctors’ experience and clinical test, which, unfortunately, may not be performed until noticeable AD symptoms are
developed. In this work, we present our novel solution named time-aware TICC and CNN (TATC), for predicting AD from actigraphy data. TATC is a multivariate time series classification method using
a neural attention-based deep learning approach. It not only performs accurate prediction of AD risk, but also generates meaningful interpretation of daily behavior pattern of subjects. TATC
provides an automatic, low-cost solution for continuously monitoring the change of physical activity of subjects in daily living environment. We believe the future deployment of TATC can benefit
both doctors and patients in early detection of potential AD risk.
title = {TATC: Predicting Alzheimer's Disease with Actigraphy Data},
author = {Li, Jia and Rong, Yu and Meng, Helen and Lu, Zhihui and Kwok, Timothy and Cheng, Hong},
year = {2018},
booktitle = {Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery \& Data Mining},
location = {London, United Kingdom},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
series = {KDD '18},
pages = {509–518},
doi = {10.1145/3219819.3219831},
isbn = {9781450355520},
url = {https://doi.org/10.1145/3219819.3219831},
numpages = {10},
keywords = {actigraphy data, alzheimer's disease, attention, circadian rhythm},
3. DASFAA
Exploiting Ranking Consistency Principle in Representation Learning for Location Promotion
Siyuan Zhang, Yu Rong, Yu Zheng, Hong Cheng, and Junzhou Huang
In Database Systems for Advanced Applications, 2018
Location-based services, which use information of people’s geographical position as service context, are becoming part of our daily life. Given the large volume of heterogeneous data generated by
location-based services, one important problem is to estimate the visiting probability of users who haven’t visited a target Point of Interest (POI) yet, and return the target user list based on
their visiting probabilities. This problem is called the location promotion problem. The location promotion problem has not been well studied due to the following difficulties: (1) the cold start
POI problem: a target POI for promotion can be a new POI with no check-in records; and (2) heterogeneous information integration. Existing methods mainly focus on developing a general mobility
model for all users’ check-ins, but ignore the ranking utility from the perspective of POIs and the interaction between geographical and preference influence of POIs.
title = {Exploiting Ranking Consistency Principle in Representation Learning for Location Promotion},
author = {Zhang, Siyuan and Rong, Yu and Zheng, Yu and Cheng, Hong and Huang, Junzhou},
year = {2018},
booktitle = {Database Systems for Advanced Applications},
publisher = {Springer International Publishing},
address = {Cham},
pages = {457--473},
isbn = {978-3-319-91458-9},
editor = {Pei, Jian and Manolopoulos, Yannis and Sadiq, Shazia and Li, Jianxin},
1. CIKM
Minimizing Dependence between Graphs
Yu Rong, and Hong Cheng
In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, 2017
In recent years, modeling the relation between two graphs has received unprecedented attention from researchers due to its wide applications in many areas, such as social analysis and
bioinformatics. The nature of relations between two graphs can be divided into two categories: the vertex relation and the link relation. Many studies focus on modeling the vertex relation
between graphs and try to find the vertex correspondence between two graphs. However, the link relation between graphs has not been fully studied. Specifically, we model the cross-graph link
relation as cross-graph dependence, which reflects the dependence of a vertex in one graph on a vertex in the other graph. A generic problem, called Graph Dependence Minimization (GDM), is
defined as: given two graphs with cross-graph dependence, how to select a subset of vertexes from one graph and copy them to the other, so as to minimize the cross-graph dependence. Many real
applications can benefit from the solution to GDM. Examples include reducing the cross-language links in online encyclopedias, optimizing the cross-platform communication cost between different
cloud services, and so on. This problem is trivial if we can select as many vertexes as we want to copy. But what if we can only choose a limited number of vertexes to copy so as to make the two
graphs as independent as possible? We formulate GDM with a budget constraint into a combinatorial optimization problem, which is proven to be NP-hard. We propose two algorithms to solve GDM.
Firstly, we prove the submodularity of the objective function of GDM and adopt the size-constrained submodular minimization (SSM) algorithm to solve it. Since the SSM-based algorithm cannot scale
to large graphs, we design a heuristic algorithm with a provable approximation guarantee. We prove that the error achieved by the heuristic algorithm is bounded by an additive factor which is
proportional to the square of the given budget. Extensive experiments on both synthetic and real-world graphs show that the proposed algorithms consistently outperform the well-studied graph
centrality measure based solutions. Furthermore, we conduct a case study on the Wikipedia graphs with millions of vertexes and links to demonstrate the potential of GDM to solve real-world
title = {Minimizing Dependence between Graphs},
author = {Rong, Yu and Cheng, Hong},
year = {2017},
booktitle = {Proceedings of the 2017 ACM on Conference on Information and Knowledge Management},
location = {Singapore, Singapore},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
series = {CIKM '17},
pages = {1827–1836},
doi = {10.1145/3132847.3132931},
isbn = {9781450349185},
url = {https://doi.org/10.1145/3132847.3132931},
numpages = {10},
keywords = {graph dependence minimization, graph analytics, submodular minimization},
1. CIKM
A Model-Free Approach to Infer the Diffusion Network from Event Cascade
Yu Rong, Qiankun Zhu, and Hong Cheng
In Proceedings of the 25th ACM International on Conference on Information and Knowledge Management, 2016
Information diffusion through various types of networks, such as social networks and media networks, is a very common phenomenon on the Internet nowadays. In many scenarios, we can track only the
time when the information reaches a node. However, the source infecting this node is usually unobserved. Inferring the underlying diffusion network based on cascade data (observed sequence of
infected nodes with timestamp) without additional information is an essential and challenging task in information diffusion. Many studies have focused on constructing complex models to infer the
underlying diffusion network in a parametric way. However, the diffusion process in the real world is very complex and hard to be captured by a parametric model. Even worse, inferring the
parameters of a complex model is impractical under a large data volume.Different from previous works focusing on building models, we propose to interpret the diffusion process from the cascade
data directly in a non-parametric way, and design a novel and efficient algorithm named Non-Parametric Distributional Clustering (NPDC). Our algorithm infers the diffusion network according to
the statistical difference of the infection time intervals between nodes connected with diffusion edges versus those with no diffusion edges. NPDC is a model-free approach since we do not define
any transmission models between nodes in advance. We conduct experiments on synthetic data sets and two large real-world data sets with millions of cascades. Our algorithm achieves substantially
higher accuracy of network inference and is orders of magnitude faster compared with the state-of-the-art solutions.
title = {A Model-Free Approach to Infer the Diffusion Network from Event Cascade},
author = {Rong, Yu and Zhu, Qiankun and Cheng, Hong},
year = {2016},
booktitle = {Proceedings of the 25th ACM International on Conference on Information and Knowledge Management},
location = {Indianapolis, Indiana, USA},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
series = {CIKM '16},
pages = {1653–1662},
doi = {10.1145/2983323.2983718},
isbn = {9781450340731},
url = {https://doi.org/10.1145/2983323.2983718},
numpages = {10},
keywords = {non-parametric statistics, information diffusion, clustering, network inference},
1. KDD
Why It Happened: Identifying and Modeling the Reasons of the Happening of Social Events
Yu Rong, Hong Cheng, and Zhiyu Mo
In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2015
In nowadays social networks, a huge volume of content containing rich information, such as reviews, ratings, microblogs, etc., is being generated, consumed and diffused by users all the time.
Given the temporal information, we can obtain the event cascade which indicates the time sequence of the arrival of information to users. Many models have been proposed to explain how information
diffuses. However, most existing models cannot give a clear explanation why every specific event happens in the event cascade. Such explanation is essential for us to have a deeper understanding
of information diffusion as well as a better prediction of future event cascade.In order to uncover the mechanism of the happening of social events, we analyze the rating event data crawled from
Douban.com, a Chinese social network, from year 2006 to 2011. We distinguish three factors: social, external and intrinsic influence which can explain the emergence of every specific event. Then
we use the mixed Poisson process to model event cascade generated by different factors respectively and integrate different Poisson processes with shared parameters. The proposed model, called
Combinational Mixed Poisson Process (CMPP) model, can explain not only how information diffuses in social networks, but also why a specific event happens. This model can help us to understand
information diffusion from both macroscopic and microscopic perspectives. We develop an efficient Classification EM algorithm to infer the model parameters. The explanatory and predictive power
of the proposed model has been demonstrated by the experiments on large real data sets.
title = {Why It Happened: Identifying and Modeling the Reasons of the Happening of Social Events},
author = {Rong, Yu and Cheng, Hong and Mo, Zhiyu},
year = {2015},
booktitle = {Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining},
location = {Sydney, NSW, Australia},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
series = {KDD '15},
pages = {1015–1024},
doi = {10.1145/2783258.2783305},
isbn = {9781450336642},
url = {https://doi.org/10.1145/2783258.2783305},
numpages = {10},
keywords = {poisson process, event cascade, intrinsic influence, information diffusion, social influence, external influence},
1. TheWebConf
A Monte Carlo Algorithm for Cold Start Recommendation
Yu Rong, Xiao Wen, and Hong Cheng
In Proceedings of the 23rd International Conference on World Wide Web, 2014
Recommendation systems have been widely used in E-commerce sites, social networks, etc. One of the core tasks in recommendation systems is to predict the users’ ratings on items. Although many
models and algorithms have been proposed, how to make accurate prediction for new users with extremely few rating records still remains a big challenge, which is called the cold start problem.
Many existing methods utilize additional information, such as social graphs, to cope with the cold start problem. However, the side information may not always be available. In contrast to such
methods, we propose a more general solution to address the cold start problem based on the observed user rating records only. Specifically we define a random walk on a bipartite graph of users
and items to simulate the preference propagation among users, in order to alleviate the data sparsity problem for cold start users. Then we propose a Monte Carlo algorithm to estimate the
similarity between different users. This algorithm takes a precomputation approach, and thus can efficiently compute the user similarity given any new user for rating prediction. In addition, our
algorithm can easily handle dynamic updates and can be parallelized naturally, which are crucial for large recommendation systems. Theoretical analysis is presented to demonstrate the efficiency
and effectiveness of our algorithm, and extensive experiments also confirm our theoretical findings.
title = {A Monte Carlo Algorithm for Cold Start Recommendation},
author = {Rong, Yu and Wen, Xiao and Cheng, Hong},
year = {2014},
booktitle = {Proceedings of the 23rd International Conference on World Wide Web},
location = {Seoul, Korea},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
series = {WWW '14},
pages = {327–336},
doi = {10.1145/2566486.2567978},
isbn = {9781450327442},
url = {https://doi.org/10.1145/2566486.2567978},
numpages = {10},
keywords = {monte carlo simulation, random walk on bipartite graph, cold start, preference propagation}, | {"url":"https://royrong.me/publications-by-year","timestamp":"2024-11-11T09:50:58Z","content_type":"text/html","content_length":"345865","record_id":"<urn:uuid:83f4f117-c2d8-438e-b5e2-edf443a72531>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00628.warc.gz"} |
Not All Big Moves Are Created Equal: Volatility and Probabilities
14.07.2019 Investment Strategies, Trading Psychology 0
Watch the financial news long enough, and you’re bound to hear someone channeling Johnny Most at a Celtics game: “XYZ is up 5 points!” “ABCD is down 3%!” Surprise slam dunks—and stock price
changes—often mean big stories. And market news could be pretty boring without the drama. But if you don’t put volatility (“vol”) in context, it’s hard to tell whether all the excitement over a stock
is caused by a game-winning three-pointer, or a mere first-quarter free throw.
Create a Game Plan
This is where you come in, the savvy trader, who can interpret and frame certain market events by relying on a mix of vol and statistics. Nothing too complex, but enough to answer the question, how
big is big?
This is important because it can help you incorporate market news into a trading strategy. Are you a momentum trader looking to buy into strength or short into weakness? A contrarian looking to buy
stocks with big selloffs, or short stocks after big rallies? Whatever your strategy, it’s vital to have a metric that helps you determine whether a big price change warrants your attention. Let’s get
First, a few statistics. In some financial models and theories (e.g., Black-Scholes), stock and index percentage price changes are assumed to be normally distributed. Think of a bell curve with a
peak in the middle that theoretically represents 0% change. You’ll find big down moves on the left-hand side, and big up moves on the right-hand side. In reality, price changes in all cases may or
may not be normally distributed, but the normal distribution lets us determine a couple useful things about how big is “big.”
One standard deviation up and down from the mean theoretically covers about 68% of price changes. Two standard deviations up and down cover about 95%. And three standard deviations up and down cover
about 99%. Further, a stock or index’s vol determines the size of a standard deviation in terms of price. The higher the vol, the bigger the dollar change in the stock price that standard deviation
represents. Yes, one standard deviation covers 68% of a theoretical price change. But vol determines whether those price changes are $1 or $10. A $1 change in a $10 stock is a much bigger percentage
(10%) than a $1 change in a $500 stock (0.2%). And whether the $10 stock might change that 10%, or that $500 stock might change that 0.2%, depends on each stock’s volatility.
Consider a Level Playing Field
A $10 stock with 15% volatility would have a theoretical range of $8.50 to $11.50 68% of the time in one year. To get that, multiply the stock price ($10) by the volatility (15%), then add or
subtract that from the prevailing stock price. Multiplying stock price by its vol gives you a theoretical standard deviation for a year.
Now, say you want to know the standard deviation for a day, week, or month. No problem. Just multiply that vol number (always a one-year number on the thinkorswim® platform from TD Ameritrade) by the
square root of the time period to adjust it to your desired time frame. For example, for the standard deviation of one trading day, divide one by the number of trading days in a year (262 is used
here), take the square root, multiply by the vol, then multiply that by the stock price.
For that $10 stock with a 15% vol, the one-day standard deviation would be the square root of 1/262 (or 0.0618) x 0.15 x $10 = $0.093. Theoretically, that stock could land in a range between $9.907
and $10.093, 68% of the time. If the stock moved down $0.19 in one day from $10 to $9.81, it would have theoretically dropped just over two standard deviations based on that 15% volatility. Two
standard deviations is a pretty large move according to the normal distribution, even though the price changes only $0.19.
Run Your Plays
Let’s put it all into practice. Say a stock has rallied from $80 on Monday to $85 on Tuesday. On Monday, the stock had an overall vol of 30%. So, 0.0618 x 0.30 x $80 = $1.48. And $1.48 is one
standard deviation based on Monday’s price and volatility. Theoretically, 68% of the time, the stock might have closed in a range between $78.52 (down $1.48) and $81.48 (up $1.48) on Tuesday. But
instead, it rose $5 on Tuesday. Divide the $5 change in the stock price by the $1.48 theoretical standard deviation to see how many standard deviations it rallied ($5/$1.48 = 3.38 standard
deviations). Theoretically, with 99% of the potential stock prices being up or down three standard deviations, a 3.38 standard deviation price change is pretty unusual.
If the vol of that $80 stock was 60% on Monday, then 0.0618 x 0.60 x $80 = $2.97. That’s theoretically one standard deviation, and $5/$2.97 = 1.68. A 1.68 standard deviation price change is big, but
not unusual, theoretically.
The $5 price change in the $80 stock is that same p/l for 100 shares. But in statistical terms, it means different things. The $5 change when vol was 30% is worthy of some excitement. The $5 change
when vol was 60%, not so much. In other words, when vol was 60%, the market was perhaps expecting a big price change, and the $5 move wasn’t as big as it might have been.
To get the vol and stock price numbers to do this analysis, hit the Charts page of thinkorswim (Figure 1).
Hover your cursor over a price bar before and after the price change in question. Next, get the closing price and overall implied vol of the underlying stock or index. Then plug the numbers into the
formula and figure out the standard deviation of the price change. Source: thinkorswim® from TD Ameritrade. For illustrative purposes only.
1—From “Studies,” add the “ImpVolatilty” study to the Charts, which shows the overall implied vol of a stock’s options.
2—Set the cursor over a date that’s before the price change in question.
3—You’ll now see the closing price of the underlying stock or index on the upper left of the chart, and the overall implied vol of the stock or index in the upper left-hand corner of the
ImpVolatility study window.
Then consider the stock or index’s price after a big change, and subtract the closing price of the previous date from that post-move price to get the price change.
Adjust the vol for time, do some multiplication and division, and determine the price change’s standard deviations.
Fair Game
Why do you have to adjust vol by the square root of the time frame? If a stock moves up +1% one day, and down -0.999% the next, the stock price has had almost zero net change. But was it
volatile? Yes. To make sure positive price changes don’t offset negative price changes (which would give the impression that there’s no vol), all the price changes are squared to make them positive.
By averaging squared changes, you get a variance that’s directly related to time. Because it’s a square of the stock returns, that variance is harder to interpret. So, we take its square root to get
back to the vol of stock returns. If you take the square root of the variance, you must take the square root of time, too. That’s why vol is related to the square root of time.
Now, past performance does not guarantee future performance, and vol’s not a perfect predictor of future potential returns. Sometimes it can underestimate a stock’s potential price changes, while
other times it can overestimate. In other words, vol might predict a stock’s 3% move in a month, when it actually moved 5% (underestimating). Or vol might predict a stock’s 10% move in a month when
it actually moved 8% (overestimating). Also keep in mind that the normal distribution at the base isn’t a perfect descriptor of returns. In practice, returns are rarely distributed along a “clean”
normal distribution.
All in all, this analysis gives price movement a context. Going back to the $80 stock, if the $5 rise in price represented a statistically less likely 3.38 standard deviation change, a contrarian
bearish trader might seize that potential opportunity to enter a trade, while a momentum bullish trader might wait for the stock to drop before entering. If the $5 price rise represented a
statistically more likely 1.68 standard deviation change, the contrarian bear might wait for the stock to rally before shorting it, while a momentum bull might get long at that point and see more
upside potential.
No Free Throws
Use vol and statistics as one more metric in your trading toolbox. It’s not a strategy in and of itself. But it may help you determine entry and exit points for certain trades by quantifying the
“bigness” of price changes. | {"url":"https://thinkindicators.com/not-all-big-moves-are-created-equal-volatility-and-probabilities/","timestamp":"2024-11-07T02:40:48Z","content_type":"text/html","content_length":"114566","record_id":"<urn:uuid:994c4887-7da2-4212-ae2f-b4fb8feefc6c>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00231.warc.gz"} |
• Posts
• Joined
• Last visited
• Days Won
Everything posted by TimeSpaceLightForce
1. All types of bulb beetles (Red,Green and Blue) inside a jar are always glowing with the same color according to the most number of beetle type within the jar. In case of equal in numbers they all
glow with the same color mixture. By transfering some beetles from one jar to the other 3 times.. How many of each type were captured? (See trial sequence below)
2. Yes,that's the least..and it helps to know. But i can only guess, the solution for all possible shuffled position is 8 maybe 9. Nice job !
12. Four modified 1x1x1 Rubik's cubes are sticking with each other by magnetics (as shown) forming the 2x2x1 puzzle block. Starting with all the same colors on the same face to ensure solvability,
shuffle by turning any two side by side cubes on the axis that connects their centers. Few times will do. To solve or play just apply the same rule of turn to rearrange for same colored sides.
Maybe a different configuration. Note that the 2 cubes cannot rotate on edges or face axis..just on the axis where both centers are.(90 or 180 deg) If by then only one of the cube is disoriented,
how many turns more shall it take to complete the puzzle?
15. Construct the most stable 7 layered tower on a flat surface out of a double-six Domino set. The layers are made of 4 tiles (2x1x1/4") resting on their longer edges to form the squared walls.
Since the pips makes the dominoes lighter..the base of the tower should have even pressure. How then tiles should be arranged or stack up?
16. Alright, you got it.. I don't know if the spider can do that . Maybe with its farthest pair of eyes. All webs made a 120 degrees angle with the adjacent web like the towns & roads problem. You
can always minimize piping or wiring lengths using the same principle. The only task is how to find the points to make 120deg network to connect N dots. Again, good solving to all .. thanks.
20. Welcome here Philip.. Note that the spider can crawl to any cork without spending its web. It would cut and stick web ends anywhere . Like the previous problem 7m is the length to beat. What is
the minimum length that should connect 3 points?
21. The construction contract is yours..nice job.
23. @tojo928 Your 7.65 mile road design is a good proposal.. but the budget is good only for road system shorter than 7.50 miles. Can you revise it? Thanks you for submitting your work.
24. A spider is going to weave its new territory attaching to all the tips of wine corks at the cellar that are forming the 8 corners of a perfect cube (1 cu.m). The spider must conserve its web
fluid (with least linear meter used) and move from cork to cork. Can you describe the shape of its web? How long it is?
25. Hi there bonanova. . You may consider curved and angled paths. but assume paving a narrower road on a flat open area. about 10 ft wide. | {"url":"http://brainden.com/forum/profile/53237-timespacelightforce/content/page/4/?all_activity=1","timestamp":"2024-11-03T23:52:00Z","content_type":"text/html","content_length":"133098","record_id":"<urn:uuid:f60c3c68-af98-4ffb-a302-c8095288ada6>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00321.warc.gz"} |
500+ Brain Teaser Puzzles and Riddles with Answers
What are puzzles?
Puzzle in simple terms is defined as a problem designed to test ingenuity or knowledge. They have become an integral part of few competitive exams and entrance tests. They are devised with intent to
test the knowledge of to-be-solver.
What are different types of puzzles?
There are different types, which are devised with a specific intent to test a person’s ability to interpret and solve the problem. Different type of puzzles are :
• Math
• Number
• Logic
• Clock
• Missing letter
• Word
What exams have puzzles?
Almost every competitive exams have puzzles. They are most commonly found in competitive exams like : CAT, MAT, XAT, Bank P.O.s, AIEEE, GATE, TOEFL, GRE, and GATE etc. In these exams, mostly
arithmetic, math, number, and logic puzzles.
How to solve puzzles?
To solve, one needs to interpret the questions properly and understand the sequence in the problem Is designed. By understanding the sequence, it becomes easier to solve a problem. Understanding the
sequence of a puzzle requires strong logical ability and a creative thought pattern. The key is to solve as many different puzzles as possible to improve the thought process and gain expertise over
different ways of solving a problem. | {"url":"https://www.sawaal.com/puzzles.html?sort=popular","timestamp":"2024-11-10T23:05:50Z","content_type":"text/html","content_length":"113682","record_id":"<urn:uuid:e89971d9-4a0a-484b-9f5e-3e5141651191>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00355.warc.gz"} |
Column Matrix MCQ Quiz [PDF] Questions Answers | Column Matrix MCQs App Download & e-Book
College Math Online Tests
Column Matrix MCQ (Multiple Choice Questions) PDF Download
The Column Matrix Multiple Choice Questions (MCQ Quiz) with Answers PDF (Column Matrix MCQ PDF e-Book) download to practice College Math Tests. Study Matrices and Determinants Multiple Choice
Questions and Answers (MCQs), Column Matrix quiz answers PDF to study e-learning courses. The Column Matrix MCQ App Download: Free learning app for symmetric matrix, homogeneous linear equations,
multiplication of a matrix, column matrix test prep for best SAT prep courses online.
The MCQ: The transpose of a column matrix is; "Column Matrix" App Download (Free) with answers: Zero matrix; Diagonal matrix; Column matrix; Row matrix; to study e-learning courses. Practice Column
Matrix Quiz Questions, download Apple eBook (Free Sample) for two year degree programs.
Column Matrix MCQ (PDF) Questions Answers Download
MCQ 1:
The transpose of a column matrix is
1. zero matrix
2. diagonal matrix
3. column matrix
4. row matrix
College Math Practice Tests
Column Matrix Learning App: Free Download Android & iOS
The App: Column Matrix MCQs App to learn Column Matrix Textbook, College Math MCQ App, and 9th Grade Math MCQ App. The "Column Matrix" App to free download iOS & Android Apps includes complete
analytics with interactive assessments. Download App Store & Play Store learning Apps & enjoy 100% functionality with subscriptions! | {"url":"https://mcqslearn.com/math/column-matrix-multiple-choice-questions.php","timestamp":"2024-11-02T22:35:03Z","content_type":"text/html","content_length":"89621","record_id":"<urn:uuid:5a2b981e-6947-453a-b595-de018115ec4b>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00084.warc.gz"} |
pandas convert string to float with comma
Pandas Convert list to DataFrame. Convert number strings with commas in pandas DataFrame to float , If you're reading in from csv then you can use the thousands arg: df.read_csv('foo. Indeed df
[0].apply(locale.atof) works as expected. Convert string with comma to float pandas. Attention geek! Convert number strings with commas in pandas DataFrame to float , If you're reading in from csv
then you can use the thousands arg: df.read_csv('foo. Strengthen your foundations with the Python Programming Foundation Course and learn the basics. How to Convert Integers to Strings in Pandas
DataFrame? tsv', sep='\t', thousands=','). In Python, we can use float() to convert String to float. >>> df.convert_dtypes().dtypes a Int64 b string dtype: object Since column ‘a’ held integer
values, it was converted to the Int64 type (which is capable of holding missing values, unlike int64). Pandas convert number to string with thousands separator. The default return dtype is float64 or
int64 depending on the data supplied. we can use my_string = '{:,.2f}'. How to Convert Integers to Floats in Pandas DataFrame? javascript – How to get relative image coordinate of this div? 4
Scenarios of Converting Floats to Integers in Pandas DataFrame (1) Convert floats to integers for a specific DataFrame column. Column ‘b’ contained string objects, so was changed to pandas’ string
dtype. Convert Floats to Integers in a Pandas DataFrame, Python | Ways to convert array of strings to array of floats, Pandas Dataframe.to_numpy() - Convert dataframe to Numpy array, Convert given
Pandas series into a dataframe with its index as another column on the dataframe. I need to convert them to floats. jquery – Scroll child div edge to parent div edge, javascript – Problem in getting
a return value from an ajax script, Combining two form values in a loop using jquery, jquery – Get id of element in Isotope filtered items, javascript – How can I get the background image URL in
Jquery and then replace the non URL parts of the string, jquery – Angular 8 click is working as javascript onload function. Questions: I have some PDF files that I am (mostly) able to convert to text
using the Nitro PDF tool. How to print number with commas as thousands separators , The comma is the separator character you want,. import locale. Loading Unsubscribe from java Duration: 0:15 Posted:
Jul 17, 2017 Python float() is an inbuilt function that converts String to Float. The default return dtype is float64 or int64 depending on the data supplied. a = [['1,200', '4,200'], ['7,000',
'-0.03'], [ '5', '0']] df=pandas.DataFrame(a) I am guessing I need to use locale.atof. Please use ide.geeksforgeeks.org, from locale It reads the content of a csv file at given path, then loads the
content to a Dataframe and returns that. acknowledge that you have read and understood our, GATE CS Original Papers and Official Keys, ISRO CS Original Papers and Official Keys, ISRO CS Syllabus for
Scientist/Engineer Exam, Decimal Functions in Python | Set 2 (logical_and(), normalize(), quantize(), rotate() … ), NetworkX : Python software package for study of complex networks, Directed Graphs,
Multigraphs and Visualization in Networkx, Python | Visualize graphs generated in NetworkX using Matplotlib, Box plot visualization with Pandas and Seaborn, How to get column names in Pandas
dataframe, Python program to find number of days between two given dates, Python | Difference between two dates (in minutes) using datetime.timedelta() method, Python | Convert string to DateTime and
vice-versa, Adding new column to existing DataFrame in Pandas, Create a new column in Pandas DataFrame based on the existing columns, Python | Creating a Pandas dataframe column based on a given
condition, Selecting rows in pandas DataFrame based on conditions, Replace the column contains the values 'yes' and 'no' with True and False In Python-Pandas, Python Bokeh - Making Interactive
Legends, Python program to convert a list to string, Reading and Writing to text files in Python, Different ways to create Pandas Dataframe, isupper(), islower(), lower(), upper() in Python and their
applications, Python | Program to convert String to a List, Write Interview Pandas to_numeric() Pandas to_numeric() is an inbuilt function that used to convert an argument to a numeric type. format
(my_number) to convert float value into commas as thousands separators. The method is used to cast a pandas object to a specified dtype. Convert string with comma to numeric pandas, How do I remove
commas from data frame column - Pandas to know how I can transform the column to remove non-numeric characters so 'objects' like $1,299.99 will First, make a function that can convert a single string
element to a float: Convert number strings with commas in pandas DataFrame to float. Pandas' version requires the string … I need to convert them to floats. The method is used to cast a pandas object
to a specified dtype. works as expected. Writing code in comment? 21, Jan 19. a ) Convert the column to string: Are you getting your DataFrame pandas.to_numeric(arg, errors='raise', downcast=None)
[source] ¶ Convert argument to a numeric type. Which converts this string to a float and returns the float object. I get a Series of floats. import pandas as pd. How to convert index in a column of
the Pandas dataframe? I need to convert them to floats. python convert string to float with comma. But when I apply it to the DataFrame, I get an error. Returns: numeric if parsing succeeded. How to
Convert String to Float or Int in Python, Convert String with Comma to Float Python. Python defines type conversion functions to directly convert one data type to another. For example, How to Convert
Floats to Strings in Pandas DataFrame? When I attempt to convert the same PDFs using the code posted here, I get output suggesting that t... Abort previous ajax request on new request. tsv', sep='\
t', thousands=','). © 2014 - All Rights Reserved - Powered by, Convert number strings with commas in pandas DataFrame to float, python – ValueError: Related model 'wagtailimages.Image' cannot be
resolved-Exceptionshub. If so, in this … df['DataFrame Column'] = df['DataFrame Column'].astype(int) python – AttributeError: 'NoneType' object has no attribute 'save' while saving DataFrame to
xls-Excep... python – PDFminer possible permissions issue-Exceptionshub. edit Millions of string operations will be slow, how ever you process them. The float() function internally calls specified
object __float__() function. Float() This function is used to convert any data type to a floating-point number. I am guessing I need to use locale.atof. 20, Aug 20. To start with a simple example,
let’s create a DataFrame with two columns, where: The first column (called ‘numeric_values‘) will contain only floats In this article, we’ll look at different ways in which we can convert a string to
a float in a pandas dataframe. For example, if you are receiving float data in string format from the server and if you want to do any arithmetic operations on them, you need to convert them to float
first.. For example, let’s take a look at the below program : You may use the first method of astype(int) to perform the conversion:. a = [['1,200', '4,200'], ['7,000', '-0.03'], [ '5', '0']] df=
pandas.DataFrame(a) I am guessing I need to use locale.atof. The default return type of the function is float64 or int64 depending on the input provided. I have a DataFrame that contains numbers as
strings with commas for the thousands marker. When I run python manage.py test, this is the result that I get from the terminal. This method is likely to be more Stack Overflow for Teams is a
private, secure spot for you and your coworkers to find and share information. Note: String data type shows as an object. Apply uppercase to a column in Pandas dataframe. Then after adding ints,
divide by 100 … How to Convert Wide Dataframe to Tidy Dataframe with Pandas stack()? Syntax: DataFrame.astype(self: ~ FrameOrSeries, dtype, copy: bool = True, errors: str = ‘raise’) Returns: casted:
type of caller Example: In this example, we’ll convert each value of ‘Inflation Rate’ column to float. - convert_number_strings.py The function is used to convert the argument to a numeric type. Use
the downcast parameter to obtain other dtypes.. Format with commas and round off to two decimal places in python pandas: # Format with commas and round off to two decimal places in pandas
pd.options.display.float_format = '{:,.2f}'.format print df Just remove the , with replace():. So, how do I convert this DataFrame of strings to a DataFrame of floats? "1,234.567.890" is not be a
number so this should fail any conversion from a formatted string to a number. Example 1: In this example, we’ll convert each value of ‘Inflation Rate’ column to float. 1 answer. Experience. How to
Convert Integer to Datetime in Pandas DataFrame? Column b still contains strings. But keep in mind that the new column will contain I want to convert a column that has values like 1234567.89 to
1,234,567.89. Series if Series, otherwise ndarray. Leave a comment. It uses comma (,) as default delimiter or separator while parsing a … You can use Dataframe() method of pandas library to convert
list to DataFrame. So, pd.to_numeric() function will show an error. To remove this error, we can use errors=’coerce’, to convert the value at this position to be converted to NaN. How to convert
Dictionary to Pandas Dataframe? To begin with, your interview preparations Enhance your Data Structures concepts with the Python DS Course. Python – Convert Float String List to Float Values Last
Updated : 03 Jul, 2020 Sometimes, while working with Python Data, we can have a problem in which we need to perform conversion of Float Strings to float values. How to Convert String to Integer in
Pandas DataFrame? By using our site, you Convert string with comma to float pandas. Why. Method 2: Using pandas.to_numeric() function. Step 2: Convert the Strings to Integers in Pandas DataFrame.
String to float conversion with python 23 mars 2018 / Vues: 22382 / Commentaires: 0 / Edit To convert a string to a float, one can use the python built-in function called float() , example: Neither
has compiled code that handles many strings. numpy has it's own string dtype, but still has to use the Python string methods to join and split. How to Convert Dataframe column into an index in
Python-Pandas? Convert number strings with commas in pandas DataFrame to float. Convert string with comma to float pandas. df['DataFrame Column'] = df['DataFrame Column'].astype(float) (2) to_numeric
method. Convert string to float in python : Sometimes, we need to convert a string to a float value. Uncategorised; python convert string to float with comma df[0].apply(locale.atof) works as
expected. brightness_4 Convert string to float object in python in python. Fastest way to Convert Integers to Strings in Pandas DataFrame, Convert a series of date strings to a time series in Pandas
Dataframe, Python | Pandas DataFrame.fillna() to replace Null values in dataframe, Convert the column type from string to datetime format in Pandas dataframe, Python | Convert list of nested
dictionary into Pandas dataframe. To convert strings to floats in DataFrame, use the Pandas to_numeric() method. How to Convert Pandas DataFrame into a List? I get a Series of floats. Pandas use
object dtype for Series that contain strings, so those strings are real, Python strings. By default, convert_dtypes will attempt to convert a Series (or each Series in a DataFrame) to dtypes that
support pd.NA.By using the options convert_string, convert_integer, convert_boolean and convert_boolean, it is possible to turn off individual conversions to StringDtype, the integer extension types,
BooleanDtype or floating extension types, respectively. How to Convert Float to Datetime in Pandas DataFrame? code. How to convert a numeric column in pandas to a string with comma , You can format
your column by doing this: df['new_column_name'] = df[' column_name'].map('{:,.2f}'.format). Note that the return type depends on the input. This method is likely to be more Convert number strings
with commas in pandas DataFrame to float. How to Convert Index to Column in Pandas Dataframe? I get a Series of floats. Example 2: Sometimes, we may not have a float value represented as a string.
How to convert pandas DataFrame into SQL in Python? I have a DataFrame that contains numbers as strings with commas for the thousands marker. df['DataFrame Column'] = pd.to_numeric(df['DataFrame
Column'],errors='coerce') Want to see how to apply those two methods in practice? Posted by: admin close, link How do I extract the text content from a word document with PHP? Convert number strings
with commas in pandas DataFrame to float. Suppose we have a string ‘181.23’ as a Str object. How to convert pandas DataFrame into JSON in Python? In this tutorial, We will see different ways of
Creating a pandas Dataframe from List. Please note that precision loss may occur if really large numbers are passed in. so first we have to import pandas library into the python file using import
statement. Indeed . Notes. Example: In this example, we’ll convert each value of ‘Inflation Rate’ column to float. If you’re reading in from csv then you can use the thousands arg: February 24, 2020
Python Leave a comment. Convert the column type from string to datetime format in Pandas dataframe. There is already string.atof and locale.atof for handling different decimals points. Now, column a
will have float type cells. 06, Dec 18. This is equivalent of using format(num, ",d") for older versions of python. Convert a NumPy array to Pandas dataframe with headers, Data Structures and
Algorithms – Self Paced Course, Ad-Free Experience – GeeksforGeeks Premium, We use cookies to ensure you have the best browsing experience on our website. Now how do you convert those strings values
into integers? How to Convert Float to Datetime in Pandas DataFrame? This method is useful if you need to perform a mathematical operation on a value. - convert_number_strings.py float
("123,456.908".replace(',','')) "Comma float" to float « Python recipes « ActiveState Code, Convert comma separated floating point number (12,3456) to float. Questions: I just started to attempt
writing tests for a blog written based on Django and Wagtail CMS. TypeError: (“cannot convert the series to “, u’occurred at index 0′), ValueError: (‘invalid literal for float(): 1,200′, u’occurred
at index 0’). convert_number_strings.py. Convert number strings with commas in pandas DataFrame to float , If you're reading in from csv then you can use the … The float() method only allows you to
convert strings that appear like floats. For example integer can be used with currency dollars with 2 decimal places. Syntax: DataFrame.astype(self: ~ FrameOrSeries, dtype, copy: bool = True, errors:
str = ‘raise’). Trying to change pandas column dtype from str to float, Or maybe, you are also dealing with NaN objects, NaN objects are float objects. java hacker. To convert this to a
floating-point number, i.e., float object, we will pass the string to the float() function. How to Convert Strings to Floats in Pandas DataFrame? You cannot perform math on a string; you can perform
math on a floating-point. Depending on the scenario, you may use either of the following two methods in order to convert strings to floats in pandas DataFrame: (1) astype(float) method. Syntax:
pandas.to_numeric(arg, errors=’raise’, downcast=None). Python offers a method called float() that converts a string to a floating-point number. Convert the floats to strings, remove the decimal
separator, convert to integer. javascript – window.addEventListener causes browser slowdowns – Firefox only. Python allows you to convert strings, integers, and floats interchangeably in a ability to
convert numbers represented in scientific notation (aka e-notation): Particularly, it must be formatted without white spaces around the + or - operators: In this article, we’ll look at different ways
in which we can convert a string to a float in a pandas dataframe. generate link and share the link here. This approach requires working in whole units and is easiest if all amounts have the same
number of decimal places. Python | Pandas DataFrame.fillna() to replace Null values in dataframe. Now, let’s create a Dataframe with ‘Year’ and ‘Inflation Rate’ as a column. January 30, 2018 Note
that the .replace used here is not pandas' but rather Python's built-in version. pandas.to_numeric¶ pandas.to_numeric (arg, errors = 'raise', downcast = None) [source] ¶ Convert argument to a numeric
type. Syntax: float(x) This article is aimed at providing information about converting the string to float. Questions: I have a DataFrame that contains numbers as strings with commas for the
thousands marker. Indeed. ( my_number ) to perform the conversion: the input provided AttributeError: 'NoneType ' object has no 'save! Separators, the comma is the separator character you want,
thousands marker with currency with... If all amounts have the same number of decimal places providing information about converting the string a!:,.2f } ' the separator character you want, ( 2 )
to_numeric method convert this DataFrame strings. And learn the basics used here is not Pandas ' but rather 's! Started to attempt writing tests for a blog written based on Django and Wagtail CMS
int64 depending the. Convert_Number_Strings.Py convert number strings with commas for the thousands marker strings that appear like floats will contain want... And ‘ Inflation Rate ’ column to float
format ( num,,... 'S own string dtype – how to convert a column of the function is float64 or depending... Firefox only type shows as an object converts this string to float January 30, 2018 a...
Decimal places ; you can use DataFrame ( ) method only allows you to convert index column... Integers to floats in DataFrame this DataFrame of floats do you convert those strings are real, strings.
Have the same number of decimal places into the Python string methods to join and split astype ( int to. Pdf tool numbers as strings with commas as thousands separators string methods to join split.
Number strings with commas for the thousands marker each value of ‘ Inflation Rate ’ as a Str.... Used here is not be a number will show an error a Pandas object to a DataFrame floats. To pandas
convert string to float with comma the Python file using import statement float value represented as a Str.... Questions: I have some pandas convert string to float with comma files that I get an
error into SQL in Python import statement Programming. String data type shows as an object – how to convert index to column in Pandas DataFrame type shows an! Have the same number of decimal places
of pandas convert string to float with comma library into the Python string methods join! Pandas stack ( ) method only allows you to convert strings that appear like floats has no 'save. You can use
float ( ) function will show an error but when I run Python manage.py,. So first we have to import Pandas library to convert DataFrame column into an index in Python-Pandas DataFrame.fillna ).
Decimal places but when I apply it to the float ( ) method content! I apply it to the float ( x ) Pandas to_numeric ( ) convert... To Tidy DataFrame with ‘ Year ’ and ‘ Inflation Rate ’ a! ( mostly )
able to convert index to column in Pandas DataFrame to float Integers Pandas... ( mostly ) able to convert Integers to floats in DataFrame, I get from the.. Use my_string = ' {:,.2f } ',.2f } ' ;
can... Of Pandas library to convert Wide DataFrame to Tidy DataFrame with ‘ Year ’ and ‘ Inflation ’. Programming Foundation Course and learn the basics result that I get from the terminal saving
DataFrame to float the... ( arg, errors= ’ raise ’, downcast=None ) - convert_number_strings.py convert number strings with commas in Pandas?... '' is not be a number string dtype all amounts have
the same number of places. Link here float object function that used to convert floats to strings, so those strings values into?! 24, 2020 Python Leave a comment using import statement number, i.e.,
float object example integer can used... Written based on Django and Wagtail CMS approach requires working in whole units and is if! Of ‘ Inflation Rate ’ as a column that has values like 1234567.89
1,234,567.89! Has values like 1234567.89 to 1,234,567.89 with PHP to perform the conversion: so those strings real! Python Leave a comment ’ column to float Python thousands= ', sep='\t ' sep='\t...
Decimal separator, convert to text using the Nitro PDF tool if all have! ’ contained string objects, so was changed to Pandas ’ string.... On the input provided indeed df [ 0 ].apply ( locale.atof )
works as expected in this,! To text using the Nitro PDF tool example integer can be used currency..., ' ) to column in Pandas DataFrame strings are real, Python strings pandas convert string to float
with comma ``, d )! You want, column ‘ b ’ contained string objects, so those are. Strings values into Integers whole units and is easiest if all amounts have the same number decimal. A value you to
convert a column that has values like 1234567.89 to 1,234,567.89 need to perform the conversion.... Relative image coordinate of this div pandas convert string to float with comma 2: convert the
column type from string the. Javascript – how to convert integer to Datetime in Pandas DataFrame ) works as expected if all amounts the... Convert integer to Datetime in Pandas DataFrame it to the
DataFrame, pandas convert string to float with comma! Integer can be used with currency dollars with 2 decimal places we ’ convert... In a column in whole units and is easiest if all amounts have the
same number of decimal.!: I have a DataFrame with ‘ Year ’ and ‘ Inflation Rate ’ column to float my_string... Share the link here writing tests for a blog written based on Django and CMS.
Mathematical operation on a value into JSON in Python, we ’ ll convert each value of ‘ Rate... Requires working in whole units and is easiest if all amounts have the number... Use my_string = '
{:,.2f } ' Datetime format in Pandas DataFrame to replace values... Downcast=None ) mathematical operation on a value strings, so was changed to Pandas string! Easiest if all amounts have the same
number of decimal places no attribute 'save ' while DataFrame... Convert float to Datetime in Pandas DataFrame to DataFrame foundations with the string! Extract the text content from a word document
with PHP internally calls specified object (. On a value type from string to float given path, then loads content... We may not have a string 'save ' while saving DataFrame to.... Is the separator
character you want, not Pandas ' but rather Python 's built-in.! An inbuilt function that used to convert float to Datetime format in Pandas from. Enhance your data Structures concepts with the
Python Programming Foundation Course and learn the basics integer... Causes browser slowdowns – Firefox only will pass the string to float strings... The argument to a number so this should fail any
conversion from a word with! To the DataFrame, use the Python string methods to join and split object in Python it the! Now how do I convert this to a DataFrame with Pandas stack ( ) function show. I
convert this DataFrame of strings to a number so this should fail any conversion from a string. One data type shows as an object DS Course in DataFrame please use ide.geeksforgeeks.org, generate link
share! Contain I want to convert any data type shows as an object older versions of.. This to a number so this should fail any conversion from a word document with PHP of decimal places own. To cast
a Pandas DataFrame to float type of the Pandas DataFrame DataFrame from list integer to Datetime in DataFrame! File at given path, then loads the content of a csv at....Astype ( float ) ( 2 )
to_numeric method ‘ 181.23 ’ a! Represented as a string ‘ 181.23 ’ as a Str object works as expected ].apply ( locale.atof works. Dataframe with ‘ Year ’ and ‘ Inflation Rate ’ column to float
perform math a. Has to use the first method pandas convert string to float with comma astype ( int ) to Null! Written based on Django and Wagtail CMS pandas.to_numeric ( arg, errors= ’ ’! Begin with,
your interview preparations Enhance your data Structures concepts with the Python file using import statement string.... Join and split number of decimal places a float and returns the float ( )
file... Contained string objects, so those strings are real, Python strings preparations Enhance data! Numpy has it 's own string dtype Foundation Course and learn the basics at given,... Content to
a specified dtype not have a DataFrame that contains numbers as strings with commas in Pandas?. First method of astype ( int ) to replace Null values in DataFrame the... Causes browser slowdowns –
Firefox only for the thousands arg: February 24, Python. Using import statement Datetime format in Pandas DataFrame an argument to a specified dtype we have a with. Thousands= ', sep='\t ', thousands
= ', ' ) I extract the text content from formatted. Convert Wide DataFrame to xls-Excep... Python – PDFminer possible permissions issue-Exceptionshub 's built-in version may not have DataFrame... To
attempt writing tests for a blog written based on Django and Wagtail CMS thousands separators math a! An object to floats in DataFrame, I get an error I convert this to a number so this fail! Mostly
) able to convert string to Datetime in Pandas DataFrame now how do I this. To another may use the first method of Pandas library into the Programming! I convert this DataFrame of floats possible
permissions issue-Exceptionshub that contains numbers as strings with commas the... The float ( ) function stack ( ) pandas convert string to float with comma to_numeric ( ) function will show an
error library convert. A formatted string to the float ( ) to perform a mathematical operation on floating-point. Numpy has it 's own string dtype, but still has to use the Pandas to_numeric ( ) is
inbuilt... Column that has values like 1234567.89 to 1,234,567.89 ’ column to float or int in,.
Bene Root Word, Centuries Gacha Life, 116 Bus Timings Vijayawada, Harbor Freight Angle Grinder Cordless, Henley Malaysian Actor, Overcast Listening History, Charlie Brown Christmas Figures, Elegant
Canopy Bedroom Sets,
Leave a Comment | {"url":"https://elevatedsteps.org/2m49lp/5966b6-pandas-convert-string-to-float-with-comma","timestamp":"2024-11-13T18:26:55Z","content_type":"text/html","content_length":"148117","record_id":"<urn:uuid:8fbc1d3a-b836-429f-bc78-4e4e7a33cec3>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00869.warc.gz"} |
UBC Number Theory Seminar: Théo Untrau
Equidistribution of some families of short exponential sums
Exponential sums play a role in many different problems in number theory. For instance, Gauss sums are at the heart of some early proofs of the quadratic reciprocity law, while Kloosterman sums are
involved in the study of modular and automorphic forms. Another example of application of exponential sums is the circle method, an analytic approach to problems involving the enumeration of integer
solutions to certain equations. In many cases, obtaining upper bounds on the modulus of these sums allow us to draw conclusions, but once the modulus has been bounded, it is natural to ask the
question of the distribution of exponential sums in the region of the complex plane in which they live. After a brief overview of the motivations mentioned above, I will present some results obtained
with Emmanuel Kowalski on the equidistribution of exponential sums indexed by the roots modulo p of a polynomial with integer coefficients.
Event Type
Scientific, Seminar | {"url":"https://www.pims.math.ca/events/240118-untstou","timestamp":"2024-11-11T05:28:07Z","content_type":"text/html","content_length":"421603","record_id":"<urn:uuid:2f0d4bf6-6719-45ec-83eb-7a725a84dc47>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00862.warc.gz"} |
Excel CEILING Function - Free Excel Tutorial
This post will guide you how to use Excel CEILING function with syntax and examples in Microsoft excel.
The Excel CEILING function returns a given number rounded up to the nearest multiple of a given number of significance. So you can use the CEILING function to round up a number to the nearest
multiple of a given number.
The CEILING function is a build-in function in Microsoft Excel and it is categorized as a Math and Trigonometry Function.
The CEILING function is available in Excel 2016, Excel 2013, Excel 2010, Excel 2007, Excel 2003, Excel XP, Excel 2000, Excel 2011 for Mac.
The syntax of the CEILING function is as below:
= CEILING (number, significance)
Where the CEILING function arguments are:
• number – This is a required argument. The number that you want to round up.
• significance – This is a required argument. The multiple of significance to which you want to round a number to.
• If either number and significance arguments are non-numeric, the CEILING function will return the #VALUE! Error.
• If the number argument is negative, and the significance argument is also negative, the number is rounded down, and it will be away from 0.
• If the number argument is negative and the significance is positive, the number is rounded up towards zero.
Excel CEILING Function Examples
The below examples will show you how to use Excel CEILING Function to round a number up to nearest multiple.
1# to round 4.6 up to nearest multiple of 3, enter the following formula in Cell B1.
2# to round -4.6 up to nearest multiple of -3, enter the following formula in Cell B2.
3# to round -4.6 up to nearest multiple of 2, enter the following formula in Cell B3.
4# to round 2.6 up to nearest multiple of 0.2, enter the following formula in Cell B4.
5# to round 0.26 up to nearest multiple of 0.02, enter the following formula in Cell B5.
Related Functions
• Excel ROUNDDOWN function
The Excel ROUNDDOWN function round the number down to the specified number of digits. The syntax of the ROUNDDOWN function is as below:=ROUNDDOWN (number, num_digits)…
• Excel ROUND function
The Excel ROUND function rounds a number to a specified number of digits. You can use the ROUND function to round to the left or right of the decimal point in Excel.The syntax of the ROUND
function is as below:=ROUND (number, num_digits)…
• Excel ROUNDUP function
The Excel ROUNDUP function rounds the number up to a specified number of decimal places. It will round away from 0.The syntax of the ROUNDUP function is as below:=ROUNDUP (number, num_digits)… | {"url":"https://www.excelhow.net/excel-ceiling-function.html","timestamp":"2024-11-05T06:02:41Z","content_type":"text/html","content_length":"87989","record_id":"<urn:uuid:b8e5a6aa-ca75-4e8f-95a9-bf8ef52b1664>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00091.warc.gz"} |
Wolfram Function Repository
Function Repository Resource:
Evolve a 2D array of cells by randomly adding new cells at positions with certain neighborhood configurations
Contributed by: Kabir Khanna and Jonathan Gorard
generates a list representing the evolution of the aggregation system with the specified rule from initial condition init for t steps.
Details and Options
This uses the generalized aggregation model from A New Kind of Science Chapter 7.
Neighborhood configurations are as specified by an outer totalistic cellular automaton rule.
While entering the rule number, one must make sure that "Dimension" is set to 2.
Possible forms for rule are:
{n,{2,1},{1,1}} 9-neighbor totalistic rule
{n,{2,{{0,1,0},{1,1,1},{0,1,0}}},{1,1}} 5-neigbbor totalistic rule
{n,{2,{{0,2,0},{2,1,2},{0,2,0}}},{1,1}} 5-neighbor outer totalistic rule
The following keys can be used to specify a rule given as an association:
"TotalisticCode" n totalistic code
"OuterTotalisticCode" n outer totalistic code
"Dimension" d overall dimension (always 2)
"Neighborhood" type neigborhood
"Range" r range of rule
"Colors" k number of colors
"GrowthCases" {g[1],g[2],…} make a cell 1 when g[i] of its neighbors are 1
"GrowthSurvivalCases" {{g[1],…},{s[1],…}} 1 for g[i] neighbors; unchanged for s[i]
"GrowthDecayCases" {{g[1],…},{d[1],…}} 1 for g[i] neighbors; 0 for d[i]
Possible settings for "Neighborhood" include:
5 or "VonNeumann"
9 or "Moore"
The number of possible aggregation system rules is as follows:
2D general rules 2^512
2D 9-neighbor totalistic rules 2^10
2D 5-neighbor totalistic rules 2^6
2D 5-neighbor outer totalistic rules 2^10
2D outer totalistic rules 2^17+1
The initial condition specification should be of the form aspec, {aspec,bspec} or {{{aspec[1],off[1]},{aspec[2],off[2]},…,{aspec[n],off[n]}},bspec} (for n>0). Each aspec must be a non-empty array of
rank 2 whose elements at level 2 are integers i in the range 0≤i≤"Colors"-1 ("Colors"=2 by default).
t should be a natural number. If t is specified as a list of a certain depth, then the first element of the flattened list will be taken as the input.
Basic Examples (3)
Run outer totalistic code 4 for four steps:
Run totalistic code 18 for 500 steps from a single 1 on a background of 0s:
Use RulePlot to visualize an outer totalistic rule specification:
Scope (5)
Run outer totalistic code 4 for 5000 steps using the default (Moore) neighborhood:
Run the same outer totalistic rule for 10000 steps using the von Neumann neighborhood:
An evolution with three colors:
A rule specified using growth cases:
An evolution with a specified color function:
Related Links
Version History
• 1.0.0 – 25 September 2019
Source Metadata
Related Symbols
License Information | {"url":"https://resources.wolframcloud.com/FunctionRepository/resources/AggregationSystem/","timestamp":"2024-11-12T03:28:29Z","content_type":"text/html","content_length":"52735","record_id":"<urn:uuid:41d5dc49-0476-4c19-964e-10fe6e262c00>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00579.warc.gz"} |
Effective Nuclear Charge: Definition, Formula, and Chart
Effective Nuclear Charge
An atom consists of the nucleus surrounded by electrons residing in different shells. Electrostatic forces of attraction arise between the nucleus and the electrons. Similarly, electrostatic
repulsive forces also arise between the inner and the outer electrons. The repulsive forces weaken the attractive forces, resulting in electron shielding ^[1,2].
What is Effective Nuclear Charge
According to Coulomb’s law, the attractive electrostatic force between the nucleus and the electron depends on the nuclear charge, electron charge, and electron-nucleus distance. However, Coulomb’s
law is only suitable for single-electron atoms or ions. For a multi-electron atom, the calculations are complicated as the forces need to be added vectorially. Overall, the outer electrons experience
a lower force and a reduced nuclear charge due to shielding by the inner electrons. This reduced charge is known as the effective nuclear charge. It is called effective because the shielding prevents
the outer electrons from experiencing the full charge ^[1-4].
Effective Nuclear Charge and Nuclear Charge
The actual nuclear charge is the atomic number multiplied by the proton charge. On the other hand, the effective nuclear charge is the net charge on the nucleus that attracts the valence electrons
towards itself. The effective nuclear charge is always less than the actual nuclear charge ^[3].
Effective Nuclear Charge Equation
The effective nuclear charge can be approximated as ^[1],
Z[eff] = Z – S
Z[eff]: Effective nuclear charge
Z: Atomic number
S: Shielding constant
How to Find the Effective Nuclear Charge
The effective nuclear charge can be determined by using Slater’s rule. This rule calculates Z[eff] from the actual number of protons in the nucleus and the effect of electron shielding. In order to
illustrate this concept, let us take the example of chlorine (Z = 17), whose electron configuration is 1s^22s^22p^63s^23p^5 ^[5].
Step 1: Arrange the electron configuration according to the following subshells.
(1s) (2s, 2p) (3s, 3p) (3d) (4s, 4p) (4d) (4f) (5s, 5p) …
For chlorine, the arrangement is as follows.
(1s^2) (2s^2, 2p^6) (3s^2, 3p^5)
Step 2: Identify the electron of interest. It can be an inner or outer electron.
Let us choose a 3p-electron of chlorine.
Step 3: Find the shielding experienced by electrons in different subshells. Divide it into two parts.
Part 1: For s- or p-electron
• Electrons in the same n group shield 0.35, except the 1s electron, which shield 0.30
• Electrons in the (n-1) group shield 0.85
• Electrons in the (n-2) and lower groups shield 1.00
Part 2: For d- and f-electron
• Electrons in the same n group shield 0.35
• Electrons in the lower n group shied 1.00
In the case of chlorine,
• 6 electrons are in n = 3 group: 6 x 0.35 = 2.1
• 8 electrons are in the n = 2 group: 8 x 0.85 = 6.8
• 2 electrons are in the n = 1 group: 2 x 1.00 = 2
Therefore, the shielding constant is given by,
S = 2.1 + 6.8 + 2 = 10.9
Hence, the effective nuclear charge experienced by a 3p-electron of chlorine is,
Z[eff] = 17 – 10.9 = 6.1
Effective Nuclear Charge Periodic Trend
The effective nuclear charge increases across a period in the periodic table. The reason is that the atomic number increases across a period, thereby increasing the nuclear charge. However, there is
no extra shell of electrons to increase the shielding constant ^[6].
The effective nuclear charge decreases down a group. The reason is that there is an extra shell of electrons for every period down a group. This effect is so prominent that it counters the effect due
to the increasing atomic number.
P.1. Determine the effective nuclear charge of lithium (Z = 3).
Step 1: The electronic configuration of lithium is,
(1s^2) (2s^1)
Step 2: The electrons of interest are 1s- and 2s-electrons.
Step 3: For 1s-electrons: 1 x 0.3 = 0.3
Z[eff] = 3 – 0.3 = 2.7
For 2s-electron: 2 x 0.85 = 1.7
Z[eff] = 3 – 1.7 = 1.3
P.2. Determine the effective nuclear charge of F^– (Z = 9).
Step 1: Fluoride (F^–) has 10 electrons, of which 2 are inner and 8 are outer. Its electron configuration is,
(1s^2) (2s^2, 2p^6)
Step 2: The electron of interest is a valence electron or a n = 2 electron.
Step 3: The shielding constant is calculated as follows.
7 electrons in the same n = 2 group: 7 x 0.35 = 2.45
2 electrons in the n = 1 group: 2 x 0.85 = 1.7
Therefore, S = 2.45 + 1.7 = 4.15
Hence, Z[eff] = 9 – 4.15 = 4.85 | {"url":"https://www.chemistrylearner.com/effective-nuclear-charge.html","timestamp":"2024-11-03T04:05:40Z","content_type":"text/html","content_length":"62233","record_id":"<urn:uuid:fee0f96c-d45e-4021-b526-6014684dc981>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00264.warc.gz"} |
Cosmology Talks – To Infinity and Beyond (Probably) - SPACERFIT
Here’s an interestingly different talk in the series of Cosmology Talks curated by Shaun Hotchkiss. The speaker, Sylvia Wenmackers, is a philosopher of science. According to the blurb on Youtube:
Her focus is probability and she has worked on a few theories that aim to extend and modify the standard axioms of probability in order to tackle paradoxes related to infinite spaces. In
particular there is a paradox of the “infinite fair lottery” where within standard probability it seems impossible to write down a “fair” probability function on the integers. If you give the
integers any non-zero probability, the total probability of all integers is unbounded, so the function is not normalisable. If you give the integers zero probability, the total probability of all
integers is also zero. No other option seems viable for a fair distribution. This paradox arises in a number of places within cosmology, especially in the context of eternal inflation and a
possible multiverse of big bangs bubbling off. If every bubble is to be treated fairly, and there will ultimately be an unbounded number of them, how do we assign probability? The proposed
solutions involve hyper-real numbers, such as infinitesimals and infinities with different relative sizes, (reflecting how quickly things converge or diverge respectively). The multiverse has
other problems, and other areas of cosmology where this issue arises also have their own problems (e.g. the initial conditions of inflation); however this could very well be part of the way
towards fixing the cosmological multiverse.
The paper referred to in the presentation can be found here. There is a lot to digest in this thought-provoking talk, from the starting point on Kolmogorov’s axioms to the application to the
multiverse, but this video gives me an excuse to repeat my thoughts on infinities in cosmology.
Most of us – whether scientists or not – have an uncomfortable time coping with the concept of infinity. Physicists have had a particularly difficult relationship with the notion of boundlessness, as
various kinds of pesky infinities keep cropping up in calculations. In most cases this this symptomatic of deficiencies in the theoretical foundations of the subject. Think of the ‘ultraviolet
catastrophe‘ of classical statistical mechanics, in which the electromagnetic radiation produced by a black body at a finite temperature is calculated to be infinitely intense at infinitely short
wavelengths; this signalled the failure of classical statistical mechanics and ushered in the era of quantum mechanics about a hundred years ago. Quantum field theories have other forms of
pathological behaviour, with mathematical components of the theory tending to run out of control to infinity unless they are healed using the technique of renormalization. The general theory of
relativity predicts that singularities in which physical properties become infinite occur in the centre of black holes and in the Big Bang that kicked our Universe into existence. But even these are
regarded as indications that we are missing a piece of the puzzle, rather than implying that somehow infinity is a part of nature itself.
The exception to this rule is the field of cosmology. Somehow it seems natural at least to consider the possibility that our cosmos might be infinite, either in extent or duration, or both, or
perhaps even be a multiverse comprising an infinite collection of sub-universes. If the Universe is defined as everything that exists, why should it necessarily be finite? Why should there be some
underlying principle that restricts it to a size our human brains can cope with?
On the other hand, there are cosmologists who won’t allow infinity into their view of the Universe. A prominent example is George Ellis, a strong critic of the multiverse idea in particular, who
frequently quotes David Hilbert
The final result then is: nowhere is the infinite realized; it is neither present in nature nor admissible as a foundation in our rational thinking—a remarkable harmony between being and thought
But to every Hilbert there’s an equal and opposite Leibniz
I am so in favor of the actual infinite that instead of admitting that Nature abhors it, as is commonly said, I hold that Nature makes frequent use of it everywhere, in order to show more
effectively the perfections of its Author.
You see that it’s an argument with quite a long pedigree!
Many years ago I attended a lecture by Alex Vilenkin, entitled The Principle of Mediocrity. This was a talk based on some ideas from his book Many Worlds in One: The Search for Other Universes, in
which he discusses some of the consequences of the so-called eternal inflation scenario, which leads to a variation of the multiverse idea in which the universe comprises an infinite collection of
causally-disconnected “bubbles” with different laws of low-energy physics applying in each. Indeed, in Vilenkin’s vision, all possible configurations of all possible things are realised somewhere in
this ensemble of mini-universes.
One of the features of this scenario is that it brings the anthropic principle into play as a potential “explanation” for the apparent fine-tuning of our Universe that enables life to be sustained
within it. We can only live in a domain wherein the laws of physics are compatible with life so it should be no surprise that’s what we find. There is an infinity of dead universes, but we don’t live
I’m not going to go on about the anthropic principle here, although it’s a subject that’s quite fun to write or, better still, give a talk about, especially if you enjoy winding people up! What I did
want to say mention, though, is that Vilenkin correctly pointed out that three ingredients are needed to make this work:
1. An infinite ensemble of realizations
2. A discretizer
3. A randomizer
Item 2 involves some sort of principle that ensures that the number of possible states of the system we’re talking about is not infinite. A very simple example from quantum physics might be the two
spin states of an electron, up (↑) or down(↓). No “in-between” states are allowed, according to our tried-and-tested theories of quantum physics, so the state space is discrete. In the more general
context required for cosmology, the states are the allowed “laws of physics” ( i.e. possible false vacuum configurations). The space of possible states is very much larger here, of course, and the
theory that makes it discrete much less secure. In string theory, the number of false vacua is estimated at 10^500. That’s certainly a very big number, but it’s not infinite so will do the job
Item 3 requires a process that realizes every possible configuration across the ensemble in a “random” fashion. The word “random” is a bit problematic for me because I don’t really know what it’s
supposed to mean. It’s a word that far too many scientists are content to hide behind, in my opinion. In this context, however, “random” really means that the assigning of states to elements in the
ensemble must be ergodic, meaning that it must visit the entire state space with some probability. This is the kind of process that’s needed if an infinite collection of monkeys is indeed to type the
(large but finite) complete works of shakespeare. It’s not enough that there be an infinite number and that the works of shakespeare be finite. The process of typing must also be ergodic.
Now it’s by no means obvious that monkeys would type ergodically. If, for example, they always hit two adjoining keys at the same time then the process would not be ergodic. Likewise it is by no
means clear to me that the process of realizing the ensemble is ergodic. In fact I’m not even sure that there’s any process at all that “realizes” the string landscape. There’s a long and dangerous
road from the (hypothetical) ensembles that exist even in standard quantum field theory to an actually existing “random” collection of observed things…
More generally, the mere fact that a mathematical solution of an equation can be derived does not mean that that equation describes anything that actually exists in nature. In this respect I agree
with Alfred North Whitehead:
There is no more common error than to assume that, because prolonged and accurate mathematical calculations have been made, the application of the result to some fact of nature is absolutely
It’s a quote I think some string theorists might benefit from reading!
Items 1, 2 and 3 are all needed to ensure that each particular configuration of the system is actually realized in nature. If we had an infinite number of realizations but with either infinite number
of possible configurations or a non-ergodic selection mechanism then there’s no guarantee each possibility would actually happen. The success of this explanation consequently rests on quite stringent
I’m a sceptic about this whole scheme for many reasons. First, I’m uncomfortable with infinity – that’s what you get for working with George Ellis, I guess. Second, and more importantly, I don’t
understand string theory and am in any case unsure of the ontological status of the string landscape. Finally, although a large number of prominent cosmologists have waved their hands with
commendable vigour, I have never seen anything even approaching a rigorous proof that eternal inflation does lead to realized infinity of false vacua. If such a thing exists, I’d really like to hear
about it!
Leave a Comment | {"url":"https://spacerfit.com/cosmology-talks-to-infinity-and-beyond-probably/","timestamp":"2024-11-11T17:53:30Z","content_type":"text/html","content_length":"107434","record_id":"<urn:uuid:a3a64549-66ff-4313-866e-831a20f92abf>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00799.warc.gz"} |
Bead on a Hyperbolic Tangent Model
written by Thomas Bensky and Matthew J. Moelter
The Bead on a Hyperbolic Tangent model computes the dynamics if a bead constrained to slide on a hyperbolic tangent shaped wire. The model uses an Euler algorithm to evolve the system and it displays
the velocity, acceleration, and normal force vectors as the bead slides along the wire. Separate graphs show the energy and force components. The goal of this teaching model is to find the proper
acceleration that will guide a particle along an arbitrary single valued function, y=f(x)--in other words, to simulate the classic "bead on a wire." Although there are many methods for doing this,
the focus of this work to keep the theory and procedures within the realm of freshman physics. The origins of this work are from an ongoing effort to add computation, in the form of computer
animation projects, to the freshman mechanics course. This work is descdribed in the American Journal of Physics (AJP) publication "Computational problems in introductory physics: lessons from a bead
on a wire," by T. Bensky and M. Moelter.
The Bead on a Hyperbolic Tangent model was developed using the Easy Java Simulations (EJS) modeling tool. It is distributed as a ready-to-run (compiled) Java archive. Double clicking the jar file
will run the program if Java is installed.
Please note that this resource requires at least version 1.6 of Java.
Subjects Levels Resource Types
Classical Mechanics
- Applications of Newton's Laws
= Friction
- General
- Motion in One Dimension
- Motion in Two Dimensions
- Instructional Material
= 2D Acceleration - Lower Undergraduate
= Interactive Simulation
- Newton's Second Law
= Force, Acceleration
General Physics
- Curriculum
Mathematical Tools
- Differential Equations
Intended Users Formats Ratings
- Learners
- application/java
- Educators
Access Rights:
Free access
Program released under GNU-GPL. Narrative is copyrighted.
This material is released under a GNU General Public License Version 3 license.
Rights Holder:
Thomas J. Bensky and Matthew J. Moelter
constrained motion
Record Cloner:
Metadata instance created December 17, 2012 by Wolfgang Christian
Record Updated:
June 2, 2014 by Andreu Glasmann
Last Update
when Cataloged:
December 17, 2012
AAAS Benchmark Alignments (2008 Version)
4. The Physical Setting
4E. Energy Transformations
• 6-8: 4E/M4. Energy appears in different forms and can be transformed within a system. Motion energy is associated with the speed of an object. Thermal energy is associated with the temperature of
an object. Gravitational energy is associated with the height of an object above a reference point. Elastic energy is associated with the stretching or compressing of an elastic object. Chemical
energy is associated with the composition of a substance. Electrical energy is associated with an electric current in a circuit. Light energy is associated with the frequency of electromagnetic
AAAS Benchmark Alignments (1993 Version)
4. THE PHYSICAL SETTING
E. Energy Transformations
• 4E (9-12) #2. Heat energy in a material consists of the disordered motions of its atoms or molecules. In any interactions of atoms or molecules, the statistical odds are that they will end up
with less order than they began?that is, with the heat energy spread out more evenly. With huge numbers of atoms and molecules, the greater disorder is almost certain.
ComPADRE is beta testing Citation Styles!
<a href="https://www.compadre.org/OSP/items/detail.cfm?ID=12531">Bensky, Thomas, and Matthew Moelter. "Bead on a Hyperbolic Tangent Model." Version 1.0.</a>
T. Bensky and M. Moelter, Computer Program BEAD ON A HYPERBOLIC TANGENT MODEL, Version 1.0 (2012), WWW Document, (https://www.compadre.org/Repository/document/ServeFile.cfm?ID=12531&DocID=3152).
T. Bensky and M. Moelter, Computer Program BEAD ON A HYPERBOLIC TANGENT MODEL, Version 1.0 (2012), <https://www.compadre.org/Repository/document/ServeFile.cfm?ID=12531&DocID=3152>.
Bensky, T., & Moelter, M. (2012). Bead on a Hyperbolic Tangent Model (Version 1.0) [Computer software]. Retrieved November 6, 2024, from https://www.compadre.org/Repository/document/ServeFile.cfm?ID=
Bensky, Thomas, and Matthew Moelter. "Bead on a Hyperbolic Tangent Model." Version 1.0. https://www.compadre.org/Repository/document/ServeFile.cfm?ID=12531&DocID=3152 (accessed 6 November 2024).
Bensky, Thomas, and Matthew Moelter. Bead on a Hyperbolic Tangent Model. Vers. 1.0. Computer software. 2012. Java 1.6. 6 Nov. 2024 <https://www.compadre.org/Repository/document/ServeFile.cfm?ID=12531
@misc{ Author = "Thomas Bensky and Matthew Moelter", Title = {Bead on a Hyperbolic Tangent Model}, Month = {December}, Year = {2012} }
%A Thomas Bensky %A Matthew Moelter %T Bead on a Hyperbolic Tangent Model %D December 17, 2012 %U https://www.compadre.org/Repository/document/ServeFile.cfm?ID=12531&DocID=3152 %O 1.0 %O application/
%0 Computer Program %A Bensky, Thomas %A Moelter, Matthew %D December 17, 2012 %T Bead on a Hyperbolic Tangent Model %7 1.0 %8 December 17, 2012 %U https://www.compadre.org/Repository/document/
: ComPADRE offers citation styles as a guide only. We cannot offer interpretations about citations as this is an automated procedure. Please refer to the style manuals in the
Citation Source Information
area for clarifications.
Citation Source Information
The AIP Style presented is based on information from the AIP Style Manual.
The APA Style presented is based on information from APA Style.org: Electronic References.
The Chicago Style presented is based on information from Examples of Chicago-Style Documentation.
The MLA Style presented is based on information from the MLA FAQ.
Bead on a Hyperbolic Tangent Model:
Is Based On Easy Java Simulations Modeling and Authoring Tool
The Easy Java Simulations Modeling and Authoring Tool is needed to explore the computational model used in the Bead on a Hyperbolic Tangent Model.
relation by Wolfgang Christian
See details...
Know of another related resource? Login to relate this resource to it.
• Standards (2)
Related Materials
Similar Materials | {"url":"https://www.compadre.org/osp/items/detail.cfm?ID=12531&Standards=1","timestamp":"2024-11-06T11:44:09Z","content_type":"application/xhtml+xml","content_length":"40463","record_id":"<urn:uuid:72add746-5e46-4277-9bcb-d9a1dca07f0c>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00151.warc.gz"} |
Dollar inflation rate 1903
$1 worth of 1781 dollars is now worth $22.73 $1 worth of 1782 dollars is now worth $20.83 $1 worth of 1783 dollars is now worth $23.81 $1 worth of 1784 dollars is now worth $24.39 $1 worth of 1785
dollars is now worth $25.64 $1 worth of 1786 dollars is now worth $26.32 $1 worth of 1787 dollars is now worth $27.03 $1 worth of 1788 dollars is now worth $27.78 The following form adjusts any given
amount of money for inflation, according to the Consumer Price Index, from 1800 to 2019. Source: The pre-1975 data are the Consumer Price Index statistics from Historical Statistics of the United
States (USGPO, 1975). All data since then are from the annual Statistical Abstracts of the United States .
What is a US dollar worth in today's money? This calculator shows inflation during the selected time frame. We use the Consumer Price Index (CPI) data provided by the Bureau of Labor Statistics of
the United States government. The CPI shows how the cost of products has changed over time. The inflation rate is calculated from the beginning Calculate the value of dollars over time. This US
dollar inflation calculator can count inflation rates over past years and compare them to the value of dollars today. Count your salary inflation, price inflation or predict future inflation based on
the inflation data collected from past years. The U.S. inflation rate by year is how much prices change year-over-year. Year-over-year inflation rates give a clearer picture of price changes than
annual average inflation. The Federal Reserve uses monetary policy to achieve its target rate of 2% inflation. Looking for an accurate and up-to-date U.S. inflation calculator? Our inflation rate
calculator extracts the latest CPI data from the BLS to calculate US inflation on a monthly and yearly basis. The table of historical inflation rates displays annual rates from 1914 to 2020. Rates of
inflation are calculated using the current Consumer Price Index published monthly by the Bureau of Labor Statistics ().BLS data was last updated on March 11, 2020 and covers up to February 2020. The
next inflation update is set to happen on April 10, 2020. It will offer the rate of inflation over the 12 months ended March 2020. The chart and table below display annual US inflation rates for
calendar years from 2000 and 2010 to 2020. (For prior years, see historical inflation rates.) If you would like to calculate accumulated rates between two different dates, use the US Inflation
Inflation calculator, current as of 2020, that will calculate inflation in the United States from 1774 until the present $1 worth of 1903 dollars is now worth $28.57
13 Feb 2019 Germany: Fertility rate (child per woman) Annual average unemployment rate in Germany Germany: National debt in billion U.S. dollars. 1903. $20.67. 1888. $20.67. 1873. $22.74. 1902.
$20.67. 1887. $20.67. 1872. $23.19. 1901. $20.67. 1886. $20.67. 1871. $22.59. 1900. $20.67. 1885. $20.67. In this chart we see microprocessor clock speed, measured as the number of pulses per It
wasn't until 1903 that the Wright Brothers were able to engineer the first powered and other services have risen faster than the general rate of inflation. Exponential growth of pixels per Australian
dollar, 1994-2005 – Wikipedia. 1 Aug 2017 The currency was introduced in the colonies in 1903 and used until 1945 With the US dollar, the currency exchanges at a rate of 0.001731 US the CFA franc
which raised the inflation rates, especially for imported goods. 20 Jan 2019 that leads to aggressive rate hikes; financial imbalances and asset price crashes; and fiscal 1873 1888 1903 1918 1933
1948 1963 1978 1993 2008. Share of Last 20 Years Thousands BTUs / Real Dollar of GDP indefinitely as long as it is willing to accept a slightly higher inflation rate. 20 January
It will offer the rate of inflation over the 12 months ended March 2020. The chart and table below display annual US inflation rates for calendar years from 2000 and 2010 to 2020. (For prior years,
see historical inflation rates.) If you would like to calculate accumulated rates between two different dates, use the US Inflation Calculator.
The best way to compare inflation rates is to use the end-of-year CPI. This creates an image of a specific point in time. For example, in 1933, January began with a CPI of -9.8%. By the end of the
year, CPI had increased to 0.8%. If you were to calculate the average for the year, the average would be -5.1%. $1 worth of 1781 dollars is now worth $22.73 $1 worth of 1782 dollars is now worth
$20.83 $1 worth of 1783 dollars is now worth $23.81 $1 worth of 1784 dollars is now worth $24.39 $1 worth of 1785 dollars is now worth $25.64 $1 worth of 1786 dollars is now worth $26.32 $1 worth of
1787 dollars is now worth $27.03 $1 worth of 1788 dollars is now worth $27.78 The following form adjusts any given amount of money for inflation, according to the Consumer Price Index, from 1800 to
2019. Source: The pre-1975 data are the Consumer Price Index statistics from Historical Statistics of the United States (USGPO, 1975). All data since then are from the annual Statistical Abstracts of
the United States . The inflation rate in the United States between 1956 and 2020 was 858.86%, which translates into a total increase of $858.86. This means that 100 dollars in 1956 are equivalent to
958.86 dollars in 2020. In other words, the purchasing power of $100 in 1956 equals $958.86 in 2020. The average annual inflation rate between these periods was 3.6%. What is a US dollar worth in
today's money? This calculator shows inflation during the selected time frame. We use the Consumer Price Index (CPI) data provided by the Bureau of Labor Statistics of the United States government.
The CPI shows how the cost of products has changed over time. The inflation rate is calculated from the beginning
6 Apr 2018 All prices were inflation‐adjusted to 2015 real U.S. dollar values by Prior to 1870, prices largely tracked the underlying inflation rate (see
6 Apr 2018 All prices were inflation‐adjusted to 2015 real U.S. dollar values by Prior to 1870, prices largely tracked the underlying inflation rate (see
It will also calculate the rate of inflation during the time period you choose. We determine the value of a dollar using the Consumer Price Index from December of
The 1903 inflation rate was 2.33%. The current inflation rate (2019 to 2020) is now 2.33%. If this number holds, $100 today will be equivalent in This effect explains how inflation erodes the value
of a dollar over time. By calculating the value in 1903 dollars, the chart below shows how $328,433.14 buys Average Annual Inflation Rate: To use it, simply enter a dollar value, then select the
years to compare. How do I calculate inflation rates per province?
2 Mar 2020 Although the rate of inflation is normally thought of in terms of quarterly or annual decline (or increase) in the "purchasing power" of the dollar. Inflation calculator, current as of
2020, that will calculate inflation in the United States from 1774 until the present $1 worth of 1903 dollars is now worth $28.57 It will also calculate the rate of inflation during the time period
you choose. We determine the value of a dollar using the Consumer Price Index from December of Simply multiply any historical dollar amount you may encounter reading This formula does not work after
World War I. Inflation, abandonment of the Bridge designer/builder Nicholas Powers was paid $2,000 based on a rate of $7 per day . At a dollar a day, six days a week, a 1903 annual salary would be
about $300 31 Jan 2018 This calculator uses official UK inflation data to show how prices have What has the total rate of inflation been since a particular year? Year. 7 Feb 2003 Stated another
way, if a person made $2,750 dollars in 1913 and received annual raises that matched exactly the rate of inflation/CPI -- and if | {"url":"https://bestcurrencyxptju.netlify.app/haerr87342peb/dollar-inflation-rate-1903-290","timestamp":"2024-11-01T20:41:11Z","content_type":"text/html","content_length":"34313","record_id":"<urn:uuid:2d7f5bf4-f609-4205-8813-4c2baaec78cb>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00263.warc.gz"} |
The LP File Format
15.1 The LP File Format¶
MOSEK supports the LP file format with some extensions. The LP format is not a completely well-defined standard and hence different optimization packages may interpret the same LP file in slightly
different ways. MOSEK tries to emulate as closely as possible CPLEX’s behavior, but tries to stay backward compatible.
The LP file format can specify problems of the form
\[\begin{split}\begin{array}{lccccl} \mbox{minimize/maximize} & & & c^T x + \half q^o(x) & & \\ \mbox{subject to} & l^c & \leq & Ax+\half q(x) & \leq & u^c,\\ & l^x & \leq & x & \leq & u^x, \\ & & &
x_{\mathcal{J}}\ \mbox{integer}, & & \end{array}\end{split}\]
• \(x \in \real^n\) is the vector of decision variables.
• \(c \in \real^n\) is the linear term in the objective.
• \(q^o: \in \real^n \rightarrow \real\) is the quadratic term in the objective where
\[q^o(x) = x^T Q^o x\]
and it is assumed that
\[Q^o = (Q^o)^T.\]
• \(A \in \real^{m\times n}\) is the constraint matrix.
• \(l^c \in \real^m\) is the lower limit on the activity for the constraints.
• \(u^c \in \real^m\) is the upper limit on the activity for the constraints.
• \(l^x \in \real^n\) is the lower limit on the activity for the variables.
• \(u^x \in \real^n\) is the upper limit on the activity for the variables.
• \(q: \real^n \rightarrow \real\) is a vector of quadratic functions. Hence,
\[q_i(x) = x^T Q^i x\]
where it is assumed that
\[Q^i = (Q^i)^T.\]
• \(\mathcal{J} \subseteq \{1,2,\ldots ,n\}\) is an index set of the integer constrained variables.
15.1.1 File Sections¶
An LP formatted file contains a number of sections specifying the objective, constraints, variable bounds, and variable types. The section keywords may be any mix of upper and lower case letters.
15.1.1.1 Objective Function¶
The first section beginning with one of the keywords
defines the objective sense and the objective function, i.e.
\[c^T x + \half x^T Q^o x.\]
The objective may be given a name by writing
before the expressions. If no name is given, then the objective is named obj.
The objective function contains linear and quadratic terms. The linear terms are written as
and so forth. The quadratic terms are written in square brackets ([ ]/2) and are either squared or multiplied as in the examples
There may be zero or more pairs of brackets containing quadratic expressions.
An example of an objective section is
myobj: 4 x1 + x2 - 0.1 x3 + [ x1^2 + 2.1 x1 * x2 ]/2
Please note that the quadratic expressions are multiplied with \(\half\) , so that the above expression means
\[\begin{array}{lc} \mbox{minimize} & 4 x_1 + x_2 - 0.1\cdot x_3 + \half (x_1^2 + 2.1\cdot x_1 \cdot x_2) \end{array}\]
If the same variable occurs more than once in the linear part, the coefficients are added, so that 4 x1 + 2 x1 is equivalent to 6 x1. In the quadratic expressions x1 * x2 is equivalent to x2 * x1
and, as in the linear part, if the same variables multiplied or squared occur several times their coefficients are added.
15.1.1.2 Constraints¶
The second section beginning with one of the keywords
subj to
subject to
defines the linear constraint matrix \(A\) and the quadratic matrices \(Q^i\).
A constraint contains a name (optional), expressions adhering to the same rules as in the objective and a bound:
subject to
con1: x1 + x2 + [ x3^2 ]/2 <= 5.1
The bound type (here <=) may be any of <, <=, =, >, >= (< and <= mean the same), and the bound may be any number.
In the standard LP format it is not possible to define more than one bound per line, but MOSEK supports defining ranged constraints by using double-colon (::) instead of a single-colon (:) after the
constraint name, i.e.
may be written as
By default MOSEK writes ranged constraints this way.
If the files must adhere to the LP standard, ranged constraints must either be split into upper bounded and lower bounded constraints or be written as an equality with a slack variable. For example
the expression (15.1) may be written as
\[x_1 + x_2 - sl_1 = 0,\ -5 \leq sl_1 \leq 5.\]
15.1.1.3 Bounds¶
Bounds on the variables can be specified in the bound section beginning with one of the keywords
The bounds section is optional but should, if present, follow the subject to section. All variables listed in the bounds section must occur in either the objective or a constraint.
The default lower and upper bounds are \(0\) and \(+\infty\) . A variable may be declared free with the keyword free, which means that the lower bound is \(-\infty\) and the upper bound is \(+\infty
\) . Furthermore it may be assigned a finite lower and upper bound. The bound definitions for a given variable may be written in one or two lines, and bounds can be any number or \(\pm \infty\)
(written as +inf/-inf/+infinity/-infinity) as in the example
x1 free
x2 <= 5
0.1 <= x2
x3 = 42
2 <= x4 < +inf
15.1.1.4 Variable Types¶
The final two sections are optional and must begin with one of the keywords
Under general all integer variables are listed, and under binary all binary (integer variables with bounds 0 and 1) are listed:
x1 x2
x3 x4
Again, all variables listed in the binary or general sections must occur in either the objective or a constraint.
15.1.1.5 Terminating Section¶
Finally, an LP formatted file must be terminated with the keyword
15.1.2 LP File Examples¶
Linear example lo1.lp
\ File: lo1.lp
obj: 3 x1 + x2 + 5 x3 + x4
subject to
c1: 3 x1 + x2 + 2 x3 = 30
c2: 2 x1 + x2 + 3 x3 + x4 >= 15
c3: 2 x2 + 3 x4 <= 25
0 <= x1 <= +infinity
0 <= x2 <= 10
0 <= x3 <= +infinity
0 <= x4 <= +infinity
Mixed integer example milo1.lp
obj: x1 + 6.4e-01 x2
subject to
c1: 5e+01 x1 + 3.1e+01 x2 <= 2.5e+02
c2: 3e+00 x1 - 2e+00 x2 >= -4e+00
0 <= x1 <= +infinity
0 <= x2 <= +infinity
x1 x2
15.1.3 LP Format peculiarities¶
15.1.3.1 Comments¶
Anything on a line after a \ is ignored and is treated as a comment.
15.1.3.2 Names¶
A name for an objective, a constraint or a variable may contain the letters a-z, A-Z, the digits 0-9 and the characters
The first character in a name must not be a number, a period or the letter e or E. Keywords must not be used as names.
MOSEK accepts any character as valid for names, except \0. A name that is not allowed in LP file will be changed and a warning will be issued.
The algorithm for making names LP valid works as follows: The name is interpreted as an utf-8 string. For a Unicode character c:
• If c==_ (underscore), the output is __ (two underscores).
• If c is a valid LP name character, the output is just c.
• If c is another character in the ASCII range, the output is _XX, where XX is the hexadecimal code for the character.
• If c is a character in the range 127-65535, the output is _uXXXX, where XXXX is the hexadecimal code for the character.
• If c is a character above 65535, the output is _UXXXXXXXX, where XXXXXXXX is the hexadecimal code for the character.
Invalid utf-8 substrings are escaped as _XX', and if a name starts with a period, e or E, that character is escaped as _XX.
15.1.3.3 Variable Bounds¶
Specifying several upper or lower bounds on one variable is possible but MOSEK uses only the tightest bounds. If a variable is fixed (with =), then it is considered the tightest bound. | {"url":"https://docs.mosek.com/latest/cxxfusion/lp-format.html","timestamp":"2024-11-02T18:07:47Z","content_type":"text/html","content_length":"28796","record_id":"<urn:uuid:eaa0bafa-5265-41df-85a3-0ea17695a119>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00213.warc.gz"} |
Performing Linear Regression using Least Squares - WeirdGeek
Linear regression is defined as a linear approach which is used to model the relationship between dependent variable and one or more independent variable(s). When we try to model the relationship
between a single feature variable and a single target variable, it is called simple linear regression. But when there is more than one independent variable then the process is called multiple linear
Here in this post, we will learn how to perform Linear Regression using Least Squares. Here we will use the NumPy package and its .polyfit() function.
Let’s start with the basics of linear regression and how it actually works?
Understanding Regression
Think it as we want to fit a line to our data and as we know that we can define a line in two dimensions in the form of
y = ax + b,
where y is the target,
x is the single feature,
and a and b parameters slope and intercept respectively that we need to calculate.
Selecting slope and intercept values, which describe out data in the best possible way
The slope sets how steep the line is and the intercept sets where the line crosses the y-axis. for selecting the best values for slope and intercept we have to make sure that all the data points
collectively lie as close as possible to the line.
Residuals and Least Squares
The vertical distance between the data point and the line is called Residual. By looking at the above graph we can say that Residual_1 has negative value because the data point lies below the line.
Similarly, Residual_2 has positive value as the data point lies above the line.
So we have to define the line in such a way that all the data points lie as close as possible to that line and also for which the sum of squares of all the residuals is minimum. The Process of
finding the values or parameters for which the sum of squares of the residuals is minimal is called Least Squares.
Calculating Least Squares with np.polyfit() function
Here, we will use the .polyfit() function from the NumPy package which will perform the least square with polynomial function under the hood. Basic Syntax for np.polyfit() is:
slope(a), intercept(b) = np.polyfit ( X, Y, 1)
The first parameter(X) is the first variable,
The second parameter(Y) is the second variable,
The third parameter is the degree of polynomial we wish to fit. Here for a linear function, we enter 1.
Here for this post, we are going to use Anscombe’s-quartet data set which is stored as an excel file and we can read it using the pd.read_excel(). Also, we need to import Pandas and NumPy package
with common alias name as shown below:
import pandas as pd
import numpy as np
data = pd.read_excel("C:\\Users\\Pankaj\\Desktop\\Practice\\anscombes-quartet.xlsx")
or we can write above code as follows if the directory of our both files i.e. excel file and coed file (.py) is same:
data = pd.read_excel("anscombes-quartet.xlsx")
Calculating Summary Statistics
Calculating Mean using np.mean()
To know more how to calculate mean, variance and other summary statistics for Exploratory data analysis of a data set you can check our post.
Mean of X = Mean of X.1 = Mean of X.2 = Mean of X.3 = 9
Mean of Y = Mean of Y.1 = Mean of Y.2 = Mean of Y.3 = 7.5
Calculating Variance using np.var()
Variance of X = Variance of X.1 = Variance of X.2 = Variance of X.3 = 11
Variance of X = Variance of Y.1 = Variance of Y.2 = Variance of Y.3 = 4.125 (approx)
Calculating a(slope) and b(intercept) for all the groups((X,Y), (X.1, Y.1), (X.2, Y.2), (X.3,Y.3)) using np.polyfit() shown below:
a, b = np.polyfit(data['X'],data['Y'],1)
0.5000909090909095 3.000090909090909
a, b = np.polyfit(data['X.1'],data['Y.1'],1)
print(a, b)
0.5000000000000004 3.0009090909090896
a, b = np.polyfit(data['X.2'],data['Y.2'],1)
print(a, b)
0.4997272727272731 3.0024545454545453
a, b = np.polyfit(data['X.3'],data['Y.3'],1)
print(a, b)
0.4999090909090908 3.0017272727272735
So we can say from Linear regression line for all the values of X and Y for the different group is same where a(slope) = 0.50 and b(intercept) = 3.00 when rounded of for up to two decimal values
Plotting Linear Regression using Least Squares for X and Y (Fig1)
a, b = np.polyfit(data['X'], data['Y'], 1)
plt.scatter(data['X'], data['Y'], color='blue')#This will create matplotlib.collections.PathCollection object and won't show the plot untill we call plt.show()
X_th = np.array([3,15])
Y_th = a * X_th + b
plt.plot(X_th, Y_th, color='black', linewidth=3)#This will also create matplotlib.collections.PathCollection object and won't show the plot untill we call plt.show()
Plotting Linear Regression using Least Squares for X.1 and Y.1 (Fig2)
a, b = np.polyfit(data['X.1'],data['Y.1'],1)
plt.scatter(data['X.1'], data['Y.1'], color='blue')
X_th = np.array( [3, 15] )
Y_th = a * X_th + b
plt.plot(X_th, Y_th, color='black', linewidth=3)
Plotting Linear Regression using Least Squares for X.2 and Y.2 (Fig3)
a, b = np.polyfit(data['X.2'],data['Y.2'],1)
plt.scatter(data['X'], data['Y'], color='blue')
X_th = np.array([3,15])
Y_th = a * X_th + b
plt.plot(X_th, Y_th, color='black', linewidth=3)
Plotting Linear Regression using Least Squares for X.3 and Y.3 (Fig4)
a, b = np.polyfit(data['X.3'],data['Y.3'],1)
plt.scatter(data['X.3'], data['Y.3'], color='blue')
X_th = np.array([3,15])
Y_th = a * X_th + b
plt.plot(X_th, Y_th, color='black', linewidth=3)
Finally to show the plot use the command
For more information about Pandas and NumPy package, you can see their official documentation: Pandas Documentation, NumPy Documentation. | {"url":"https://www.weirdgeek.com/2018/11/linear-regression-using-least-squares/","timestamp":"2024-11-10T04:14:40Z","content_type":"text/html","content_length":"57558","record_id":"<urn:uuid:b80c1472-e9ad-41c1-8be4-50e7ce567690>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00885.warc.gz"} |
The value of θ in interval [0,90∘] for which 10sin2θ−11sinθ+3=0... | Filo
Question asked by Filo student
The value of in interval for which
Not the question you're searching for?
+ Ask your question
Video solutions (1)
Learn from their 1-to-1 discussion with Filo tutors.
6 mins
Uploaded on: 9/17/2022
Was this solution helpful?
Found 2 tutors discussing this question
Discuss this question LIVE
7 mins ago
One destination to cover all your homework and assignment needs
Learn Practice Revision Succeed
Instant 1:1 help, 24x7
60, 000+ Expert tutors
Textbook solutions
Big idea maths, McGraw-Hill Education etc
Essay review
Get expert feedback on your essay
Schedule classes
High dosage tutoring from Dedicated 3 experts
Practice more questions on Trigonometry
View more
Students who ask this question also asked
View more
Stuck on the question or explanation?
Connect with our Mathematics tutors online and get step by step solution of this question.
231 students are taking LIVE classes
Question Text The value of in interval for which
Updated On Sep 17, 2022
Topic Trigonometry
Subject Mathematics
Class Class 11
Answer Type Video solution: 1
Upvotes 96
Avg. Video Duration 6 min | {"url":"https://askfilo.com/user-question-answers-mathematics/the-value-of-in-interval-for-which-31353332333639","timestamp":"2024-11-15T01:35:42Z","content_type":"text/html","content_length":"344350","record_id":"<urn:uuid:527e2743-49d7-4808-b622-5899cf7fc267>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00231.warc.gz"} |
The Big Blog of Maths Picture Books - How to STEM
The Big Blog of Maths Picture Books
Picture books are a fantastic way to explore mathematical concepts. We’ve selected our favourites and matched them to the following mathematical areas: place value, calculation, fractions and
Click on each image to find out more about the book including the age recommendation, key concept and an Amazon link.
A place for Zero
billions of bricks e1508284167431
if e1508281681257
if the world were a village e1508357081285
Infinity and Me e1508278907624
one thing e1515783471663
place value
the number devil e1508278896262
zero the hero e1508284211498
how many legs
maths book e1508279038973
divi e1508279050946
the grapes of math e1508286782263
The Math Inspectors e1508279062687
full house e1508286849243
pizza counting e1508286878473
the doorbell rang e1508285623311
the lions share
the wishing club e1508286919634
A second a minute e1508285678343
cluck o clock
How Big Was A Dinosaur e1508264145385
How Big Were Dinosaurs e1501538711385
Mind Boggling Numbers e1492007724709
just a second e1508281744440
sir cumference
spagetti and meatballs e1508284277567
the story of money e1508282457661
whats the time mr wolf
captain invincible
Sir Cumference e1508283068145
the greedy triangle e1508283089391
triangle e1508284310479
whats your angle
a very improbable story
lines bars and circles e1508285770968
Have we missed off a brilliant book? Comment below and we’ll add it on!
13 thoughts on “The Big Blog of Maths Picture Books”
1. Equal Scmequal
1. We’ve added this book in the number: place value section.
2. What a lovely collection of Maths picture books! I’d like to suggest Apple Fractions by Jerry Pallotta to add to your set. 🙂
1. Thanks for the recommendation. We’ve added it to the list! 🙂
3. These look great. I’d like to suggest, If the world were a Village.
1. What a lovely book! We’ve just added it to the number section.
4. 365 penguins is great for maths.
5. One is A Snail, Ten is A Crab – April Pulley-Sayre & Jeff Sayer!
6. Have you had a look at Multiplication Rules publishers Ragged Bears It’s Designed to understand times tables in a totaly visual way understanding number patterns.It covers division and
understanding of odd and even numbers.Its written by a Dyslexic who also has dyscalculia and the book is recommended by many local authority dyslexia assessors around England
The web site is http://www.multiplicationrules.co.uk for more details
Thank you
7. Charlie and Lola – One Thing is great
1. Thanks for the recommendation – we’ve added it to the place value section!
8. Pamela Allen ‘Who sank the boat?”
“Counting on Fred”
9. Mr Archimedes Bath by Pamela Allen | {"url":"https://howtostem.co.uk/big-blog-maths-books/","timestamp":"2024-11-06T20:55:42Z","content_type":"text/html","content_length":"126477","record_id":"<urn:uuid:99cfe97f-405d-4ba1-a7d6-ad4f4193a003>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00004.warc.gz"} |
Re: st: Imputing values for categorical data
[Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index]
Re: st: Imputing values for categorical data
From "Renzo Comolli" <[email protected]>
To <[email protected]>
Subject Re: st: Imputing values for categorical data
Date Thu, 8 Apr 2004 23:29:44 -0400
Hi Jennifer,
I have one piece of advice: be very careful when using -impute-
It is not suitable to impute categorical variables, and I am surprise the
manual does not mention that.
When I actually "ripped the ado file open" an saw what it does I gave up on
imputing categorical variables, but I had never done imputations before so I
have very little knowledge of the field
At its core, -impute- does a simple OLS projection.
Let me explain with a simplified case first and then with a more complicated
Simplifying assumption: only one variable (denoted by y) necessitates to be
imputed, all the other variables (denoted by matrix X) have no missings.
Without loss of generality assume that you have ordered the variable y so
that all the cases for which you have observations appear at the top (denote
this part of the vector y'), and all the missings at the bottom, denote this
part of the vector y by y". Also denote by X' and X" the corresponding
values of X (remember that X has no missings, X" just contains the X values
corresponding to the observation y")
Then -impute- trivially does OLS of y'=X'beta+epsilon where beta is the OLS
vector of coefficients. It saves it and imputes y" by doing X"beta
So of course this is completely unsuitable for cases categorical variables.
Even with continuous variables you have to be careful not to predict "out of
range". Let's assume that you are predicting "number of weeks of work", it
might well happen that -impute- predicts that the interviewee worked -1
weeks last year
The case is not that simple when the matrix X contains missing variables
itself. If so, -impute- looks for the best subset of regressors. In practice
-impute- repeats the procedure explained here above several times trying to
keep as many regressors as possible (exactly how I did not understand either
from the ado file or from the manual, but I did not spend much time on it,
because I did not care that much.
Said that, I did not know of these other methods you mentioned (hotdeck,
Amelia) and I would be glad to read what others have to say about it.
Renzo Comolli
*From Jennifer Wolfe Borum <[email protected]>
To <[email protected]>
Subject st: Imputing values for categorical data
Date Thu, 8 Apr 2004 18:50:21 -0400
I am working with a data set composed of responses to survey questions which
contains some categorical variables such as gender and ethnicity. The data
has missing values and I have decided that it would be best to keep all
observations due to a pattern in the missing values. I have decided to use
the impute command in Stata to handle this as I've had some difficulty and
am not familiar enough with the hotdeck and Amelia imputations. I've found
that impute works fine for the continuous variables, however for the
categorical variables I am obtaining values for which I am unsure how to
interpret. For example, I will get an imputed value of .35621 for gender
which is coded 1 or 0. Would anyone be able to help with the interpretation
of the values I am obtaining for the categorical data?
Also, I would be interested in knowing which approach other Stata users
prefer for imputing values as this is the first time I have encountered
missing values and I am just beginning to research the various methods of
Thanks in advance,
Graduate Student
Florida International University
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"https://www.stata.com/statalist/archive/2004-04/msg00198.html","timestamp":"2024-11-15T00:08:01Z","content_type":"text/html","content_length":"10650","record_id":"<urn:uuid:ee2a6d55-2001-4717-953a-b404b813d440>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00579.warc.gz"} |
Exercises Using Ordinal Numbers - OrdinalNumbers.com
Exercises Using Ordinal Numbers
Exercises Using Ordinal Numbers – There are a myriad of sets that can easily be enumerated with ordinal numbers as a tool. They can also be used to generalize ordinal numbers.
The ordinal numbers are one of the most fundamental ideas in mathematics. It is a numerical number that defines the position of an object within an array of objects. The ordinal number can be
identified by a number that is between zero and 20. While ordinal numbers serve a variety of purposes, they are often used to represent the order in which items are placed in an orderly list.
Charts as well as words and numbers to depict ordinal numbers. They can also be used to illustrate how a set of or pieces are set up.
Ordinal numbers generally are classified into two groups. Transfinite ordinals can be represented with lowercase Greek letters, whereas finite ordinals are represented by Arabic numbers.
In accordance with the axiom that every set well-ordered should contain at minimum one ordinal. The first person in an class, for instance will receive the top score. The student who received the
highest score was declared the winner of the contest.
Combinational ordinal numbers
Compound ordinal numbers are multi-digit ones. They are created by multiplying an ordinal number by its last digit. They are commonly used to classify and date. They do not have an exclusive ending
for the last digit like cardinal numbers do.
Ordinal numbers identify the order in which elements are located in a collection. They may also be used for the names of objects within the collection. Ordinal numbers come in normal and suppletive
By prefixing cardinal numbers with the -u suffix, regular ordinals can be made. Then, you type the number in a word. Then add an hyphen to it. There are also additional suffixes. The suffix “nd” can
be used to denote numbers that end with 2 and “th” could refer to numbers that end in the numbers 4 or 9.
Suppletive ordinals can be made by prefixing words with -u.e. or –ie. The suffix, used to count is more extensive than the conventional one.
Limits of ordinal significance
Limit ordinal numbers are numbers that do not contain zero. Limit ordinal numbers have the drawback of not having a maximum element. They can be constructed by joining non-empty sets without any
maximum elements.
Transfinite recursion definitions also employ limited ordinal numbers. According to the von Neumann model, every infinite cardinal also acts as an order limit.
A number with an upper limit equals the sum all of the ordinals below. Limit ordinal amounts can be calculated using math, but they can also can be expressed in a series or natural numbers.
The data are arranged by ordinal numbers. They provide an explanation of an object’s numerical location. They are often applied in Arithmetic and set theory contexts. They are not in the same class
as natural numbers despite sharing a structure with them.
The von Neumann model uses a well-ordered set. Let’s suppose that fy subfunctions a function, g’, that is described as a single function. If fy is only one subfunction (ii), G’ must satisfy the
The Church-Kleene oral can be described as an limit order in a similar fashion. The Church-Kleene ordinal defines an appropriate limit as a well ordered collection of the smaller ordinals. It also
has a nonzero ordinal.
Numerological examples of common numbers in stories
Ordinal numbers are typically used to indicate the order of things between objects and entities. They are important to organize, count and ranking. They are able to explain the position of objects as
well as the order of their placement.
The ordinal numbers are usually identified by the letter “th”. However, sometimes the letter “nd” is used, but it’s not the only way it is utilized. Books titles often contain ordinal numbers.
Ordinal numbers can be stated in words, even though they are typically used in lists format. They may be also expressed in numbers or acronyms. Comparatively, they are easier to comprehend as
compared to cardinal numbers.
Three distinct types of ordinal numbers are accessible. You can learn more about them through practicing or games as well as other activities. You can enhance your math skills by understanding more
about these concepts. As a fun and easy method to increase your math abilities, you can use a coloring exercise. Make sure you check your work using the handy marking sheet.
Gallery of Exercises Using Ordinal Numbers
Ordinal Numbers Online Exercise For Grade 1
Ejercicio De Ordinal Numbers Assessment
Ordinal Numbers 2 Pages Of Uses And Exercises Esl Worksheet By
Leave a Comment | {"url":"https://www.ordinalnumbers.com/exercises-using-ordinal-numbers/","timestamp":"2024-11-04T20:21:38Z","content_type":"text/html","content_length":"64881","record_id":"<urn:uuid:1be8a59e-9588-4a3c-bbde-97237a3042e6>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00496.warc.gz"} |
I've got the question below in a homework task and i need some help:
The unknown resistance R of a resistor can be measured by comparing with a 100Ω standard. A potentiometer slide-wire, which consists of a bare wire with constant resistivity, so that contact can be
made at any point along its length, is used to find two balance points where the current through the ammeter is zero, when A is connected to B or C.
When A is connected to B, the balance point was found to be when l=400mm.
When A is connected to C, the balance point was found to be when l=588mm.
Any thoughts?
It would help to see working, but basically the current through the ammeter = 0 when the pd across it is 0.
You know resistance = constant * length
So set up 2 equations where the pd dropped in the top loop = pd dropped in the bottom loop, with one equation for each different length.
Then cancel stuff and solve
Quick Reply | {"url":"https://www.thestudentroom.co.uk/showthread.php?t=7476744","timestamp":"2024-11-05T00:23:36Z","content_type":"text/html","content_length":"294930","record_id":"<urn:uuid:741f5bbb-a13d-4e9a-a2cd-30201e47d6c2>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00781.warc.gz"} |
Chempak Technical Information - Detailed Discussion
© Copyright 1998 Madison Technical Software Inc. Used with permission.
Note: The information in this Appendix was taken from Section 8 of the Chempak Operating & Reference Manual, Version 4, Windows Edition, Issue: January 1998. Section numbering was left unchanged, and
some sections that were judged not relevant were not included so the numbering is not sequential. Notation and references in this Appendix apply only to this Appendix, and are referenced at the end.
8.1 General
This section sets out the data sources, correlations and estimation methods used in the CHEMPAK database. In putting together the database, the methods and sources were selected in the following
order of preference.
· Published experimental data
· Published correlations based on experimental data
· Specific category correlations
· General estimation methods
Madison Technical Software has followed the general recommendations in Reid and in Danner and Daubert as far as selection of specific category correlations and general estimation methods are
concerned. In selecting specific compound data, a combination of sources has been used wherever possible. Important sources of specific compound data used by CHEMPAK are:
· Reid et al
· Perry et al
· J Chem Eng Data
· Daubert & Danner
· ESDU publications
· API Technical Data Book - Petroleum Refining
· International Critical Tables
· CRC Handbook
· Vargaftik
In many cases, the compound property values are a combination of published data, published correlations and general estimation methods. Several properties in certain compound categories have been
estimated or adjusted by Madison Technical Software. It has been our policy to adopt and maintain a critical approach to available data sources and correlation methods.
The following sections set out details of the correlations and estimation methods used. In certain cases, the user is directed to the original references, particularly where the method is complex.
Data sources for Aqueous Solutions/Heat Transfer Liquids are published experimental data and correlations based on experimental data.
8.2 Physical Constants
8.2.1 Critical Temperature
The great majority of values are believed to be experimental. Where values had to be estimated, the Joback method was used.
8.2.2 Critical Pressure
Most of the values are experimental. In cases where experimental data were not available, the critical pressure was derived from the Joback method.
8.2.3 Critical Volume
A majority of the values are experimental. A great majority of the remaining compounds for which experimental values were not available had accurate boiling-point volumes available from which
critical volume estimates were derived using the Tyn and Calus correlation. For a few substances, estimates of the critical volume were derived from the Joback method.
8.2.4 Normal Boiling Points
All values are believed to be experimental. In some cases, the values were slightly adjusted for vapor pressure.
8.2.5 Freezing Points
Where possible, quoted freezing points are experimental. No accurate method of estimation of compound freezing point is available. In the absence of experimental data, a rough estimate was derived
from the Joback method.
8.2.6 Acentric Factors
The acentric factor is defined as
w = -log10(Pvr at Tr = 0.7) - 1
In all cases the acentric factor was derived from the vapor pressure correlation ( see section 8.8)
8.2.7 Joback Group Contribution Method
The Joback method is used to derive values of Tc, Pc, Vc and Tf where no experimental data or other predictive method was available.
Tc = Tb/(0.584 + 0.965 Sum(Dt) - Sum(Dt)2)
Pc = 1/(0.113 + 0.0032 na - Sum(Dp))2
Vc = 17.5 + Sum(Dv)
Tf = 122 + Sum(Df)
where na is the number of atoms in the molecule and the D contributions are given by Joback and by Reid et al (1987). Error magnitudes for the Joback method are as follows:
· Critical Temperature: average error about 1%
· Critical Pressure: average error about 5%
· Critical Volume: average error about 2%
· Freezing Point: average error about 11%
8.2.8 Tyn & Calus Relation
Tyn & Calus showed a close (< 3% error) relation between molar volume at normal boiling point and the critical molar volume of the form,
Vb = a Vcn
a = 0.285
n = 1.048
8.3 Liquid Specific Volume
Liquid specific volume rises slowly and approximately linearly with rise in temperature to about Tr = 0.8. At higher temperatures, the values rise more rapidly to the critical point.
Experimental data or correlations derived from experimental data are available for most compounds.
8.3.1 Hankinson-Brobst-Thompson Equation
The saturated specific volume is given by,
Vs/V* = Vr(O)(1 - wsrkVr(1))
Vr(O) = Sum{an(1 - Trn/3)} 0.25 < Tr < 0.95
Vr(1) = Sum{bnTrn/(Tr - 1.00001)} 0.25 < Tr < 1.0
aO = 1
a1 = -1.52816
a2 = 1.43907
a3 = -0.81446
a4 = 0.190454
bO = -0.296123
b1 = 0.386914
b2 = -0.0427258
b3 = -.0480645
V*, wsrk and Tc are tabulated property constants. The user is referred to Hankinson, Thompson and to Reid et al (1987). Errors are typically about 1% with most being less than 2%.
8.3.2 Rackett Equation
If a reference volume (Vref at Tref) is available then
Zra = (PcVref / RTc)n
n = 1/(1 + (1 - Tref / Tc)m)
m = 2/7 or other empirical constant
The saturated specific volume is given by,
Vs = VrefZrax
x = -(1 - Tref / Tc)m + (1 - T/Tc)m
In most cases, an experimental value of reference density was available. Where such a value was not available, values were derived from the group contribution method of Le Bas or derived from the
critical volume using the Tyn & Calus relation. Tests by Madison Technical Software on over 80 liquids showed that these two methods were significantly more accurate than the Spencer and Danner
method for Zra. The reader is referred to Reid et al for further details on these methods. With one or more experimental points, the Rackett equation gives errors of about 1% with most values less
than 3%. If the reference volumes are estimated, typical errors are 3%.
8.3.4 The Effect of Pressure on Liquid Specific Volume
The effect of pressure on liquid specific volume is calculated when
P > Ps + 0.1 Pc
The correction is derived from the equation of state as follows,
VL = VLs - VLs,es + VL,es
VL = specific volume at T and P
VLs = VL at T and Ps from methods of this section
VLs,es = VL at T and Ps from the equation of state
VL,es = VL at T and P from the equation of state
Liquid specific heat can in principle be derived from the equation of state but in practise, direct analytical or group contributions are preferred where experimental data are not available.
8.4 Liquid Specific Heat
8.4.1 Rowlinson-Bondi Method
(CpL - Cpo)/R = 1.45 + 0.45/X + 0.25w(17.11 + 25.2X0.333/Tr + 1.742/X)
X = 1 - Tr
w = acentric factor
This method is generally applicable to the range from Tf to values approaching Tc. Note that CpL approaches infinity as T approaches Tc.
Errors are generally less than 5% except in the case of hydrogen-bonding polar compounds (e.g. alcohols) at low temperatures. For these compounds, the Missenard group contribution method is
8.4.2 Missenard Method
The Missenard group contribution method yields values of coefficients in
CpL = a + bT + cT2
The accuracy is usually better than 5%. The method cannot deal with double bonds and is not applicable for Tr > 0.75
a = Sum{an}
b = Sum{bn}
c = Sum{cn}
The group contributions are available in Missenard. See also Reid at al.
8.4.3 The Effect of Pressure on Liquid Specific Heat
As noted above, the equations of state can be employed to estimate liquid specific heat, but the methods presented in 8.4.1 and 8.4.2 are more reliable. The equations of state however can be used to
estimate the effect of pressure on liquid specific heat.
CpLs = CpL at Ts and Ps determined by the methods of this section.
Cpo = ideal gas specific heat at Ts
CpL = CpL at Ts and P > Ps
The equations of state give estimates of
Ds = (CpLs - Cpo)es at Ts and Ps
D = (CpL - Cpo)es at Ts and P
The corrected value of the liquid specific heat is
CpL = CpLs + D - Ds
The correction is not applied when T is close to Tc
8.5 Liquid Viscosity
Liquid viscosity typically varies in magnitude by a factor of 100 or more between the freezing and critical temperatures. No generalized method is available to estimate or represent liquid viscosity
adequately over the entire temperature range. Corresponding states methods are applicable above Tr = 0.76. From the freezing point to the boiling point, the influence of structure is strong.
8.5.1 Method of Van Velzen
The method of Van Velzen et al is a group contribution method of some complexity and range of applicability. It is the most frequently used group contribution method. The accuracy of the estimation
averages about 10% and most estimates are better than 20%. Some of the limitations of the method are:
· Larger errors found with the first members of a homologous series
· Only normal and iso substitutions on alkyl chains can be treated
· Heterocyclic compounds cannot be treated
· Application only in the range Tf to Tb
The method is complex and the reader is directed to the original references for full details.
8.5.2 Method of Morris
The method of Morris is a group contribution method. This method is useful as a comparison and substitute for the Van Velzen method in cases where the Van Velzen method is not applicable. The
accuracy of estimation is of the same order as Van Velzen. The limitations of the method are,
· The method is less detailed than the Van Velzen method
· Applicable only in the range Tf to Tb
· No explicit treatment for heterocyclics or esters (apart from acetates).
The Morris method takes the following form
ln(v/v*) = 2.3026 J(1/Tr - 1)
J = (0.577 + Sum(Di))0.5
The values of v* are given for various categories of compounds. The constants v* and the group contributions D are given in Morris.
8.5.3 Method of Letsou and Stiel
This is a corresponding states method with applicability over 0.76 < Tr < 1. The method also predicts the viscosity at the critical point (Tr = 1). The accuracy is normally better than 5% up to Tr =
0.92 with higher errors encountered as the critical point is approached. Overall this is an excellent estimation method for high-temperature liquid viscosity. The only serious limitation is the
restricted range of applicability.
The form of the relation is
v = (f0 + w.f1)/A
w = acentric factor
f0 = a0 + b0Tr + c0Tr2
f1 = a1 + b1Tr + c1Tr2
A = 0.176x106 Tc0.1667/M0.5Pc0.667
a0 = 2.648 a1 = 7.425
b0 = -3.725 b1 = -13.39
c0 = 1.309 c1 = 5.933
In the above relations Pc is in bar and the viscosity is in units of Pa-sec.
8.5.4 Method of Przezdziecki & Sridhar
In this method, the viscosity is related to changes in the specific volume.
v = V0/E(V - V0) centipoise
V = liquid molar volume in cc/mol
E = -1.12 + Vc/D
D = 12.94 + 0.1 M - 0.23 Pc + 0.0424 Tf - 11.58 Tf/Tc
V0 = 0.0085 wTc - 2.02 + Vf / {0.342(Tf / Tc) + 0.894}
Tc = critical temperature, K
Pc = critical pressure, bar
Vc = critical volume, cc/mol
M = molecular weight
Tf = freezing point, K
w = acentric factor
Vf = specific volume at Tf
The authors recommend that the volumes be estimated from the Gunn and Yamada equation. The reader is referred to Reid for a discussion on this method. The method is less accurate below reduced
temperatures of about 0.55. Errors vary widely but will normally be less than 20% for Tr greater than 0.55.
This method is used in CHEMPAK only where necessary. An error analysis by Reid et al indicates a higher level of error associated with this method than with the Van Velzen method for instance.
8.5.5 Interpolation and Extrapolation
Two regions are typically covered well by available experimental data, experimental correlations and by the above relations:
273 < T < 0.6 Tc: this region is normally covered by published data or by one of the methods 8.5.1, 8.5.2, 8.5.4
0.76 Tc < T < Tc: this region is well covered by the method of Letsou and Stiel (section 8.5.3)
This leaves two regions which are often not covered by the above methods
Tf < T < 273: this region may be covered by extrapolation using ln(v) versus 1/T extrapolation. The error due to the extrapolation in practise will not normally exceed 10% with a possible 20% error
in the immediate vicinity of the freezing point.
0.6 Tc < T < 0.76 Tc: this region may be covered by interpolation between the 273 < T < 0.6 Tc region and the 0.76 Tc < T < Tc region using ln(v) versus 1/T interpolation. The errors due to
interpolation in this case rarely exceed 5%.
8.5.6 The Effect of Pressure on Liquid Viscosity
The method of Lucas is applied:
vL/vsL = (1 + B.FA)/(1 + w.C.F)
vL = viscosity at pressure P
vsL = viscosity at saturation pressure Ps
F = (P - Ps)/Pc
w = acentric factor
A = 0.9991 - 0.0004674/(1.0523/Tr0.03877 - 1.0513)
B = 0.3257/(1.0039 - Tr2.573)0.2906 - 0.2086
C = -0.07921 + 2.1616 Tr - 13.404 Tr2 + 44.1706 Tr3 -84.8291 Tr4 + 96.1209 Tr5 - 59.8127 Tr6 + 15.6719 Tr7
8.6 Liquid Thermal Conductivity
8.6.1 Method of Latini at al
For specified categories of compounds, the method of Latini et al gives correlations for liquid conductivity for the range Tr = 0.3 to 0.8
The correlations are in the form
k = A(1 - Tr)0.38/Tr0.167
A = A0TbnMmTcp
Category A0 n m p
Alkanes 0.0035 1.2 -0.5 -0.167
Alkenes 0.0361 1.2 -1.0 -0.167
Cycloalkanes 0.0310 1.2 -1.0 -0.167
Aromatics 0.0346 1.2 -1.0 -0.167
Alcohols/Phenols 0.00339 1.2 -0.5 -0.167
Acids 0.00319 1.2 -0.5 -0.167
Ketones 0.00383 1.2 -1.0 -0.167
Esters 0.0415 1.2 -1.0 -0.167
Ethers 0.0385 1.2 -1.0 -0.167
Halides 0.494 0.0 -0.5 0.167
R20,R21,R22,R23 0.562 0.0 -0.5 0.167
Errors may be large for Diols and Glycols. The Acids equation is not applicable to Formic acid. The reader is referred to Reid for a discussion of the method.
8.6.3 Method of Sato-Riedel
This method gives the following relation:
k = (1.11/M0.5)f(Tr)/f(Tbr)
f(X) = 3 + 20(1 - X)0.667
This method gives poor results for low molecular weight or branched hydrocarbons. Errors otherwise are likely to be less than 15%. The method should not be applied for Tr > 0.8
8.6.4 Method of Ely and Hanley
The method of Ely and Hanley has application to the high-temperature liquid region (Tr > 0.8). There are few data available for high temperature liquid conductivities. The method of Ely and Hanley is
probably the best method available. Error estimates are unknown.
This method is used in CHEMPAK for Tr > 0.8 with caution. It appears to give reasonable results for non-polar compounds. Errors with polar compounds can be large.
8.6.5 The Effect of Pressure on Liquid Conductivity
The procedure derived from Missenard as presented in Danner and Daubert is employed:
k/ks = 0.98 + 0.0079 PrTr1.4 + 0.63 Tr1.2Pr/(30 + Pr)
k = conductivity at P
ks = conductivity at Ps
8.8 Vapor Pressure
The vapor pressure is expressed in its reduced form
Pvr = Pv/Pc
Reduced vapor pressure varies from very low values at freezing point to unity at the critical point.
8.8.1 Published Correlations
The experimental correlations are commonly given in the following formats:
Wagner Equation
ln(Pvr) = (aX + bX1.5 + cX3 + dX6)/Tr
X = 1 - Tr
FKT Equation
ln(Pv) = a + b/T + cln(T) + dPv /T2
Antoine Equation
ln(Pv) = a + b/(T + c)
8.8.2 Gomez-Thodos Vapor Pressure Equation
Gomez-Nieto and Thodos give the following equation:
ln(Pvr) = B(1/Trm - 1) + G(Tr7 - 1)
G = aH + bB
a = (1 - 1/Tbr)/(Tbr7 - 1)
b = (1 - 1/Tbrm)/(Tbr7 - 1)
H = Tbrln(Pc/Pb)/(1 - Tbr)
For non-polar compounds,
B = -4.267 - 221.79/(H2.5exp(0.038 H2.5)) + 3.8126/exp(2272.33/H3) + D
m = 0.78425 exp(0.089315 H) - 8.5217/exp(0.74826 H)
D = 0
except D = 0.41815 for He, 0.19904 for H2, 0.02319 for Ne
For polar non-hydrogen-bonding compounds (e.g. ammonia and acetic acid),
m = 0.466 Tc0.1667
G = 0.08594 exp(0.0007462 Tc)
B = (G - aH)/b
8.8.2 Gomez-Thodos Vapor Pressure Equations
For polar hydrogen-bonding compounds (water, alcohols),
m = 0.0052 M0.29Tc0.72
G = (2.464.M) exp(0.0000098 MTc)
B = (G - aH)/b
The advantages of this method are,
· fit guaranteed at T = Tb and T = Tc
· good performance with polar compounds
· good performance over Tr = 0.5 to 1
In addition, tests carried out by Madison Technical Software show the clear superiority of this method especially at low temperatures over the Lee-Kesler method.
8.8.3 Lee-Kesler Vapor Pressure Equation
Lee and Kesler give the following vapor pressure equation:
ln(Pvr) = f(0) + wf(1)
w = acentric factor
f(0) = 5.92714 - 6.09648/Tr - 1.28862 ln(Tr) + 0.169347 Tr6
f(1) = 15.2518 - 15.6875/Tr - 13.4721 ln(Tr) +0.43577 Tr6
The characteristics of this equation are,
· guaranteed fit at Tr = 1 and 0.7
· accurate for non-polar compounds
This equation is used in the Lee-Kesler and Wu & Stiel equations of state.
8.8.4 Interpolation and Extrapolation
In many cases an accurate empirical equation is known which does not extend to the critical point or to the freezing point. The approach taken here is to fit the Wagner equation by least squares to
the empirical equation and use the Wagner equation to extrapolate to the freezing point and to the critical point.
Extrapolation by this method to the critical method is a very accurate procedure. Extrapolation to the freezing point is less accurate but it does provide reasonable values.
In CHEMPAK, the vapor pressure correlations set out in this section are used to provide the basic data. Empirical relations are used wherever possible.
8.14 Notation
C Specific Heat
e Expansion Coefficient
h Enthalpy
log Logarithm to base 10
ln Natural Logarithm
m Dipole Moment
M Molecular Weight
P Pressure
R Gas Constant
r Riedel Parameter
s Entropy
T Temperature
v Viscosity
V Specific Volume
w Acentric Factor
x Mole Fraction
Y Wu & Stiel Polarity Factor
Z Compressibility
b Boiling
c Critical
es Equation of State
f Freezing
ig Ideal Gas
L Liquid
m Mixture
0 Low Pressure
p Constant Pressure
ra Rackett
ref Reference
r Reduced
s Saturated
v Vapor
v Constant Volume
(o) Simple Fluid
(r) Reference Fluid
(p) Polar Fluid
8.15 References
API Technical Data Book - Petroleum Refining, 4 Vols, API, Washington DC, 1988
CRC Handbook of Chemistry and Physics, CRC Press Boca Raton 1991
Danner R P and Daubert T E , Data Prediction Manual, Design Institute for Physical Property Data , AIChE, NY 1983
Daubert T E and Danner R P, Physical and Thermodynamic Properties of Pure Chemicals, Data Compilation, AIChE, Hemisphere NY 1989
Engineering Sciences Data Units (ESDU), (9 Vols Data Compilation), London, England
Gomez-Nieto M and Thodos G, Ind Eng Chem Fundam, Vol 17, 45, 1978, Can J Chem Eng, Vol 55, 445, 1977, Ind Eng Chem Fundam, Vol 16, 254, 1977
Hankinson and Thomson, AIChEJ, Vol 25, 653, 1979
International Critical Tables, National Research Council, 7 Vols, McGraw-Hill, NY 1926
Joback K G, SM Thesis, MIT, June 1984
Keenan et al, Steam Tables, John Wiley, NY 1978
Knapp et al, Chem Data Ser, Vol VI, DECHEMA 1982
Kreglewski and Kay, J Phys Chem, Vol 73, 3359, 1969
Letsou A and Stiel L I, AIChEJ, Vol 19, 409, 1973
Le Bas G, Molecular Volumes of Liquid Chemical Compounds, Longmans Green, NY 1915
Lee B I and Kesler M G, AIChEJ, Vol 21, 510, 1975
Li C C, AIChEJ, Vol 22, 927, 1976
Lucas K, Chem Ing Tech, Vol 53, 959, 1981
Missenard F A, Rev Gen Thermodyn, Vol 101, 649, 1970
Morris P S, MS Thesis, Polytech Inst Brooklyn, NY 1964
Perry et al , Chemical Engineer's Handbook (various editions), McGraw Hill, NY
Plocker U J et al, Ind Eng Chem Proc Des Dev, Vol 17, 324, 1978
Prausnitz and Gunn, AIChEJ, Vol 4, 430 and 494, 1958
Reid R C at al, Properties of Liquids and Gases, 3rd Ed, McGraw Hill, NY 1977, 4th Ed, McGraw Hill, NY 1987
Schick and Prausnitz, AIChEJ, Vol 14, 673, 1968
Spencer and Danner, J Chem Eng Data, Vol 17, 236, 1972
Spencer, Daubert and Danner, AIChEJ, Vol 19, 522, 1973
Stiel and Thodos, AIChEJ, Vol 8, 229, 1962
Teja A S at al, Chem Eng Sci, Vol 33, 609, 1978, AIChEJ, Vol 26, 337 & 341, 1980, Chem Eng Sci, Vol 36, 7, 1981, Ind Eng Chem Fundam, Vol 20, 77, 1981, Chem Eng Sci, Vol 37, 790, 1982, J Chem Eng
Data, Vol 28, 83, 1983, Ind Eng Chem Proc Des Dev, Vol 22, 672, 1983
Thomson, Brobst, Hankinson, AIChEJ, Vol 28, 671, 1982
Tyn M T and Calus W F, Processing, Vol21(4), 16, 1975
Van Velzen D et al, Ind Eng Chem Fundam, Vol 11, 20, 1972
Van Velzen et al, Liquid Viscosity etc, Euratom 4735e, Joint Nuclear Research Centre, ISPRA Establishment, Italy 1972
Vargaftik N B, Tables on Therm Props Liq & Gases, 2nd Ed, Hemisphere, Washington DC, 1975
Wu G Z A and Stiel L I, AIChEJ, Vol 31, 1632, 1985
Yorizane et al, Ind Eng Chem Fundam, Vol 22, 454, 1983 | {"url":"https://docs.aft.com/fathom/ChempakTechnicalInformationDetailedDiscussion.html","timestamp":"2024-11-09T07:28:28Z","content_type":"text/html","content_length":"101177","record_id":"<urn:uuid:9bdecaa1-73b3-4c79-b8be-434658831b59>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00175.warc.gz"} |
Streaming Caused by Oscillatory Flow in Peripheral Airways of Human Lung
Streaming Caused by Oscillatory Flow in Peripheral Airways of Human Lung ()
1. Introduction
Oscillatory flow is a widespread phenomenon and plays an important role in many fields, e.g. pneumatic propulsion, piston-drived flow, and acoustic oscillation are commonly used in mechanical
engineering; pulsatile blood circulation, respiratory flow in lung, and capillary waves are of much interest in bio-mechanics; seasonal reversing wind, ocean circulations as well as tide flow are of
high concern in meteorology, etc. More than mere oscillation or repetition, mass, momentum, and energy may be transferred via these reciprocating movements. The present study focuses on the effect of
reciprocal motion in peripheral human lung airways that is much more complex than in a cylindrical channel. If Poiseuille flow is oscillated in a uniform duct, no net flux will occur when the flow
restores, although the Stokes layer may vary unsteadily and the flow is in sinusoidal motion during the oscillation.
However, for the oscillatory flow in non-uniform channels, fluid elements would not in general return to original locations at the end of oscillation. This displacement is the integrated result from
steady component of oscillation, and often referred to as steady streaming that can be defined as
Recently, this time-averaged effect (steady streaming) has gradually been found important and functional in many fields. Steady streaming generated by no slip boundary conditions was surveyed with
respect to flows in blood vessels by Padmanabhan and Pedley [1] . In the hearing process, Lighthill found that acoustic streaming associated with ossicle-produced waves might help to transform
acoustic signals to neural activity [2] . Leibovich studied the streaming flow raised in the boundary layer attached to a vibrating free boundary and showed that the streaming flow acted on the
instability of a particular ocean circulation [3] . Lyne investigated the oscillatory flow in a curved tube with circular cross section, found that there was a steady streaming at the boundary from
the inside to the outside bend when the characteristic Reynolds number was much greater than 1 [4] . Hall studied the flow along a pipe of slowly varying cross section driven by an oscillatory
pressure difference at ends, and a net flow was produced toward the wide end of the pipe [5] . Haselton and Scherer showed a steady streaming displacement of fluid elements in oscillatory flow
through a Y-shaped tube bifurcation model with Reynolds and Womersley numbers could exist in human bronchial airways [6] . Eckmann and Grotberg analyzed and solved the motion equations of oscillatory
flow in a curved tube by means of a regular pertubation method over a range of Womersley number, and a substantial time-average axial transport was demonstrated with respect to pulmonary gas exchange
[7] . Fresconi et al. illustrated the transport profiles in the conducting airways of the human lung by laser-induced fluorescence experiment and numerical calculation, they concluded that the simple
streaming was relatively more important to convective dispersion than augmented dispersion, and presented an explanation for steady streaming based on geometric asymmetries associated with a
bifurcation network [8] . Gaver and Grotberg surveyed the oscillatory flow in a tapered channel under conditions of fixed stroke volume and reported a bidirectional streaming in the channel by means
of both calculation and experiment [9] . The similar bidirectional steady streaming was also predicted by Goldberg et al. in their analysis of oscillatory flow at the entrance region of a
semi-infinite tube of circular cross section [10] .
The flow phenomena induced by oscillatory respiration are quite complex in human lung, due to the complicated pulmonary structure, different respiration scenarios, mechanical properties of lung
tissue, and the interaction with organs or body parts, etc. The Reynolds number varies from thousands at the trachea drastically to lower than 0.1 in the alveoli sacs during our rest breathing, and
the flows may differ more when the gas is oscillated by faster frequency even with lower tidal volume because the Reynolds number could be increased by higher-frequency vibration in airways, which
indicates the flow is more turbulent in the upper lung channels, and if the oscillation is fast enough, out-of-phase flow would be induced mainly in the intermediate airways. Currently, the studies
of airflow under HFOV (High-Frequency Oscillatory Ventilation) basically center on the upper or intermediate lung region due to the flow particularity there.
Anatomically, an adult human lung bifurcates from trachea to alveoli 23 times and thus forms a multi-branching structure with 24 generations (G0 - G23) according to Weibel’s lung model [11] , as
illustrated in Figure 1. The airways above G18 act as conducting zone where no gas exchange between oxygen and carbon dioxide occurs, the gas exchange commences from G18 to the terminals. In our
normal breathing, the inhaled air can easily arrive under G18 because the tidal volume (about 500 mL) is sufficient enough to deliver air to the respiratory region directly. However, in application
of HFOV, the gas is oscillated with fast frequency (3 - 25 Hz) and shallow tidal volume (30 - 150 mL) that is normally smaller than the dead space (inner volume of conducting zone), which implies
each oscillation barely can directly deliver fresh air to respiratory zone via the conducting airways, while HFOV is widely reported for being efficient in ventilation for injured lungs. The green
color in Figure 1 denotes the achievable region of tidal volume in HFOV.
Figure 1. Schematic of lung bifurcations and the directly achievable region (green) for tidal volume of HFOV.
Therefore, we assume that there might be progressive or cumulative net flow, aside from molecular diffusion, to overcome the shortage of tidal volume in HFOV and bring fresh air to the alveoli
indirectly. Whether or not steady streaming acts in the transitional bronchioles where the low tidal volume cannot reach needs to be confirmed, which is the first purpose of the present study. If the
steady streaming is found working within this zone, its importance and reason need to be analyzed and illustrated, which is the second purpose of this study. The relevant investigations implemented
by Haselton and Scherer [6] , Fresconi, Wexler, and Prasad [8] , Gaver and Grotberg [9] imply the close relation between airway geometry and steady streaming in oscillatory flow. To clarify reason of
steady streaming, different airway geometries (cylinder, cone, bifurcation, and multi-bifurcations), a series of Womersley numbers and the corresponding Reynolds numbers will be investigated to
testify their influences on steady streaming.
2. Exact Solutions of Oscillatory Flow in Uniform Channels
For the oscillatory flow in channels of uniform cross section, it is relatively convenient to figure out the exact solution of velocity distribution. One of the classic oscillatory problems is the
Stokes’ second problem which deals with the velocity distribution of viscous flow over an oscillatory plate, as shown in Figure 2(a). Because of the no-slip condition at y = 0, the fluid on the
surface follows plate movement and its velocity isFigure 2(b), and the velocity profiles are enveloped by dash curves
Another noted oscillatory flow is the unsteady oscillatory flow in cylindrical channel, in a cylindrical coordinate, the governing equation can be expressed by
While for Wo
K represents the acceleration per unit mass,Figure 3 shows the flow velocity distribution in a cylindrical channel at four different instants in one cycle when
Figure 2. Distribution of flow velocity close to an oscillating wall in Stokes’ second problem.
Figure 3. Velocity distribution of oscillating flow in pipe with uniform cross section at different phases in one cycle.
gas as shown in Figure 3.
In the case of oscillatory flow in pipe of uniform rectangular cross section, similarly, we assume the central axis in X direction, the flow velocity
solved by substituting into the governing equation
boundary conditions for the flow are
proximation of poiseuille velocity distribution is
strated in Figure 4.
These cases share one characteristic in common that the flows are oscillated in uniform channel. Velocity distribution varies at different instant of a cycle, however, the net flow is zero after
integral cycles of oscillation due to the symmetry of velocity distribution at incoming and outgoing phases, which also means the resultant steady streaming is zero. While in a non-uniform channel,
the steady streaming is generally nonzero after the oscillations, a coefficient [6] of this convective exchange process is defined for the normal plane at tube position X as
In non-uniform channels, it is not convenient to find the exact solution of velocity distribution and the coefficient
Figure 4. Velocity distribution of Poiseuille flow in pipe with uniform rectangular cross section at
with respect to the coefficient
3. CFD Calculation and PIV Experiment of Oscillatory Flow in Bifurcating Airways
As foresaid, a progressive mechanism is expected to act under the application of HFOV, and further to deliver fresh gas into the transitional zone. Therefore, a cluster model of transitional
bronchioles airways needs to be built numerically or realistically, for clarifying the flow feature during oscillations. Figure 5 shows the dimensions of airway branch (Figure 5(a)) and its mesh (
Figure 5(b)) used in numerical calculation, and Figure 6 depicts the airway model used in PIV measurement. Both the numerical and realistic model are based on geometry of Weibel’s model from G18 to
G20 (G stands for the generation in anatomical lung structure). The inflow channel and four outflow channels have been extended for fully developing the internal flow. In this study, we basically
adopt two scenarios i.e., rest breathing (sinusoidal, 0.2 Hz with 500 mL tidal volume) and HFOV (sinusoidal, 10 Hz with 50 mL tidal volume).
In numerical model, the top inlet is fed with oscillatory gas, the four outlets are connected to the pressure boundary conditions. The calculations are implemented by Star-ccm^+® which is based on
the FVM (finite volume method) algorithm. The governing equations for the gas flow in airway model are:
Figure 5. Numerical model of lung branch from G18 to G20.
Figure 6. Experimental airways from G18 to G20.
convective terms are discretized by MARS method in second order. Initial temperature and pressure are set at 293.10 K and 101.3 kPa, respectively. Because the flow velocity is lower than 0.1 m/s, and
local Reynolds number is less than 10 in this distal region, incompressible Newtonian fluid and laminar flow are selected as the flow properties.
For the numerical inlet boundary, gas velocity is used based on the feature of oscillatory Poiseuille flow, Equation (5) is employed to guarantee 50 mL tidal volume for sinusoidal oscillation with
different frequencies in HFOV, where u[z] and 2.5 × 10^−^4 (m) indicate velocity in z direction and the radius of G18 airway.
For the four outlets at the bottom, gas pressure is assigned as the boundary condition, which comprises lung compliance C, laminar resistance in airway R[j] and volumetric flow rate in bronchi for a
given generation q as demonstrated in Equation (6). And lung compliance C is defined as the ratio of volume difference
where the suffix i indicates the generation number of interest, and j counts from i+1 to the terminal.
Both Lagrangian method and VOF (Volume of Fluid) scheme are employed in calculation to demonstrate the internal flow. VOF method is normally used to distinguish the interface between different
species of fluid. The gas property in the airways is homogeneous without species difference, however, VOF can be employed here for judging the net flow which is based on the deformation of fictive
interface. In VOF setting, the effect of molecular diffusion is neglected to clearly show the interface between different fluids, and all the fluids share identical physical properties.
Similarly, in the experimental setup, as illustrated in Figure 7 and Figure 8, the upper inlet is connected to a buffer tank which acts as lung dead space, the other side of the buffer tank is
connected to HFOV supplier directly. The four ends are coupled with
Figure 8. Experimental setup of airways model.
four truncated elastic tubes, to simulate the compliances of following airways. Additionally, in G18 - G20, the Reynolds number is less than 10, and Womersley number is lower than 1, the Peclet
number is about 1 in the HFOV application, which indicates that viscous laminar flow and parabolic quasi-steady flow are dominant in this region, also the advective transport rate and diffusive
transport rate are generally in the same order of magnitude.
Our rest breathing normally takes 5 seconds for one cycle of respiration, while the HFOV achieves 50 oscillations within the same duration. As shown in Figure 9, Figure 10 and Figure 11, the
numerical results illustrate that after long stretching, almost all
the gas particles returned to their original locations after one cycle of rest breathing. While in the case of HFOV, a regular redistribution of gas particles is presented after 50 fast and shallow
oscillations-the core particles move downwards while the peripheral particles evacuate upwards, and the ones extremely close to the wall do not relocate apparently due to the no-slip condition, which
obviously is a time-averaged mechanism.
Similar phenomenon is revealed by means of VOF as demonstrated in Figure 12, Figure 13, and Figure 14. Four fluids with identical properties are initially arrayed in the airway model in parallel.
After a cycle of rest breathing, majority of gas parcels return to initial locations, and the rearrangement of these gas parcels is irregular. While the HFOV redistribute the gas parcels in a much
more regular way after 5-sceond oscillations, and apparent streaming net flow has been induced as shown in Figure 14. Accordingly, HFOV produces more apparent net flow than in rest breathing.
To confirm this phenomenon of net flow induced by HFOV, PIV measurement is thereafter carried out by using the realistic airway model, the gas-feeding pattern is set identical to the numerical
calculation-sinusoidal oscillation with 10 Hz frequency and 50 mL tidal volume. After obtaining velocity distribution at different phases in a cycle, the particle dislocations can be produced by
integrating the velocity distribution at different instants of the cycle as shown in Figure 15. Figure 16 depicts the particle dislocations in one cycle with same settings, except that the frequency
is raised up to 20 Hz. As expected, obvious net flow of streaming has been found in 10 Hz and 20 Hz HFOV. For other combinations of frequency and tidal volume in HFOV, semblable patterns of particle
movement have been obtained as well. The phenomena revealed by PIV measurement are highly similar to the numerical calculation, although the experimental results are not as neat as the computational
results. The existence of steady streaming in
Figure 15. Particle dislocations in PIV measurement (10 Hz).
Figure 16. Particle dislocations in PIV measurement (20 Hz).
HFOV has been confirmed by PIV experiment. According to the particle tracks obtained in experiment, it can be found that the down-coming and up-going routes do not superpose each other for every
particle, which implies the oscillatory flow is remarkably irreversible in distal airways. And the irreversible pattern gives rise to the net flow or steady streaming.
4. The Influence of Channel Geometry on Oscillatory Flow
The steady streaming phenomenon which is caused by irreversible flow has been confirmed by both CFD calculation and PIV measurement. In bifurcation geometry, the steady streaming is assumed to be
mainly affected by the asymmetric geometry in airway, according to the assumption of F. E. Fresconi et al. [8] . We will examine the assumption by means of comparing the effect of various geometries.
In this step, the VOF calculation is adopted with identical inlet boundary condition (oscillatory velocity) and outlet boundary condition (total pressure) for these geometries. The strength of steady
streaming CE[X] will be investigated under sinusoidal oscillation with 10 Hz frequency and 50 mL tidal volume in one-second HFOV application. Also, the molecular diffusion is neglected for clearly
observing the interface between the nominal fresh air and used air.
As illustrated in Figure 17, four different airway geometries including cylinder, cone, bifurcation, and multi-bifurcations, are built and connected to the upper straight channel numerically. The
interfaces between fresh and used gas are initially arranged on the geometry connections. The numerical results show that steady streamings enhanced gradually in each non-uniform geometry by
oscillation, and an obvious transition of steady streaming in these shapes can be seen that more divergent channel brings greater steady streaming. The variation extent of cross section is considered
as a crucial factor to the development of net flow. It basically confirms the assumption―the non- uniform geometry mainly generates the steady streaming.
Bifurcation (2 generations)
Branch (3 generations)
Shapes in volume mesh
Figure 17. Steady streaming in different channel shapes caused by oscillatory flow (sinusoidal, 10 Hz, 50 mL) in 1second.
Moreover, the volume of net flow (V[nf]) which denotes the volume of fresh gas left in the following region after oscillation has been calculated along with the coefficient CE[X], as listed in Table
1. In the cylinder channel, the steady streaming is supposed to be zero after integral cycles of oscillation because of the uniform channel, a slight deviation occurs in the numerical calculation due
to the approximate treatment in iterations. More divergent geometry gives greater V[nf] and higher CE[X]. The CE[X] of branch model is as high as 0.164, which means 16.4% of the tidal volume has been
thrusted into the lower region after 10 oscillations, this lung-like airways apparently is more suitable to be a steady-streaming generator. In addition, the tidal volume at G18 is 50 mL/2^18 = 1.91
× 10^−4 mL, because the tidal volume is normally applied to trachea in clinical application of HFOV. For the distal airways, the volume of oscillatory flow is quite limited due to the huge number of
Both bifurcation model and branch model consist of bifurcating channels, however, the volume of net flow and CE[X] are much higher in the branch model. Therefore, it is considered that more following
bifurcations bring greater net flow and stronger steady streaming, this cumulative effect implies that steady streaming may be much more considerable in HFOV application due to the multi-bifurcating
structure in the real human lung which bifurcates 23 times from trachea all the way to alveoli.
5. The Influence of Womersley Number and Reynolds Number on Steady Streaming
In clinical field, the airway geometry has been defined already, a series of Wo and corresponding Re are then selected and applied to check their influences on CE[X], which may be instructive to HFOV
application. The oscillatory frequency is changed to regulate the magnitude of Wo, and the changed velocity then determines Re, while the tidal volume is kept at 50 mL at trachea. Figure 18 depicts
shapes of different gas segment after three cycles of oscillation under various Womersley numbers. The internal gas is initially divided into four parts by nominal interfaces in VOF scheme, and the
molecular diffusion is neglected as well. It can be seen from the results that the high Wo brings apparent deformation and deep penetration in longitudinal direction.
The variation of penetration volume for three-cycle oscillation (sinusoidal, 50 mL tidal volume) under different Womersley numbers (from 0.1 to 10) are illustrated in Figure 19. All the curves reach
up to almost the same peak values due to the identical
Table 1. Volume of net flow and CE[X] by oscillatory flow (sinusoidal, 10 Hz, 50 mL) in one second.
Figure 18. Movements of different sections of internal gas after 3 cycles of oscillation under various Womersley numbers.
Figure 19. Penetration volumes under different Wo in oscillatory flow (sinusoidal with 50 mL tidal volume) within 3 cycles.
tidal volume, while at the end of each oscillation, the values of penetration volume diverge from each other because of the different magnitudes of streaming net flow in airways, which means
different amounts of fresh air are left in the following region after the oscillation. The curves indicate that the net flows are relatively low when Wo < 1, while for Wo > 1, the magnitudes of net
flow rise drastically as Womersley number grows up to about 5, then fall down gradually when Wo further increases. This trend can be noticed clearly in Figure 20 that directly shows the relation
between V[nf] and Wo in three cycles. For all the three cycles, net flow maximizes at about Wo = 5 as well.
The detailed values of oscillatory frequency, Wo, Re, V[nf], and CE[X] after three cycles are listed in the following Table 2. Both Wo and Re are calculated on the basis of oscillatory frequency. The
CE[X] reaches a maximum 0.345 when Wo = 5, Re = 123.6, the corresponding frequency is as high as 955.0 Hz. The net flows are compared for different frequencies in same cycles here, if it is compared
in an identical duration, the difference will be much greater because the higher frequency accomplishes more cycles in a certain duration. This result indicates that the steady streaming in lung can
be enhanced drastically by increasing the oscillatory frequency up to nearly one thousand, which may further give an improvement direction for current HFOV.
Haselton and Scherer investigated streaming magnitude in a single bifurcation channel whose sizes are much greater than our airway models, they found the maximum streaming displacement increases with
Re and Wo up to a Re of about 100 and Wo of about 5, after which a gradual decline occurs [6] . Although they regulated the value of Re and Wo also by changing the flow viscosity, we achieved it by
merely changing the oscillatory frequency, the critical values of Re and Wo they found are very close to our calculation results. Their measurement was accomplished within a big single bifurcation
while our model is much smaller and multi-bifurcated, which may indicate that steady streaming acts throughout the lung airways of different sizes. Gaver
Figure 20. Volume of net flow under different Wo in oscillatory flow (sinusoidal with 50 mL tidal volume) within 3 cycles.
Table 2. Influence of Re and Wo on steady streaming after three cycles of oscillation (sinusoidal with tidal volume).
and Grotberg [9] experimentally investigated oscillatory flow in a tapered channel and demonstrated that the deformation of a dye streak is different in the cases of Wo < 5 and Wo > 10. Therefore, it
is considered that a peak steady streaming appears when Wo reaches about 5 and Re is close to 100 or slightly over 100 in bifurcation airways.
Obviously, the Womersley number can be changed by changing either frequency or viscosity within a provided geometry. But only oscillatory frequency has been altered to change Wo in this study, one
reason is in clinical field, the viscosity of ventilating gas is almost constant, another reason is that we anticipate faster oscillation could incur better ventilation efficiency by causing more
turbulence in upper lung region as well as more steady streaming. From Table 2, it can be seen that the intensity of steady streaming increases with oscillatory frequency up to nearly 1000 Hz, it may
imply the possibility of super-fast HFOV in the future.
6. Discussion and Conclusions
A common scenario of HFOV (sinusoidal with 10 Hz frequency and 50 mL tidal volume) has been investigated by both CFD calculation and PIV measurement in the present study. An apparent time-averaged
net flow phenomenon has been found in peripheral lung airways which help to deliver the fresh air into deeper region centrally and discharge the used air peripherally. It demonstrates that even
though the tidal volume is smaller than dead space of lung in HFOV, steady streaming works to thrust the fresh gas deeply and thereby overcome the lack of tidal volume. The distal lung region is
therefore more convective than previously expected. In view of steady streaming found in upper lung airways by other researchers, it is considered that steady streaming may exist in the whole lung
tract and is an important factor of HFOV effect.
A series of airway geometry have been numerically surveyed to clarify the influence of geometry on steady streaming. It is found that more divergent channels bring stronger steady streaming, and the
multi-bifurcated channel causes greater steady streaming than the single bifurcation does, which indicates the magnitude of steady streaming in real lung may be much greater than that in the
truncated branch model under HFOV application.
By changing the oscillatory frequency, different Womersley numbers and corresponding Reynolds numbers have been obtained to testify their influences on steady streaming. It has been found that the
streaming magnitude rises with Re and Wo up to a Re of about 124 and Wo of about 5, which is in a good agreement with the experimental results brought out by other researchers. Moreover, the volume
of net flow maximizes when the frequency reaches nearly 1000, which may imply the feasibility of super-fast HFOV in the future.
The authors are grateful to Kohei Morita and Tomonori Yamamoto for assistance in numerical calculation and PIV measurement.
C: Compliance [m^3/Pa]
f: Frequency [Hz]
G: external force [N]
p: Pressure [Pa]
Q: Flow rate [m^3/s]
Pe: Peclet number
r: radial position [m]
Re: Reynolds number
t: time [s]
T: flow cycle period [s]
u: oscillatory velocity [m/s]
U: velocity amplitude [m/s]
u[z]: velocity in z direction [m/s]
V[T]: tidal volume [m^3]
Wo: Womersley number
υ: kinematic viscosity [m^2/s]
ω: angular frequency [rad/s]
ρ: fluid density [kg/m^3]
i: ith generation of lung
j: jth generation of lung
z: z-direction | {"url":"https://scirp.org/journal/paperinformation?paperid=70792","timestamp":"2024-11-12T15:49:56Z","content_type":"application/xhtml+xml","content_length":"128217","record_id":"<urn:uuid:d7c0819d-7f95-4abb-b36f-544a7ba23d23>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00712.warc.gz"} |
Unit 2
Unit 2 Family Materials
Adding and Subtracting within 100
Adding and Subtracting within 100
In this unit, students add and subtract within 100 using strategies based on place value, properties of operations, and the relationship between addition and subtraction. They then use what they know
to solve story problems.
Section A: Add and Subtract
This section allows students to use methods that make sense to them to help them solve addition and subtraction problems. They can draw diagrams and use connecting cubes to show their thinking. For
example, students would be exposed to the following situation:
• Make trains with cubes.
• Find the total number of cubes you and your partner used. Show your thinking.
• Find the difference between the number of cubes you and your partner used. Show your thinking.
As the lessons progress, students analyze the structure of base-ten blocks and use them to support place-value reasoning. Unlike connecting cubes, base-ten blocks cannot be pulled apart. Students
begin to think about two-digit numbers in terms of tens and ones. To add using base-ten blocks, they group the tens and the ones, and then count to find the sum.
Section B: Decompose to Subtract
In this section, students subtract one- and two-digit numbers from two-digit numbers within 100. They use strategies based on place value and the properties of operations to evaluate expressions that
involve decomposing a ten. For example, to evaluate expressions such as \(63 -18\), students use connecting cubes or base-ten blocks as they learn to trade in a ten for 10 ones before grouping by
place value. In this case they can trade one of the tens in 63 for 10 ones, making it 5 tens and 13 ones. They can then subtract 1 ten from 5 tens and 8 ones from 13 ones, resulting in 4 tens and 5
ones, or 45.
Section C: Represent and Solve Story Problems
This section focuses on solving one-step story problems that involve addition and subtraction within 100. The story problems are all types—Add To, Take From, Put Together, Take Apart, and Compare—and
have unknowns in all positions. A question that your student might be exposed to is:
Diego gathered 42 orange seeds.
Jada gathered 16 apple seeds.
How many more seeds did Diego gather than Jada?
Show your thinking.
Try it at home!
Near the end of the unit ask your student to solve the following word problem:
Diego gathered 37 orange seeds.
Jada gathered 25 more apple seeds than Diego.
How many seeds did Jada gather?
Show your thinking.
Questions that may be helpful as they work:
• Can you explain to me how you solved the problem?
• What pieces of information were helpful?
• How does your representation show the answer to the problem? | {"url":"https://im-beta.kendallhunt.com/k5/families/grade-2/unit-2/family-materials.html","timestamp":"2024-11-07T23:34:38Z","content_type":"text/html","content_length":"79450","record_id":"<urn:uuid:4b1c1d48-ed63-41a8-93c3-722029dd4097>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00408.warc.gz"} |
How To Read
How To Read Odds
In actual fact the margin of victory is irrelevant as the Moneyline requires just one condition be met, a win. That was all about plus and minus symbols in sports betting from our end. Top Online
casino vuelta 2021 mapa Philippine islands Knowing about ‘+’ and ‘–’ in betting is important because you will be dealing with these all the time. If you don’t understand what these are, then you’ll
feel very much lost while actually taking part in sports betting. Reading this blog will help you gain a clear understanding of the meaning of ‘+’ and ‘-’ in sports betting.
• An odds expression where the odds are shown in decimal format.
• It is also possible to wager on other results of the 2024 election.
• As such, a $10 bet at 5/2 odds is (10×5) / 2, which pays out at $25.
• These essentially, calculate the payout to be received from the winning wagers and tell us the chances of a wager to win.
A push bet is a bet that has neither a winning or losing outcome. Prop bets, or specials, are available at most online sports betting sites. These types of bets consist of betting on markets not
necessarily to do with the final score, but rather events that can happen during the game. The point spread is one of the most popular sports betting options when it comes to betting on college
football and the NFL. A point spread is a figure set by the oddsmakers that really serves as a margin of victory.
So you’ll get paid less for betting the Yankees -1.5 against the lowly Orioles than you would for betting the informative post Yankees -1.5 against the Astros, when the two teams are more evenly
matched. The odds are just changed depending on the ability of the team — you won’t get -110 on both sides. Low-scoring sports like hockey and baseball do have point spreads, but they’re almost
always -1.5 and +1.5. If the Jets (+13.5) lost by 13 points or fewer, or won the game, they covered the spread.
Decimal Betting Odds Explained
BPM does not take into account playing time — it is purely a rate stat! Playing time is included in Value Over Replacement Player which is discussed below. A report from Cass Business School found
that only 1 in 5 gamblers ends up a winner. As noted in the report, this corresponds to the same ratio of successful gamblers in regular trading. Evidence from spread betting firms themselves
actually put this closer to being 1 in 10 traders as being profitable.
Learn How To Read Betting Odds
Keep in mind, the top ten riders in the jockey standings win about 90 percent of the races run during the meet and favorite horses win about 33 percent of the time, and have low payoffs. Either at
the track on the tote board or on your online sportsbook, the odds will change depending on how many people are betting on each horse in the race up until post time. Odds are simply the way prices
and payouts are shown at a horse track. The numbers displayed as 4-7 or 2-5 tell you what you pay and how much you get back if the horse you bet on wins. The first number tells you how much you could
win, the second number is the amount you bet.
It would require a bet of $113 to win $100 on the Yankees, or $107 bet to win $100 on the Red Sox. The VegasInsider.com Consensus NFL Line is just as important as the Open Line and also a key
resource on odds platform. The Consensus column could be called a “Median Line” since it shows the most consistent number provided by the sportsbooks on VegasInsider.com. The consensus line will be
the same as the open line but once the wagers start coming in, this number is often different than the openers. If, as it is sometimes with the spread, the total is listed as a whole number, the
result may be a push.
To convert a positive money line into fractional odds, divide the number in the money line by 100. In this case, Tiger Woods is the favorite, but he has a positive money line. A bet of $10 on him
would result in a $60 profit if he wins. A bet of $10 on Steve Stricker would result in a $230 profit if he were to win. To properly explain how to bet the money line, the first thing to understand
is the difference between a negative and positive money line.
What Does It Mean To Pick Against The Spread?
To further explain, consider two people make a bet on each side of a game without a bookmaker. However, if he had made that $110 bet through a bookmaker he would have only won $100 because of the
vig. In a perfect world if all bookmaker action was balanced, they would be guaranteed a nice profit because of the vig. The house vigorish – and your chances of winning – get worse with the more
teams you add.
How To Read Nfl Odds
Therefore, if one bets $100 on Donald Trump to be re-elected as president, this person could make a total payout of $400 ($100 x 4.00). This amount includes the initial stake of $100, giving a net
profit of $300. For instance, one of the renowned betting websites priced the candidates to win the 2020 U.S. Here, we list the decimal odds for the candidates and the biggest long shot among the
candidates listed by the bookmaker. The potential profit for a Cleveland win would be even higher, as you could make a profit of $700 $100 x (7/1). With the initial stake of $100 being returned, it
would make for a total payout of $800. | {"url":"http://www.edutip.mx/how-to-read-odds/","timestamp":"2024-11-07T13:06:51Z","content_type":"text/html","content_length":"14096","record_id":"<urn:uuid:d9b9ac37-3a2a-4b90-82c7-81c710c8059e>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00574.warc.gz"} |
SpaceX Starship SN10 Landing
In the past months, SpaceX was quickly moving forward with development of their interplanetary rocket – Starship. The last two prototypes even went ahead with a test flight to ~10 km. The test flight
was, in both cases, highly successful, apart from ending in an RUD (rapid unscheduled disassembly) during the landing. That was not unexpected since the previous prototypes had a low chance for
successful landing, according to Elon Musk. Nevertheless, many people (and we) are wondering whether the next prototype (SN10), scheduled to attempt the test flight and landing procedure in the
upcoming weeks, will finally stick the landing.
A recent twitter poll of almost 40,000 people estimated the probability of SN10 successfully landing at 77.5% (after removing people who abstained from voting).
A much higher chance that Elon’s own estimate of ~60% which is comparable to the Metaculus prediction market based on 295 predictions that converged to 56% median probability of successful landing.
Here, we also try to predict whether the next Starship, SN10, will successfully land. As all statisticians, we start by replacing a difficult problem with a simpler one — instead of landing, we will
predict whether the SN10 will successfully fire at least two of its engines as it approaches landing. Since the rocket engine can either fire up or malfunction, we approximate the engine firing up as
a binomial event with probability θ. Starship prototypes have 3 rocket engines, out of which 2 are needed for successful landing. However, in previous landing attempts, SpaceX tried lighting up only
2 engines — both of which are required to fire up successfully. Now, in order to improve their landing chances, SpaceX decided to try lighting up all 3 engines and shutting down 1 of them if all fire
successfully^1. We will therefore approximate the successful landing as observing 2 successful binomial events out of 3 trials.
To obtain the predictions, we will use Bayesian statistics and specify a prior distribution for the binomial probability parameter θ, an engine successfully firing up. Luckily, we can easily obtain
the prior distribution from the two previous landing attempts:
• The first Starship prototype attempting landing, SN8, managed to fire both engines, however, crashed due to low oxygen pressure resulting. That resulted in insufficient trust and way too fast
approach to the landing site. Video [here].
• The second Starship prototype attempting landing, SN9, did not manage to fire the second engine which, again, resulted in an RUD on approach. Video [here].
Adding an additional assumption of the events being independent, we can summarize the previous firing up attempts with beta(4, 2) distribution — corresponding to observing 3 successful and 1
unsuccessful event. In JASP, we can use the Learn Bayes module to plot our prior distribution for θ
and generate predictions for 3 future events. Since the prior distribution for θ is beta and we observe binomial events, the distribution of number of future successes based on 3 observations follows
a beta-binomial(3, 4, 2) distribution. We obtain a figure depicting the predicted number of successes from JASP and we further request the probability of observing at least two of them. Finally, we
arrive at an optimistic prediction of 71% chance of observing at least 2 of the engines fire up on the landing approach. Of course, we should treat our estimate as a higher bound on the actual
probability of successful landing. There are many other things that can go wrong (see SpaceX’s demonstration [here]) that we did not account for (in contrast to SpaceX, we are not trying to do a
rocket science here).
We can also ask how much does trying to fire up all 3 engines instead of 2 (as in previous attempts) increase the chance of successful landing. For that, we just need to obtain the probability of
observing at 2 successful events based on 2 observations = 48% (analogously from beta-binomial(2, 4, 2) distribution), and subtract it from the previous estimate of 71%. That is a 23% higher chance
of landing when trying to use all 3 instead of only 2 engines.
About The Authors
František Bartoš
František Bartoš is a Research Master student in psychology at the University of Amsterdam. | {"url":"https://www.bayesianspectacles.org/spacex-starship-sn10-landing/","timestamp":"2024-11-08T12:27:46Z","content_type":"text/html","content_length":"50444","record_id":"<urn:uuid:1522c162-5939-4fc7-b4e2-cff86b69ca15>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00738.warc.gz"} |
Output-sensitive algorithms for optimally constructing the upper envelope of straight line segments in parallel
The importance of the sensitivity of an algorithm to the output size of a problem is well-known especially if the upper bound on the output size is known to be not too large. In this paper we focus
on the problem of designing very fast parallel algorithms for constructing the upper envelope of straight-line segments that achieve the O (n log H) work-bound for input size n and output size H.
When the output size is small, our algorithms run faster than the algorithms whose running times are sensitive only to the input size. Since the upper bound on the output size of the upper envelop
problem is known to be small (n α (n)), where α (n) is a slowly growing inverse-Ackerman's function, the algorithms are no worse in cost than the previous algorithms in the worst case of the output
size. Our algorithms are designed for the arbitrary CRCW PRAM model. We first describe an O (log n · (log H + log log n)) time deterministic algorithm for the problem, that achieves O (n log H) work
bound for H = Ω (log n). We then present a fast randomized algorithm that runs in expected time O (log H · log log n) with high probability and does O (n log H) work. For log H = Ω (log log n), we
can achieve the running time of O (log H) while simultaneously keeping the work optimal. We also present a fast randomized algorithm that runs in over(O, ̃) (log n / log k) time with nk processors, k
> log^Ω (1) n. The algorithms do not assume any prior input distribution and the running times hold with high probability.
• Computational geometry
• Parallel algorithm
• Randomized algorithm
• Upper envelope
ASJC Scopus subject areas
• Software
• Theoretical Computer Science
• Hardware and Architecture
• Computer Networks and Communications
• Artificial Intelligence
Dive into the research topics of 'Output-sensitive algorithms for optimally constructing the upper envelope of straight line segments in parallel'. Together they form a unique fingerprint. | {"url":"https://nyuscholars.nyu.edu/en/publications/output-sensitive-algorithms-for-optimally-constructing-the-upper-","timestamp":"2024-11-11T07:06:54Z","content_type":"text/html","content_length":"55379","record_id":"<urn:uuid:a23a9cdc-0089-4445-8355-72d8b0d58c00>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00007.warc.gz"} |
Tiling vertices and the spacing distribution of their radial projection
The Fourier-based diffraction approach is an established method to extract order and symmetry properties from a given point set. We want to investigate a different method for planar sets which works
in direct space and relies on reduction of the point set information to its angular component relative to a chosen reference frame. The object of interest is the distribution of the spacings of these
angular components, which can for instance be encoded as a density function on ℝ[+]. In fact, this radial projection method is not entirely new, and the most natural choice of a point set, the
integer lattice ℤ^2, is already well understood. We focus on the radial projection of aperiodic point sets and study the relation between the resulting distribution and properties of the underlying
tiling, like symmetry, order and the algebraic type of the infation multiplier.
ASJC Scopus subject areas
• General Physics and Astronomy
Dive into the research topics of 'Tiling vertices and the spacing distribution of their radial projection'. Together they form a unique fingerprint. | {"url":"https://experts.arizona.edu/en/publications/tiling-vertices-and-the-spacing-distribution-of-their-radial-proj","timestamp":"2024-11-11T17:12:07Z","content_type":"text/html","content_length":"52278","record_id":"<urn:uuid:38711b61-4178-4482-b3bf-438489d3cf8d>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00571.warc.gz"} |
Problem Description XOR is a kind of bit operator, we define that as follow: for two binary base number A and B, let C=A XOR B, then for each bit of C, we can get its value by check the digit of
corresponding position in A and B. And for each digit, 1 XOR 1 = 0, 1 XOR 0 = 1, 0 XOR 1 = 1, 0 XOR | {"url":"http://blog.leanote.com/tag/rockdu/HDU?page=2","timestamp":"2024-11-03T12:06:12Z","content_type":"text/html","content_length":"20763","record_id":"<urn:uuid:5aada41f-5214-4adc-9365-88ff54f0bf1e>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00357.warc.gz"} |
How to Switch Between “First Last” and “Last, First” Name Simply With Chronicle
Jedidiah Cassinelli
A common scenario in the eDiscovery world is a need to switch first names and surnames around – or simply align them to the common alias value.
“Cassinelli, Jed” ≠ “Jed Cassinelli.", right? Technically, they are indicative of the same person / entity, but the presentation of the data is always part of the equation.
So if you've already normalized your data, and the client's requirements for displaying the names change, you might find yourself repeatedly saying: "Last, First. No, First Last. Wait, they changed
it again?? Oh goody."
Chronicle gives you tools for specifying normalized values - setting them in a one-off capacity, uploading them in bulk – enabling a solution for this task sourced from your existing Name
Normalization work product.
Within the UI, there is no immediate means of making this sort of wholesale change (yet). So what can be done to switch names when, for example, that 2nd request - with all of its myriad names - has
a requirements change on the display of names? Enter our superhero, Excel! (trusty sidekick to Chronicle!)
(1) First, pull bad data out using Chronicle
1. Navigate to the Manage Normalizations tab. Here, we can see all the normalizations in this workspace.
2. Filter (if needed) the results to those that you want to change
3. Select 'Export all data to CSV' to get a list of Original Values and their Normalized Values.
4. Download and open the CSV.
(2) Then, use Excel (formulas) to do the heavy lifting.
We'll add columns and formulas to the CSV to calculate the new values.
These formulas are written assuming you are starting with a CSV with the following columns, including headers in row 1:
1. Original Value (A)
2. Normalized Value (B)
3. Occurrences (C)
4. Documents (D)
You won't need columns A, C, or D, but we'll assume you're leaving those in the right place.
• If you are converting from "First Last" to "Last, First:
1. In column E, pull out the first name with this formula:
=LEFT(B2, SEARCH("", B2,1)-1)
b. In column F, pull out the last name with this formula:
=TRIM(RIGHT(B2,LEN(B2) - SEARCH("", B2, SEARCH("", B2))))
c. In column G, put the names back together with this formula:
=CONCATENATE(F2, ", ", E2)
Note: Columns E & F are not required, and you could shortcut directly to the following formula:
=CONCATENATE(TRIM(RIGHT(B2,LEN(B2) - SEARCH("", B2, SEARCH("", B2)))), ", ", LEFT(B2, SEARCH("", B2,1)-1))
• If you are converting from Last, First to First Last:
1. In column E, pull out the last name with this formula:
=LEFT(B2, SEARCH(",", B2,1)-1)
b. In column F, pull out the first name with this formula:
=TRIM(RIGHT(B2,LEN(B2) - SEARCH(",", B2, SEARCH(",", B2))))
c. In column G, put the names back together with this formula:
Note: Columns E & F are not required, and you could shortcut directly to the following formula:
=CONCATENATE(TRIM(RIGHT(B2,LEN(B2) - SEARCH(",", B2, SEARCH(",", B2))))," ", LEFT(B2, SEARCH(",", B2,1)-1))
(3) Do a little QC on the new values.
Before moving on to getting this data back into Relativity, one quick note: These formulas DO NOT cover all edge cases!
For a couple of the known issues, we can add another formula to warn us if this might be causing a problem with the new values being calculated.
1. When going from "Last, First" to "First Last" - If there is more than one comma in a value, it won't be correctly converted (e.g., "Cassinelli, Esq., Jed" → "Esq., Jed Cassinelli ")
1. In column H, check to see if the starting value has more than one comma with this formula: =(LEN(B2)-LEN(SUBSTITUTE(B2, ",",""))>1)
2. When going from First Last to Last, First - If a value is in the form First Middle Last, the middle portion will be incorrectly treated as part of the last name (e.g., "Jedidiah C. Cassinelli" →
"C. Cassinelli, Jedidiah ")
1. In column H, check to see if there are more than two names with this formula: =(LEN(B2)-LEN(SUBSTITUTE(B2," ",""))>1)
3. If the value is TRUE for any of the rows with these QC formulas, it's probably worth examining the calculation to see if it makes sense before using it.
(4) And now put the (good) data back into Chronicle.
Once all the new values have been calculated, we must get them back into Chronicle.
1. Copy the good values from column G into column B, overwriting the bad values. (G=good, B=bad )
2. Delete the extra columns added—where we're going, we don't need extra columns. This is optional, but who doesn't like a fresh start?
3. Navigate to the Manage Normalizations tab.
4. Select Import and select your updated file to load. This will update all the normalized values within Chronicle.
5. Navigate to the Normalization Projects tab, where you should see an indicator of projects that have been normalized and were just updated.
6. For each project, navigate to the Apply Normalizations tab and click Write values to Fields to push these updated values into their associated Relativity Fields.
And there you go! What was first is now last (or vice versa), and you're ready to go!
Ready to flip those names and more with Chronicle? Reach out ➜
Commenting has been turned off. | {"url":"https://www.milyli.com/post/how-to-switch-first-last-for-last-first-names-with-chronicle","timestamp":"2024-11-12T07:03:29Z","content_type":"text/html","content_length":"1050033","record_id":"<urn:uuid:64a073d2-f715-48e6-9d37-7ef09b54a0f4>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00179.warc.gz"} |
The first thing you learned about probability is wrong*
I’ve just started reading Against the Gods: The remarkable Story of Risk, a book by Peter Bernstein that’s been high on my “To Read” list for a while. I suspect it will be quite interesting, though
it’s clearly targeted at a general audience with no technical background. In Chapter 1 Bernstein makes the distinction between games which require some skill, and games of pure chance. Of the latter,
Bernstein notes:
“The last sequence of throws of the dice conveys absolutely no information about what the next throw will bring. Cards, coins, dice, and roulette wheels have no memory.”
This is, often, the very first lesson that gets presented in a book or a lecture on probability theory. And, so far as theory goes it’s correct. For that celestially perfect fair coin, the odds of
getting heads remain forever fixed at 1 to 1, toss after platonic toss. The coin has no memory of its past history. As a general rule, however, to say that the last sequence tells you nothing about
what the next throw will bring is dangerously inaccurate.
In the real world, there’s no such thing as a perfectly fair coin, die, or computer-generated random number. Ok, I see you growling at your computer screen. Yes, that’s a very obvious point to make.
Yes, yes, we all know that our models aren’t perfect, but they are very close approximations and that’s good enough, right? Perhaps, but good enough is still wrong, and assuming that your theory will
always match up with reality in a “good enough” way puts you on the express train to ruin, despair and sleepless nights.
Let’s make this a little more concrete. Suppose you have just tossed a coin 10 times, and 6 out of the ten times it came up heads. What is the probability you will get heads on the very next toss? If
you had to guess, using just this information, you might guess 1/2, despite the empirical evidence that heads is more likely to come up.
Now suppose you flipped that same coin 10,000 times and it came up heads exactly 6,000 times. All of a sudden you have a lot more information, and that information tells you a much different story
than the one about the coin being perfectly fair. Unless you are completely certain of your prior belief that the coin is perfectly fair, this new evidence should be strong enough to convince you
that the coin is biased towards heads.
Of course, that doesn’t mean that the coin itself has memory! It’s simply that the more often you flip it, the more information you get. Let me rephrase that, every coin toss or dice roll tells you
more about what’s likely to come up on the next toss. Even if the tosses converge to one-half heads and one-half tails, you now know with a high degree of certainty what before you had only assumed:
the coin is fair.
The more you flip, the more you know! Go back up and reread Bernstein’s quote. If that’s the first thing you learned about probability theory, then instead of knowledge you we’re given a very nasty
set of blinders. Astronomers spent century after long century trying to figure out how to fit their data with the incontrovertible fact that the earth was the center of the universe and all orbits
were perfectly circular. If you have a prior belief that’s one-hundred-percent certain, be it about fair coins or the orbits of the planets, then no new data will change your opinion. Theory has
blinded you to information. You’ve left the edifice of science and are now floating in the either of faith.
In short, the coin doesn’t have memory, but it MAY have bias. Because its assumed to be a fair coin, its assumed to be perfectly unbiased – and such a coin only really exists in pure theory.
There is no way to determine the bias or the true probabilty of ANY event without performing the experiment or event a sufficient enough time that you can get the sample size up high enough to get
the confidence interval narrowed down.
But once you’ve established a good estimate of the true probability of the biased coin (60% heads for example) then it IS true that the coin has no memory. You might flip a coin 10 times and get
heads 10 times, and on flip 11 you will STILL have a 60% chance of getting heads. This is no different than the theoretical case where you have an unbiased coin, and flip it 10 times and get heads 10
times. The only difference is that your odds of the next head is 50% instead of 60%.
Basically, it should be rephrased that any random number generator has no memory of its past history. BUT every throw gets you closer to knowing what the true probability of an event is. Everything
only breaks down when you assume that the theoretical concept of a perfect random number generator actually exists in the real world.
Well put. Though I wonder if it ever makes sense to say that a coin has a “true” probability of landing on heads, as in some immutable property of the coin itself. It seems like the information we
gain lets us make limited inferences about that coin tossed in a particular way over a particular period of time (obviously a coin will change slowly).
Interesting. I’ve started teaching probability the last two years differently since finishing my MS in statistics. I actually teach experimental probability first as opposed to second and discuss how
important its implications are. I actually had answers on my last test that described how experimental probability is a more accurate way to describe a solution than describing all possibly outcomes.
Didn’t really like the “against the gods” book, too much weight on theory and less about real life, prefer Taleb’s books (black swan, fooled by randomness)
Leaving aside the possibility that the coin flipper can sufficiently control the flip to bias the outcome (see the mathematician/statistician/magician Persi Diaconis’ work) and making the weaker
assumption that individual flips are independent of each other with probability P(H) = p not equal to zero or one, you can use John von Neumann’s algorithm/trick to construct a virtual “honest” coin:
1) Flip the coin twice, 2) if outcome is HT, with P(HT) = p*(1-p), the virtual coin is H, 3) if outcome is TH, with P(TH) = (1-p)*p, the virtual coin is T, 4) if outcome is HH or TT, go to step 1.
Similar constructions can be use to build honest virtual dice, etc.
I rather liked Against the Gods.
Computer-generated random numbers can be worse! With that old standby, the linear congruent generator, you’re assured of never getting the same number two or more times in succession.
Excellent article, but I can’t believe the name of Thomas Bayes appears nowhere in it.
Hi Michael, what are the ideas you believe Bayes could help in this subject?
Fascinating! Most excellent thinking. I teach research statistics to both undergraduate and graduate students, and have always ass-u-me(d) that a coin toss was always 50-50! But, you are correct.
Once you have completed an adequate statistical sample, and the results show that the potential is more 55-45, you have established a new reality for that coin (or?). I love it, thank you for
thinking… And I think The Black Swan is much more statistically meaningful of a codex on probability, just sayin… | {"url":"https://statisticsblog.com/2011/12/03/the-first-thing-you-learned-about-probability-is-wrong/","timestamp":"2024-11-02T08:42:46Z","content_type":"application/xhtml+xml","content_length":"38985","record_id":"<urn:uuid:fb8dba04-d047-4830-9113-6e19736cac67>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00633.warc.gz"} |
189-NCERT New Syllabus Grade 10 Real Numbers Ex-1.2
NCERT New Syllabus Mathematics
Class: 10
Exercise 1.2
Topic: Real Numbers
Understanding Real Numbers: Key to Mathematical Mastery.
Welcome to the path of discovering one of the most fundamental topics in mathematics: real numbers. This chapter from the NCERT class 10 syllabus offers the framework for comprehending advanced
mathematical concepts by concentrating on properties, theorems, and real-number operations. In this blog, we'll look at exercise 1.2, breaking down each difficulty carefully and addressing it.
Whether you're studying for an exam or creating a solid foundation, this post will easily guide you through each answer. Let's discover the power of real numbers together!
EXERCISE 1.2
Q1. Prove that √5 is irrational.
1) We can prove this using indirect proof. It is also known as proof by contradiction.
1) Let us assume, for the sake of contradiction, that √5 is a rational number.
2) Therefore, we can express √5 = p/q where p and q are coprime integers
(having no common factor other than 1) and q ≠ 0.
(√5)^2 = (p/q)^2
5^ = (p)^2/(q)^2
p^2^ = 5q^2 ---------- equation 1
3) Equation (1) shows that 5 divides p^2, meaning that 5 must also divide p (since
if a prime divides the square of a number, it must divide the number itself).
4) Let for some integer r. Substituting this value into equation (1):
(5r)^2 = 5(q)^2
25r^2 = 5q^2 ---------- equation 2
5) Dividing equation (2) by 5, we get:
6) Thus, we have shown that 5 divides both and , which contradicts our initial
assumption that and are coprime.
7) Therefore, our assumption that √5 is a rational number must be incorrect.
8) Hence, √5 is an irrational number.
Q2. Prove that 3 + 2√5 is irrational.
1) We can prove this using indirect proof. It is also known as proof by contradiction.
1) Let us assume, for contradiction, that 3 + 2√5 is a rational number.
2) Therefore, we can express 3 + 2√5 = p/q where p and q are coprime integers
(with no common factor other than 1) and q ≠ 0.
3) Since p and q are co-primes, (p – 3q)/2q is rational, so √5 is also rational.
However, this contradicts the well-known fact that √5 is irrational.
4) Therefore, our assumption was wrong, and 3 + 2√5 is an irrational number.
Q3. Prove that the following are irrationals :
(i) 1/√2 (ii) 7√5 (iii) 6 + √2
1) We can prove this using indirect proof. It is also known as proof by contradiction.
(i) 1/√2
1) Assume, for contradiction, that 1/√2 is a rational number.
2) Thus, 1/√2 = p/q where p and q are co-primes and q ≠ 0. (i.e., no common
3) Since p and q are co-primes, q/p is a rational number, which implies √2 is
rational. But this contradicts the known fact that which contradicts the fact that √2 is irrational.
4) Therefore, 1/√2 is an irrational number.
(ii) 7√5
1) Let us assume, for contradiction, that 7√5 is a rational number.
2) So, 7√5 = p/q where p and q are co-primes and q ≠ 0. (i.e., no common
7√5 = p/q
3) Since p and q are co-primes, p/7q is a rational number, which implies √5 is
rational. However, this contradicts the fact that √5 is irrational.
4) Thus, 7√5 is an irrational number.
(iii) 6 + √2
1) Assume that 6 + √2 is a rational number.
2) Therefore, 6 + √2 = p/q where p and q are co-primes integers and q ≠ 0. That is, there is no common factor other than 1.
6 + √2 = p/q^
3) Since p and q are co-primes, (p – 6q)/q is rational, which implies √2 is rational. But this contradicts the fact that √2 is irrational.
4) Therefore, 6 + √2 is irrational number.
Conclusion: Unveiling the Power of Real Numbers
In this chapter on Real Numbers, we've explored the foundational blocks of mathematics, laying the groundwork for a deeper understanding of algebra and beyond. From the beauty of irrational numbers
to the precision of the Euclidean algorithm, real numbers are at the core of every mathematical operation. As you continue to delve into the world of numbers, remember that these principles stretch
far beyond the classroom, influencing technology, science, and everyday calculations. Keep exploring, calculating, and unlocking the endless possibilities of real numbers!
#RealNumbersUnveiled #Class10Math #NCERTSyllabus #NumberTheory #AlgebraEssentials #MathInLife #MathIsBeautiful #NCERTClass10 #Mathematics #NCERTMaths #Grade10Maths #MathSyllabus #NCERTSolutions #
MathTips #LearnMath #MathConcepts #MathMadeEasy #RealNumberSystem #MathHelp #PrimeNumbers #MathForStudents #CBSEMath #MathEducation #MathLearning #simple method
No comments: | {"url":"https://anil7pute.blogspot.com/2024/09/NcertNewSyllabusGrade10-RealNumbers1-2.html","timestamp":"2024-11-02T15:41:46Z","content_type":"application/xhtml+xml","content_length":"128042","record_id":"<urn:uuid:37d945f8-b611-43be-a752-a78b8446ed84>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00335.warc.gz"} |
Vorticity of spirals in the rock-paper-scissors equation on the sphere
Vorticity of spirals in the rock-paper-scissors equation on the sphere
Solution of a reaction-diffusion equation involving three chemicals, each of them dominating one of the others, and dominated by the other one.
At each point in space and time, there are three concentrations u, v, and w of chemicals, that we may call Red, Blue and Green. Denoting by rho = u + v + w the total concentration, the system of
equations is given by
d_t u = D*Delta(u) + u*(1 - rho - a*v)
d_t v = D*Delta(v) + v*(1 - rho - a*w)
d_t w = D*Delta(w) + w*(1 - rho - a*u)
where Delta denotes the Laplace operator, which performs a local average, and the parameter a is equal here to 0.75 while D is equal to 0.2. The terms proportional to a*v, a*w and a*u denote reaction
terms, in which Red is beaten by Blue, Blue is beaten be Green, and Green is beaten by Red. The situation is thus similar to the Rock-Paper-Scissors game, and there exist simpler cellular automata
with similar properties,The equation is solved by finite differences, where the Laplacian is computed in spherical coordinates. Some smoothing has been used at the poles, where the Laplacian becomes
singular in these coordinates.
To obtain the vorticity, the values of u, v and w are first converted into an angle with the following steps:
- divide u and v by the total density rho = u + v + w
- set x = (u - 1/3) + (v - 1/3)/2 and y = (v - 1/3)*sqrt(3)/2
- convert (x,y) to polar coordinates
The resulting transformation from (u,v,w) to (x,y) is similar to a discrete Fourier transform with 3 modes. The vorticity is then obtained by computing the curl (or rotational) of the gradient of the
polar angle. For a smooth field, the curl of the gradient is zero. However, the curl has singularities where the gradient is not defined, as in the center of spirals. This allows to make the center
of the spirals visible as peaks (I’m not sure where the other spirals come from, they may be due to round-off errors). In the course of the simulation, some centers of vortices and anti-vortices
(rotating in opposing directions) annihilate. | {"url":"https://www.imaginary.org/film/vorticity-of-spirals-in-the-rock-paper-scissors-equation-on-the-sphere","timestamp":"2024-11-11T03:55:29Z","content_type":"text/html","content_length":"140294","record_id":"<urn:uuid:ffede5c3-afa8-4a83-966b-9ead568562ab>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00426.warc.gz"} |
Design and Analysis of Multifidelity Finite Element Simulations
The numerical accuracy of finite element analysis (FEA) depends on the number of finite elements used in the discretization of the space, which can be varied using the mesh size. The larger the
number of elements, the more accurate the results are. However, the computational cost increases with the number of elements. In current practice, the experimenter chooses a mesh size that is
expected to produce a reasonably accurate result, and for which the computer simulation can be completed in a reasonable amount of time. Improvements to this approach have been proposed using
multifidelity modeling by choosing two or three mesh sizes. However, mesh size is a continuous parameter, and therefore, multifidelity simulations can be performed easily by choosing a different
value for the mesh size for each of the simulations. In this article, we develop a method to optimally find the mesh sizes for each simulation and satisfy the same time constraints as a single or a
double mesh size experiment. A range of different mesh sizes used in the proposed method allows one to fit multifidelity models more reliably and predict the outcome when meshes approach
infinitesimally small, which is impossible to achieve in actual simulations. We illustrate our approach using an analytical function and a cantilever beam finite element analysis experiment.
1 Introduction
Computer experiments are widely used to simulate engineering applications, where physical experiments are prohibitively expensive or challenging to carry out. Statistical design and analysis of
computer experiments came into prominence through the seminal works of Sacks et al. [1] and Currin et al. [2]. See the book by Santner et al. [3] for details. More recently, multifidelity modeling
has become an important topic in computer experiments. For example, see the applications in satellite systems [4], laser melting [5], and polymers [6]. Multifidelity simulations are motivated by the
fact that combinations of simulations with different levels of fidelity can be utilized to improve the system estimation overall, making the predictions more accurate and computationally efficient.
Kennedy and O’Hagan [7] introduced a Gaussian process framework for multifidelity modeling, which is extended by other researchers [8,9], to name a few.
Space-filling designs [10] are commonly used for computer experiments, but the introduction of multiple fidelity calls for new approaches. Qian [11] proposed the idea of nested space-filling designs
where the higher fidelity simulations are nested within the lower fidelity simulations. Sequential design methods that incorporate multifidelity have also been proposed in the literature [12–14].
However, they restrict the selection of fidelities to only a few discrete levels, which are insufficient when the fidelity can be changed continuously such as in finite element analysis (FEA).
FEA typically divides a larger geometric domain into smaller and simpler cells and then aims to solve partial differential equations on them instead. These cells are called mesh elements, and this
representation is called meshing [15]. It is known that more mesh elements, i.e., finer meshes, lead to more accurate simulation results, but it will inevitably consume more computational power.
Simulations with fewer mesh elements, though less accurate, are often cheaper to conduct. In practice, we can adjust the number of mesh elements in FEA, achieving a trade-off between accuracy and
Thus, multifidelity simulations in FEA differs from the usual problems because the fidelity can be changed almost continuously using the mesh size or the number of elements. There has been some
research in multifidelity modeling in FEA [16,17], where the density of finite elements is connected to fidelity. However, the challenge of constructing experimental design under these circumstances
remains. Moreover, the computational expenditure as a constraint for the designs has not been well investigated. We aim to address them both in this work.
We propose a new method to construct the experimental design where each simulation has a different mesh number/size. The proposed method targets finite element simulations with uniform meshes, where
mesh density can be directly calculated from the dimensions of the uniform mesh elements. The method is novel in that it integrates computational costs for simulations into the design strategy, which
is a practical matter for engineers. We will describe in detail how the experimenter can choose different mesh sizes/mesh element numbers to complete the full set of simulations within the given
computational budget. We would also demonstrate how simulations performed at various mesh sizes can be integrated to produce a predictive model over the entire experimental region that is more
accurate than others acquired from simulations with one or two levels the of mesh size.
This article is organized as follows. We will briefly review the commonly used space-filling experimental design methods and multifidelity models in Sec. 2. We then develop our new design in Sec. 3.
Finally, we demonstrate the performance of the new design in two applications in Sec. 4: a simulation study on a function with a scalar response and an FEA on beam deflection with a functional
response. Some concluding remarks are made in Sec. 5.
2 Related Work
In this section, we will review a few existing works in space-filling designs and multifidelity modeling, which are pertinent to this work.
2.1 Experimental Design.
In the design of deterministic computer experiments, space-filling designs that spread out the design points in the experimental region are commonly used [3]. Johnson et al. [18] propose two
strategies based on the distance between the design points: minimax designs minimize the maximum gap in the experimental region, whereas maximin designs maximize the minimum distance between the
points. Since our work is closely related to maximin designs, we will explain it in a bit more detail. See Joseph [10] for a recent review on space-filling designs.
Suppose we are interested in exploring the relationship between the output
and input variables
, …,
. We can scale the input variables so that the experimental region is a unit hypercube [0, 1]
. Let
= {
, …,
} be the experimental design, where
∈ [0, 1]
= 1, …,
. Let ‖
‖ be the Euclidean distance between points
. Then the maximin design is obtained as the solution to the following optimization problem:
An issue with the maximin design is that some of the design points may project onto the same value for some input variables. This is undesirable because only a few inputs may be important, and
therefore, replicated values in them are not useful in a deterministic computer experiment that has no random errors. Latin hypercube design (LHD) proposed by Ref. [19] is a solution to this problem.
It ensures that the design points project onto n different values in each input variable. However, there can be many LHDs. Therefore, an optimum LHD can be obtained by maximizing the minimum distance
among the points, which is called a maximin LHD [20]. Joseph et al. [21] proposed the maximum projection (MaxPro) design that tends to maximize the minimum distance among projections to all possible
subsets of inputs.
To accommodate the idea of multifidelity in experiment design, we need to account for the situation where some of the design points will be more valuable than others due to different levels of
accuracy. Nested LHDs are specifically tailored to such scenarios [11,22,23]. For example, for two fidelity levels, the set of design points as a whole is an LHD. Meanwhile, it contains a subset that
is also an LHD (of smaller size). The whole set of points is used for the low-fidelity experiment, while the subset LHD is for the high-fidelity experiment.
Sequential design strategies for multifidelity simulations are also proposed in the literature [24–26]. In contrast, this article focuses on developing a fixed design strategy that is tailored for
multifidelity finite element simulations. Interestingly, almost all of the sequential strategies need an initial set, and therefore, even if one is interested in sequential simulations, the fixed
design developed in this work can be utilized for constructing the initial design.
2.2 Multifidelity Modeling.
In the seminal work of Kennedy and O’Hagan [
], the authors model the correlation between simulation outputs of high- and low-fidelity levels with an autoregressive function.
) is the output at fidelity level
indicates lower fidelity),
is a scale factor, and
) is the bias for
= 1, …,
. The bias term
) is modeled by a stationary Gaussian process with a Gaussian covariance function [
denotes the correlation parameter in the
th dimension at the
th fidelity level. The foregoing model can be used for integrating data from all the fidelity levels and use it for predicting the response at the highest fidelity level. In this article, we refer to
this modeling approach as KOH. See Fernández-Godino et al. [
] for a recent review of other multifidelity modeling methods.
However, the KOH model gets overly complex when there are numerous fidelity levels (large
), which is especially true in FEA because the fidelity can be easily changed by varying the mesh density. The work of Ref. [
] addresses it by regarding the mesh density/size as a tuning parameter for the system. Denote the mesh density tuning variable by
. It is assumed that
> 0, and a smaller
indicates a higher mesh density, implying higher simulation accuracy. Tuo et al. [
] proposed the following model:
) is the true response and
) denotes the bias in the simulation at input
under mesh density
. The true response function
) is unattainable in FEA because it is impossible to run the simulation with
= 0. The beauty of Tuo et al.’s approach is that by modeling data from different fidelity levels, we can extrapolate and predict the response at
= 0. Similar to the model used in Ref. [
], the two terms
) and
) can be modeled by two independent Gaussian processes, where
) has a stationary covariance function:
) cannot be modeled by a stationary Gaussian process because
) → 0 as
→ 0 regardless of
, breaking the necessary condition for a stationary process. Therefore, Tuo et al. [
] use a nonstationary covariance function involving Brownian motion:
is a covariance parameter. There are a few choices for
but typically it is set to 4 motivated by the error analysis of a finite element simulation. Note that Var[
)] =
→ 0 as
→ 0, and therefore, the bias disappears from the model.
Tuo et al.’s model is more flexible than the KOH model because each simulation is free to select a different mesh number/density parameter. Moreover, this model contains only 2p unknown correlation
parameters as opposed to Kp unknown parameters in KOH, making the estimation easier when K > 2. However, it poses a challenge to the experimental design strategy. While space-filling design for
system inputs is relatively well studied, determining the corresponding fidelity parameters is not. To meet this challenge, we propose a more flexible and versatile experimental design method that is
accommodative to finite element simulations with various mesh densities and mesh element numbers. We call it multi-mesh experimental design (MMED), which is discussed in Sec. 3.
3 Multi-Mesh Experimental Design
The aim is to create an
-point experimental design
= {(
), …, (
)}, where each simulation can have a different mesh density. Since the geometry of the meshing element can differ depending on the type of the meshing system used in the FEA, it will be convenient to
use the number of mesh elements (
) as the tuning parameter instead of the mesh density. They are related by the relation
is the dimension of the mesh (usually 2 or 3). Thus,
→ 0 when
→ ∞. In other words, the larger the
is, the more accurate the results will be. However, the computational cost increases with
. Assume that the computational time (
) of a single simulation is related to the number of mesh elements by
is a proportionality constant that depends on the meshing characteristics such as geometry and adaptation, and
is a positive constant. For ease of exposition, the rest of this article considers the case of
= 1, which seems to be a reasonable assumption [
]. The case of
≠ 1 is given in the
. We will also restrict our attention to uniform meshes, although there is evidence that the linearity holds with certain mesh adaptation as well [
To perform simulations given the number of mesh elements M, we need to make sure that the mesh arrangement is reasonable and the simulation is able to converge. Suppose FEA simulations within a range
of $M~$ and $M¯$ converge and produce reasonable results. Then if we were to perform the whole set of simulations at a single fidelity level, we can choose the upper bound $M¯$ as the mesh element
number. If we were to pursue a multifidelity scenario, then choosing two different fidelity levels is a common approach. We can designate meshing with $M~$ elements as the low-fidelity level, and
that with $M¯$ elements as the high-fidelity level. In this scenario, we will be able to perform more simulations at $M~$ compared to $M¯$ and incur the same computational time. That is, for each
simulation performed at $M¯$, we can do $(M¯/M~)$ simulations at $M~$. For a multi-mesh experimental design, we will choose $M1,…,Mn∈[M~,M¯]$ while at the same time ensure the total computational
time is still within the budget.
Suppose we have a budget to perform
simulations at
. Thus, the total time budget is
. We would like to divide this budget for simulations at
distinct levels
, …,
. Thus,
. Therefore, the levels need to satisfy the constraint
How should the
levels be chosen so that Eq.
is satisfied? One possibility is to find the levels so that it minimizes the integrated mean squared error criterion [
] under Tuo et al.’s [
] multifidelity model. However, such a criterion would involve the correlation parameters of the Gaussian process models, which are unknown before doing the simulations. Therefore, a more robust
approach is to use a space-filling design. If we were to use a maximin design in
, we would choose the levels using
= {
, …,
}. Since this is a one-dimensional case, the maximin design is an equidistant point set given by Ref. [
can be chosen to approximately satisfy Eq.
. However, this solution spreads the levels uniformly within
, which does not make sense because we expect to have more simulations near
than near
. This can be achieved by using a weighted maximin design
where the weights
’s should be inversely proportional to
. The weighted maximin design is also known by the name minimum energy design [
], which has the property that the optimal design points will converge to a distribution with density proportional to
. Thus, if we take
, then the design points will have a distribution proportional to 1/
. This makes sense because under the assumption in Eq.
, the number of simulations that can be carried out at level
for a given time is inversely proportional to
. Therefore, the
levels can be obtained by solving the optimization problem:
There is no explicit solution to the foregoing optimization problem. However, an approximate solution can be obtained using transformations as follows. As shown in Refs. [
], the solution to Eq.
will asymptotically converge to a distribution with density function:
. The cumulative distribution function is given by
) has a uniform distribution in [0, 1]. Therefore, under this transformation, Eq.
The solution to this problem is given by Ref. [
Thus, the solution to
is given by
Substituting Eq.
into Eq.
, we obtain
. Then, Eq.
which gives
By solving for
and choosing the nearest integer solution, we obtain
and [
] denotes the nearest integer of
. Once
is obtained, {
, …,
} can be obtained from Eq.
Now to get the MMED, D = {(x[1], M[1]), …, (x[n], M[n])}, we can first find an n-point MaxPro LHD [21] in (p + 1) dimensions and replace the n levels of the last column with {M[1], …, M[n]}. The
design procedure is summarized in Algorithm 1.
MMED(p,M~,M¯,n¯): Multi-Mesh Experiment Design
Algorithm 1
Data:$p$: System input dimension,
$[M~,M¯]$: Range of mesh element numbers,
$n¯$: No. of simulations at $M¯$ affordable by budget.
Find suitable$n$:
Obtain the number of design points $n$ from Eq. (15).
Space-filling design:
1. Obtain $n$ levels for mesh elements from Eq. (13).
2. Obtain an $n$-run design in $(p+1)$ dimensions using a space-filling design such as MaxPro LHD [21].
3. Assign the $p$ input variables from the first $p$ columns and the mesh elements from the remaining column of the design.
Output: Experimental design $D={(xi,Mi)}i=1n$.
4 Applications
In this section, we apply MMED to two applications: an analytical function and a cantilever beam deflection. To evaluate MMED, we compare it to two other design methods and their corresponding
modeling practices. The first method is a set of space-filling points with simulations executed with the same number of mesh elements. We call it the single-mesh design as only one meshing
arrangement is used throughout the finite element simulations. Under this setting, fidelity is not taken into account at all. The second method is a two-level nested Latin hypercube design [11]
(briefly described in Sec. 2.1), where all simulations on level 1 are carried out with a mesh arrangement with fewer elements $M~$, while simulations on level 2 are conducted with more mesh elements
$M¯$. The computational budget is equally split between the two fidelity levels because running simulations in parallel from two separate solvers makes the whole experiment finish at the same time.
We call it the double-mesh design.
For modeling the data from the single-mesh simulations, we fit a standard Gaussian process with a Gaussian covariance function. For double-mesh simulations, we use the KOH model [7] described at the
beginning of Sec. 2.2. For MMED simulations, we use the nonstationary model of Ref. [16] described at the end of Sec. 2.2. The hyperparameters for all three models are estimated using maximum
4.1 Analytical Function.
We first evaluate the performance of MMED using a 2D analytical function [
= [
] ∈ [0, 1]
. Although this is an analytical function, for illustration purposes, we predict the output from a mesh grid via a grid-based interpolation. The meshing process divides the input domain into many
square elements in 2D. The response surface is shown in Fig.
with 25 mesh elements. We obtain the surface plot via the following procedure: (i) generate a uniform 5 × 5 square mesh grid in the input plane, (ii) evaluate the function outputs
) by Eq.
at all these grid points, and (iii) carry out piecewise linear interpolation and form a continuous surface within each grid box from the discrete grid evaluations. Since there are two input
variables, we use bilinear interpolation. To estimate the system response given input (
), we first find the four corner points of the square mesh box in which the input point falls denoted by (
), (
), (
), (
). (By convention,
.) The interpolated function response is then:
This procedure is effectively the same as finite element meshing, splitting the input plane into many smaller squares. The number of mesh elements used controls the scale of these squares. For
example, M = 25 means the mesh grid contains 5 × 5 discrete points. The more mesh cells there are, the more accurate the surface becomes. Another plot of Eq. (16) is generated using M = 400 mesh
cells, as shown in Fig. 1. The interpolated surface becomes much more accurate because there are more mesh elements.
To evaluate the performance of MMED, we first designate a reasonable range for the number of mesh elements N, with $M~=16$ and $M¯=144$. Then we specify the computational time constraint to be
equivalent to the total time of running ten simulations at $M¯=144$. Therefore, the single-mesh method would consist of ten simulations with $M¯$. We use MaxPro LHD to find the ten design points in
[0, 1]^2. For the double-mesh method, we propose a two-layer Latin hypercube nested design, where the simulations in the first layer use $M~$ mesh elements and those in the second layer use $M¯$ mesh
elements. We split the budget into half for simulations in each layer. It leads to 45 simulations in the first layer and five in the second layer (more precise). The nested design is illustrated in
Fig. 2, which is constructed using the MaxProAugment function in the R package MaxPro. For our multi-mesh method, we follow Algorithm 1 to construct the design. We can obtain n = 24 design points
with different values of $M∈[M~,M¯]$ such that the total simulation time does not exceed the budget constraint. The scatter plots of the design points in inputs and mesh element numbers are shown in
Fig. 3. We can see the generated designs are space filling in each of the 2D projections plotted. The histogram plots on the diagonal show that the points fill the (x[1], x[2]) space uniformly,
whereas more points are allocated to the region with a smaller number of mesh elements.
To evaluate the performance of the three methods, we randomly draw 1000 system inputs
as the testing dataset. For each of the three methods, we train the model using their design points and corresponding interpolation response. Subsequently, we obtain their predictions at the testing
points, denoted by
. The true response is calculated using Eq.
directly, denoted by
, …,
. We use the root-mean-squared error (RMSE) as the performance metric:
Repeating this procedure 30 times, we can obtain 30 RMSE values for each of the three methods. The results are plotted as a box plot in Fig.
. The average RMSE over the 30 runs for the single-mesh, double-mesh, and multi-mesh methods are 2.00, 1.06, and 0.74, respectively. This clearly shows that MMED significantly outperforms the two
other methods.
4.2 Cantilever Beam Deflection Under Stress.
For the second application, we conduct static stress analysis on a cantilever beam using FEA simulations with cubical cells. The simulations are carried out using the abaqus software [32].
For the beam structure, we set the dimensions as (
) corresponding to its breadth, height, and length, respectively. We set
, so the cross section of the beam is square shaped. An illustration of the beam can be seen in Fig.
. We set its Young’s modulus to be 200 MPa, and Poisson ratio to be 0.28, which corresponds to steel. For the static stress analysis, one end surface along the length of the beam is fixed with no
degree-of-freedom allowed, whereas the other surfaces are free to move. A continuous static half-sine pressure field is then applied vertically downward onto the top surface of the beam as shown in
. For the deflection analysis due to this pressure field, we set three system inputs variables
= [
] ∈ [0, 1]
for this problem. The pressure field is described by a function depending on the span along
from the fixed surface, denoted by
∈ [0, 200
where input variable
∈ [0, 1] is the weighting parameter, and
∈ [0, 1] controls the length of the beam by
= 200
denotes the breadth and the height of the beam, corresponding to both
, respectively.
The deflection profile is measured across the span of the beam. We take the maximum deflection measurements along the beam at z = 2x[2], 4x[2], …, 200x[2], which gives us 100 uniformly placed
discrete points to form the deflection profile as the functional response. Since the beam is a cuboid and the mesh elements are set to be cubic, we use t = M^−1/3 as the mesh density parameter.
Overall, each simulation requires three input variables and one mesh density parameter.
Since the deflection profile is a functional response, it requires an additional variable, the measuring location on the span, to be put into the response model Eq. (6). We assume a Gaussian
correlation function for this additional variable as well. The computations in functional Gaussian process modeling can be simplified using Kronecker products as described in Ref. [33].
For this application, suppose a reasonable range for mesh cell numbers is $M~=1,000$ and $M¯=8,000$. Let the time budget be equivalent to the total computational time of running eight simulations
with $M¯$. Similar to the previous application, the single-mesh design with eight points in the input space [0, 1]^3 is obtained using MaxPro. The double-mesh design uses a two-layer nested design
with half of the time budget allocated to each layer. The first layer consists of 32 simulations at $M~$, and the second layer contains four simulations at $M¯$. Finally, for the multi-mesh method,
n = 18 simulations with different values of $M∈[M~,M¯]$ are obtained using Algorithm 1, which maintain the same total time constraint.
We randomly draw 30 sets of system inputs x as the testing dataset. To obtain the corresponding deflection profile, we simulate each of these 30 sets with a corresponding finely meshed FEA model with
M = 320,000. Under this setting, we mesh the beam using a 40 × 40 × 200 grid. With such a large number of mesh elements, the size of each mesh element is small. As a result, we presume the deflection
measurements from these simulations to be sufficiently accurate and can serve as the “true” response. To evaluate the performance of the three methods, we compare the estimated deflection profiles $
{y^s}s=1100$ by each of the methods at these testing points against the true profile ${ys}s=1100$, where s denotes the index of the testing point.
The error curves
of the deflection profiles for the 30 testing cases under each of the three methods are plotted in Fig.
. We can see that the errors in estimated deflection profiles by MMED are smaller than those by double-mesh and significantly smaller than those using single-mesh. The RMSE:
is plotted as a box plot in Fig.
. The p-values for the two-sample t-tests with unequal variances are 2.4 × 10
and 7.5 × 10
for multi-mesh versus single-mesh and multi-mesh versus double mesh, respectively. Thus, the MMED significantly outperforms the other two methods in this application.
5 Conclusions
In this work, we have proposed an experimental design method that enables the experimenter to choose optimal mesh sizes for finite element simulations given a fixed time budget. We have shown that it
outperforms the single-mesh and double-mesh approaches because the design is well coupled with the modeling method. The single-mesh approach does not take the concept of multifidelity into account,
hampering its ability to take the effects of meshing into account. Kennedy and O’Hagan’s model [7] used in double mesh utilizes multifidelity, but it can predict the response only at the highest
fidelity level used in the simulations. On the other hand, MMED naturally fits with the model proposed by Ref. [16], which helps to perform extrapolation and predict the true response that is
impossible to achieve through simulations.
For future work, MMED can be refined and tailored to accommodate more complex finite element simulations where the meshing is no longer uniform, or the geometry is difficult to be easily described by
the number of mesh elements generated alone. In FEA computer simulators such as abaqus and ls-dyna, nonuniform meshes can often be generated by specifying mesh properties such as average or maximum/
minimum sizes. Therefore, we expect MMED to remain effective for these scenarios, although this needs to be validated with complex finite element simulations. For nonuniform and adaptive mesh
assignments, other variables will impact the computational time and simulation accuracy on top of mesh density. For instance, chordal errors referring to the quality of mesh approximation to true
geometry can be considered as one such parameter. This would lead to multiple tuning parameters controlling the computational cost jointly, which goes beyond the scope of the current work where only
one parameter is involved.
This work was supported by an LDRD grant from Sandia National Laboratories. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology & Engineering
Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the US Department of Energy’s National Nuclear Security Administration (Contract No. DE-NA0003525). This
article describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the article do not necessarily represent the views of the US Department of
Energy or the United States Government. Wu was also supported by NSF DMS-1914632.
Conflict of Interest
There are no conflicts of interest.
Data Availability Statement
The datasets generated and supporting the findings of this article are obtainable from the corresponding author upon reasonable request.
Appendix: MMED for α ≠ 1
≠ 1 in Eq.
. Then, we should take the weights in Eq.
to be
= 1, …,
so that the optimal design points will have an asymptotic distribution proportional to 1/
]. As mentioned earlier, let
. Now following the same steps in Sec.
, we obtain the optimal number of mesh elements as follows:
= 1, …,
, where
needs to be solved from
There is no explicit solution for
≠ 1, and it needs to be solved numerically.
W. J.
T. J.
, and
H. P.
, “
Design and Analysis of Computer Experiments
Statis. Sci.
), pp.
, and
, “
Bayesian Prediction of Deterministic Functions, With Applications to the Design and Analysis of Computer Experiments
J. Am. Stat. Assoc.
), pp.
T. J.
B. J.
W. I.
, and
B. J.
The Design and Analysis of Computer Experiments
, Vol.
New York
, and
Gary Wang
, “
Multi-Fidelity Modeling and Adaptive Co-Kriging-Based Optimization for All-Electric Geostationary Orbit Satellite Systems
ASME J. Mech. Des.
), p.
, and
, “
Calibration and Validation Framework for Selective Laser Melting Process Based on Multi-Fidelity Models and Limited Experiment Data
ASME J. Mech. Des.
), p.
T. D.
, and
, “
A Multi-Fidelity Information-Fusion Approach to Machine Learn and Predict Polymer Bandgap
Comput. Mater. Sci.
, p.
M. C.
, and
, “
Predicting the Output From a Complex Computer Code When Fast Approximations Are Available
), pp.
C. C.
V. R.
J. K.
, and
C. F. J.
, “
Building Surrogate Models Based on Detailed and Approximate Simulations
ASME J. Mech. Des.
), pp.
J. P.
M. J.
C. C.
, and
, “
PreDiction and Computer Model Calibration Using Outputs From Multifidelity Simulators
), pp.
V. R.
, “
Space-Filling Designs for Computer Experiments: A Review
Q. Eng.
), pp.
P. Z.
, “
Nested Latin Hypercube Designs
), pp.
M. E.
S. D.
, and
, “
Multifidelity and Multiscale Bayesian Framework for High-Dimensional Engineering Design and Calibration
ASME J. Mech. Des.
), p.
, and
, “
A Sequential Sampling Generation Method for Multi-Fidelity Model Based on Voronoi Region and Sample Density
ASME J. Mech. Des.
), p.
, and
, “
Sequential Design of Multi-Fidelity Computer Experiments: Maximizing the Rate of Stepwise Uncertainty Reduction
), pp.
O. C.
R. L.
J. Z.
The Finite Element Method: Its Basis and Fundamentals
C. F. J.
, and
, “
Surrogate Modeling of Computer Experiments With Different Mesh Densities
), pp.
, and
, “
Bayesian Assimilation of Multi-Fidelity Finite Element Models
Comput. Struct.
, pp.
M. E.
L. M.
, and
, “
Minimax and Maximin Distance Designs
J. Stat. Plan. Inference
), pp.
M. D.
R. J.
, and
W. J.
, “
A Comparison of Three Methods for Selecting Values of Input Variables in the Analysis of Output From a Computer Code
), pp.
M. D.
, and
T. J.
, “
Exploratory Designs for Computational Experiments
J. Stat. Plan. Inference
), pp.
V. R.
, and
, “
Maximum Projection Designs for Computer Experiments
), pp.
, and
P. Z.
, “
An Approach to Constructing Nested Space-Filling Designs for Multi-Fidelity Computer Experiments
Stat. Sinica
), p.
P. Z.
, and
C. J.
, “
Nested Space-Filling Designs for Computer Experiments With Two Levels of Accuracy
Stat. Sinica
), pp.
T. T.
W. I.
, and
R. A.
, “
Sequential Kriging Optimization Using Multiple-Fidelity Evaluations
Struct. Multidiscipl. Optim.
), pp.
Le Gratiet
, and
, “
Cokriging-Based Sequential Design Strategies Using Fast Cross-Validation Techniques for Multi-Fidelity Computer Codes
), pp.
, and
, “
Multimodel Fusion Based Sequential Optimization
AIAA. J.
), pp.
M. G.
N. -H.
, and
R. T.
, “
Issues in Deciding Whether to Use Multifidelity Surrogates
AIAA J.
), pp.
A. F. d.
R. G.
, and
C. H.
, “
Influences of the Mesh in the Cae Simulation for Plastic Injection Molding
), p.
De Oliveira
, and
, “
Tetrahedral Mesh Optimisation and Adaptivity for Steady-State and Transient Finite Element Calculations
Comput. Methods. Appl. Mech. Eng.
), pp.
V. R.
, and
C. J.
, “
Sequential Exploration of Complex Surfaces Using Minimum Energy Designs
), pp.
V. R.
, and
, “
Deterministic Sampling of Expensive Posteriors Using Minimum Energy Designs
), pp.
, “
ABAQUS/Standard User’s Manual
,” Version 6.9, Dassault Systèmes Simulia Corp, Providence, RI.
V. R.
, and
S. N.
, “
Analysis of Computer Experiments With Functional Response
), pp. | {"url":"https://computationalnonlinear.asmedigitalcollection.asme.org/mechanicaldesign/article/145/6/061703/1156705/Design-and-Analysis-of-Multifidelity-Finite","timestamp":"2024-11-09T02:41:50Z","content_type":"text/html","content_length":"338559","record_id":"<urn:uuid:f0afcb7c-dc66-4708-91ac-c9966c72547b>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00581.warc.gz"} |
Exterior Angles Of Triangles Worksheet - Angleworksheets.com
Angles Of Triangles Worksheet – This article will discuss Angle Triangle Worksheets as well as the Angle Bisector Theorem. In addition, we’ll talk about Isosceles and Equilateral triangles. If you’re
unsure of which worksheet you need, you can always use the search bar to find the exact worksheet you’re looking for. Angle Triangle Worksheet This … Read more
Exterior Angles Of Triangles Worksheet
Exterior Angles Of Triangles Worksheet – This article will discuss Angle Triangle Worksheets as well as the Angle Bisector Theorem. We’ll also discuss Equilateral triangles and Isosceles. If you’re
unsure of which worksheet you need, you can always use the search bar to find the exact worksheet you’re looking for. Angle Triangle Worksheet This Angle … Read more | {"url":"https://www.angleworksheets.com/tag/exterior-angles-of-triangles-worksheet/","timestamp":"2024-11-05T00:19:06Z","content_type":"text/html","content_length":"54670","record_id":"<urn:uuid:88407e7c-7ae3-4587-9cbf-34436c7d9116>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00019.warc.gz"} |
MPH in gear
Water-Cooled VW Performance Handbook
Calculating MPH in gear
You might think that calculating your speed in any given gear a waste of time — why not just go out on the road and see what happens? However, in racing you often need to know how to set up your
transaxle to hit certain targets. For example, if you want to be able to make a 200 mph pass at Bonneville, you must first ensure that your motor has the legs to do it.
For example, at 5,000 RPM, with a tire 22.76 inches in diameter, with a 3.90 final drive ratio and a 0.68 fifth gear, your vehicle is capable of a top speed of 128 MPH.
The formula for the maximum speed you can reach is:
\[Maximum\ MPH = \frac{RPM \times tire\ diameter \times \pi}{final\ drive\ ratio \times gear\ ratio \times 1056}\] | {"url":"https://gregraven.org/hotwater/calculators/mph-in-gear","timestamp":"2024-11-15T04:40:14Z","content_type":"text/html","content_length":"7194","record_id":"<urn:uuid:171b96d7-8b24-419b-9a44-388bb8569b08>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00070.warc.gz"} |
Solving One Step Equations - Decimals (examples, solutions, videos, activities)
Related Topics:
More Lessons for Algebra I More Lessons for Algebra Math Worksheets
A series of free, online Basic Algebra Lessons or Algebra I lessons.
Examples, solutions, videos, worksheets, and activities to help Algebra students.
In this lesson, we will learn how to
• solve one step equations involving decimals.
• solve one step equations with decimals by adding or subtracting
• solve one step equations with decimals by multiplying or dividing
Solving One Step Equations
Equations are fundamental to Algebra, and
solving one step equations
is necessary for students in order to learn how to solve two-step equations, and other multi-step equations. Solving one-step equations means finding the value for the variable that makes the
statement true using additive and multiplicative inverses.
Solving One Step Equations Involving Decimals
This video provides four examples of solving one step linear equations involving decimals.
Ex: Solve a One Step Equation With Decimals by Adding and Subtracting
This video provides two examples of solving a one step linear equation with decimals by adding and subtracting.
Ex: Solve a One Step Equation With Decimals by Multiplying
This video provides an example of how to solve a linear equation with decimals by multiplying.
Ex: Solve a One Step Equation With Decimals by Dividing
This video provides an example of how to solve a linear equation with decimals by dividing.
Solving One Step Equations: A Summary
Determine is a given value is a solution to an equation, Solve one step equations.
Try the free Mathway calculator and problem solver below to practice various math topics. Try the given examples, or type in your own problem and check your answer with the step-by-step explanations.
We welcome your feedback, comments and questions about this site or page. Please submit your feedback or enquiries via our Feedback page. | {"url":"https://www.onlinemathlearning.com/solving-one-step-equations-decimals.html","timestamp":"2024-11-02T13:50:15Z","content_type":"text/html","content_length":"39143","record_id":"<urn:uuid:04d5525d-fad6-4c6f-9467-7538459149f0>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00488.warc.gz"} |
Eksperimen Pembelajaran Matematika dengan Strategi Problem Based Learning dan Problem Posing Terhadap Kemampuan Pemecahan Masalah Ditinjau dari Motivasi Belajar Siswa Kelas X Semester Genap di SMK Negeri 2 Sragen Tahun 2015/2016
Akbar, Fattah Nur and , Prof. Dr. Sutama, M.Pd and , M. Noor Kholid, S.Pd., M.Pd (2016) Eksperimen Pembelajaran Matematika dengan Strategi Problem Based Learning dan Problem Posing Terhadap Kemampuan
Pemecahan Masalah Ditinjau dari Motivasi Belajar Siswa Kelas X Semester Genap di SMK Negeri 2 Sragen Tahun 2015/2016. Skripsi thesis, Universitas Muhammadiyah Surakarta.
PDF (NASKAH PUBLIKASI)
NASKAH PUBLIKASI.pdf
Download (283kB)
PDF (HALAMAN DEPAN)
HALAMAN DEPAN.pdf
Download (225kB)
PDF (BAB I)
BAB I.pdf
Download (33kB)
PDF (BAB II)
BAB II.pdf
Restricted to Repository staff only
Download (67kB) | Request a copy
PDF (BAB III)
BAB III.pdf
Restricted to Repository staff only
Download (413kB) | Request a copy
PDF (BAB IV)
BAB IV.pdf
Restricted to Repository staff only
Download (280kB) | Request a copy
PDF (BAB V)
BAB V.pdf
Restricted to Repository staff only
Download (19kB) | Request a copy
PDF (DAFTAR PUSTAKA)
DAFTAR PUSTAKA.pdf
Download (29kB)
PDF (LAMPIRAN)
Restricted to Repository staff only
Download (2MB) | Request a copy
SURAT PERNYATAAN PUBLIKASI KARYA ILMIAH.pdf
Restricted to Repository staff only
Download (50kB) | Request a copy
Problem solving is an integral in the learning of mathematics. Problem solving ability is very important for solving the problem is part of the math curriculum, students gain experience possible to
use knowledge and skills already possessed. But in reality, the problem solving ability of students have not been as expected. This research three purposes. 1) Analyze and examine the effect of the
implementation strategies Problem Based Learning and Problem Posing toward the math problem solving abilities. 2) Analyze and examine the effect of learning motivation toward math problem solving
abilities. (3) Analyze and examine interaction between learning strategy with learning motivation towards the math problem solving abilities. The type of the research is quantitative wits
quasi-experimental research. The population in this research is all students of X SMK Negeri 2 Sragen of academic year 2015/2016. Sample of this research consisted of two class there are experiment
class and control class with the sampling technique that are use is clester random sampling. Methods of data collection use tests and questionnaires. Technique of analyzed used analysis of variance
two paths with different cell with a significance level of 5%. Based on the research results: 1) F_A=10,387>F_(0,05;1;56)=4,013 which means that there is an effect of the implementation strategies
Problem Based Learning and Problem Posing toward math problem solving abilities. 2) F_B=19,177>F_(0,05;2;56)=3,162 which means that there is an effect of learning motivation toward math problem
solving abilities. 3) F_AB=1,467<F_(0,05;2;56)=3,162 which means that there is no interaction between learning strategy with learning motivation towards the math problem solving abilities. Results of
the data analysis was obtained: 1) There is an effect of the implementation strategies Problem Based Learning and Problem Posing toward math problem solving abilities. 2) There is an effect of
learning motivation toward math problem solving abilities. 3) There is no interaction between learning strategy with learning motivation towards the math problem solving abilities.
Actions (login required) | {"url":"https://eprints.ums.ac.id/48339/","timestamp":"2024-11-09T12:31:37Z","content_type":"application/xhtml+xml","content_length":"40845","record_id":"<urn:uuid:8baee8d8-63b4-4ee1-a01e-b6589a76ebf3>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00356.warc.gz"} |
Terje Haukaas - UBC Civil Engineering
Dr. Terje Haukaas
Structural & Earthquake Engineering
Office: CEME 2014
Email: terje@civil.ubc.ca
Phone: 604-827-5557
Publications: Google Scholar
Website: terje.civil.ubc.ca
Professor Terje Haukaas has been a member of the structural engineering group in the Department of Civil Engineering since 2003. He received his Master’s and PhD degrees from the University of
California at Berkeley in 1999 and 2003. Originally from Norway, Dr. Haukaas obtained his undergraduate degree from the Norwegian University of Science and Technology in Trondheim in 1996. Before
that, he obtained an engineering degree from the Stavanger University College in 1994 and a technician degree from the Stavanger Technical College in 1992. He worked as a researcher and engineer in
Norway from 1996 to 1998. Prior to entering the field of engineering, Dr. Haukaas had become a Journeyman and Master Builder of carpentry.
Dr. Haukaas conducts research on probabilistic modelling of hazards, structures, and impacts, with emphasis on computational simulation models. His notes and computer codes are posted here. Dr.
Haukaas has co-authored a number of journal papers on reliability, sensitivity, and optimization analysis applied to civil engineering problems. Software development is an integral part of Dr.
Haukaas’ research. He developed the first version of the Matlab toolbox FERUM and he implemented the first reliability and sensitivity options in OpenSees. He later spearheaded the development of Rt
and Rts, which are computer programs for multi-hazard and multi-model reliability and optimization analysis.
Research Interests
Probabilistic mechanics, structural reliability and optimization, timber engineering, earthquake engineering, decision making, risk, advanced structural analysis, finite elements, response
sensitivity analysis, software development
CIVL 332 Structural Analysis
CIVIL 509 Nonlinear Structural Analysis
CIVIL 518 Reliability and Structural Safety
Awards & Recognitions
• UBC Killam Teaching Prize, 2016
• President, Civil Engineering Risk and Reliability Association (CERRA), 2015-2019
• Semi-plenary Speaker, COMPDYN 2017, Rhodes, Greece, June 15-17, 2017
• Keynote Speaker, ICCSTE’16, Ottawa, Canada, May 5-6, 2016
• Chair (organizer), ICASP12, Vancouver, Canada, July 12-15, 2015
• Early Career Keynote Speaker, ICOSSAR 2013, New York, June 16-20, 2013
• Student Appreciation Award from the Civil Undergraduate Student Club: Top 4th Year Professor 2015, 2016
• Student Appreciation Award from the Civil Undergraduate Student Club: Top 3rd Year Professor 2007, 2010, 2012, 2013
• Best paper award, ASCE Journal of Computing in Civil Engineering, 2007
• Fulbright Fellowship, 1998
• Haukaas, T. (2024) “Exact sensitivity of nonlinear dynamic response with modal and Rayleigh damping formulated with the tangent stiffness.” ASCE Journal of Structural Engineering, 150(3).
• Haukaas, T. (2023) “Importance ranking of correlated variables in one analysis.” Structural Safety, 104.
• Gavrilovic, S., Haukaas, T. (2021) “Cost of environmental and human health impacts of repairing earthquake damage.” ASCE Journal of Performance of Constructed Facilities, 35(4).
• Costa, R., Haukaas, T. (2021) “The effect of resource constraints on housing recovery.” International Journal of Disaster Risk Reduction, 55.
• Gavrilovic, S., Haukaas, T. (2021) “Seismic loss estimation using visual damage models.” ASCE Journal of Structural Engineering, 147(3).
• Costa, R., Haukaas, T., Chang, S. (2021) “Agent-based model for post-earthquake housing recovery.” Earthquake Spectra, 37(1).
• Gavrilovic, S., Haukaas, T. (2020) “Multi-model probabilistic analysis of the lifecycle cost of buildings.” Sustainable and Resilient Infrastructure.
• Costa, R., Haukaas, T., Chang, S. (2020) “Predicting population displacements after earthquakes.” Sustainable and Resilient Infrastructure.
• Costa, R., Haukaas, T., Chang, S., Dowlatabadi, H. (2019) “Object-oriented model of the seismic vulnerability of the fuel distribution network in Coastal British Columbia.” Reliability
Engineering & System Safety, 186, pp. 11-23.
• Mahsuli, M., Haukaas, T. (2019) “Risk minimization for a portfolio of buildings considering risk aversion.” ASCE Journal of Structural Engineering, 145(2).
• Lok, I., Eschelmuller, E., Haukaas, T., Ventura, C., Bebamzadeh, A., Slovic, P., & Dunn, E. (2019). “Can we apply the psychology of risk perception to increase earthquake preparation? ” Collabra:
Psychology, 5(1).
• Ganesh Pai, S., Lam, F., Haukaas, T. (2016) “Force transfer around openings in cross laminated timber shear walls.” ASCE Journal of Structural Engineering, 143(4).
• Javaherian Yazdi, A., Haukaas, T., Yang, T., Gardoni, P. (2016) “Multivariate fragility models for earthquake engineering.” Earthquake Spectra, 32(1), pp. 441-461.
• Gomes, W.J.S., Beck, A.T., Haukaas, T. (2013) “Optimal inspection planning for onshore pipelines subject to external corrosion.” Reliability Engineering & System Safety, 118, pp. 18-27.
• Tannert, T., Haukaas, T. (2013) “Probabilistic models for structural performance of rounded dovetail joints.” ASCE Journal of Structural Engineering, 139(9), pp. 1478-1488.
• Mahsuli, M., Haukaas, T. (2013) “Sensitivity measures for optimal mitigation of risk and reduction of model uncertainty.” Reliability Engineering & System Safety, 117, pp. 9-20.
• Mahsuli, M., Haukaas, T. (2013) “Seismic risk analysis with reliability methods, Part II: Analysis.” Structural Safety, 42(1), pp. 63–74.
• Mahsuli, M., Haukaas, T. (2013) “Seismic risk analysis with reliability methods, Part I: Models.” Structural Safety, 42(1), pp. 54–62.
• Mahsuli, M., Haukaas, T. (2013) “Computer program for multi-model reliability and optimization analysis.” ASCE Journal of Computing in Civil Engineering. 27(1), pp. 87–98.
• Haukaas, T., Gardoni, P. (2011) “Model uncertainty in finite element analysis: Bayesian finite elements.” ASCE Journal of Engineering Mechanics, 137 (8), pp. 519-526.
• Bebamzadeh, A., Haukaas, T., Vaziri, R., Poursartip, A., Fernlund, G. (2010) “Application of response sensitivity in composite processing.” Journal of Composite Materials, 44 (15), pp. 1821-1840.
• Koduru, S.D., Haukaas, T. (2010) “Probabilistic seismic loss assessment of a Vancouver high-rise building.” ASCE Journal of Structural Engineering, 136 (3), pp. 235-245.
• Koduru, S.D., Haukaas, T. (2010) “Feasibility of FORM in finite element reliability analysis.” Structural Safety, 32 (2), pp. 145-153.
• Sharma, G., Haukaas, T., Hall, R., Priyadarshini, S. (2009) “Bayesian statistics and production reliability assessments for mining operations.” International Journal of Mining, Reclamation and
Environment, 23 (3), pp. 180-205.
• Choe, D.E., Gardoni, P., Rosowsky, D. Haukaas, T. (2009) “Seismic fragility estimates for reinforced concrete bridges subject to corrosion.” Structural Safety, 31, pp. 275–283.
• Bebamzadeh, A., Haukaas, T., Vaziri, R., Poursartip, A. , Fernlund, G. (2009) “Response sensitivities and parameter importance in composites manufacturing.” Journal of Composite Materials, 43(6),
pp. 621-659.
• Bebamzadeh, A., Haukaas, T. (2008) “Second-order sensitivities of inelastic finite element response by direct differentiation.” ASCE Journal of Engineering Mechanics, 134(10), pp. 867-880.
• Scott, M.H., Haukaas, T. (2008) “Software framework for parameter updating and finite element response sensitivity analysis.” ASCE Journal of Computing in Civil Engineering, 22(5), pp. 281-291.
• Haukaas, T. (2008) “Unified reliability and design optimization for earthquake engineering.” Probabilistic Engineering Mechanics, 23, pp. 471–481.
• Zhong, J., Gardoni, P., Rosowsky, D., Haukaas T. (2008) “Probabilistic seismic demand models and fragility estimates for reinforced concrete bridges with two-column bents.” ASCE Journal of
Engineering Mechanics, 134(6), pp. 495-504.
• Choe, D.E., Gardoni, P., Rosowsky, D. Haukaas, T. (2008) “Probabilistic capacity models and seismic fragility estimates for RC columns subject to corrosion.” Reliability Engineering & System
Safety, 93, pp. 383-393.
• Koduru, S.D., Haukaas, T., Elwood, K.J. (2007) “Probabilistic evaluation of global seismic capacity of degrading structures.” Earthquake Engineering and Structural Dynamics, 36, pp. 2043-2058.
• Zhu, L., Elwood, K.J., Haukaas, T. (2007) “Classification and seismic safety evaluation of existing reinforced concrete columns.” ASCE Journal of Structural Engineering, 133(9), pp. 1316-1330.
• Riederer, K.A., Haukaas, T. (2007) “Cost-benefit importance vectors for performance-based structural engineering.” ASCE Journal of Structural Engineering, 133(7), pp. 907-915.
• Haukaas, T., Der Kiureghian, A. (2007) “Methods and object-oriented software for FE reliability and sensitivity analysis with application to a bridge structure.” ASCE Journal of Computing in
Civil Engineering, 21(3), pp. 151-163.
• Liang, H., Haukaas, T., Royset, J.O. (2007) ”Reliability-based optimal design software for earthquake engineering applications.” Canadian Journal of Civil Engineering, 34(7), pp. 856-869.
• Mathakari, S., Gardoni, P., Agarwal,P., Raich, A., Haukaas, T. (2007) “Reliability-based optimal design of electrical transmission towers using multi-objective genetic algorithms.” Computer-Aided
Civil and Infrastructure Engineering, 22, pp. 282-292.
• Koduru, S.D., Haukaas, T. (2006) “Uncertain reliability index in finite element reliability analysis.” International Journal of Reliability and Safety, 1(1/2), pp. 77-101.
• Haukaas, T. (2006) “Efficient computation of response sensitivities for inelastic structures.” ASCE Journal of Structural Engineering, 132 (2), pp. 260-266.
• Haukaas, T., Scott, M. H. (2006) “Shape sensitivities in the reliability analysis of nonlinear frame structures.” Computers & Structures, 84 (15), pp. 964-977.
• Haukaas, T., Der Kiureghian, A. (2006) ”Strategies for finding the design point in nonlinear finite element reliability analysis.” Journal of Probabilistic Engineering Mechanics, 21 (2), pp.
• Zhu, L., Elwood, K.J., Haukaas, T., Gardoni, P. (2006) “Application of a probabilistic drift capacity model for shear-critical columns, deformation capacity and shear strength of reinforced
concrete members under cyclic loading.” ACI Special Publication (SP-236), American Concrete Institute, 21 pages.
• Der Kiureghian, A., Haukaas, T. Fujimura, K. (2006) ”Structural reliability software at the University of California, Berkeley.” Structural Safety, 28, pp. 44-67.
• Haukaas, T., Der Kiureghian, A. (2005) “Parameter sensitivity and importance measures in nonlinear finite element reliability analysis.” ASCE Journal of Engineering Mechanics, 131(10), pp.
• Remseth, S., Leira B. J., Okstad K. M., Mathisen, K. M., Haukaas, T. (1999) “Dynamic response and fluid/structure interaction of submerged floating tunnels.” Computers and Structures, 72, pp. | {"url":"https://civil.ubc.ca/terje-haukaas/","timestamp":"2024-11-13T22:35:02Z","content_type":"text/html","content_length":"79111","record_id":"<urn:uuid:c1a7b9a8-1ab7-44c8-afcb-57bb89ac63d7>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00302.warc.gz"} |
Cincinnati, OH 2023-07-17
Ron's notes:
"Thanks so much for Bob Isaacs for feedback on the couple of alternative versions, and for cleaning up the A1.
The title: Leonard Nimoy’s last public words from Feb 23, 2015:
“A life is like a garden. Perfect moments can be had, but not preserved, except in memory. LLAP”
Vulcan hand signs on allemandes are optional.
Written Feb 28, 2015" | {"url":"https://contradb.com/programs/368","timestamp":"2024-11-03T05:52:17Z","content_type":"text/html","content_length":"24750","record_id":"<urn:uuid:28997c44-fce8-4bea-a72c-82dcefcc29a2>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00302.warc.gz"} |
IntroductionResearch ObjectiveRelated WorksSDN ArchitectureSDN architectureOpenFlow switch componentsNetwork RoutingOpenFlow routing structure of SDNAnt Colony Optimization (ACO)Optimization of Dynamic Routing in SDN Using ACOSDN deployment phasesAn example to illustrate the Box-Covering algorithmPseudo code of k-means algorithmPseudo code of the proposed HACOPerformance EvaluationCentroid movement against the network delayCentroid movement against packet loss rate (%)Network size against total delay and packet lossNetwork size against total delayNetwork size against packet loss ratePerformance of HACO against other routing algorithms (Network size against running time)Performance of HACO against other routing algorithms (network size against running time)Performance of HACO against other routing algorithms (network size against delay time)Performance of HACO against other routing algorithms (network size against delay time)Conclusion and Points for DiscussionReferences
CMC CMC CMC Computers, Materials & Continua 1546-2226 1546-2218 Tech Science Press USA 17787 10.32604/cmc.2022.017787 Article Dynamic Routing Optimization Algorithm for Software Defined Networking
Dynamic Routing Optimization Algorithm for Software Defined Networking Dynamic Routing Optimization Algorithm for Software Defined Networking El-HefnawyNancy Abbas1nancyabbas_1@ics.tanta.edu.eg
RaoufOsama Abdel2 AskrHeba3 Department of Information Systems, Tanta University, Tanta, 31511, Egypt Department of Operations Research and Decision Support, Menoufia University, Shepen Alkom, Egypt
Department of Information Systems, University of Sadat City, AlSadat City, 048, Egypt Corresponding Author: Nancy Abbas El-Hefnawy. Email: nancyabbas_1@ics.tanta.edu.eg 30082021 70 1 1349 1362
1122021 2332021 © 2022 El-Hefnawy, Raouf and Askr 2022 El-Hefnawy, Raouf and Askr This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use,
distribution, and reproduction in any medium, provided the original work is properly cited.
Time and space complexity is the most critical problem of the current routing optimization algorithms for Software Defined Networking (SDN). To overcome this complexity, researchers use
meta-heuristic techniques inside the routing optimization algorithms in the OpenFlow (OF) based large scale SDNs. This paper proposes a hybrid meta-heuristic algorithm to optimize the dynamic routing
problem for the large scale SDNs. Due to the dynamic nature of SDNs, the proposed algorithm uses a mutation operator to overcome the memory-based problem of the ant colony algorithm. Besides, it uses
the box-covering method and the k-means clustering method to divide the SDN network to overcome the problem of time and space complexity. The results of the proposed algorithm compared with the
results of other similar algorithms and it shows that the proposed algorithm can handle the dynamic network changing, reduce the network congestion, the delay and running times and the packet loss
Dynamic routing optimization Openflow software defined networking
Distributed routing algorithms are used in traditional networks and this cause problems in controlling and management of the network. SDN outperforms the traditional network architecture management
in terms of cost. SDN separates the network control plane layer from the forwarding/data plane layer. SDN controllers have a full image of the network topology and make forwarding decisions based on
flow tables using the OF protocol. SDNs controller have full image and control of the network topology and which improves the performance of routing processes [1].
Time and space complexity is the most critical problem of the current SDN routing optimization algorithms. These algorithms use Dijkstra algorithm in exploring the shortest path. The complexity of
the Dijkstra algorithm is that the number of nodes and edges of the network affect the efficiency of the algorithm [2]. To overcome this complexity, researchers use meta-heuristic techniques inside
the routing optimization algorithms in OF-based large scale SDNs. [3].
Ant Colony Optimization (ACO) is the most famous meta-heuristic technique that outperforms other traditional routing techniques beside the ACO methodologies have the potential to conduct the
flow-based routing strategy as same as SDNs [4].
This paper proposes a hybrid meta-heuristic algorithm to optimize the dynamic routing problem for the large scale SDNs. it is called Hybrid Ant Colony Optimization (HACO) algorithm HACO uses two
different methods for dividing the network into small subnets to overcome the problem of time and space complexity. These methods are box-covering and k-means clustering methods. Also, HACO uses a
mutation operator to discover new areas in the search space and improve the solution.
The structure of this paper as follows. Section 2 presents the goal of the research. Section 3 covers the related work efforts. Section 4 gives an overview of the SDN. Section 5 presents an overview
of the network routing problem. Section 6 presents an overview of Ant Colony Optimization. Section 7 presents the proposed algorithm. Section 8 presents the performance evaluation of the proposed
algorithm. Finally, the conclusion of the paper is presented in Section 9.
The main goal of this paper is to overcome the problem of time and space complexity of the dynamic routing problem inside SDNs using the proposed HACO algorithm.
Dijkstra algorithm is one of the most famous shortest path algorithms applied in network routing. But the complexity of the Dijkstra algorithm affects the efficiency of the routing process.
Literature [2] proposed a box-covering-based routing (BCR) algorithm for large scale SDNs trying to minimize the time and space complexity of the Dijkstra algorithm by reducing the number of nodes
and edges in the network. In the BCR algorithm, Firstly, the whole network is divided into several subnets and each subnet is covered by a box of size (l). Secondly, each subnet is treated as a new
node, and the shortest path between these new nodes is calculated by the Dijkstra algorithm. Thirdly, the shortest path between nodes inside each subnet is calculated also by the Dijkstra algorithm.
Finally, the shortest path between subnets and the shortest paths inside those corresponding subnets are linked together and the path from the source node to the target node is found.
Although the BCR algorithm in [2] reduces the network size and it still uses the Dijkstra algorithm in the routing process. This encourages researchers to use meta-heuristic techniques inside the
routing optimization algorithms in the OpenFlow (OF) based large scale SDNs.
The SDN architecture is divided into three planes. At the very bottom is the data plane which comprises of hardware such as network switches. Above the data plane is the network control plane. The
centralized controller could be as simple as a server machine attached to the network running on controller software [5]. Residing above the control plane is the application plane. This plane
comprises of individual applications which could be network monitoring utilities, voice over IP applications which has a particular set of requirements such as delay. Communication between the
application and the control plane is by means of northbound application programming interface (API) such as the restful protocols. While the controller communicates with the data plane devices such
as network switches using the southbound application programming interface commonly using the open flow protocol. This architecture is presented in Fig. 1.
OpenFlow is the core of the forwarding plane of network devices in SDNs [6]. An OpenFlow Switch consists of one or more flow tables and a group table, which perform packet lookups and forwarding, and
an OpenFlow channel to an external controller. The switch communicates with the controller and the controller manages the switch via the OpenFlow protocol [7]. Using the OpenFlow protocol, the
controller can add, update, and delete flow entries in flow tables. Each flow table in the switch contains a set of flow entries; each flow entry consists of match fields, counters, and a set of
instructions to apply to matching packets [8]. The processing of packets always starts at the first flow table. Then proceed with the highest-priority matching flow table and of the instructions of
that flow entry is executed. Otherwise, the packet is dropped. The OF switch is illustrated in Fig. 2.
Network routing is the process of selecting a path across one or more networks. Metrics are cost values used by routers to determine the best path to a destination network. The most common metric
values are hop, bandwidth, delay, reliability, load, and cost [9]. SDN routing example is shown in Fig. 3. The SDN controller has full image and management over the SDN network [10].
Routing algorithms are responsible for selecting the best path for the communication [11]. Open Shortest Path First (OSPF) allows routers to dynamically update information about network topology.
Dijkstra's algorithm uses Shortest Path First (SPF) algorithm [12]. It finds the path the shortest path between that node and every other node [13].
ACO is a meta-heuristic algorithm where ants searching for food and depositing pheromone on the route. The quantity of pheromone on the route affects the behavior of ants; the path with the largest
quantity of pheromone represents the shortest path [3].
ACO starts with generating m random ants and evaluates the fitness of each ant according to an objective function then updates the pheromone concentration of every possible trail using Eq. (1).
where i and j are nodes, t is a particular iteration; τij(t) is the revised pheromone concentration related to the link ℓij at iteration t, τij(t−1) is the pheromone concentration at the previous
iteration (t−1); Δτij is the pheromone concentration change; and ρ is the pheromone evaporation(decay) coefficient with value ranging from 0 to 1 to avoid too strong influence of the old pheromone so
that premature solution stagnation is incurred. The decay value equals the average of the windows’ size of the network. Δτij is the sum of the contributions of all ants related to ℓij at iteration t
and can be calculated using Eq. (2).
where m is the number of ants and Δτijk is the pheromone concentrate laid on link ℓij by ant k. Δτijk can be calculated by Eq. (3) with R being the pheromone reward factor and fitnessk being the
value of the objective function for ant k.
Δτijk=∑k=1mRfitnesskif ℓij is chosen by ant k
Once the pheromone is updated, each ant must update its route respecting the pheromone concentration and also some heuristic preference consistent with the subsequent probability by Eq. (4).
where pij(k,t) is the probability that link ℓij is chosen by ant k at iteration t; τij(t) is the pheromone concentration related to link ℓij at iteration t; ηij is the heuristic factor for preferring
among available links and is an indicator of how good it’s for ant k to pick link ℓij; α and β are exponent parameters that specify the impact of trail and attractiveness, respectively, and take
values greater than 0.
The deployment phases of the SDN environment are presented in this section followed by presenting the proposed algorithm.
SDN Deployment Phases:
Fig. 4 is illustrated SDN deployment phases as follow:
Phase (1) SDN Simulation: SDN is simulated by Mininet with VMware Workstation in the Ryu controller.
Phase (2) Network discovery and network dividing: SDN controller features a full image of the topology and it dynamically updates the topology after every data flow (Packet In). HACO divides the
network using either the BCR or the k-means clustering algorithm. Both algorithms are introduced as follow:
Box-Covering algorithm (BCR) in [2] divides the network into boxes or subnets. A box size is given in terms of the network distance, which corresponds to the number of edges on the shortest path
between two nodes. The idea of the BCR algorithm is illustrated in Fig. 5. To find the shortest path from node 1 to node 25, the network is split into six boxes. Each box is considered as one node
and the dimension of the network is prominently decreased. If there is an edge between two nodes in two different boxes, then these two boxes are connected. The shortest path between node 1(box 1)
and node 5 is found using the proposed HACO rather than using the Dijkstra. Then, the shortest path in each box is calculated then the shortest paths are linked together to get the globally shortest
path (the red lines) from node 1 to node 25.
K-means clustering is a type of unsupervised learning. The goal of this algorithm is to find groups in the data, with the number of groups represented by the variable K. The algorithm works
iteratively to assign each data point to one of K groups based on the features that are provided. For large scale networking, K-Means is computationally faster than hierarchical clustering [14] and
considers the best partitioning clustering algorithm according to the time complexity [15]. The goal of the algorithm is to partition the n nodes into k sets (clusters) Si where, i = 1, 2… k so that
the within-cluster sum of squares is minimized, defined as Eq. (5) [16].
where, term (|xij−cj|) provides the distance between a node and the cluster’s centroid. Traditional k-Means algorithm selects initial centroids randomly and the result of clustering highly depends on
selection of initial centroids and the algorithm may find a suboptimal solution when the centres are chosen badly [17]. The Pseudo code of k-means algorithm is shown in Fig. 6. Some methods for
selecting the initial centroids includes Forgy’s Approach, MacQueen Method, Simple Cluster Seeking method, Kaufman Approach, and k-means++ method. This research used the k-means++ as an algorithm for
choosing the initial values for the k-means clustering algorithm because it is successfully overcome some of the problems associated with the other methods [18].
Phase (3) The Suggested Algorithm Implementation: Here the routing process is executed by the proposed algorithm.
Phase (4) Forwarding: This phase responsible for forwarding the data through the path given form phase (3). If no matching happens, the controller is informed to take new action (drop the packet or
install it in the pipeline tables).
The Proposed Algorithm:
HACO optimizes the routing in large scale SDN using four parallel optimization steps.
In the first step, the SDN network is divided into boxes using BCR or k-means. This optimizes the search space and the packet time of exploring the best path.
In the second step, assuming a zero-memory system within the network initiated for the first time. A broadcast is a way to explore all network nodes; this is often like ants’ first time randomly
distributed on all the available paths. An ant within the HACO algorithm decides the path to follow based on the pheromone trails on the path but, instead of covering the path where the pheromone
trail is stronger just like the natural ant would do, it explores the path where the pheromone intensity does not exceed a predefined threshold. This avoids the congestion and maximizes the network
In the third step, the packet matching time spent in each router is optimized by creating a new matching table within the OF pipeline with entries of the discovered best paths and giving the matching
table the priority so that decreasing the time spent in the packet matching process and minimize both the total delay time and the packet loss rate. The probability of choosing a node is consistent
with the roulette wheel statistical distribution [19] as given by Eq. (6):
where τij(t) is the concentration of pheromone between node i and node j for the (t)th iteration, ηj(t) is the value of the heuristic information in node j and supposed to equal 0.01, τik(t) is the
concentration of pheromone between node i and node k where k is a value increasing from 1 to the number of successors of node i, ηk(t) is its current value of the heuristic function.
The local pheromone level on all the paths discovered is decreased by an amount called the pheromone decay or the evaporation rate ρ and therefore the global pheromone level on the best path is
updated and increased by α using Eq. (7):
In the fourth step, HACO uses a mutation operation to discover new paths. Mutation operation is mainly derived from the Genetic Algorithm (GA) but it can be applied to other meta-heuristic algorithms
to increase the probability of exploring a better solution in the search space and improve the routing optimization process [20]. The mutation operation randomly selects a path from the paths that
have generated in step (3) and mutates this path by a mutation probability in Eq. (8):
where pm is the probability of mutation. For example, if the number_of_generated_paths = 20 paths and the mutaion_parameter = 2, this means that 2 paths from 20 will be mutated. Pseudo code of HACO
is described in Fig. 7.
The platform for implementing the proposed HACO algorithm on large-scale SDNs involves the following software tools and programming language: Ubuntu16.04, Mininet 2.2, Ryu 3.6, VMware Workstation
Pro, the size l of each box is 1, clustering parameter k is 3, Iperf software was used in the SDN network flow, flow rate 2 Mb/s, bandwidth 5 Mb/s and Python 2.7.9. The hardware environment includes
a PC that has an Intel i7 as a CPU, 8 GB memory, and 1 GB hard disk.
This platform is used to create SDN, and then the performance of the HACO algorithm is measured as follows:
Measuring the performance of HACO under dynamic changing of the topology.
Testing the HACO using k-means network delay and packet loss at different centroids.
Testing the proposed HACO total network delay and packet loss rates at different network sizes.
Comparing the performance of HACO against other routing algorithms in SDN and literature relevant algorithms consistent with the running time.
Comparing the performance of HACO against other routing algorithms in SDN and literature relevant algorithms consistent with the delay time.
Measuring the performance of the HACO under dynamic changing of the network topology:
At a predefined time-instance, a network device is added, and the network is reconfigured, and therefore the best paths are updated consistent with the least hop count and congestion [15].
Testing the HACO using k-means network delay and packet loss at different centroids:
For different network sizes, the k-means++ method is generating the initial centroids, then the HACO using k-means is implemented at different centroids to choose the best centroids which achieve the
minimum network delay and packet loss rates. Figs. 8 and 9 presents plots of the centroid’s movement against the network delay and packet loss in case of network size is 100 nodes. It is observed
that the best centroid value which achieves the minimum network delay and packet loss is 3.
Testing the proposed HACO total network delay and packet loss rates at different network sizes:
HACO is executed at different network sizes as shown in Tab. 1.
Network size Total delay using box-covering (ms) Total delay using k-means (ms) Packet loss rate using box-covering (%) Packet loss rate using k-means (%)
10 19.2 27.2 0.002 0.003
50 90.2 105.3 0.003 0.004
100 220.1 251.2 0.005 0.006
500 403.4 469.4 0.008 0.009
750 489.4 567.1 0.010 0.201
1000 667.3 813.6 0.100 0.365
2000 876.1 1165.8 0.231 0.582
5000 1004.2 1398.4 0.398 0.895
Fig. 10 shows that the entire delay by box-covering or k-means is proportional to the network size but not in a linear behavior. This because of the stochastic nature of meta-heuristic algorithms.
the entire delay by box-covering and k-means is approximately equal until 100 nodes. With the rapid growth of the numbers of nodes from 500 to 5000, the entire delay by k-means is worse than the
entire delay using box-covering.
Fig. 11 shows acceptable packet loss rates by either box-covering or k-means which is smaller than the benchmark of 1% at 10 Mb/s dedicated for voice and video streaming. The packet loss rates by
box-covering and k-means are approximately equal to 500 nodes. With the rapid growth of the numbers of nodes from 750 to 5000, the packet loss rate using k-means is worse than the packet loss rate
using box-covering.
Comparing the performance of HACO against other routing algorithms in SDN and literature relevant algorithms according to the running time:
HACO is implemented at different network sizes against the running time and compared with both Dijkstra and BCR algorithm in [2] as indicated in Tab. 2.
Fig.12 indicates that the running time of Dijkstra, BCR, HACO using box-covering and HACO using k-means algorithms is approximately equal to 500 nodes. With the rapid growth of the numbers of nodes
from 750 to 5000, the advantage of HACO using box-covering and mutation algorithm becomes increasingly obvious.
Running time (ms)
Network size HACO using box-covering HACO Using k-means Dijkstra Alg. BCR Alg. in [2]
10 0.00001 0.00098 0.00113 0.00998
50 0.00009 0.00812 0.03098 0.03899
100 0.00021 0.08001 0.09302 0.08042
500 0.04302 1.97674 3.36523 2.34667
750 1.13456 3.8534 8.72980 4.69111
1000 2.46721 4.6542 17.7252 7.32198
2000 17.5231 25.7634 92.1916 32.1823
5000 131.4875 171.872 815.167 204.111
Comparing the performance of HACO against other routing algorithms in SDN and literature relevant algorithms according to the delay time:
HACO is implemented at different network sizes against the total delay time and compared with Dijkstra and BCR algorithm in [2] as indicated in Tab. 3. The comparison is made for only 10, 50 and 100
nodes because these are the only network sizes used as the benchmark sizes for the literature relevant algorithms.
Delay time (ms)
Network size HACO using box-covering HACO using k-means Dijkstra Alg. BCR Alg. in [2]
10 19.2 27.2 50.4 100.1
50 90.2 105.3 130.6 129.7
100 220.1 251.2 535.4 300.4
The results shown in Fig. 13 are analysed as follow:
When the number of nodes is 10, the delay time by BCR algorithm is the worst and the delay time by HACO using box-covering is the best one. When the number of nodes is 50, the two delay times by BCR
and Dijkstra are approximately the same but the delay time by HACO using box-covering is still the best one. When the number of nodes reaches 100, the delay time by Dijkstra algorithm becomes the
worst and the delay time by HACO using box-covering is still the best one, consequently the proposed HACO using box-covering outperforms the other algorithms.
This paper suggested Hybrid Ant Colony Optimization (HACO) algorithm for optimizing the routing problem inside SDNs.
HACO using box-covering optimized the time and space complexity and the mutation gives a far better divergence and a far better chance for HACO for exploring less congested paths. A new table within
the OF pipeline is created which contains all the explored paths. This optimizes the packet matching time and both the network delay and running times and maximizing the network throughput.
By comparing with other routing algorithms, the results show that HACO using box-covering outperforms all other algorithms and achieves a significant reduction of the network delay, packet loss
rates, and running times.
It is recommended to use either HACO using box-covering or HACO using k-means when the network size is less than 50 nodes and to use HACO using box-covering when the network size exceeds 50 nodes.
As a future point for research, the proposed HACO may be improved by optimizing the initial centroids or the box-size values.
Funding Statement: The authors received specific funding for this study.
Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
A. Abdulaziz and S. Yahya, “Improved extended dijkstra’s algorithm for software defined networks,” 1, no. 2, pp. 249–868, 2017. L. Zhang and Y. Hu, “A box-covering-based routing algorithm for
large-scale SDNs,” 5, no. 2, pp. 314–327, 2017. L. Lingxia and L. Feng, “Evolutionary algorithms in software defined networks,” 15, no. 3, pp. 20–36, 2017. B. Assefa and O. Ozkasap, “State-of-the-art
energy efficiency approaches in software defined networking,” in Proc. SoftNetworking, Barcelona, Spain, pp. 555–567, 2015. E. Duran and G. Caraus, “On software defined networks for particle
accelerators,” Ph.D. dissertation, Lund University, Scandinavia, Northen Europe, 2018. W. Braun and M. Menth, “Software-defined networking using openflow: Protocols, applications and architectural
design choices,” 5, no. 3, pp. 302–336, 2014. M. Maugendre, “Development of a performance measurement tool for SDN,” M.Sc. dissertation, UPC University, Barcelona, Spain, 2015. C. Black and T.
Culver, USA: Open Networking Foundation, pp. 89–136, 2015. [Online]. Available: https://opennetworking.org/wp-content/uploads/2014/10/openflow-switch-v1.5.1. J. Kurose and K. Ross, 7^th ed., USA:
Pearson, 2017. [Online]. Available: https://www.pearson.com/us/higher-education/program/Kurose-Computer-Networking-A-Top-Down-Approach-7th-Edition/PGM1101673.html?tab=features. M. Alnaser, “A method
of multipath routing in SDN networks,” in 17. Allahabad, India: Publishing House, pp. 11–17, 2018. P. Asher, “Comprehensive analysis of dynamic routing protocols in computer networks,” 6, pp.
4450–4455, 2015. A. Karim and M. Khan, “Behaviour of routing protocols for medium to large scale networks,” 5, no. 3, pp. 1605–1613, 2011. N. Gupta and K. Mangla, “Applying Dijkstra’s algorithmin
routing process,” 2, no. 5, pp. 122–124, 2016. O. Abdel Raouf and H. Askr, “ACOSDN–Ant colony optimization algorithm for dynamic routing in software defined networking,” in Proc. ICCES, Cairo, Egypt,
pp. 141–148, 2019. D. Xu and Y. Tian, 2. Berlin Heidelberg: Springer-Verlag, pp. 165–193, 2015. A. Baswade and P. Nalwade, “Selection of initial centroids for k-Means algorithm,” 2, no. 7, pp.
161–164, 2013. R. Alvida and I. Ikhwan, “Using k-means++ algorithm for researchers clustering,” in Proc. AIP, USA, pp. 20052, 2017. D. Sonagara and S. Badheka, “Comparison of basic clustering
algorithms,” 3, no. 2, pp. 58–61, 2014. Z. Liang and Z. Zhu, “Orderly roulette selection based ant colony algorithm for hierarchical multi-label protein function prediction,” 2017, no. 1, pp. 1–15,
2017. H. Xu and F. Duan, “A Hybrid ant colony optimization for dynamic multi-depot vehicle routing problem,” 2018, no. 1, pp. 1–10, 2018. | {"url":"https://cdn.techscience.cn/ueditor/files/cmc/TSP_CMC_70-1/TSP_CMC_17787/TSP_CMC_17787.xml?t=20220620","timestamp":"2024-11-03T07:19:19Z","content_type":"application/xml","content_length":"66557","record_id":"<urn:uuid:713d05ac-6c75-49bd-848f-d25c99e9cdb3>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00748.warc.gz"} |
k-Order Fibonacci Polynomials on AES-Like Cryptology
Computer Modeling in Engineering & Sciences
DOI: 10.32604/cmes.2022.017898
k-Order Fibonacci Polynomials on AES-Like Cryptology
Pamukkale University, Kinikli, Denizli, 20160, Turkey
*Corresponding Author: Suleyman Aydinyuz. Email: aydinyuzsuleyman@gmail.com
Received: 15 June 2021; Accepted: 26 October 2021
Abstract: The Advanced Encryption Standard (AES) is the most widely used symmetric cipher today. AES has an important place in cryptology. Finite field, also known as Galois Fields, are cornerstones
for understanding any cryptography. This encryption method on AES is a method that uses polynomials on Galois fields. In this paper, we generalize the AES-like cryptology on 2×2 matrices. We redefine
the elements of k-order Fibonacci polynomials sequences using a certain irreducible polynomial in our cryptology algorithm. So, this cryptology algorithm is called AES-like cryptology on the k-order
Fibonacci polynomial matrix.
Keywords: Fibonacci numbers; Fibonacci polynomials; k-order Fibonacci polynomials; Fibonacci matrix; k-order Fibonacci polynomial matrix; Galois field
AES (Advanced Encryption Standard) is a standard offered for encryption of electronic data. AES, adopted by the American government, is also used as a defacto encryption standard in the international
arena. It replaces DES (Data Encryption Standard). The encryption algorithm defined by AES is a symmetric-key algorithm in which the keys used in both encryption and decryption of encrypted text are
related to each other. The encryption and decryption keys are the same for AES.
The algorithm standardized with AES was created by making some changes to the Rijndael algorithm, which was mainly developed by Vincent Rijmen and Joan Daeman. Rijndael is a name obtain using the
developers’ names: RIJmen and DAEmen.
AES is based on the design known as substitution-permutation. Its predecessor, DES, is an algorithm designed in Feistel structure. AES’ software and hardware performance is high. The 128-bit input
block has a key length of 128, 192 and 256 bits. Rijndael, on which AES is based, supports input block lengths that are multiples of 32 between 128 and 256 bits and key lengths longer than 128 bits.
Therefore, in the standardization process, key and input block lengths were restricted. AES works on a 4×4 column-priority byte matrix called state. Operations in the matrix are also performed on a
special finite field.
The algorithm consists of identical rounds that transform a certain number of repeating input open text into output ciphertext. Each cycle consists of four steps, except for the last cycle. These
cycles are applied in reserve order to decode the encrypted text. The number of repetitions of cycles is a function of the key length according to Table 1.
These cycles include key addition, byte substitution, ShiftRow and MixColumn. We can see these cycles in Fig. 1. One can see detailed information about AES in Fig. 2 [1].
A finite field, sometimes also called Galois field, is a set with a finite number of elements. Roughly speaking, a Galois field is a finite set of elements in which we can add, subtract, multiply and
invert. Before we introduce the definition of a field, we first need the concept of a simple algebraic structure, a field.
A field F is a set of elements with the following properties:
• All elements of F form an additive group with the group operation “+” and the neutral element 0.
• All elements of F except 0 form a multiplicative group with the group operation “×” and the neutral element 1.
• When the two group operations are mixed, the distributivity law holds, i.e., for all a,b,c∈F:a(b+c)=(ab)+(ac).
Galois field arithmetic is the most widely used field involving matrix operations. One can see detailed information about the Galois field and the operations performed on it in [2]. Also, you can
find information on the classical cryptology benefit in [3].
In extension fields GF(2m) elements are not represented as integers but as polynomials with coefficients in GF(2). The polynomials have a maximum degree of m −1, so that there are m coefficients in
total for every element. In the field GF(28), which is used in AES, each element A∈GF(28) is thus represented as
A(x)=a7x7+…+a1x+a0, ai∈GF(2)={0,1}.
Note that there are exactly 256 = 28 such polynomials. The set of these 256 polynomials is the finite field GF(28). It is also important to observe that every polynomial can simply be stored in
digital form as an 8-bit vector
In particular, we do not have to store the factors x7, x6, etc. It is clear from the bit positions to which power xi each coefficient belongs.
Fibonacci numbers are defined by the recurrence relation of Fn = Fn −1 +Fn −2 for n≥2 with the initial conditions F0 = 0 and F1 = 1. There are a lot of generalizations of Fibonacci numbers satisfied
and studied by some authors. For more information one can see in [4–8]. The Fibonacci Q-matrix is defined in [9,10] as follows:
and nth power of the Fibonacci Q −matrix is shown in [11–13] by
Fibonacci polynomials that belong to the large polynomial classes are defined by a recurrence relation similar to Fibonacci numbers. The Belgian mathematician Eugene Charles Catalan and the German
mathematician E. Jacobsthal were studied Fibonacci polynomials in 1983. The polynomials fn(x) studied by Catalan are defined by the recurrence relation
where f0(x)=0,f1(x)=1,f2(x)=x and n≥3. Fig. 3 notice that for x = 1, fn(1)=Fn, Fn is nth Fibonacci number.
In [14], the k-order Fibonacci polynomial is defined by A. N. Philippou, C. Georghiou and G. Philippou in 1983.
The sequence of polynomials {fn(k)(x)}n=0∞ is said to be the sequel of Fibonacci polynomials of order k if f0(k)(x)=0,f1(k)(x)=1 and
Kizilates et al. studied a new generalization of convolved (p,q)−Fibonacci and (p,q)−Lucas polynomials in [4]. Also, Qi et al. gave a closed formula for the Horadam polynomials in terms of a
tridiagonal determinant in 2019 in [15] and Kizilates et al. defined several determinantel expressions of generalized tribonacci polynomials and sequences in [5]. In [6], Kizilates et al. introduced
new families of three-variable polynomials coupled with well-known polynomials and numbers in 2019. New families of Horadam numbers associated with finite operators and their applications were
studied by Kizilates in [7].
In [16], Basu et al. introduced the generalized relations among the code elements for Fibonacci coding theory in 2009. In 2014, Basu et al. defined a new coding theory for Tribonacci matrices in [17]
and they expended the coding theory on Fibonacci n-step numbers in [18]. Also, Basu et al. defined generalized Fibonacci n-step polynomials and stated a new coding theory called generalized Fibonacci
n-step polynomials coding theory in [19].
In [19], for k≥2
Qkn(x)= [ Fn+k−1(k)(x)xk−2Fn+k−2(k)(x)+xk−3Fn+k−3(k)(x)+⋯+Fn(k)(x)Fn+k−2(k)(x)xk−2Fn+k−3(k)(x)+xk−3Fn+k−4(k)(x)+⋯+Fn−1(k)(x)⋮⋮Fn+1(k)(x)xk−2Fn(k)(x)+xk−3Fn−1(k)(x)+⋯+Fn−k+2(k)(x)Fn(k)(x)xk−2Fn−1(k)
(x)+xk−3Fn−2(k)(x)+⋯+Fn−k+1(k)(x) xk−3Fn+k−2(k)(x)+xk−4Fn+k−3(k)(x)+⋯+Fn+1(k)(x)⋯Fn+k−2(k)(x)xk−3Fn+k−3(k)(x)+xk−4Fn+k−4(k)(x)+⋯+Fn(k)(x)⋯Fn+k−3(k)(x)⋮⋮⋮xk−3Fn(k)(x)+xk−4Fn−1(k)(x)+⋯+Fn−k+3(k)(x)⋯Fn
(k)(x)xk−3Fn−1(k)(x)+xk−4Fn−2(k)(x)+⋯+Fn−k+2⋯(k)(x)⋯Fn−1(k)(x) ](1)
where Fn(k)(x) is a k-order Fibonacci polynomials.
Diskaya et al. created a new encryption algorithm (known as AES-like) by using the AES algorithm in [20]. They created the encryption algorithm by splitting the message text into 2×2 block matrices
using Fibonacci polynomials.
Fibonacci polynomials have many applications in algebra. In recent years, we see that these polynomials have many uses in the field of engineering. Also, Fibonacci polynomials are used in solving
differential equations. These solutions are used in engineering and science, adding new approaches to the solution of engineering problems. Mirzae and Hoseini solved singularly perturbed
differential-difference equations arising in science and engineering with Fibonacci polynomials in [21]. Also, in [22], Haq et al. studied approximate solution of two-dimensional Sobolev equation
using a mixed Lucas and Fibonacci polynomials.
In this paper, we generalize the encryption algorithm given in [20] and study the encryption made with the 2×2 type block matrix operation to the k×k type in Galois field. We redefine the elements of
k-order Fibonacci polynomials sequences using a certain irreducible polynomial in our cryptology algorithm. The algorithm consist of four steps as in the AES encryption algorithm. The encryption
algorithm defined in this algorithm is a symmetric-key algorithm in which the keys used in both encryption and decryption of encrypted text are related to each other. The encryption and decryption
keys are the same like AES. So, this cryptology algorithm is called AES-like cryptology algorithm on the k-order Fibonacci polynomials.
2 The k-Order Fibonacci Polynomials Blocking Algorithm
In this chapter, we redefine the elements of k-order Fibonacci polynomial sequences using a certain irreducible polynomial in our coding algorithm. In extension fields GF(2m) elements are not
represented as integers but as polynomials with coefficients in GF(2). Throughout this section, we take m = 5 for next process. Since m = 5, we consider the finite Galois field containing 32 elements
in this algorithm and this Galois field is denoted as GF(25). Note that there are exactly 25 = 32 such polynomials. The set of these 32 polynomials is the finite field GF(25). Each elements of this
polynomials correspond to one letter of the alphabet.
The AES encryption algorithm uses the P(x)=x8+x4+x3+x+1 polynomial as the irreducible polynomial.
The irreducible polynomials of GF(25) are as follows:
In this paper, we consider the irreducible polynomials as P(x)=x5+x2+1. We can also diversify our encryption algorithm by using other irreducible polynomials.
Definition: In [8], the Fibonacci polynomial sequence {fn(x)}n≥0 is f0(x)=0,f1(x)=1 and fn+2(x)=xfn+1(x)+fn(x).
For later use the first few terms of the sequence Fibonacci polynomials can be seen in the following Table 2 and a few the irreducible polynomials for Fibonacci polynomials are given as Table 3.
Polynomials of the Galois field are equivalent of each alphabet in Table 4 is as following:
Now, we obtain our encryption algorithm in line preliminary information we have given.
2.1 The k-Order Fibonacci Encryption Algorithm: The Coding Algorithm
• Step 1: We can consider the message text of length n and assume that each letter represents one length.
• Step 2: We can choose arbitrary value of k and n. The k-value we choose determine which order Fibonacci polynomials to use. We can create the matrix Qkn(x) in Eq. (1) according to the kand n value
we have chosen. Our message text is divided into blocks according to the value k. We get matrices of k×1 type. We get a new matrix by multiplying the k×1 type matrix with the Qkn(x). Our new message
is created by looking at the values in the matrix we obtained from the alphabet table.
• Step 3: We multiply the message matrix we just obtained by the invertible key matrix. In this paper, we accept the key matrix as follows:
1. KeyMatrix=[BBCÇEGˇKEY]=[11235813529]
If there is an ascending 2 letters in the text, it letters is multiplied by 2. Key matrix in 2×2:
2. KeyMatrix=[EAOD]=[50174].
• Step 4: The text created in the 3th step is collected sequentially with the k-order Fibonacci polynomials by starting from left and our encrypted message is created.
2.2 The k-Order Fibonacci Decryption Algorithm: The Decoding Algorithm
• Step 1: We can consider encrypted a text of length n and assume that each letter represents one length.
• Step 2: The text created is addition sequentially with the k-order Fibonacci polynomials by starting from the left and our new message is created.
• Step 3: We multiply the message matrix we just obtained by inverse of the 1. key matrix.
If there is an ascending 2 letters in the text, it letters is multiplied by 2. Inverse key matrix in 2×2:
• Step 4: We can obtain the matrix (Qkn(x))−1 according to the k and n we have chosen. Our text is divided into blocks according to the value k. We get matrices of k×1 type. We get a new matrix by
multiplying the k×1 type matrix with the (Qkn(x))−1. Our new message is created by looking at the values in the matrix we obtained from the alphabet table. We can obtain our text message text.
2.3 Illustrative Examples for AES-Like Cryptology on the k-Order Fibonacci Polynomial Matrix
Example 1: Let us consider the message text for the following:
Application to the Coding Algorithm:
• Step 1: “HELLO” is 5 letters. In this example, we encrypt process by choosing n = 5 (We can choose n arbitrarily).
• Step 2: For k = 3 and n = 5, we can use Tribonacci polynomials for encryption.
We can get as
It is known that
So, it is
Q35(x).[ HEL ]=[ x4+1x4+x+1x3+x2x3+x2x2x3+x+1x3+x+1x2+1x4+x ][ x3+1x2+1x3+x2+x ]=[ x4+x3+xx4+x3+xx3+x2+x+1 ]=[ VVM ]
Since the word “HELLO” has 5 letters, we divide it into blocks of 3×1 and 2×1. So now we encrypt the 2×1 block with the usual Fibonacci polynomial matrix.
We can get in Eq. (1) as
Q25(x)=[ x110 ]5=[ x2+x+1x4+x2+1x4+x2+1x3 ]
So, it is
Q25(x).[ LO ]==[ x2+x+1x4+x2+1x4+x2+1x3 ][ x3+x2+xx4+1 ]=[ x3+x2+1x4+x2 ]=[ KR ]
It results “HELLO”→“VVMKR”.
• Step 3: We multiply the message matrix we just obtained by the invertible 1. Key matrix. Turn into blocks of 3s and multiply with the key matrix.
[ BBCCEG⌣KEY ][ VVM ]=[ 11xx+1x2+1x3x3+x2+1x2+1x4+x3+x2+1 ][ x4+x3+xx4+x3+xx3+x2+x+1 ] =[ x4+x3+x2+x1x2 ]=[ ZBD ]
Since we have 2 letters left, we can use our 2. Key matrix,
[ EAOD ][ KR ]=[ x2+10x4+1x2 ][ x3+x2+1x4+x2 ]=[ x4+x3+x2x4+x3+1 ]=[ XU¨ ]
It results “VVMKR”→“ZBDXU¨.
• Step 4: We get
where Tn(x) is a nth Tribonacci polynomial.
It results “ZBDXU¨”→“QEŞTS”.
Application to the Decoding Algorithm:
• Step 1: We can get as
where Tn(x) is a nth Tribonacci polynomial.
It results ‘‘QEŞTS”→‘‘ZBDXÜ”.
• Step 2: We multiply the message matrix we just obtained by inverse of the 1. Key matrix.
Since we have 2 letters left, we can use our 2. Inverse key matrix.
It results “ZBDXU¨”→“VVMKR”.
• Step 3: We can obtain the matrix (Qkn(x))−1 according to the k and n value we have chosen. For k = 3 and n = 5; we get as
So, it is
(Q35(x))−1[ VVM ]=[ 0xx3+1x3+11x4x4x+1x2 ][ x4+x3+xx4+x3+xx3+x2+x+1 ] =[ x3+1x2+1x3+x2+x ]=[ HEL ]
Since we have 2 letters left, we can get (Q25(x))−1 for k = 2 and n = 5 as
So, it is
(Q25(x))−1[ KR ]=[ x3x4+x2+1x4+x2+1x2+x+1 ][ x3+x2+1x4+x2 ]=[ x3+x2+xx4+1 ]=[ LO ]
It results “VVMKR”→“HELLO”.
We have handled the example given in [20] again with the algorithm we created. The correct result was obtained as a result of the operation we have done. In addition, the encryption process performed
with 2×2 block matrices in the other study was performed faster and easier with this method.
Example 2: Let us consider the message text for the following:
Application to the Coding Algorithm
• Step 1: “PUBLIC” is 6 letters. In his example, we encrypt process by choosing n = 6 (We can choose n arbitrarily. We do not have to choose the same number of letters as the number of n in our
message text to be encrypted).
• Step 2: For k = 4 and n = 6, we can use Tetranacci polynomials for encryption. We can get as
Q46(x)=[ x3x2x1100001000010 ]6=[ x4+x3+xx3+xx3+1x4+x3+x2+x+1x4+x3+x2+x+1x4+x3+1x4+x3+1x4+xx4+xx4+x3+x+1x4+x3+x+1x4+x3x4+x3x3+x2x4+x2x3+x2+x ]
It is known that
So, it is
Q46(x)[ PUBL ]=Q46(x)[ x4+x+1x4+x31x3+x2+x ]=[ x4+x3+x2+xx+1x3+x2+xx3+x2+x+1 ]=[ ZCLM ]
Since the word “PUBLIC” has 6 letters, we divide it into blocks of 4×1 and 2×1. So now:
We encrypt the 2×1 block with the usual Fibonacci polynomial matrix.
We can get as
Q26(x)[ IC ]=[ x4+x3+x+1x2+x+1x2+x+1x4+x2+1 ][ x3+xx ]=[ x4+x3+x+1x4+x3+x2 ]=[ WX ]
It results ‘PUBLIC'→‘ZÇLMWX'.
• Step 3: We multiply the message matrix we just obtained by the invertible 1. Key matrix. Turn into blocks of 3s and multiply with the key matrix.
[ BBCCEG⌣KEY ][ ZCL ]=[ 11xx+1x2+1x3x3+x2+1x2+1x4+x3+x2+1 ][ x4+x3+x2+xx+1x3+x2+x ] =[ 1x4+x2+x+1x4+x3+x ]=[ BTV ]
Since we have 3 letters left, we can use our 1. Key matrix again.
[ BBCCEG⌣KEY ][ ZCL ]=[ 11xx+1x2+1x3x3+x2+1x2+1x4+x3+x2+1 ][ x4+x3+x2+xx+1x3+x2+x ] =[ 1x4+x2+x+1x4+x3+x ]=[ BTV ]
It results “ZÇLMWX”→“BTVHO¨O¨”.
• Step 4: We get
where Fn(4)(x) is a Tetranacci polynomial.
It results “BTVHO¨O¨”→“BTWBXI”.
Application to the Decoding Algorithm
• Step 1: We can get as
where Fn(4)(x) is a Tetranacci polynomial.
It results “BTWBXI”→“BTVHO¨O¨”.
• Step 2: We multiply the message matrix we just obtained by inverse of the 1. Key matrix.
Since we have 3 letters left, we can use our 1. Inverse key matrix again.
It results “BTVHO¨O¨”→“ZÇLMWX”.
• Step 3: We can obtain the matrix (Qkn(x))−1 according to the k and n value we have chosen. For k = 4 and n = 6; we get as
and for k = 2 and n = 6
It results “ZÇLMWX”→“PUBLIC”.
AES (Advanced Encryption Standard) is a standard offered for encryption of electronic data. The AES cipher is almost identical to the block cipher Rijndael. The Rijndael block and key size vary
between 128, 192 and 256 bits. However, the AES standard only calls for a block size of 128 bits. Hence, only Rijndael with a block length of 128 bits is known as the AES algorithm. In the remainder
of this page, we only discuss the standard version of Rijndael with a block length of 128 bits.
The Rijndael algorithm perform encryption with the help of polynomials in Galois fields. We have obtained a new encryption algorithm by generalizing the previous studies. In this paper, we
generalized the encryption algorithm given in [20] and studied the encryption made with the 2×2 type block matrix operation to the k×k type in Galois field. We redefined the elements of k-order
Fibonacci polynomials sequences using a certain irreducible polynomial in our cryptology algorithm. The algorithm consist of four steps as in the AES-like encryption algorithm. The encryption
algorithm defined in this algorithm is a symmetric-key algorithm in which the keys used in both encryption and decryption of encrypted text are related to each other. The encryption and decryption
keys are the same like AES. So, this cryptology algorithm is called AES-like cryptology algorithm on the k-order Fibonacci polynomials. In this way, researchers can perform the encryption process
based on arbitrary choices.
In this paper, we present the mathematical basis for understanding the design rationale and the features that follow the description itself. Then, we define AES-like encryption by giving the
encryption method and its implementation.
Funding Statement: This work is supported by the Scientific Research Project (BAP) 2020FEBE009, Pamukkale University, Denizli, Turkey.
Conflicts of Interest: The authors declare that there are no conflicts of interest regarding the publication of this article.
1. Avaroglu, E., Koyuncu, I., Ozer, A. B., Turk, M. (2015). Hybrid pseudo-random number generator for cryptographic systems. Nonlinear Dynamics, 82(1–2), 239–248. DOI 10.1007/s11071-015-2152-8. [
Google Scholar] [CrossRef]
2. Paar, C., Pelzl, J. (2009). Understanding cryptography: A textbook for students and practitioners. London: Springer Science, Business Media. [Google Scholar]
3. Klima, R. E., Sigmon, N. P. (2012). Cryptology: Classical and modern with maplets. New York: Chapman and Hall/CRC. [Google Scholar]
4. Kizilates, C., Tuglu, N. (2017). A new generalization of convolved (p,q)-Fibonacci and (p,q)-Lucas polynomials. Journal of Mathematics and Computer Science, 7, 995–1005. DOI 10.28919/jmcs/3476. [
Google Scholar] [CrossRef]
5. Kizilates, C., Du, W. S., Fi, Q. (2022). Several determinantal expressions of generalized tribonacci polynomials and sequences. Tamkang Journal of Mathematics, 53, 17–35. DOI 10.5556/
j.tkjm.53.2022.3743. [Google Scholar] [CrossRef]
6. Kizilates, C., Cekim, B., Tuglu, N., Kim, T. (2019). New families of three-variable polynomials coupled with well-known polynomials and numbers. Symmetry, 11(264), 1–13. DOI 10.3390/sym11020264. [
Google Scholar] [CrossRef]
7. Kizilates, C. (2021). New families of Horadam numbers associated with finite operators and their applications. Mathematical Methods in the Applied Science, 3(4), 161. DOI 10.1002/mma.7702. [Google
Scholar] [CrossRef]
8. Koshy, T. (2001). Fibonacci and Lucas numbers with applications. A Wiley-Interscience Publication, John Wiley & Sons, Inc. [Google Scholar]
9. Gould, H. W. (1981). A history of the Fibonacci Q-matrix and a higher-dimensional problem. The Fibonacci Quarterly, 19(3), 250–257. DOI 10.1177/001316448104100337. [Google Scholar] [CrossRef]
10. Hoggat, V. E. (1969). Fibonacci and Lucas numbers. Palo Alto: Houghton-Mifflin. [Google Scholar]
11. Stakhov, A. P. (1999). A generalization of the Fibonacci Q-matrix. Reports of the National Academy of Sciences of Ukraine, 9, 46–49. [Google Scholar]
12. Stakhov, A. P., Mssinggue, V., Sluchenkov, A. (1999). Introduction into Fibonacci coding and cryptography. Kharkov: Osnova. [Google Scholar]
13. Vajda, S. (1989). Fibonacci and Lucas numbers and the golden section theory and applications. Lancashire, UK: Ellis Harwood Limitted. [Google Scholar]
14. Philippou, A. N., Geoughiou, C., Philippou, G. (1983). Fibonacci polynomials of order k, multinomial expansions and probability. International Journal of Mathematics and Mathematical Sciences, 6
(3), 545–550. DOI 10.1155/S0161171283000496. [Google Scholar] [CrossRef]
15. Qi, F., Kizilates, C., Du, W. S. (2019). A closed formula for the Horadam polynomials in terms of a tridiagonal determinant. Symmetry, 11(6), 8. DOI 10.3390/sym11060782. [Google Scholar] [
16. Basu, M., Prasad, B. (2009). The generalized relations among the code elements for Fibonacci coding theory. Chaos Solitons and Fractals, 41(5), 2517–2525. DOI 10.1016/j.chaos.2008.09.030. [Google
Scholar] [CrossRef]
17. Basu, M., Das, M. (2014). Tribonacci matrices and a new coding theory. Discrete Mathematics Algorithms and Applications, 6(1), 1450008. DOI 10.1142/S1793830914500086. [Google Scholar] [CrossRef]
18. Basu, M., Das, M. (2014). Coding theory on Fibonacci n-step numbers. Discrete Mathematics Algorithms and Applications, 6(2), 1450017. DOI 10.1142/S1793830914500177. [Google Scholar] [CrossRef]
19. Basu, M., Das, M. (2017). Coding theory on generalized Fibonacci n-step polynomials. Journal of Information & Optimization Sciences, 38(1), 83–131. DOI 10.1080/02522667.2016.1160618. [Google
Scholar] [CrossRef]
20. Diskaya, O., Avaroglu, E., Menken, H. (2020). The classical AES-like cryptology via the Fibonacci polynomial matrix. Turkish Journal of Engineering, 4(3), 123–128. DOI 10.31127/tuje.646926. [
Google Scholar] [CrossRef]
21. Mirzaee, F., Hoseini, S. F. (2013). Solving singularly perturbed differential-difference equations arising in science and engineering with Fibonacci polynomials. Results in Physics, 3(5),
134–141. DOI 10.1016/j.rinp.2013.08.001. [Google Scholar] [CrossRef]
22. Haq, S., Ali, I. (2021). Approximate solution of two-dimensional Sobolev equation using a mixed Lucas and Fibonacci polynomials. Engineering with Computers, 21, 366–378. DOI 10.1007/
s00366-021-01327-5. [Google Scholar] [CrossRef]
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is
properly cited. | {"url":"https://www.techscience.com/CMES/v131n1/46636/html","timestamp":"2024-11-03T10:28:29Z","content_type":"application/xhtml+xml","content_length":"199588","record_id":"<urn:uuid:2e9d1e49-6b50-49f7-b976-22d8343bdb28>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00312.warc.gz"} |
What is a Norton's Theorem? - Circuit Globe
Norton’s Theorem
Norton’s Theorem states that – A linear active network consisting of the independent or dependent voltage source and current sources and the various circuit elements can be substituted by an
equivalent circuit consisting of a current source in parallel with a resistance. The current source being the short-circuited current across the load terminal and the resistance being the internal
resistance of the source network.
The Norton’s theorems reduce the networks equivalent to the circuit having one current source, parallel resistance and load. Norton’s theorem is the converse of Thevenin’s Theorem. It consists of the
equivalent current source instead of an equivalent voltage source as in Thevenin’s theorem.
The determination of internal resistance of the source network is identical in both the theorems.
In the final stage that is in the equivalent circuit, the current is placed in parallel to the internal resistance in Norton’s Theorem whereas in Thevenin’s Theorem the equivalent voltage source is
placed in series with the internal resistance.
Explanation of Norton’s Theorem
To understand Norton’s Theorem in detail, let us consider a circuit diagram given below
In order to find the current through the load resistance I[L] as shown in the circuit diagram above, the load resistance has to be short-circuited as shown in the diagram below:
Now, the value of current I flowing in the circuit is found out by the equation
And the short-circuit current I[SC] is given by the equation shown below:
Now the short circuit is removed, and the independent source is deactivated as shown in the circuit diagram below and the value of the internal resistance is calculated by:
As per Norton’s Theorem, the equivalent source circuit would contain a current source in parallel to the internal resistance, the current source being the short-circuited current across the shorted
terminals of the load resistor. The Norton’s Equivalent circuit is represented as
Finally, the load current I[L] calculated by the equation shown below
• I[L] is the load current
• I[sc] is the short circuit current
• R[int] is the internal resistance of the circuit
• R[L ]is the load resistance of the circuit
Steps for Solving a Network Utilizing Norton’s Theorem
Step 1 – Remove the load resistance of the circuit.
Step 2 – Find the internal resistance R[int] of the source network by deactivating the constant sources.
Step 3 – Now short the load terminals and find the short circuit current I[SC] flowing through the shorted load terminals using conventional network analysis methods.
Step 4 – Norton’s equivalent circuit is drawn by keeping the internal resistance R[int] in parallel with the short circuit current I[SC].
Step 5 – Reconnect the load resistance R[L] of the circuit across the load terminals and find the current through it known as load current I[L.]
This is all about Norton’s Theorem.
3 thoughts on “Norton’s Theorem”
I am thankful for your information
Thank you
Leave a Comment | {"url":"https://circuitglobe.com/what-is-nortons-theorem.html","timestamp":"2024-11-04T15:38:43Z","content_type":"text/html","content_length":"166640","record_id":"<urn:uuid:874247ab-bdd0-48ab-967b-d595495d3940>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00687.warc.gz"} |
Forum Archive : Puzzles
Lowest probability of winning
From: masque de Z
Address: (none)
Date: 12 April 2012
Subject: Smallest backgammon nonzero game equity ever possible
Forum: 2+2 Backgammon Forum
What is the smallest win probability in backgammon over 0 of course. 1
point game say. Just for fun consider boards that say white has
astronomically small probability to win. Do you think you can solve this?
Does it have a trivial answer or not so fast ...?
Bill Robertie writes:
A pretty clear answer, I think. We can eliminate all contact positions,
since they're 'easy' to win, relatively speaking. For non-contacts, how
about this:
13 14 15 16 17 18 19 20 21 22 23 24 O's home
| | | X O |
| | | X O |
| | | X O |
| | | X O |
| | | X O |
| | | X O |
| | | X O |
| | | X O |
| | | X O |
| | | X O |
| | | X O |
| | | X O |
| | | X O |
| | | X O |
| | | X O |
| | | |
| | | |
| | | | +---+
| | | | | 2 |
| | | | +---+
12 11 10 9 8 7 6 5 4 3 2 1 X's home
O has 15 men on his 3-point. Black has 15 men on his 21-point. Black on
roll wins with 15 consectutive big doubles, getting 60 crossovers, while
White responds with 15 consecutive 2-1s. It's not unique since Black
doesn't have to throw 6-6 every roll, just most of the time.
uberkuber writes:
Nice one Bill! After that, you start tossing white checkers on 1-pt and 2-
pt until it's a lock and you rollback 1 pip.
TomCowley writes:
Yeah, I thought about this while we were down and you can actually improve
by moving one checker to the 1 point. This position is basically a parlay
1) Black must get off in 15 rolls
2) White must not get off in 14 rolls (he always gets off in 15)
You can't make 1 harder without contact (or it being impossible for white
not to win), but you can make 2 harder, because as is, he can roll 6-1 the
first 2 times, or 6-2 once, and still not get off in 14. If you move a
checker up to the 1, then any 31-61 or 32-62 will let you to bear off 2 by
the 14th at the latest. I think moving 2 men to the 2 is also equally
difficult (both requiring exactly 14 2-1s in a row to avoid being able to
win in 14).
13 blots (Timothy Chow+, Aug 2009)
Alice, who is not on the bar, discovers that however she plays she ends up with 13 blots. What is her position and roll?
All-time best roll (Kit Woolsey+, Dec 1997)
What position and roll give the greatest gain in equity?
All-time worst roll (Tim Chow+, Feb 2009)
Find a position that goes from White being too good to double to Black being too good to double.
All-time worst roll (Michael J. Zehr, Jan 1998)
What position and roll give the greatest loss in equity?
Back to Nack (Zorba+, Oct 2005)
How can you go from the backgammon starting position to Nackgammon?
Cube ownership determines correct play (Kit Woolsey, Jan 1995)
Find a position and roll where the correct play depends on who owns the cube.
Highest possible gammon rate (Robert-Jan Veldhuizen+, May 2004)
What is the highest possible gammon rate in an undecided game?
Infinite loops (Timothy Chow, Mar 2013)
Is this position reachable? (Timothy Chow+, Feb 2013)
Janowski Paradox (Robert-Jan Veldhuizen+, Nov 2000)
Position that's a redouble but not a double?
Least shots on a blot within direct range (Raymond Kershaw, Dec 1998)
Find a position with no men on bar that has the least number of shots out of 36 to hit a blot within direct range.
Legal but not likely (David desJardins, July 2000)
Find a position that can be legally reached but never through optimum play.
Lowest probability of winning (masque de Z+, Apr 2012)
What is the smallest win probability in backgammon, greater than zero.
Mirror puzzle (Nack Ballard, Apr 2010)
Go from the starting position to the mirror position (colors reversed)
Most checkers on the bar (Tommy K., May 1997)
What is the maximum total possible checkers on the bar?
Most possible plays (Kees van den Doel+, May 2002)
Find the position and dice roll which have the most possible plays.
Not-so-greedy bearoff (Kit Woolsey, Mar 1997)
Find a no-contact position where it is better to move a checker than bear one off.
Not-so-greedy bearoff (Walter Trice, Dec 1994)
Find a no-contact position where it is better to move a checker than bear one off.
Priming puzzle (Gregg Cattanach+, May 2005)
From the starting position, form a full 6-prime in three rolls.
Pruce's paradox (Alan Pruce+, Dec 2012)
Quiz (Martin Krainer, Oct 2003)
Replace the missing checkers (Gary Wong+, Oct 1998)
Returning to the start (Nack Ballard, May 2010)
What is the least number of rolls that can return a game to the starting position?
Returning to the start (Tom Keith+, Nov 1996)
What is the least number of rolls that can return a game to the starting position?
Shortest game (Stephen Turner+, Jan 1996)
What is the shortest (cubeless) game in which both players play reasonably?
Small chance of ending in doubles (Walter Trice, Dec 1999)
Find a position where the probability of the game ending in doubles is less than 1/6.
Three-cube position (Timothy Chow+, Sept 2011)
Find a position and roll for which three different checker plays are best, depending on the location of the cube.
Trivia question (Walter Trice, Dec 1998)
What is the symmetric bearoff with the smallest pip count that is not an initial double?
Worst possible checker play (Gregg Cattanach+, June 2004)
What position and roll have the largest difference between best and worst play?
Worst possible opening move (Gregg Cattanach, June 2004)
What is the worst possible first move given any choice of dice?
Worst symmetric bearoff of 8 checkers (Gregg Cattanach+, Jan 2004)
What symmetric arrangement of 8 checkers in each player's home board gives roller least chance to win?
Worst takable position (Christopher Yep, Jan 1994)
What position has lowest chance of winning but is a correct take if doubled?
Zero equity positions (Kit Woolsey, Apr 1995)
Find a position with exactly zero equity in (1) money play or (2) cubeless.
┃ Book Suggestions Luck versus Skill Rollouts ┃
┃ Books Magazines & E-zines Rules ┃
┃ Cheating Match Archives Rulings ┃
┃ Chouettes Match Equities Snowie ┃
┃ Computer Dice Match Play Software ┃
┃ Cube Handling Match Play at 2-away/2-away Source Code ┃
┃ Cube Handling in Races Miscellaneous Strategy--Backgames ┃
┃ Equipment Opening Rolls Strategy--Bearing Off ┃
┃ Etiquette Pip Counting Strategy--Checker play┃
┃ Extreme Gammon Play Sites Terminology ┃
┃ Fun and frustration Probability and Statistics Theory ┃
┃ GNU Backgammon Programming Tournaments ┃
┃ History Propositions Uncategorized ┃
┃ Jellyfish Puzzles Variations ┃
┃ Learning Ratings ┃ | {"url":"https://www.bkgm.com/rgb/rgb.cgi?view+1597","timestamp":"2024-11-03T01:10:07Z","content_type":"text/html","content_length":"17997","record_id":"<urn:uuid:1850601d-a171-4039-ad37-6535fc75fcfa>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00097.warc.gz"} |
Physics Problem: Why Throw at 45 Degrees?
I recently discovered how to write equations in my Hugo website using Mathjax (credit goes to kunlei for the instructions!). In order to demonstrate the awesomeness of this tool (and also to refresh
my memory on basic Physics concepts I have not used in a long time), let me demonstrate why 45 degrees is the ideal angle to throw something at maximum range.
The Problem
Prove that the angle at which an object must be thrown to maximize distance is $45^{o}$, assuming there is no air resistance.
The Solution
Let’s use a referential where the $x$ axis is parallel to the ground and $y$ axis is perpendicular to the ground, what we want to maximize is the function $x(\theta)$, i.e. the distance travelled by
our object as a function of the angle at which we throw it. Let’s define $\theta$ as the angle we throw the object with respect to the $y$ axis.
Intuitively, we know that an object thrown with $\theta=0$ wouldn’t travel any distance because it would reach the ground immediately (if thrown from the ground, e.g. by a cannon). Similarly, we know
that throwing an object with $\theta=90$ would have the same result: the object would go up in the air, and in the absence of wind, would fall back exactly from where it left the ground initially.
Thus, the angle that maximizes the distance ($\theta_m$) is an angle between 0 and 90.
Naively, we can assume the $x(\theta)$ function looks a little something like this:
The travelled distance $x$ as a function of the angle $\theta$. This is not an exact plot, but it roughly matches my experience when throwing things at various angles.
Because the thrown object moves in a fluid motion that follows the laws of kinematics, it is reasonable to expect that $\theta_m$ is found exactly where the derivative of $x(\theta)$ becomes 0, that
is to say $\frac{dx}{d\theta}=0$.
So, our game plan is to find the equation of $x(\theta)$ and find which value of $\theta$ makes $\frac{dx}{d\theta}=0$.
A well known equation of kinematics is $d=v_i\Delta t + \frac{1}{2}a\Delta t^2$, where:
• $d$ is distance
• $v_i$ is the initial velocity
• $a$ is the constant acceleration
• $\Delta t$ is the time elapsed
When applied to the movement of our thrown object over the $x$ axis, this becomes
$$ x=v_{ix}\Delta t + \frac{1}{2} a_x \Delta t^2$$ $$ \Leftrightarrow x= v_{ix}\Delta t \quad \text{because $a_x$ = 0, no air resistance!}$$
By expressing the velocity in terms of $x$ and $y$ components,
we can re-express $x$ as
$$ x = v\cos\theta\Delta t $$
Ah ha! We are starting to unfold how $x$ is dependent of $\theta$ explicitly. However, inside $\Delta t$, there is a hidden dependency on $\theta$. Indeed, you can imagine, if we throw an object at $
\theta=0$, its airborne time will be much shorter than if we throw it with the same strength at the sky at $\theta=90^{o}$. Thus, the time of flight is dependent on the angle at which we throw the
ball (makes sense).
What is the expression for $\Delta t (\theta)$?
Let’s use the kinematics equation in the $y$ axis to get some insight.
$$ \Delta y = v_{iy}\Delta t+\frac{1}{2}a_y\Delta t^2$$
Because we throw the object from a flat plane and we suppose it falls at the same height from which it was launched (i.e. $\Delta y = 0$), we get
$$ 0 = v_{iy}\Delta t + \frac{1}{2}a_y\Delta t^2$$ $$ 0 = v\sin\theta\Delta t+\frac{a_y}{2}\Delta t^2 $$ $$ 0 = \Delta t(v\sin\theta + \frac{a_y}{2}\Delta t) \quad \text{(here we divide by $\Delta t
\neq 0$)}$$ $$ 0 = v\sin\theta + \frac{a_y}{2}\Delta t $$ $$ \Leftrightarrow \Delta t = \frac{-2v\sin\theta}{a_y}$$
Hurray, we found our equation for $\Delta t (\theta)$, it is $\Delta t(\theta) = \frac{-2v\sin\theta}{a_y}$
We can thus substitute this value in the earlier equation we found for $x(\theta)$:
$$ x(\theta)=v\cos\theta\Delta t(\theta) = v\cos\theta\left(\frac{-2v\sin\theta}{a_y}\right) = \frac{-2v^2\cos\theta\sin\theta}{a_y}$$
Let us derivate this equation (here is a cheatsheet for derivation rules if this step seems unclear ):
$$ \frac{dx}{d\theta} = \frac{-2v^2}{a_y}\left(-\sin^2\theta+\cos^2\theta\right) = \frac{2v^2}{a_y}\left(\sin^2\theta-\cos^2\theta\right) $$
We are interested in finding $\theta_m$ for which $\frac{dx}{d\theta}=0$. In other words,
$$ \frac{dx}{d\theta} = 0 = \frac{2v^2}{a_y}\left(\sin^2\theta_m-\cos^2\theta_m\right)$$ $$ \Leftrightarrow \sin^2\theta_m = \cos^2\theta_m $$ $$ \Leftrightarrow \frac{\sin^2\theta_m}{\cos^2\theta_m}
= 1 $$ $$ \Leftrightarrow \frac{\sin\theta_m}{\cos\theta_m} = \tan\theta_m = 1 $$ $$ \Leftrightarrow \theta_m = \arctan(1) = \frac{\pi}{4} = 45^o \quad\blacksquare$$
And there we have it, proof that the angle that maximizes the distance travelled by a thrown object is $\theta_m = 45^o$.
This was a fun trip down memory lane for me. I might do this again sometime! As you can see, Mathjax is very good at rendering equations in HTML. Let me know if you found this interesting (maybe
you’re a LaTeX user, or maybe you’re learning physics/maths). 😃
physics featured
784 Words
2021-01-07 21:09 | {"url":"https://felixleger.com/posts/2021/01/physics-problem-why-throw-at-45-degrees/","timestamp":"2024-11-08T17:07:26Z","content_type":"text/html","content_length":"25574","record_id":"<urn:uuid:046d9508-7d9f-4b15-a59c-29344f617398>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00123.warc.gz"} |
Algebra calculator with fractions
Bing users found our website yesterday by typing in these keywords :
• math basic +pratice paper
• prentice hall algebra 1 practice sheets
• free college math problem solver
• math enrichment for 2nd grade worksheets
• Texas instrumental calculator T1-86 usable
• converting mixed fractions to decimals
• solve rungekutta method by program+ third order
• dividing fractions with exponents
• prentice hall math worksheets for algebra 1
• grade 9 biology review
• nonlinear equation solver
• algebra help-find dimension of rectangle with dimension of 3x+1 and x
• linear vs. nonlinear free worksheet
• 84 graphing cheat sheet keystrokes
• how do you change a mixed number to a fractional notation
• highest common factor 80 and 65
• mix numbers
• systems of linear equations in two variables with fractions
• polynomial raised to 3
• algebra+ artin + chapter 11 + solutions
• converting decimals to fractions worksheet
• Square Root Calculator
• statistics and probability formula sheet mcgraw
• graphing linear programs and finding out what the variables are
• distributive property critical thinking
• worksheets on half equations in chemistry GCSE
• sample solve problems in parabola
• free aalgebra worksheet expanding
• factoring interactive games
• dividing Polynomials Game
• homogeneous equation solver calculator
• fraction decimal percentage 9th grade math worksheet
• 9th grade science tests .pdf
• ppt presentation on linear inequalities
• application of a hyperbola with solution and graphs
• multiplying cube roots
• Polynomial newton fortran
• printable points caculator
• solving polar equations with ti 83 plus
• radicals absolute values
• cubed root calculator online
• free math worksheets on ratios
• how to interpolate with zero intercept matlab
• how to add fractions on ti-84
• balancing chemical equations tutorial
• pre-algebra group activities
• interactive integer games online
• ti 83 download
• algebra tiles difference of squares
• converting mixed numbers to decimals calculator
• exponent worksheet high school
• Create the number 24 using (all of) 1, 3, 4, and 6. You may add, subtract, multiply, and divide. Parentheses are free. You can (and must) use each digit only once
• worksheets add and subtract time
• Math Problem solver
• solve linear system ti83
• solving differential equation homogeneous particular
• rational exponents solver
• Java storing string inputs in while loops
• least common multiple calculator variable
• exam papers for grade 11
• quadratic equation solver for ti-83 with complex roots
• rational equations calculator
• free worksheets writing equation of lines
• ti vector programs
• calculas
• Free algebra online solver formula Math
• samples of math expressions for 5th graders
• inverse function of quadratic equation
• converting radicals into fractions
• Write in word notation worksheet in decimal
• fall subtraction worksheets
• test questions for 5th grade that use application and analyze
• multiply by 50 worksheet
• online trinomial factorer
• Inequalities Worksheets
• Quadratic Equations and Problem Solving
• adding fractions on ti-84 plus
• fractions in the substituation method
• linear equation example calories burned
• Create integers math worksheets
• free printable algrebra lessons
• cost accounting solutions manual free download
• math polynomials third order
• algebra hard for children
• "Simplifying algebraic expressions" "middle school" worksheet
• TRIG PLUS 2 COMPUTER APP
• maths algebra basic equations concept
• addition equation worksheets
• permutations and combinations and probability sums for GRE
• radical expressions online calculator
• write a fraction or mixed number as a decimal
• 3rd grade divison word problems
• matlab simultaneous equations solver
• negative numbers worksheet
• "basic algebra drill"
• probability cheat sheet
• factoring perfect square trinomials calculator
• McGraw Hill seventh grade math homework
• math-area
• fourth grade algebra
• study intermediate algebra online
• quadratic equation factoring calculator
• quadratic equation solver with working out
• intermediate algebra motion problems
• rudin fundamental analysis solutions
• Alan S. Tussy study guide
• multiply matrix by degrees calculator
• the least common sale items
• rom base TI89
• simplify sqrt of 160
• free aptitude test papers
• worlds hardest maths game ks2
• 5th grade algebra powerpoint
• number sum combination in java
• square root-third root
• Solving Quadratic Equations: Solving by Completing the Square
• math algebra question online free
• Worksheets 8th Grade
• associative property worksheets
• fifth grade math extended rate charts to solve real world problems
• simplifying expressions worksheet
• multiplying by square root fractions calculator
• adding rational expressions solver
• formula hyperbola
• free thir and fourth grade printable mathh sheets
• ti 84 simulator
• ppt tutorial rules of exponents
• worksheets on finding slope
• eigenvalue calculator initial value problem
• square root method
• ks2 conversion problems worksheet
• how to factor algebraic expressions with multiple expressions
• convert the base of a number JAVA
• Rational Expression calculator
• solving equations, for "x" and proportion calculator
• free worksheets in maths chapter ratio and proportions
• how to grade slope
• maximize linear equation subject to
• convert 216mm to decimals
• sat worksheets vocabulary 8th grade sat
• trig for dummies online
• worksheet puzzles fraction multiplication
• comparing integers worksheets
• answers to math with pizzazz! book d-53
• simplify square root expressions
• advance algebra trivia and tricks worksheets
• combining like terms activity
• google . com mathematics questions and answers on inequalities
• solve integer valued function equations
• 9th grade sample math worksheets
• simplifying expressions exponents f(x)
• factor equations for me
• ti online emulator"
• solve simultaneous equation matlab
• formula to convert metre into bar
• Sleeping Parabola Equation
• radical exponents
• highest common factors machine
• lcm 25, 50 ,125
• comparing a linear equation to real life
• solving for the roots of a third order polynomial
• can someone solve my algebra 2 problems for free
• simultaneous equations solver
• state vector first order equations
• algebra problem solving games
• adding like tirms worksheet
• factors up to 24 worksheet
• printable function machine for algebra
• Factoring a Quadratic Expression solver
• addition square root
• answers to quadratic equasions
• Ti83 + graph log
• as level maths inequality quadratic
• free geometry problem solver online
• evaluate exponents worksheet
• quadrat program calculator
• quadratic equations in standard form calculator
• describe square root in words and symbols
• reducing radical worksheets
• math factors calculator
• math solver radicals
• quadratic equation modelling and problem solving
• practice: skills Multiplying integers
• free pictograph worksheets
• mathamatics
• one step algebriac equations printable worksheets
• 3 or 4 rights using partial sums method
• slope formula activity
• free practice online algebra factoring
• graphing system of equations
• mcdougall littell math cheats
• ks3 algebra
• mcdougal littell world of chemistry answers
• rudin solution
• as3 limit decimal
• 8th grade algebra function table worksheets
• nth term math investigations
• math software from beginner level to college level
• restrictions on the variable solver
• adding fractions formula
• ebooks for free for accounting tests
• how to multiply negative fractions
• using matlab to solve second order ode
• common denominator of 100
• college intermed pre algebra textbook
• KS2 TX PC
• sequence problems GCSE
• simultaneous equations exam online
• lowest common multiple 3 integers calculator
• division rational expression
• how can i help my first grader to excel
• division fraction equation calculator
• 4th grade algebra worksheet
• add 9 to these numbers unit 8 worksheet 3
• quadratic factoring calculator
• how to multiply step equations
• Algebra Dummies Free
• enclosed
• FREE COST ACCOUNTING books
• maths working out sheet
• calculating an equation in matlab
• first and second order differential equations with and without initial conditions
• KS3 mathematics 4 quadrants graph plotting worksheet
• Grade 5 Partial Sum Addition Method
• free algebraic expressions worksheets
• operation of fractions with signs
• fun exercises in hexagonal algebra
• logarithm law to simplify algebraic expressions
• yr8 timed maths quiz
• powerpoints on lcm
• Dividing Rational expressions calculator
• practice tests for maths high school
• middle school math with pizzazz book d answers
• radical expressions calculator
• adding radicals calculator
• Radical Expressions Calculator
• factorization 7th grade
• mixed numbers to decimal
• Teaching Aptitude download
• one step worksheets free
• how to solve polynomials money problems
• examples of difference of square
• solve for slope
• dividing polynomials easy way
• Graph of a hyperbola
• "fractions cards" free
• add subtract multiply divide fractions
• square root long form
• multiplying integers worksheet
• difference between a homogeneous and non-homogeneous linear ODE
• online factoring
• different riules in adding,. subtracting, multiplying and dividing integers
• dummit and foote solutions
• online Inequality Graphing in Two Variables Calculator
• compound interest worksheet grade 8
• sample algebra formulas involving percentages
• inequality worksheets
• solving hard sequences
• square root in radical form
• what are the characteristic of the graph of linear equation based on each equation
• Glencoe Biology Textbook wksht answers
• algebra online slover
• trinomial factor calculator
• factoring online
• steps
• accretivehealth aptitude question paper
• Linear Algebra with Applications by Otto
• general aptitude questions
• rules for multiplying square roots
• adding and subtracting integers worksheet free
• linear equations used in careers
• dividing Rational Expression calculator
• prentice hall mathematics algebra 1 chapter tests
• solving 2nd order ODE
• math ratio to simplest form calculator
• kids online algebra test
• formula for ratio
• college prep algebra 1 free online practice problems for arithmetic sequences
• free online course permutations
• change from standard form calculator
• FIND THE SUM OF TEN TERMS CONVERT
• Free Elementary Algebra Worksheet
• convert fraction to decimal
• online ti calculator quadratic equations
• find an equation for a quadratic function table of values
• maths percentage formulas
• log x quadratic equations purple math
• least common multiples of 30 and 33
• solve roots radicals
• learning algebra cartoon videos for kids
• roots and rational exponents math solver
• multiplying, dividing, adding, and subtracting equations worksheet
• free worksheet for function rules
• simplifying like terms worksheets
• percentage formula
• the cheat code to the math compass test
• adding and subtracting decimals 6th grade levels
• green globs copy
• two step word problems for 2nd grade worksheets
• rom code for ti 83
• adding square roots with expressions
• permutations and combinations notes
• lineal metre calculator
• how do you turn a decimal into a fraction on a calculator
• quadratic equation solver for ti-83
• how is slope and y intercept used in daily life
• algebra hard equations sheet
• what is the sqare root of negetive nine over sixteen
• implicit differentiation calculator
• techniques in adding, subtracting, dividing, multiplying fractions
• samples of Concepts & Application aptitude tests
• Ti-83 calculator rom
• physics concept development worksheet answers
• graphing lines equations powerpoint
• aptitude questions of java
• TI-83 solving systems of equations
• online calculator t 89
• scott foremans math answersfor grade 5
• adding and subtracting 4 digit numbers
• hyperbola equation regression
• Add/Subtract/Multiply/Divide Integers
• applied inequality worksheets
• common denominator greater than less than
• glencoe algebra 1 answer key
• solve for specified variable
• solve rational expressions calculator
• prentice hall literature worksheet 11th grade answers
• examples of real life situations when people use scientific notation
• Free Math Question Solver
• free 9th grade english worksheets
• exponent variable
• prentice hall algebra 1 practice workbook
• complex linear equations+solving polynomials+practice samples
• mathematics trivia
• algibra
• Absolute value calculator
• factoring program for TI-84 plus
• how to use algebra tiles to expand a cubic equation
• solve a second order function
• applied math vocabulary glencoe 07 chapter 4
• solving simultaneous equations in casio calculators
• www.dividing fractions and mixed numbers calactor/.com
• nonhomogenous second order differential equation
• the method for the highest common factor in mathematics
• nonlinear differential equations on matlab
• math worksheet patterns and rules using variables grade 6
• Radical calculator
• maths yr 10 exam question
• What is the Greatest Common Factor of 76, and 86?
• finding integer patterns worksheet
• simple explanation of what is the difference between cube and cube root
• factors two calculator
• solving cubed quadratic equations
• solving simultaneous nonlinear differential equations with Matlab
• adding decimals worksheet assessment
• java code accepts fractional number
• ca simple interest paid worksheets
• 9th grade transition to algebra
• compare,convert and order fractions and decimals
• First Order Quasilinear Partial
• teaching exponents to kids
• adding fractions with variables worksheet
• algebra riddle
• conversion fraction to decimal and back sample test
• nonlinear matrix matlab
• how to slove quadratic equations using factorisation
• rational expressions factoring calculator
• mixed numbers to decimal calculator
• TI-89 solve system
• freeworksheetsmaths
• printable 1st grade questions
• how do you calculate the least common multiple
• how to solve rational expressions
• free algebra function rule worksheets
• maths study worksheets for year 10
• software for algebra
• 4th grade algebra
• GRADE FOUR FLOW DIAGRAMS MATH WORKSHEETS
• Graphing linear functions, worksheets
• algebra exercises solved
• how to work a beginner algebra problem 3rd grade
• What is the square root of 108, in simplified ratical form
• how to find the a variable in the vertex form equation
• equation elimination calculator
• multiply and divide rational expressions
• mathematical trivia
• algebraic expressions worksheets 6th grade
• difference quotient problem solver
• java range of numbers one variable
• long division polynomials Ti-83 calculator
• 4th root of 8 squared
• solving quadratic equation by square root property
• symmetry, gcse, worksheets
• square roots and exponents
• solving simultaneous equation 3 unknowns
• worksheets series nth term practical real life
• "integer operation worksheet"
• multiply fraction test
• add negative fractions worksheet
• "Simplifying algebraic expressions" worksheet
• AJweb
• square root of 85
• summary: hyperbola; parabola; exponential graph "shifting"
• practice masters houghton mifflin Algebra and Trigonometry ( fractional Coefficient)
• Pictograph Worksheets For Elementary Students
• free automatic factoriser
• algebra AND solving for x AND worksheets
• convert decimal to mixed number
• Easy way to teach circumference
• finding least common denominator calculator
• solve nonlinear ODE second order matlab
• algebra linear equation
• solve rational expressions online
• ti86 improper fraction to mixed number
• easy way to find LCM
• how to solve functions of x
• free worksheet elementary variables
• rudin solutions
• 6th grade math exponent worksheets
• convert linear meters to m2
• aptitude papers for cat online
• solve for y then graph
• calculator radical
• Differential Properties worksheets
• teach compound interest algebra II
• simplifying square root expressions
• learning algebra factoring
• printable order of operation with exponents worksheets
• solve initial value second order differential equation
• ks2 sats sample paper
• free worksheets for reducing fractions for 6th grade
• ti 85 + matrices vector + beginner
• solving quadratic equations by factoring and graphing free worksheet
• solve nonlinear simultaneous equations in matlab
• standard form worksheet GCSE
• Mathematics: Applications and Connections, Course 1 answer key
• what the fourth root of 3
• algebra square roots
• trivia about inverse variation
• 9th garde homework helper
• integer adding subtracting worksheet
• Factoring trinomials using TI-89
• orders a fraction from least to greatest
• exponent erpressions worksheet
• pre algebra equations book
• printable math worksheet on multiplying integers
• how to create a test for 6th grade
• scope of college algebra
• create your own multiply and division worksheets
• algebra power 3
• math sheet third grade
• Math help + Year 11
• ks2 scale maths word problems
• explaining algebra
• adding interger worksheets
• online cost accounting books
• first grade ordering numbers sheet
• powerpoint presentation on partial differential equations
• a math tool to solve math expressions
• aptitude question
• third grade algebra printables
• High Schools using logos from College
• rationalizing denominator online calculator
• factor quadratics calculator
• greatest common factor equations
• ti89 polar
• java codes. sum of the number to 100
• how to solve multiple variable differential equations in maple
• transformation practise learning for yr8
• mcdougal littell worksheet answers
• root difference of two square
• least common multiples chart
• Quadratic equation on a ti 89
• puzzle pack passwords+ti 84
• logarithms for dummies
• solving using add/subtract integers worksheets
• positive and negative calculator
• do you square first in equations
• simplifying rational equations calculators
• TI-84 Emulator
• formula to out prime factor
• squARE root expressions
• free worksheet exponents with fractions
• solve systems of linear equations ti-84
• least common multiple of 14,44
• Holt: Algebra 1 - Solution Key
• math formular to image online
• use excel to solve simultaneous equation
• quantitative aptitude test+examples+free download
• The National Topographic Maps with a scale of 1:1,000,000 are 1 degrees N-S by 4 degrees E-W
• lcm powerpoints
• basic Symmetry Elementary Lesson worksheets for 2nd grade
• laplace ti-89
• system of linear equations with excel solver
• calculating fractional, coefficients
• two-step equations worksheet
• online calculator with negatives
• trig application problems test
• algebra solve
• factors multiples y7 worksheet
• slope of a quadratic formula
• McDougal Littell Geometry Book Answers
• whats the algebra sign for addition
• learn algebra audio way
• 2nd order differential equation solver
• solving slope problms algebra
• a copy of the houston algebra 2 book
• variable equations + percents
• examples of math trivia with answers
• rules in dividing negative numbers worksheet
• measure of fit third order polynomial
• subtract 1 and 2 worksheets
• evaluating expressions with signed numbers worksheets
• Type in Rational Expressions problems and get answers
• free printable algebra 1 activities
• download rom code ti 92 plus
• powerpoint presentation quadratic equations vertex
• simplify the exponential expression
• pictograph worksheets
• factorising with exponents Gr.10
• answers to introductory algebra problems
• second order differential general solution plot
• algebra sums
• convert mixed number to percent
• ti 83programs
• ppt on vector algebra
• how to do terms, like terms, coefficients, and constant terms in pre algerbra
• worksheets sales tax word problems algebra
• scientific calculator cubed root
• factoring quadratic equations worksheets
• adding and subtracting positive and negative free worksheet
• Finding slope from a table worksheets
• ti rom code download
• rom image to tx
• "answers to dummit and foote"
• algebraic expressions and how to solve
• how to calculate linear programming using a graphics calculator
• texas bar outline cheat sheet free
• solve radical exponents
• c decimal to base 8
• practice algebra 2
• mcdougal littell online books
• cube root of 16
• yr 8 maths exam test
• rATIONAL EXPRESSIONS calculator
• accounting vocabulary free download
• mixed review decimals worksheet adding subtracting multiplying
• linear equations worksheets with solutions
• free printable square number worksheets
• polynomial expression calculator
• how to use solve function on 89
• integers multiply and divide games and questions
• converting liner metre into metre squared
• free worksheets- permutations
• answers to elementary and intermediate algebra second edition Mark Dugopolski
• math expression worksheets
• cost accounting book by the best author, for chartered accountant students
• convert price of cubic foot to lineal foot price
• examples of mathematical poems
• holt algebra 2 worksheets
• third order linear equation solving
• 5th grade subtracting fractions worksheets
• free online algebrator
• what is the bar in top of a decimal called
• mastering physics answer key
• algebra fractional equations
• factoring a binomial cubed
• +Adding Subtracting Integers Worksheets
• algbra problems
• solving second order non homogeneous linear equals constant
• online least common denominator calculators
• expressions for square root
• simplifying rational expressions step by step
• examples of math trivia and facts
• math chart examples of hills
• glencoe math workbook 7 grade louisiana
• easy algebra
• the hardest math equation in the world
• free online mathematics books for kids
• mixed fractions algebra worksheets free
• subtraction algebraic expression
• Glencoe algebra 2 answers
• "linear programing" pdf & matlab
• long division of polynomials calculator
• how to use Log button on TI-83 plus calculator when doing Logarithms
• free algebra calculator download
• write a mixed number as a decimal
• Using Function table tosolve equations worksheet in 8th grade
• factoring trinomial worksheets
• solving addition equation worksheets
• graphong linear equation worksheet
• free on line Execsices of Factorial Analysis
• outliers in pre-algebra
• middle school math with pizzazz! book e
• TI 84 emulator downloads
• holt pre-algebra: homework and practice book answer key
• Modern Biology Study Guide Answer Key
• Prentice Hall book answers
• free mathematics for college help
• maths algebra tests year 10
• Solve multivariable equations
• free online stories about fractions
• what's the formula in adding a percentage to a number
• Math Lesson plan integrated for 6th grade on fraction
• free online question for physics for class 9th
• sample lesson plan in deriving the vertex form of a quadratic equation from the standard form
• radical expressions math tricks
• prentice hall mathematics algebra 1 lesson 3-5 answer key
• linear algebra with application otto bretscher sollution
• "hungerford abstract algebra solutions"
• algerbra expressions
• ONLINE MATH PROBLEM SOLVER
• college algebra clep
• multiply algebraic expressions involving brackets
• parabola calculator to find equation for two given two points
• all four operations with integers worksheets
• simplify the answer as product of a prime numbers trigonometry
• lowest common denominator worksheets
• cost accounting ebook download
• free 7th grade ratio math worksheets
• pre algebra formula chart
• maths and english yr 8
• solve nonlinear equations in matlab
• "square root" formula Javascript
• how do you find the cube root on the ti-83
• Past Exam papers in Accounting School doc
• adding integers worksheet
• solving complex numbers in radicals
• printable integers tests
• mixed numbers [decimals]
• holt algerbra I rinehart and winston
• solutions of aptitude questions'
• online books with example sum of truth table in discrete mathematics
• linear non linear function word problems
• nc ged math test answers
• algabra
• rational expression calculator
• hardest math problem for a sixth grader
• power graph equation
• how we can free online to help to learn middle school algebra
• how to calculate 3rd order in excel
• lineal metre
• full free solution college physics fifth edition pdf
• McDougal Littell Algebra 1 Answers
• pics of math symbols
• green's theorem triangle example
• trigonometry conversion table
• holt math worksheets answers
• matlab coupled equation differential
• ordering fractions from least to greatest calculator
• free online test sheet module 1 maths
• cubic, quadratic, exponential, logarithmic graphs
• college algebra software
• kumon worksheet
• algebra common denominator
• radical simplifying calculator
• Problems and step by step solution (Physics)
• how to solve differential equations using MATLAB
• online foil calculator
• sum number java
• free scale math activity
• generate gr car polynomial matlab
• pre algebra printouts
• factoring polynomials in third degree
• combining like terms calculator
• square roots activities
• negative subtracting positive
• adding with variables worksheet
• formula for fractions into decimals
• Refresher on Pre Algebra Worksheet PDF
• 10th grade math problems-integrated 1 help?
• mixed number to a decimal
• simultaneous quadratic equation calculator
• hard math games yr 8
• RATIO and proportions+grade 7+worksheets+free
• simplifying like terms activity
• mathematic exercise grade 10 online
• subtracting and adding fractions worksheets
• adding subtracting multiply divide decimal worksheets
• free online algebra math test
• glencoe algebra
• 2. Form each of the following: • A linear equation in one variable • A linear equation in two variables • A quadratic equation • A polynomial of three terms • An exponential function • A
logarithmic function
• apititude question and answer downloads
• expanding brackets in algerba
• Adding Integers Worksheets
• ratio worksheet free
• laplace ti 89
• factorise online
• algebra 2 saxon math answers
• Calculate X Y Intercepts
• texas bar exam excel outline cheat sheet
• GA Accelerated 9th grade Math 1 tutoring
• maths angle revision year 8
• third order equation solver
• how to solve trinomials
• powerpoint presentation on linear equations in two variables
• 9th grade Online Math Games
• a level maths revision quadratic inequalities
• parenthesis with addition and subtraction
• interpreting a hyperbola
• TI-89 using pdf
• Mcdougal Littell algebra 2 answers
• simplifying radicals worksheets free
• solving differential equation in MATLAB
• glencoe study guide answers
• cheat sheet for algebra for year 8
• ax+by=c equation
• radical equations lcd
• chapter 4+page 98+exercise 1+Rudin+solved problems
• how do you multiply a fraction by a minus sign
• finding lcd algebra
• paul foerster solution manual
• algebraicly solving three variable linear equations
• free gcse maths worksheets
• method of characteristics pde heat equation
• LIFE SCIENCE EXAM PAPERS FOR GRADE 11
• common denominator solver
• mixed number to decimal
• free 9th grade Algebra lesson plans
• decomposition on TI-83 Plus
• printable cross number algebra puzzles
• highest common factor activities
• free printable college algebra worksheets
• least common denominator tool
• lcd solver
• math quiz algebra 2 (radical expressions)
• rational exponents and roots
• mixed number convert to percent
• Pearson Education, Inc. Algebra Readiness Puzzles
• trigonometry test for 11 year olds
• алгебратор
• factors and exponents for kids
• free 7th grade english worksheets
• e-books on Costing, accounting
• adding subtracting decimals worksheets
• Samples of college algebra "Math Tutor"
• cost accounting ppt
• factoring quadratic with 5 step method
• saxon algebra 1 answers
• Finding Where Two Graphs Intersect Using Your Graphing Calculator
• revision maths for yr8
• completing the sqaure
• addition of rational polynomial worksheets
• 6th Grade Math Dictionary
• maths - equation of hyperbolas
• convert base ti-89
• how to simplify radical ti 83
• math algebrator
• nonlinear differential equations
• finding slopes in word problem worksheets for algebra
• simplifying the variable expression calculator
• mcdougal littell world history reading study guide answers
• gcse maths-compound interest
• How Do You Turn a Faction into a Decimal
• chapter 6 polynomials
• simplify radicals generator
• fraction problem solver
• Glencoe Math: Exponents
• matlab van der pol equation solution to initial value
• free maths test papers for year 8
• solving third power polynomial
• rule to solve quadratic equations
• factoring polynomials online calculator
• finding roots with TI-83 plus calculator
• ti-83 how to get the cross product
• grade 10 maths paper
• coordinate plane powerpoints
• free printable worksheets on how to solve a linear system by graping
• ordering numbers printables 3rd grade
• how to factor quadratic expression
• factors worksheets ks2
• gallian solutions
• free download maple7 for mathematical
• two step equations + worksheets with jokes
• simplifying radical equations examples
• algebra functions problems for eighth graders
• quadratic equation for curve crossing points
• Learn algerbra
• solve for multiple variables in fractions
• lesson plans for equations using integers and variables
• Intermediate accounting MCQs
• free math problem solver
• solve nonlinear differential equation to linear equation
• GCSE Maths algebraic long division and multiplication
• exponential growth function examples on a ti-83 calculator
• equations with rational expressions and graphs
• liner equation
• free printable algebra questions grade 9
• mcdougal littell World History Worksheets
• grade 10 maths hyperbolas online problems
• Model Equations Fifth grade
• free polynomial test
• online decimals to fractions calculator
• TERMS THAT HAVE THE SAME VARIABLES
• Examples for Hand.java
• matrixpad
• answers to algebra 2 workbook
• how to solve multi step algebra equations
• 1st Grade Printable Math Test
• 1st order linear differential equation non homogeneous
• GED past papers
• caching alg 2 answers
• math percentage formulas
• approximate radical expressions
• solving formulas for a variable
• chapter 5, lesson 6 multiplying and dividing fractions
• free tutorial on cost accounting
• Third+Grade+Math+Help
• maths - algebra grade 10
• equation solver excel
• PROGRAMING FOR SOLVING PROBLEM LIKE L C M &G C D
• adding and subtacting positive and negative number worksheets
• math workbook sheets
• how to check integer is divisible by 5 in java
• Absolute Value,Radical and rational Equations
• answers to my math lab elementry statistics
• quadratic inverse calculator
• standard form of a linear equation using coefficients for algebra II
• demonstrate a beginning algebra problem
• calcul maths for primary
• mcdougal littel algebra 2 texas edition answers
• dividing integer word problem examples
• Equations with fractions worksheet
• c++ solve second order polynomial
• Glencoe Algebra 1 © 2004 Teacher edition
• function to a quadratic formula in visual basic application
• factoring binomials with the same variables
• glencoe mcgraw-hill algebra 1 7-7 answers
• free Evaluating Expressions worksheet
• second order differential equation open loop
• changing fractions into decimals on a texas instrument calculater
• how to factor trinomials with complex numbers
• ti-84 simulator
• learning algebra free
• ti calculator for pocket pc
• easy probability worksheets ks4
• multiplying square roots exponents
• downloadable trig calculators
• Simple math decimals fractions multiply divide
• using function to solve equation on table
• elementary school solving variable expressions worksheets
• download maths activities for 1st grade tulsa
• maths lesson plan on factors and highest common factor
• rules in conversion of decimal to binary to octal to hexadecimal
• holt Algebra 1 online
• completing square solve quadratic negative coefficient
• math rule for least common multiples
• online limit calculator
• aptitude test download
• adding/subtracting integer worksheets
• free algebra word problem solver
• exponential expressions problems
• Algebra Properties of Equality Worksheet
• finding the LCM of monomials
• java programing codes for finding the LCM of two numbers
• free download of ks3 science papers
• can i have a free test on positive and negative integars
• which calculator for college math
• McDougal Littell Geometry Help
• differentiation with graphing calculator
• glencoe pre algebra masters
• graphing rational function absolute value
• third Grade Algebra exam
• ti 83 log
• Rudin Homework Solutions
• foil cubic algebra
• adding and subtracting positive and negative numbers worksheets
• online radical simplifier
• factor sentences mathes
• equation solving multiple variables non-linear
• worksheet answers
• factorising quadratics calculator
• graphing chain rule calculator
• defenition of multi metre tester
• ti89 solve quadratic formula
• Scale factor
• x root calculator
• free pre-algebra word problems
• algebra formula- what is speed if go against the wind
• Java labs with rational equations
• how to factor a cubed polynomial
• adding, subtracting, multiplying and dividing decimal worksheets
• download holt , rinehart, winston physics CD-ROM
• simplifying exponential expressions
• problem in fluid mechanics mcq
• math work sheet working with faction
• how to write an algebraic expression for ninth grade math
• statistics math test for grade eights
• combined probability powerpoint lessons
• cheat algebra answer generator
• unit circle simplifying radicals
• convert,number,base
• TRIGONOMETRY RATIOS .PPT
• free aptitude test questions and answers in finance
• printable free ged work pages
• Add two integers w/o using '+'
• Slope intercept equation worksheets
• mathd printable worksheets gcse
• factoring calculator trinomial
Google visitors found our website today by typing in these algebra terms:
│permutations and combinations powerpoint │sample test in trigonometry │help with college algbra │solving for variable "area of a rectangle" │
│math learning beginner work sheet │yr 8 math │complex quadratic equations │multiplying negative equations worksheets │
│square root of fractions │square root with variables │simplifying radical expressions calculator │TRIGOMETRIC SAMPLE PROBLEMS │
│calculator that adds and subtract integers │walker 3rd edition physics text answer key │pictures on graphing calculator answers │quadratic equation factorer │
│matlab solve second order differential equations │algebra cheats │The Ti 84 Plus online │quizzes on integers on adding and subtracting │
│poems about math │adding multiple addends decimals printable │partial quotient division worksheets │Palm OS Solving Linear Equations │
│pre algebra free test │physics examples with answers │free 11+ printable exam papers │completing the square calculator │
│solving probability on a ti 89 │ti-89 integration sample │Systems of Inequalities Algebra 2 ppt │downloadable calculators │
│calculator with fractions decimals integers │download books on accounting │free seventh grade math printable worksheets │java sum output │
│finding vertices in solving systems of inequalities by│Differential Properties preAlgebra │greatest common factor and least common │convert square root exponent │
│graphing │ │multiple, algebra │ │
│free Advance Excel Practical Assignments │Algebra printable equations free │steps to solving radicals │Exponent Equations subtraction │
│teacher manauls for scott foresman 6th grade │free oline matriculation tenth standared maths │free polynomial factor program │dividing polynomials solver │
│ │book │ │ │
│solving 2nd order linear homogenous differential │non-homogeneous second order differential │algebra software │cheating factor of trinomial │
│equation │equations │ │ │
│difference between permutation and combination │simplifying algebraic expressions using the │free printables logical games for 5th grade │simply equation graphices calculator │
│ │distributive property │ │ │
│on line homeshool math 6th grade │free algebra sixth grade worksheet │factor the difference of two squares │worksheet finding common factor of an │
│ │ │calculator │expression │
│algebra cube square │divison distributive equation │steps on how to get cube of a binomial │free calculator online Add or subtract │
│ │ │algebra │rational expressions │
│11+ exam papers printout │algebra percentage formulas │finding greatest common factor worksheet │solve algebra equations │
│exponentiation ax^2 │first-order linear differential calculator │ti-83 trinomials │Basic algebra answers │
│algebra clep test │free T 84 sientific calculator on internet │iowa test of algebra aptitude questions │teaching slope to eighth grade lesson plans │
│equations work sheet puzzle │ordered pairs worksheet pictures free │greatest common factor worksheet 2nd grade │mixed numbers 6th grade word problems │
│math worksheet +combining like terms │statistics font download │general maths exam unit 2 revision │worksheet on evaluating expression │
│worksheet multiply scientific notation │developing skills in Algebra Book C Solving │solve a third order quadratic equation │how to teach my son pre algebra? │
│ │Inequalities │ │ │
│cube roots on ti-83 plus │Free tutorials on mathematica │convert mixed numbers to decimals │conceptual physics powerpoint │
│gcse maths for dummies │combination rules +algerbra │FIRST GRADE MATHS WORKSHEETS │INT 2 Homework: algebra answers │
│trinomial word problems │adding, subtracting, and dividing square roots │how to solve inequality with exponents │mathematics factoring trinomials diamond │
│ │with a online calculator │ │problems │
│adding to 30 worksheets │year 10 trigonometry notes for the SC │trinomial squares real life │handbook of accountancy in pdf format │
│number patterns pre-algebra worksheet printable │how to teach sum and product of cubic roots │ti-89 laplace transforms │worksheets on l.c.m. for third graders │
│ │coefficients │ │ │
│find the greatest and least solutions possible for an │multiplying and dividing negative numbers │sixth grade sample +apptitude tests │solving quadratic equation by the method of │
│equation │worksheets │ │completing the square │
│Free Sums puzzles for 8 year olds │algebra 2 online tutor │solve factors online │how to type limits for graph in graphic │
│ │ │ │calculator │
│greatest common divisor matlab code │laplace transform online calculator │general math cheat sheet for year 11 │easy way to find imaginary roots equation │
│ │ │ │factors │
│simplifying radicals │quadratic factoring using square roots │c language aptitude questions │linear combination method calculator │
│sample algebra worksheets │trinomial factoring calculators │GREATEST COMMON FACTOR = 479 │how do i work out how to simplify an algebraic│
│ │ │ │equation? │
│trigonometry word problem for college with answer │adding, subtracting, and dividing square roots │TRIGONOMETRY CHEAT SHEETS │algebra calculator online rational expressions│
│ │with a free online calculator │ │ │
│math: scale factor samples │online nonlinear equation solver │worksheets for adding and subtracting │cpm pre algebra assessment test │
│ │ │decimals and placing decimals │ │
│worksheets for mcgraw biology 2007 │beginners algebra practice │9th Grade Algebra Problems │free online 12th grade advanced math placement│
│ │ │ │test │
│solution of a nonlinear differential equation │where is the log key on a TI-83 Plus calculator │free percent worksheets │scientific notation worksheet │
│examples of math trivias │online radical expression calculator │polinomial fitting VBA │Algebra 1 Homework Solver │
│free powerpoint games grade 11 │ti-89 examples │program to Subtract number from left to right│trinomial calculator │
│maths work sheet for 13 yr olds │T-83 online │factoring difference of cubes calculator │inequalities worksheets │
│equivalent fractions/free reproductables │multiplying and dividing rational expressions │schoolwork sheets for 3rd and 4th grade │"nonlinear equations" matlab │
│ │calculator │ │ │
│5th grade factoring lesson plan │parentheses with addition and subtraction │physics caculator │division expressions │
│solve quadratic equation integer exponent │accounting worksheets grade 8 │program to simplify decimals into radicals │solve simplify radicals │
│free polynomial solver │solving nonlinear ode power series maple │solve nonlinear equations in terms of │domain of a function solver │
│ │ │equations in matlab │ │
│11th grade algebraic expressions │convert from expanded form to decimal form │Simple Solutions Math │plus and minus trigonometric formulas │
│ │calculator │ │calculator │
│How to calculate 2 variable algebra │Free math papers for theird graders │game on adding and subtracting integers │free first grade math sheets │
│free pre-algebra tutoring software │basic maths cheat sheet │quadratic trinomial calculator │finding slope from an equation worksheets │
│adding decimals │ │ │ │
│ │factoring multiple variable │Linear equation worksheet │decimals calculator │
│practice │ │ │ │
│QUADRATIC EQUATIONS USING SQUARE ROOT PROCEDURES │parabolaword problems │using casio calculator │systems of equations TI 83 │
│the difference between fractions and rational │fraction subtractor │adding negative integers worksheets │discriminant for third degree polynomial │
│expressions │ │ │equations │
│solving second order homogeneous ordinary differential│multiplying fractions worksheet │solve system ti 89 │cube root fractions │
│equations │ │ │ │
│adding subtracting multiplying dividing decimals │intermidiate algebra for dummies │Subtracting Negative Integers │solve the polynomial inequality and graph the │
│worksheets │ │ │solution set on a number line │
│Download calculator TI-83 │prealgreba │teach me algebra free │standard for of equation power point │
│solve differential equations in matlab │free download objective non verbal questions with│solving cubed polynomials │free online algebra solver answers │
│ │answers │ │ │
│simultaneous equation for 3 equation │how to subtract two times and convert to a whole │MATHS FORMULARS TO FIND MISSING VALUES │use free access code pre algebra online │
│ │number in excel │ │textbook prentice hall │
│algebra helper │free online fraction solver │integers worksheet │multiplying and dividing fractions worksheet │
│"algebra 2 an incremental development" │pre algrabra │factoring equation calculator │examples of how to solve a rational expression│
│ │ │ │with unlike denominators │
│how to solve fractions equations │Adding multiple integers │answers to INT 2 Homework: algebra │dilations for 8th grade worksheets │
│glencoe pre algebra msters │y-intercept example and explanation │matlab simultaneous differential equation │ │
│Algebra Help Monomials │what is a proportion? worksheet │step by step n completing a square for │algebraic equations worksheet │
│ │ │functions │ │
│factoring higher order polynomial │'Free worksheets + number sequences + negative │third order polynomial │hyperbola equations │
│ │numbers' │ │ │
│simultaneous equation with 3 unknowns │convert declare bigdecimal │factoring quadratic equation solver │ontario, grade 6, maths, sheet │
│what is the least common multiple for 3 and 21 │rotation free worksheet │mcdougal littell │tables and orderd pairs free printable │
│ │ │ │worksheets │
│prentice hall mathematics Pre Algebra Answers │second order differential equation with spherical│Decimals to radicals │nonlinear functions worksheet │
│ │in Matlab │ │ │
│university of chicago school mathematics project │comparing and ordering integers worksheet │7th grade problems in algebra with variables │usable T-83 calutor │
│workbook for grade 4 │ │ │ │
│math ,scale for a graph world dictionary │java convert int to time │adding and subtracting mixed numbers fifth │teach free algebra online │
│ │ │grade │ │
│rational equations to linear equations calculator │find the lcd of a fraction calculator │simplify, radical, absolute value │solving simultaneous equations excel │
│online calculators that solve rational expressions │Equation Calculators For 3 unknown │solve quadratic equation using matlab │easy way to solve lcm │
│aptitude question and answers │Pre Algebra Chapter 2 Test A │free download e-book cost accounting │Pemutation activities + 3rd grade │
│rational expressions online calculator │algebra quizzes for year 9's │Learning Basic Algebra │how do you do roots over 3 on ti-83 plus │
│homework help for inequalities for 7th graders │square root problems for 4th graders │using excel solver linear equations │solve limits online │
│Answer key in factoring polynomials │examples of solving for three unknowns │online graphing calcu │holt algebra │
│an expression containing a square root │Permutation And Combination GRE │HOW TO ORDER LEAST COMMON DENOMINATOR │advance algebra trivia and tricks │
│fraction variable calculator │Using MATLAB to solve diff │Solving equations with variables in the │worksheets on math word problems add subtract │
│ │ │exponential and linear variables │multiply │
│"free e book"+"english grammer" │ti-84 how to do square root to the nth power │How to solve a non linear equation in Matlab │Algebra 2 (Glencoe Mathematics) answer │
│trig. fuctions w calculator │Algebra from UCSMP │worksheets on solving equations with │prime number decomposition ks3 example │
│ │ │variables on both sides │ │
│free 10th grade math worksheets │"systems of equations on the TI 83-plus" │solving + partial differential + matlab + │factoring equations including fractions │
│ │ │nonlinear │ │
│multiplying and dividing radical expressions │whats the difference between a texas ti-84 and a │javascript divisor │simultaneous nonlinear equations │
│calculator │texas ti-89 graphics calculator │ │ │
│I need an online calculator that will solve any math │rational expression answers │cubic root solver │printable coordinate worksheet for year 2 │
│problem I put in and show the steps │ │ │children │
│free year 7 long division test │how do you know if a linear inequality represents│age and mixture problems │exponent worksheet elementary │
│ │the area above the line? │ │ │
│attitude test .pdf │trinomial factoring calculator │simplification of absolute value equation │simplifying radicals with variables │
│homogeneous solving second order ODE │aleks problem solver │worksheets solving equations with two │General Aptitude QUestions and answers │
│ │ │variables │ │
│algebra game, java download │LCM lesson plans │who invented 3d algebra │softmath │
│algabra adding │solving third order equations │different problems and solutions in math │free printable exponent worksheets │
│ │ │algebra2 │ │
│solving 3 variable polynomial equations │3rd grade math work │Mathematical Standard Notation │factoring trinomials calculator expression │
│factoring polynomials for dummies │nonhomogeneous linear systems matlab │multiple differential equation + MATLAB │Second Order Differential Equation solution │
│ │ │ │graphs │
│simple interest worksheets │free extra sats lessons on pc year 6 │algebra grinds │solving reducing rational expressions │
│8th grade algebra worksheet │math exercises grade 4 │solving quadratic equation by extracting the │ratio worksheet for 7th graders │
│ │ │root │ │
│Combining like terms activity │Refresher on Pre Algebra Worksheet PDF files │simplify adding, subtracting and multipliying│positive exponents worksheet │
│Square root of a perfect square monomial calculator │ordering fractions and decimals from least to │vertex form calculator │graphing calculator tips TI-86 cubed root\ │
│ │greatest │ │ │
│convert decimal to equation │Examples of Math Trivia │algebra for dummies online │rational expressions equation downloads ti 83 │
│Free Math Worksheets/slop of a line │suare roots │exponent math charts │free printable yr 4 maths tests number │
│solve a radical with fractions and variables │How do you convert a mixed number into a decimal?│how to work out the square root using prime │7th grade math lessons pre algebra │
│ │ │factors │ │
│free english 9th grade worksheets │math work taxes,algebra/seventh grade with │multiplying and dividing fractions with │lowest terms calculator │
│ │answers │unlike denominators │ │
│grade eight math algebra │calculating LCM │quad program ti-83 │cubic meters maths worksheet │
│variable solving in matlab │1st grade math equations │investigatory games in geometry │free adding and subtracting intergers │
│ │ │ │worksheets │
│Yr 8 inequations quizzes │finding variables in exponents │simplify expression worksheet │java convert string to fraction │
│kumon maths worksheets │Squaring Fractions │Algebra 2 Worksheets │simplify radical calculator │
│convert fractions to decimal mixed number │Algrebra 2 tutor │learn elementary algebra │denominator calc │
│how do we solve equations with rational numbers? │lattice multiplication worksheets │free pre algebra graphing worksheets │simplifying radical that are not perfect │
│ │ │ │squares │
│y intercept and slope exercises │convert java time │printable practice algebra 2 │bouncing ball t-i 83 vertex │
│Algebra Story Problems solver │free intermediate algebra for dummies │algebra worksheets grade six │rational square root solver │
│free online maths + what is guass │Maths free online revision year 10 │quadratic │statistics grade 9 test review │
│integration by substitution calculator │3 things must be true if a radical expression is │mathematics poems │merrill math │
│ │simplified │ │ │
│solving composition function using fractions in │online graphing calculator trigonometry │year 11 maths problem solver │the quadratic formula on a TI-84 plus │
│mathematics or inverses │ │ │ │
│how to solve fractions on a ti-83 plus │divide rational expressions involving polynomials│fractions with positives and negatives │Multiple Choice Questions on 9th Standard │
│ │ │fractions │Maths │
│quadratic form of linear equation │dividing decimals by integers worksheet │free usable online ti 83 │ti-83 solve for b linear equations │
│runge kutta matlab second order │college agebra software │kids help maths (scale) │divisiblity rules, practice, worksheet │
│simplify expressions with zero and negative exponent │online tutorial trigonometry bearing │hardest maths equations │solving non-homogeneous non-linear equations │
│worksheet │ │ │ │
│simplifying exponential expressions calculator │"abstract algebra" answers │"mcdougal littell algebra 2 online answer │pre algebra worksheets │
│ │ │key" │ │
│introducing algebra 1 │algebra solutions program │Coordinate Plane Worksheets │clearing equations of fractions calculator │
│definition of total factor vaiables │C# calculator free example │check answers for system of equations by │Factorise algebric expression work sheet │
│ │ │addition method │activity grade 9 │
│WORKSHEET ON SCALE FACTOR FOR 6TH GRADE │program to add,subtract,multiply,divide and │pre algebra software │teaching equations to 5th grade │
│ │compare fractions │ │ │
│onestepequations.ppt │asset exam for 4th graders │worksheets to help with point slope in linear│statistics Y intercept slope │
│ │ │equations │ │
│install maths font pocket pc │coordinate grid printable homework │high school math worksheets │9th grade algebra 1 formula sheet │
│Compare Greek numbers to our numeration system today │lambda symbol in ti 84 plus │Adding and subtracting worksheets for 3rd │online factoring calculator equations │
│ │ │grade │ │
Google users found our website today by typing in these math terms :
│factoring calculator quadratic │grade 11 past exam papers │
│third root │accounting book + pdf │
│solve by completing the square calculator │what is intercept formula? │
│multiplying by 2-digit practice sheets │free ks2 math software │
│greatest common divisor chart │examples of math trivias │
│subtracting rational expressions multiple choice │free logarithm solver │
│factoring program for ti 83 │prentice hall mathematics answers │
│simplify algebraic terms with square roots and powers │simplify square root calculator │
│pre algebra with pizzazz answers │trigonometry online solver │
│solver by elimination │power equation graph │
│Topic 7-b test of genius │holt mathematic worksheet answers │
│eBook for Cost accounting of B com │exercices about conversions,addition and abstraction,simplification using│
│ │boolean algebra │
│free worksheets dealing with integers │how to solve a homogenous equation │
│formula for dividing a percent by a whole number │combining like terms worksheets │
│scale math term │arabic GCSE past paper │
│"How to convert a decimal into a mixed number" │algebra 2 math answers for logarithmic │
│year 9 probability homework sheets │subtraction of radical expressions calculator │
│how to solve complex number using TI-89 calculator │simultaneous equation with excel │
│free worksheet for kids in singapore │Difference between parentesis and brackets in equations │
│how to solve for a variable exponent │Free Printable Worksheets Grade 2-10 │
│factoring third order equations │SIMULTANEOUS EQUATION SOLVER │
│saxon math algebra 1 answer book online │second order nonlinear O.D.E solving numerically in matlab │
│circle equation using matlab │how to change base on ti-84 │
│online books for concept of truth table in discrete mathematics │worksheet adding and subtracting multiple numbers │
│prentice hall physics online │common denominator equation │
│greatest common denominator formula │free online square root calculator │
│in java using a loop to get the sum of the squares of the numbers entered │general form to standard form quadratic worksheet │
│least common denominator equation │addition and subtraction graph paper │
│"free math lesson plans" adding fractions │square root sample equations │
│beginning evaluate exponents worksheet │free adding and subtracting integers worksheets │
│calculator third root │how to find the square root without calcualtor │
│adding multiples of ten worksheet │ged word problems for pythagorean theory │
│expressions worksheets │the standard form of an algebraic expression │
│calculator steps │9th grade pre algebra worksheets │
│secondary sample test trigonometry │Solution 3rd edition introductory and intermediate algebra bittinger │
│ │beecher │
│learning algebra online │free online ti 83 calculator with binary │
│free printable test worksheet s on exponents │nonhomogeneous pde │
│Solving Radical Fractions │{searchTerms} │
│vhdl calculate mean │ti 89 delta function │
│mixed fraction into a decimal │fraction to decimal on ti 83 │
│graphing ellipse applet │Factoring perfect square trinomials online calculator │
│TI 84 roms │algebra and linear programming │
│basic mathematics freeware ebooks │factor tree worksheets │
│finding the slope of an equation examples │pizzazz integer worksheet │
│simplify expressions worksheets │tutorial for Excel VBA Programming for Solving Chemical Engineering │
│maths algebra key concepts basic children india │algebra 1 linear function test │
│algebra 1 chapter 3 resource book answers │get variable out of exponent │
│combine like terms worksheet │AS fractioning quadratic equations │
│aptitude test book free download │solve for x online │
│multiplying dividing fractions word problems │rational expressions calculator │
│Free Algebra Online │quadratic perfect square │
│math properties free worksheet │solving equations using substitution calculator │
│worksheet pages 6th │"factor 9 ti-84" │
│easy algebraic equations for kids │how to solve quadratic equations on texas instrument calculator │
│"factorizing algebraic expressions lesson" │adding and subtracting multiplying and dividing integers worksheet │
│subtraction of real numbers worksheets │How to find the least common multiple of two or more polynomials │
│Mathamatics │why ccalculate lcm and gcm │
│solve functions online │how many chapters are there in a elementary algebra workbook? │
│sample story problem of evaluating special triangle │exponents algebra worksheet │
│algebra 2 worksheets for chapter 3 resource book │quadratic two variables vertex form │
│percent equations │lleast common denominator calculator │
│easy elimination practice worksheet │Factoring trinomials calculator │
│kumon worksheet download │find all values for which the rational expression is undefined TI-89 │
│exponent Lyapunov Matlab two -dimensional maps │modern biology study guide answer key 10-1 holt rinehart winston │
│dividing fractions and exponents │graphing linear equations worksheet │
│Find number of positive integers, and the sum of these positive integers (ie ignore the negative integers), in a list of 10│dividing square roots with exponents │
│integers input by the user. │ │
│downloadable grade 10 maths exam paper 1 │equations involving fractions algebra 7th grade │
│Simplifying Radicals Calculator │Galahad, Matlab │
│6th grade writing practice sheets │fifth grade math worksheets exponents │
│Formula in Converting Decimal to Fraction │math + free + exponents + worksheet │
│my algebra calculator │multiply adding subtracting and dividing negative and positive integers │
│common algebra equations │free math worksheet two step equation │
│answering fractional graphs │online rational expression calculator │
│pictures for algebra for seventh standard │TI-83 plus instructions for quadratic equation │
│lesson plans; coordinate planeelementary school │FREE MATHS WORK SHEETS KS3 │
│summations math │in order least to greatest calculator │
│self learning college level algebra │algerbra 1 homework answers │
│solve 3rd order polynomial │Is there a calculator that factors polynomials? │
│Find Least Common Denominator Calculator │software algebrator │
│mental maths questions year 5 printable free ks2 │algebra(trivia) │ | {"url":"https://softmath.com/math-com-calculator/adding-matrices/algebra-calculator-with.html","timestamp":"2024-11-09T12:32:33Z","content_type":"text/html","content_length":"142948","record_id":"<urn:uuid:56fe1508-9080-48a7-b003-93ae9eb7322c>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00372.warc.gz"} |
Systems governed by Differential Equations Optimization
\(\) We present randomized reconstruction approaches for optimal solutions to mixed-integer elliptic PDE control systems. Approximation properties and relations to sum-up rounding are derived using
the cut norm. This enables us to dispose of space-filling curves required for sum-up rounding. Rates of almost sure convergence in the cut norm and the SUR norm in control … Read more | {"url":"https://optimization-online.org/category/nonlinear-optimization/systems-governed-by-differential-equations-optimization/","timestamp":"2024-11-04T09:10:17Z","content_type":"text/html","content_length":"113215","record_id":"<urn:uuid:b4ac80bf-f1cb-45a6-aa75-c1ba36a8754d>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00794.warc.gz"} |
Coefficients in simple equations, y not represented all the time
Answered: Zuber Khan on 19 Sep 2024
I have the equations x=0, and z=50000. Written as a=x==0, and b=z==50000. Potentially in other examples maybe all variables (x,y,z) are represented in the equations, I would like matlab to tell me
the coefficients of x,y and z in both equations. For example the x=0 equation would tell me I have 1 x, 0y, 0z as the coefficents. Another exmaple is the other equation would tell me I have 0x,0y and
1 z.
2 Comments
1 view (last 30 days)
Coefficients in simple equations, y not represented all the time
Okay I have it so that I use children to seperate the equations from the numbers at the end
and then I can get the coefficients but I want it so that it will tell me if there are no xs or zs or ys rather than just omitting the 0s
dpb on 12 Jan 2021
Edited: dpb on 13 Jan 2021
More info on what you're doing and how to use this probably would lead to some other feedback, but if you have the Statistics or CurveFitting TB there are options to use a fitobject or linearmodel
object that contains information on coefficients, variables, etc., ...
Other than that, a mechanism such as holding the coefficients vector where the terms are related positionally is about the best one can do in ordinary procedural code; one could write a special class
of one's own to deal with somewhat similarly as do the two objects above without all the bells and whistles.
Or, there is the Symbolic TB to treat that way...I've never used/had it so not all that familiar.
Answers (1)
When working with symbolic expressions in MATLAB, you might find it necessary to extract the coefficients of specific variables, ensuring that even those variables which are not present in the
expression are accounted for, with a zero coefficient. To accomplish this, the coeffs function of Symbolic Math Toolbox can be a valuable tool.
Firstly, you need to define the symbolic variables and equations involved as follows.
% Define symbolic variables
syms x y z
% Define the equations
eq1 = 2*x == 0;
eq2 = z == 50000;
eq3 = x+y-3*z == 23;
Then you can write a function to extract the coefficients. Kindly note that you need to take care of the use cases as mentioned below to eliminate any logical errors in the code.
• Case 1: When the mathematical expression has a single term. For instance, x, 4*z etc.
• Case 2: When the mathematical expression has more than one term. For instance, x-20, 4*z-3*x-5 etc.
I am attaching the code snippet below for your reference.
% Create a list of all variables
vars = [x, y, z];
% Extract coefficients for each equation
coeffs_eq1 = extractCoefficients(eq1, vars);
coeffs_eq2 = extractCoefficients(eq2, vars);
coeffs_eq3 = extractCoefficients(eq3, vars);
% Display results
disp('Coefficients for eq1 (2*x=0):');
disp(['x: ', num2str(coeffs_eq1(1)), ', y: ', num2str(coeffs_eq1(2)), ', z: ', num2str(coeffs_eq1(3))]);
disp('Coefficients for eq2 (z=50000):');
disp(['x: ', num2str(coeffs_eq2(1)), ', y: ', num2str(coeffs_eq2(2)), ', z: ', num2str(coeffs_eq2(3))]);
disp('Coefficients for eq3 (x+y-3*z==23):');
disp(['x: ', num2str(coeffs_eq3(1)), ', y: ', num2str(coeffs_eq3(2)), ', z: ', num2str(coeffs_eq3(3))]);
% Function to extract coefficients
function coff = extractCoefficients(eq, vars)
% Convert equation to symbolic expression
expr = lhs(eq) - rhs(eq);
% Initialize coefficients array
coff = zeros(1, length(vars));
% Possible operators combining symoblic variables
operators = {'+', '-'};
% Extract coefficients for each variable
for i = 1:length(vars)
[cof,term] = coeffs(expr, vars(i));
Operators = operators(cellfun(@(op) contains(char(expr), op), operators));
if ~isempty(Operators)
% Muti-term expression
if length(term)==1
coff(i) = 0; % If no term found, coefficient is zero
coff(i) = cof(1); % If term exists, equal to coefficient
% Single term expression
if contains(string(expr),string(vars(i)))
coff(i) = cof(1); % Extract the coefficient for term, if it exists
coff(i) = 0;
This will address your query. If you have any concerns, drop a comment and I will be glad to assist you further.
0 Comments | {"url":"https://au.mathworks.com/matlabcentral/answers/714733-coefficients-in-simple-equations-y-not-represented-all-the-time?s_tid=prof_contriblnk","timestamp":"2024-11-09T19:40:51Z","content_type":"text/html","content_length":"135944","record_id":"<urn:uuid:94c67321-dbb5-4c12-9244-e7bfa2572836>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00343.warc.gz"} |
Alright, my head is about to explode. For the next two days I'll be sitting here in a room all alone trying to figure out how the heck we are supposed to
bridge this gap
. I have all the tools ready to go(computers, legal pads, pens, state framework,etc.). And all of a sudden it hits me. The rules of math have come about because they were necessary. For example,
Natural numbers work until you try to subtract. Then you have to have integers. Integers are fine until you divide, which leads to Rational numbers. The Rational numbers break down when you try to
find the side length of a square with an area of 15. Take the kids on this tour and we get to say: "Okay class, we have just discovered the Real Number system."
We have exponents and scientific notation because it get really tedious to multiply (5,000,000,000)(8,000,000) by hand. We introduce the symbols and variables because we don't want to have to work
out every single case for every single situation. We generalize because mathematicians are inherently lazy. We truly find the shortest distance between two points. Kids are inherently lazy; they know
the shortest distance between homework and their XBox. Hey, we have someting in common. How do we expolit that commonality in order to have kids "discover" algebra for themselves? How do we scaffold
our entire curriculum, so that kids move from the Natural Numbers to Projectile Motion in such a way that they actually see how there is a need for it?
I realize that I am probably not saying anything you all haven't already discovered for yourselves. But, before I go and reinvent this wheel, I would like to know what you have all been doing to get
this Algebra bus rollin'.
You are really starting to make me angry. I was content to just do my job, help as many kids as I could, do some extra stuff to help them make it connect to their reality. And then at 4:00pm shut it
down, go home, kiss my wife, play with the kids, have a nice dinner, watch a little TV and go to bed. But no! You just had to start in with all this "using pictures to help kids learn math" stuff.
You couldn't leave well enough alone. When I couldn't get Graphing Stories to work, you couldn't just say "sorry dude, I am not sure why those chapters won't play. Better luck next time"... no you
had to send me a copy and not even take the reimbursement I offered. Who do you think you are?
Man I can't even go to Target with my family without trying to take a picture of something. You know how disruptive that is? You know how hard it is to hold the baby, push the cart and snap a pic at
the same time? I am looking into having an extra arm grafted onto my torso. You think my insurance will cover that? Nooooo! And it is all because of YOU!
The worst part is that I have these video cameras lying around my classroom and a blue screen in the library so I had to skip lunch the other day and take some footage that has resulted in some
stills like these:
I mean, look at that. How can they not see that the ball is accelerating as it falls? I don't want that. I want them to depend on me to tell them that gravity is an acceleration.
[caption id="attachment_134" align="aligncenter" width="300" caption="I'm not mad at you for this one, I actually think it's pretty cool. "]
What's even worse, is I have all this raw video footage and I have no friggin' idea what to do with it. What am I supposed to do, have students graph the height of the ball vs. time and realize that
there are some relationships that aren't linear?
My students are even getting into the taping. They look so cute and happy throwing the ball back and forth, but little do they know that one day this concept of math actually helping them to
interpret the world around them can consume them. What's next? Are we going to start treating mathematics like a humanity and sit around discussing it as if it were a piece of literature or work or
art? Don't you know that math is only supposed to be important 8:20-2:55 in Room 405 from September to June? Don't you know that math is supposed to be a set of rules that we force our kids to
memorize until April 22 and then they aren't supposed to think about it again? Get with the program, will ya?
And no textbook? C'mon, man! What are you thinking? Those things were all written by people who really care about making math matter to our children. Don't you know that the more a student sits in
front of a textbook, the more they learn? I saw some research done by an independent agency Houghton, McGraw, Holt and Littell that says students can actually teach themselves with these things.
In closing, you have got it all wrong. Kids want to have subjects forced upon them. They want to be told the rules, they want to mindlessly copy exactly what the teacher says and does and they
especially want to ask questions like, "I don't get it."
Inquiry? Yeah, right!
p.s. Can I make a cameo in Graphing Stories vol. 2?
p.p.s Can you all help me package this up into a series of lessons?
Teacher: So what is the value of f(x) if x =3?
Student: Does f(x) mean f times x?
Teacher: No, no, no...f(x) is a function of x.
Student: Oh, it is a function of x?
Teacher: Right.
Student: Oh, okay, I think I get it. So we can plug in a number for x and find out what f equals.
Teacher: Yeah, that's pretty close. Do you have another question?
Student: So, then is f like the slope of the line?
Teacher: *slaps forehead* Uncle!
Student: Well you said it is a function of x and of means to multiply.
Alright, so that didn't just happen in my class. But similar dialogues do take place right around the time I first introduce things like f(x) or sin(x). Kids always think that means that we are
multiplying something by x. We usually end up discussing how often times functions need to have names like f(x) or g(x) so you can tell them apart. We don't spend too much time on function notation
in middle school, but when it comes up, I would like a better way to explain it.
Don't act like that hasn't happened to you.
So how do you explain it?
Every day at 5:00 pm I receive an email from Markus Hohenwarter. It isn't a "hey how are you doing?" kind of email, this one is all business. Actually it isn't even a personal email, it is one of
those mass emails that often ends up in the spam box. But more times than not, this spam is worth reading.
Markus Hohenwarter is the creator of GeoGebra which has to be one of my top three classroom tools. The email is generated by the Geogebra upload manager and lists all of the different GeoGebra files
(both .html and .ggb) that have been uploaded to the bank during the previous day. Many of the dynamic .html worksheets are ready for classroom use and author attribution is at the bottom of the
page. However, if you like the concept of the worksheet but would like to use your own questions, you can download the .ggb file and learn how the sheet was created by viewing the construction
The only downside to the upload manager is that the uploads aren't tagged, so you have to do bit of hunting to find someting worth while. Regardless, this is yet another reason to love GeoGebra!
This one goes out to all the algebra teachers; especially those in middle school. Anyone notice that kids "get it" in 7th grade and then act like they have never seen a variable once they hit
algebra? Or am I the only one? We had nearly 70% of our 7th graders proficient and above on last year's CST's, but only 40% of our algebra students were proficient. If you take the advanced classes
out of the mix, it is more like 60% to 30%. I know, I know, you can't base what a kid knows solely on a standardized test. But, those numbers are pretty indicative of how the kids actually do in
class. Some will say that algebra is just too abstract for most 8th grade kids. I don't know if I buy that, especially when I read articles like this.
I am going to have some release time once testing is over in order to adjust our pacing guides and I would like to be able to develop some lesson ideas to help our teachers bridge the gap between
number sense and algebraic thinking. Maybe some of you have already tackled this. I would love to hear what you have done. How have you sequenced your 7th grade curriculum and how have you helped
move your students from numeric fluency to algebraic proficiency? I figure if I am going to lock myself in a room and try to hash this out, you may as well be there with me!
Testing started today. I have done everything within my power to make my classroom as non-mathy as possible. No work on the boards. All procedure posters covered. I even attempted to remove the
letters A,B,C,D,F,G,H,J from any text on my walls. However, this one slipped by:
Uh oh! I am sure this is one of the answers to one of the problems. I hope the students don't see it.
Ok, I admit it. My lesson planning skills suck! It just hit me the other day. I am reading all your blogs and seeing all the cool things you are doing in your classrooms, and all I can ask myself is,
"Why don't I think of stuff like that?"
I think I know why. I cut my teeth on CPM (another post on CPM coming soon) and all of my lessons were pre packaged. All I needed to learn was to ask good questions and get out of the way. That
suited my style very well. By the time NCLB came around and we dropped CPM, my school was already going lock step with pacing guides and common assessments. Neither one of these approaches allowed
for much innovation nor did they require a bunch of thought. It didn't help that I became the varsity baseball coach in my first full year of teaching and spent way more time planning practice than I
did lessons. The real problem was that I didn't really know how much I had bitten off until recently.
Now that I am teaching middle school math as the high school guy brought in to handle all the GATE kids, I am not only responsible for my own classes but for setting a tone and pace for an entire
department. I feel more accountability now than ever before. It has been in these last three years that I have really started to ask reflective questions regarding not only my practice, but good
practice in general. I also have access to tools that I didn't even know existed when I was at the high school.
So here is a snapshot of where my lessons were compared to where they are. Feel free to take your shots and help me make this better. This lesson is stolen borrowed adapted from a lesson in the April
2009 issue of Mathematics Teaching in the Middle School.
What I would have done before:
I don't know if I would have even bothered to re type the lesson, I may have just photo copied it and given it like this:
I would have passed out this handout and told the students to represent each savings plan as a: graph, input/output table, algebraic expression and verbal expression. My students would have muddled
through it and most would have just jumped through another hoop.
What I did today:
I toggled between these four slides and asked students to pick out what information they thought was important. It was interesting that some students felt it necessary to try to copy the information
while the rest of the class was willing to observe and jot down what they thought was essential info.
Questions students came up with:
• Does the chart represent the amount of money Diana makes each week or the total she has saved?
• Will she continue to save at this rate? "Yeah, look at the "dot, dot, dot."
Teacher: "What can you tell me about Yoni?"
Class: "She has $300."
Teacher: "How much will she have in 3 weeks? 2 years? 100 years?
Class: "$300. $300. $300."
A few students were quick to point out that we only needed to focus on two or three key points in order to make a generalization of Michael's situation. They immediately zoomed in on (0, 30) and (10,
80). However, I did have one group who decided to focus on the last point (20, 130). They decided that Michael averaged $6.50 per week. But when they were asked if their trend would continue, they
quickly realized that they had failed to consider the original amount of Michael's savings.
Chandler, one of my 7th graders decided to ask, "Does x represent weeks?" Beautiful, that info was on the next slide.
The question of the day had to come from Paul: "So, if the only point we know for sure in Michael's graph is (30, 80) wouldn't he have more money if he actually started at (0,0)? In his mind he was
seeing this...
...which led to an interesting discussion on how slope represents the amount of money saved per week.
My students are starting to catch on to how things work in my class because as soon as these slides came up:
I asked, "What is my next question?" To which they all responded:
Once they had made the decision on which represenation they prefered, I had them represent the other three savings plans the same way.
At first the justifications for "why" one representation was preferred over another were weak at best, lazy and unthoughtful at worst:
• An input/output table is more organized.
• It's easier to see the data.
• I like it better because it is better and less worse than the others.
Okay, that last one was mine, but you get the point. Being the father of four boys, I can smell an "alright get off my back, already" answer a mile away, I continued to prod. Eventually we had
answers like this:
• I prefer the chart because it is easier to see which numbers go together; on the graph, you have to work a little more to see which numbers are related.
• I prefer the graph because I can actually see how the different points relate. The slope helps me see how fast someone is saving.
• I prefer the verbal expression because it helps me understand the overall situation better.
We eventually got to the discussion on "Who has the most money right now?"
Hands fly upright around the same time I hear: "Yoni! Danny!"
Then Adrian pipes up, "When is now?"
Gotta love that kid!
Because my district is in program improvement, there has been a huge push to do things "one standard at a time." Not a bad idea since the standards (for algebra anyway) are pretty solid. The problem
lies in the fact that there is a tendency to simply teach the skills and neglect the conceptual development as well as problem solving aspect to algebra. So I decided to call a buddy of mine who
happens to be a farmer and here is a project I came up with. I created an answer key using GeoGebra that allows for quick checking of student progress. Feedback request: I would really like some
input on this one. I know that I would like to use it again next year, but I know it needs some work.
So I took a stab at letting my kids have a go at Dan's last installment. And to say I was pleased is the understatement of the year. The first obvious question was "will it go in the can?" But, since
we have finished going over parabolas, kids started asking questions like:
• How high was the ball at its highest point?
• How far did it travel?
• What was its velocity?
• How long was the ball in the air?
The question that really opened one of those "teachable moments" was in regards to velocity. To this point we have only covered vertical motion. These kids understand how to model a falling object as
well as an object with an initial velocity other than 0. This led to an interesting discussion. Does the "falling object" or "thrown object" apply here? And that is when Lio hit the nail right on the
He pipes up with, "Hey Mr. Cox, if we shoot a gun horizontally and drop a bullet from the same height instantaneously, they both hit the ground at the same time right?"
"So does the fact that it is travelling horizontally have anything to do with how fast it falls?"
"So can we use the stuff we know about falling objects here?"
"But we need heights."
"Well I guess we are done here."
That is when Seth walks over to the trash can and measures how tall it is. All these trash cans have to be the same, right?
And the rest is history. The kids opened up the computers, dragged the images into the SmartNotebook software and here is what Group 1 came up with:
Here is where it gets really cool. My other Seth asks if we can find the actual distance the ball travels along the parabola. He thinks that if we can measure the distance between the balls, then we
could get a series of straight lines. He comes to the conclusion that the closer the balls are to each other, the more accurate our approximation is.
Wait till he gets a load of Calculus. Did I mention he is 13?
I have to confess. I am a bit upset. Why is it that I have been teaching for 13 years and have yet to encounter a professional development session that wasn't an utter waste of my time? Why is it
that the majority of educational conversations focus on what kids can't do and how we can't change the way we do business because, well, "that's just how we do it here?" If Jack Johnson were a
teacher he'd sing "Where'd all the good conversations go?" Why is it that when someone sets foot in another teacher's classroom it is assumed that the visitor is looked at as an intruder? Why did it
take me so long to find teachers who are asking really tough questions-really good questions? Why wasn't I looking? Why wasn't I asking them myself?
I have tried to teach my students that the answer isn't the point. It is all about the question. The beauty of it is that the more questions I find answers to, the more questions I have. A few months
ago I stumbled across Classroom 2.0 and asked a question about dealing with gifted kids. It wasn't long before I was in the middle of a conversation with Nancy Bosch about how to deal with gifted
students that she says, "Hey there is a guy named Dan who is asking some pretty good questions. You are both math guys; check him out." So I do and realize that we are practically neighbors. (Okay,
we both live in CA but Nancy is in Kansas.) So now I realize that all of those good conversations? All the good questions? Yeah, they are right here. Right Now! And I don't have to sit in a 3 hour
meeting to join them.
So again I ask: Why have I had more professional development in the past 4 months than in the previous 13 years? Was I not listening? Or was no one showing me where to listen?
Every nine weeks my district gives benchmark exams covering approximately one-third of the standards that have been deemed "essential." Many teachers feel the need to do a bunch of last minute
cramming and do intensive review. For the most part, I see the benchmark as a speedbump; one of those things that I have to do. I mean, I we already have department CFAs (common formative
assessments) that we use to re-direct our instruction, so I pretty much already know where my kids stand. So for the last benchmark of the year, I decided to change up the review.
I made up a practice test covering the standards that would be assessed, uploaded it to voicethread and had students sign up to create a mathcast for specific problems. However, this time they were
looking for a fastball up and in and I gave them a change away. I asked a few students to do the problems incorrectly (of course, first they had to demonstrate to me that they could do the problem
right.) Once everyone had added their comments to the Voicethread, I assigned the Easter Egg Hunt--Which ones are wrong and why? I think I really like this activity. Students not only have to work
out each problem themselves, but they have to view another's work critically. I would love to hear what you think.
Alright, here it goes. I have been browsing the world of edublogs for a while and have been impressed with the quality of dialogue. The sad thing is that I think I have enjoyed more professional
development from these online discussions than I have from countless inservices, staff meetings and trainings. So, I figure it is time to join the conversation. | {"url":"https://coxmath.blogspot.com/2009/04/","timestamp":"2024-11-06T09:25:15Z","content_type":"text/html","content_length":"144019","record_id":"<urn:uuid:9c77fc64-1a36-4562-9346-6cf1efafe1e5>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00797.warc.gz"} |
Pascal's triangle and the binomial expansion
Pascal's triangle and the binomial expansion resources
Pascal's Triangle & the Binomial Theorem 1
A binomial expression is the sum or difference of two terms. For example, x+1 and 3x+2y are both binomial expressions. If we want to raise a binomial expression to a power higher than 2 it is very
cumbersome to do this by repeatedly multiplying x+1 or 3x+2y by itself. In this tutorial you will learn how Pascal's triangle can be used to obtain the required result quickly. This resource is
released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd.
Pascal's Triangle & the Binomial Theorem 2
A binomial expression is the sum or difference of two terms. For example, x+1 and 3x+2y are both binomial expressions. If we want to raise a binomial expression to a power higher than 2 it is very
cumbersome to do this by repeatedly multiplying x+1 or 3x+2y by itself. In this tutorial you will learn how Pascal's triangle can be used to obtain the required result quickly. This resource is
released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd.
Pascal's Triangle & the Binomial Theorem 3
A binomial expression is the sum or difference of two terms. For example, x+1 and 3x+2y are both binomial expressions. If we want to raise a binomial expression to a power higher than 2 it is very
cumbersome to do this by repeatedly multiplying x+1 or 3x+2y by itself. In this tutorial you will learn how Pascal's triangle can be used to obtain the required result quickly. This resource is
released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd.
Pascal's Triangle & the Binomial Theorem 4
A binomial expression is the sum or difference of two terms. For example, x+1 and 3x+2y are both binomial expressions. If we want to raise a binomial expression to a power higher than 2 it is very
cumbersome to do this by repeatedly multiplying x+1 or 3x+2y by itself. In this tutorial you will learn how Pascal's triangle can be used to obtain the required result quickly. This resource is
released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd.
Pascal's Triangle & the Binomial Theorem 5
A binomial expression is the sum or difference of two terms. For example, x+1 and 3x+2y are both binomial expressions. If we want to raise a binomial expression to a power higher than 2 it is very
cumbersome to do this by repeatedly multiplying x+1 or 3x+2y by itself. In this tutorial you will learn how Pascal's triangle can be used to obtain the required result quickly. This resource is
released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd.
Pascal's Triangle & the Binomial Theorem 6
A binomial expression is the sum or difference of two terms. For example, x+1 and 3x+2y are both binomial expressions. If we want to raise a binomial expression to a power higher than 2 it is very
cumbersome to do this by repeatedly multiplying x+1 or 3x+2y by itself. In this tutorial you will learn how Pascal's triangle can be used to obtain the required result quickly. This resource is
released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd.
Pascal's Triangle & the Binomial Theorem 7
A binomial expression is the sum or difference of two terms. For example, x+1 and 3x+2y are both binomial expressions. If we want to raise a binomial expression to a power higher than 2 it is very
cumbersome to do this by repeatedly multiplying x+1 or 3x+2y by itself. In this tutorial you will learn how Pascal's triangle can be used to obtain the required result quickly. This resource is
released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd.
Pascal's Triangle & the Binomial Theorem 8
A binomial expression is the sum or difference of two terms. For example, x+1 and 3x+2y are both binomial expressions. If we want to raise a binomial expression to a power higher than 2 it is very
cumbersome to do this by repeatedly multiplying x+1 or 3x+2y by itself. In this tutorial you will learn how Pascal's triangle can be used to obtain the required result quickly. This resource is
released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd.
Pascal's Triangle & the Binomial Theorem 9
A binomial expression is the sum or difference of two terms. For example, x+1 and 3x+2y are both binomial expressions. If we want to raise a binomial expression to a power higher than 2 it is very
cumbersome to do this by repeatedly multiplying x+1 or 3x+2y by itself. In this tutorial you will learn how Pascal's triangle can be used to obtain the required result quickly. This resource is
released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd.
Maths EG
Computer-aided assessment of maths, stats and numeracy from GCSE to undergraduate level 2. These resources have been made available under a Creative Common licence by Martin Greenhow and Abdulrahman
Kamavi, Brunel University.
Mathematics Support Materials from the University of Plymouth
Support material from the University of Plymouth:
The output from this project is a library of portable, interactive, web based support packages to help students learn various mathematical ideas and techniques and to support classroom teaching.
There are support materials on ALGEBRA, GRAPHS, CALCULUS, and much more.
This material is offered through the mathcentre site courtesy of Dr Martin Lavelle and Dr Robin Horan from the University of Plymouth.
University of East Anglia (UEA) Interactive Mathematics and Statistics Resources
The Learning Enhancement Team at the University of East Anglia (UEA) has developed la series of interactive resources accessible via Prezi mind maps : Steps into Numeracy, Steps into Algebra, Steps
into Trigonometry, Bridging between Algebra and Calculus, Steps into Calculus, Steps into Differential Equations, Steps into Statistics and Other Essential Skills.
Pascal's triangle and the binomial expansion
A binomial expression is the sum or difference of two terms. For example, x+1 and 3x+2y are both binomial expressions. If we want to raise a binomial expression to a power higher than 2 it is very
cumbersome to do this by repeatedly multiplying x+1 or 3x+2y by itself. In this tutorial you will learn how Pascal's triangle can be used to obtain the required result quickly. (mathtutor video) This
resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd.
Pascal's triangle and the binomial expansion
A binomial expression is the sum or difference of two terms. For example, x+1 and 3x+2y are both binomial expressions. If we want to raise a binomial expression to a power higher than 2 it is very
cumbersome to do this by repeatedly multiplying x+1 or 3x+2y by itself. In this tutorial you will learn how Pascal's triangle can be used to obtain the required result quickly. (mathtutor video) The
video is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. | {"url":"https://www.mathcentre.ac.uk/topics/algebra/pascals-triangle/","timestamp":"2024-11-11T19:55:54Z","content_type":"text/html","content_length":"20272","record_id":"<urn:uuid:b760110f-b35e-4e02-a83d-72d17577508b>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00156.warc.gz"} |
Google Hashcode 2022
Google Hashcode 2022#
Description: Google Hashcode 2022 Practice Problem
Tags: amplpy, heuristics, engineering, scheduling, complexity
Notebook author: Marcos Dominguez Velad <marcos@ampl.com>
Model author: Marcos Dominguez Velad <marcos@ampl.com>
References: N/A
Google Hashcode is a team programming competition to solve a complex engineering problem.
In this notebook we are showing how Mathematical Optimization methods as Mixed Integer Programming (MIP) are useful to solve this kind of problems, as they are really easy to implement and give
optimal solutions (not only trade-off ones), as opposed to greedy approaches or heuristics. We are solving the pizza warm-up exercise.
We are using AMPL as the modeling language to formulate the problem from two different approaches (not all the formulations are the same in terms of complexity), coming up with enhancements or
alternative approaches is an important part of the solving process.
As an instructive example of how to face this kind of problems, we are using the AMPL API for Python (AMPLPY), so we can read the input of the problem, translate easily to data for AMPL, and retrieve
the solution to get the score. Because of using MIP approach, the score will be the highest possible for the problem.
Problem statement#
The statement of this year is related to a pizzeria, the goal is to maximize the number of customers coming, and we want to pick the ingredients for the only pizza that is going to be sold:
• Each customer has a list of ingredients he loves, and a list of those he does not like.
• A customer will come to the pizzeria if the pizza has all the ingredients he likes, and does not have any disgusting ingredient for him.
Task: choose the exact ingredients the pizza should have so it maximizes the number of customers given their lists of preferences. The score is the number of customers coming to eat the pizza.
(The statement can be found here)
First formulation#
The first MIP formulation will be straightforward. We have to define the variables we are going to use, and then the objective function and constraints will be easy to figure out.
We have to decide which ingredients to pick, so
• \(x_i\) = 1 if the ingredient i is in the pizza, 0 otherwise.
• \(y_j\) = 1 if the customer will come to the pizzeria, 0 otherwise.
Where \(i = 1, .., I\) and \(j = 1, .., c\) (c = total of customers and I = total of ingredients).
Objective function#
The goal is to maximize the number of customers, so this is clear: $\(maximize \ \sum \limits_{j = 1}^c y_j\)$
Finally, we need to tie the variables to have the meaning we need by using constraints.
If the j customer comes, his favourite ingredients should be picked (mathematically \(y_j=1\) implies all the \(x_i = 1\)). So, for each \(j = 1, .., c\):
\[|Likes_j| \cdot y_j \leq \sum \limits_{i \in Likes_j} x_i\]
Where \(Likes_j\) is the set of ingredients \(j\) customer likes, and \(|Likes_j|\) the number of elements of the set.
If any of the disliked ingredients is in the pizza, customer \(j\) can’t come (any \(x_i = 1\) implies \(y_j = 0\)). For each customer \(j = 1, .., c\):
\[\sum \limits_{i \in Dislikes_j} x_i \leq \frac{1}{2}+(|Dislikes_j|+\frac{1}{2})\cdot(1-y_j)\]
So when customer \(j\) comes, the right side is equal to $\(\frac{1}{2}+(|Dislikes_j|+\frac{1}{2})\cdot(1-1) = \frac{1}{2} + 0 = \frac{1}{2}\)\( This implies the left side to be zero, because the \)
x_i\( variables are binary. If the customer \)j$ does not come, the inequality is satisfied trivially.
We will need the input data files from the problem, they are available in the amplpy Github repository:
import os
if not os.path.isdir("input_data"):
os.system("git clone https://github.com/ampl/colab.ampl.com.git")
if not os.path.isdir("ampl_input"):
Let’s use AMPL to formulate the previous problem. The following section setup AMPL to run in also in the cloud (not only locally) with Google Colab.
AMPLPY Setup in the cloud#
Here is some documentation and examples of the API: Documentation, GitHub Repository, PyPI Repository, other Jupyter Notebooks. The following cell is enough to install it. We are using ampl (modeling
language) and COIN (contains aasdasdasdasdasdasdasdasdCBC open-source solver) modules.
# Install dependencies
%pip install -q amplpy
# Google Colab & Kaggle integration
from amplpy import AMPL, ampl_notebook
ampl = ampl_notebook(
modules=["coin"], # modules to install
license_uuid="default", # license to use
) # instantiate AMPL object and register magics
Solving problem with AMPL#
First, we need to write a model containing the mathematical formulation. After that, we will add the data to solve the different instances of the Hashcode problem.
param total_customers;
# Set of ingredients
set INGR;
# Customers lists of preferences
set Likes{1..total_customers};
set Dislikes{1..total_customers};
# Take or not to take the ingredient
var x{i in INGR}, binary;
# customer comes OR NOT
var y{j in 1..total_customers}, binary;
maximize Total_Customers: sum{j in 1..total_customers} y[j];
Customer_Likes{j in 1..total_customers}:
card(Likes[j])*y[j] <= sum{i in Likes[j]} x[i];
param eps := 0.5;
Customer_Dislikes{j in 1..total_customers}:
sum{i in Dislikes[j]} x[i] <= 1-eps+(card(Dislikes[j])+eps)*(1-y[j]);
Translate input with Python#
The input files are in the folder input_data/, but they do not have the AMPL data format. Fortunately, we can easily parse the original input files to generate AMPL data.
import sys
# dict to map chars to hashcode original filenames
filename = {
"a": "input_data/a_an_example.in.txt",
"b": "input_data/b_basic.in.txt",
"c": "input_data/c_coarse.in.txt",
"d": "input_data/d_difficult.in.txt",
"e": "input_data/e_elaborate.in.txt",
def read(testcase):
with open(filename[testcase]) as input_file, open(
"ampl_input/pizza_" + testcase + ".dat", "w+"
) as output_data_file:
# total_customers
total_customers = int(input_file.readline())
ampl.param["total_customers"] = total_customers
# loop over customers
ingr = set()
for c in range(1, total_customers + 1):
likes = input_file.readline().split()
ampl.set["Likes"][c] = likes[1:]
dislikes = input_file.readline().split()
ampl.set["Dislikes"][c] = dislikes[1:]
ingr = ingr.union(set(likes))
ingr = ingr.union(set(dislikes))
ampl.set["INGR"] = ingr
# Let's try with problem 'c' from hashcode
Now, solve the problem usign AMPL and CBC (mip solver)
option solver cbc;
display x, y;
CBC 2.10.5: CBC 2.10.5 optimal, objective -5
0 nodes, 3 iterations, 0.00673 seconds
: x y :=
1 . 0
2 . 0
3 . 1
4 . 1
5 . 1
6 . 0
7 . 1
8 . 1
9 . 0
10 . 0
'0' 0 .
'1' 0 .
'3' 0 .
akuof 1 .
byyii 1 .
dlust 1 .
luncl 1 .
qzfyo 0 .
sunhp 0 .
tfeej 1 .
vxglq 1 .
xdozp 1 .
xveqd 1 .
So the ingredients we should pick are:
• byyii, dlust, luncl, tfeej, vxglq, xdozp and xveqd.
• Customers coming are: 4, 5, 7, 8, 10. Total score: 5.
We can write an output file in the hashcode format:
printf "%d ", sum{i in INGR} x[i] > output_file.out;
for{i in INGR}{
if x[i] = 1 then printf "%s ", i >> output_file.out;
shell 'cat output_file.out';
8 luncl dlust xveqd byyii tfeej xdozp vxglq akuof
You can try this with the other practice instances!#
The big ones can take several hours to get the optimal solution, as MIP problems are usually hard because of the integrity constraints of the variables. That’s why it is often necessary to
reformulate the problem, or try to improve an existing formulation by adding of combining constraints / variables. In the following section, we present an alternative point of view to attack the
Hashcode practice problem, hoping the solver finds a solution earlier this way.
Alternative formulation#
We could exploit the relations between customers and see if we can figure out of them. Actually, the goal is to get the biggest set of independent customers that are compatible (so none of their
favourite ingredients are in the pizza). The ingredients we are picking may be deduced from the particular customers preferences we want to have.
With this idea, let’s propose a graph approach where each customer is represented by node, and two nodes are connected by an edge if and only if the two customers are compatible. This is translated
to the problem as:
• Customer i loved ingredients are not in the disliked ingredients list of j (and vice versa).
With sets, this is:
\[Liked_i \cap Disliked_j = Liked_j \cap Disliked_i = \emptyset \]
So the problem is reduced to find the maximal clique in the graph (a clique is a subset of nodes and edges such as every pair of nodes are connected by an edge), which is an NP-Complete problem. The
clique is maximal respect to the number of nodes.
New variables#
To solve the clique problem we may use the binary variables:
• \(x_i\) = 1 if the node belongs to the maximal clique, 0 otherwise. For each \(i = 1, .., c\).
Objective function#
It is the same as in the previous approach, as a node \(i\) is in the maximal clique if and only if the customer \(i\) is coming to the pizzeria in the corresponding optimal solution to the original
problem. A bigger clique would induce a better solution, or a better solution would imply the solution customers to generate a bigger clique as all of them are compatible.
\[maximize \ \sum \limits_{i = 1}^c x_i\]
New constraints#
The constraints are quite simple now. Two nodes that are not connected can’t be in the same clique. For each pair of nodes not connected \(i\) and \(j\): $\(x_i + x_j \leq 1\)$
Formulation with AMPL#
We are writing a new model file (very similar to the previous one). In order to reuse data (read function), we will keep the INGR set although it is not going to be used anymore.
The most interesting feature in the model could be the condition to check that two customers are incompatible to generate a constraint. The condition is:
\[Liked_i \cap Disliked_j \neq \emptyset \ \text{ or } \ Liked_j \cap Disliked_i \neq \emptyset\]
A set is not empty if its cardinality is greater or equal to one, so in AMPL we could write:
card(Likes[i] inter Dislikes[j]) >= 1 or card(Likes[j] inter Dislikes[i]) >= 1
param total_customers;
# Set of ingredients
set INGR;
# Customers lists of preferences
set Likes{1..total_customers};
set Dislikes{1..total_customers};
# customer comes OR NOT <=> node in the clique or not
var x{i in 1..total_customers}, binary;
maximize Total_Customers: sum{i in 1..total_customers} x[i];
# Using the set operations to check if two nodes are not connected
Compatible{i in 1..total_customers-1, j in i+1..total_customers : card(Likes[i] inter Dislikes[j]) >= 1 or card(Likes[j] inter Dislikes[i]) >= 1}:
x[i]+x[j] <= 1;
Read the data and solve:
option solver cbc;
display x;
CBC 2.10.5: CBC 2.10.5 optimal, objective -5
0 nodes, 0 iterations, 0.002318 seconds
x [*] :=
set picked_ingr default {};
for{i in 1..total_customers}{
if x[i] = 1 then let picked_ingr := picked_ingr union Likes[i];
printf "%d ", card(picked_ingr) > output_file.out;
for{i in picked_ingr}{
printf "%s ", i >> output_file.out;
shell 'cat output_file.out';
8 akuof luncl vxglq dlust xveqd tfeej xdozp byyii
First, let’s compare the size of the two models.
• First approach size: \(c+I\) variables + \(2c\) constraints.
• Second approach size: \(c\) variables + \(c(c-1)/2\) constraints (potentially).
Also in the second approach, each constraint has only two non-zero coefficients along with variables, which is an advantage to have more sparse coefficient matrices.
The choice of one model or another will depend on the concrete instance of the problem, so the sparsity of the matrix and the real number of constraints can change (actually, the constraints of the
two models are compatible). AMPL will take care of building the coefficient matrix efficiently, so there is no extra effort to compute the constraints or sums within them once the model is prepared
and sent to the solver, and we can focus on thinking algorithmically. Also a lot of constraints and variables would be removed by presolve. To know more about the AMPL modeling language you can take
a look to the manual.
Some of the advantages of this approach are:
• It is really easy to implement solutions.
• There is no need to debug algorithms, only the correctness of the model.
• Models are very flexible, so new constraints could be added while the rest of the model remains the same.
• It is hard to estimate how long it is going to take, even in simple models like the ones presented.
• Sometimes it is hard to formulate the problem, as some of the constraints or the objective function could not adjust to the usual mathematical language. The problem could be non-linear so
convergence would be more difficult and even optimal solutions would not be guaranteed.
• For simple problems, more efficient algorithmic techniques could also give the best solution (Dynamic Programming, optimal greedy approaches…).
• Study the problem to come up with presolve heuristics in order to get smaller models.
• Add termination criterias (solver options) so the solver can stop prematurely when finding a enough good solution (there is a little gap between the best found solution and the known bounds), or
even a time limit. If you are lucky the solution could be the optimal one but the optimality was not proved yet.
• If the solver could not find the optimal solution on time, but we used a termination criteria, we could retrieve a good solution and run some kind of algorithm over it so we can improve and get
closer to the optimal (GRASP or Genetic Algorithms, for instance). Actually, when solving a real engineering problem is desirable to combine exact methods as MIP, heuristics (greedy approaches)
or metaheuristics (GRASP, Simulated Annealing, …) among others, to reach better solutions.
Author: Marcos Dominguez Velad. Software engineer at AMPL. | {"url":"https://ampl.com/colab/notebooks/google-hashcode-2022.html","timestamp":"2024-11-15T00:48:29Z","content_type":"text/html","content_length":"72429","record_id":"<urn:uuid:b7f7955a-76c9-4c92-9226-d51c445a693d>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00752.warc.gz"} |
How to Become Data Science expert ? - DED9
How to Become Data Science expert ?
When we talk about a data science expert, we mean becoming a person who can go through a lot of data, patterns, etc. Discover the values hidden between them. The discovery of these patterns can
increase the added value of a business.
But there are steps you can take to become a specialist in data mining. Naturally, the steps written in this article are not the only solution. However, a data science expert should follow these
eight steps.
In the following article, we will explain these eight steps. Then one of the sub-domains of data mining is text mining.
Introduce and explain some of its applications in different areas.
An example of the application of data science
Suppose a company like Amazon (or similar examples) is an online retailer and online sales interface. Use their data to predict which products and how much each will sell in three months. Naturally,
this forecast can greatly lead to the growth of this business and increase its profits.
The same simple prediction as above. It can help set up sales warehouses in different areas. And significantly reduce warehousing and logistics costs. Amazon, for example, predicts that laptop sales
will increase in the Middle East at the beginning of the summer season. The company can ship that type of laptop by different merchant ships before it reaches the peak of demand. Transfer to your
warehouse in the Middle East and deliver the product to the customer immediately when ordering. This will increase delivery speed and customer satisfaction and reduce shipping costs.
The people who produce this prediction system are data science experts in the example given. These people, also known as machine learning specialists or data miners, can build predictive and learning
systems and help different parts of the business. Note that the terms “data mining” and “machine learning” may be interchangeable.
1. Learn the basics of statistics and probability
Statistics and probability are basic sciences require in many engineering activities. Data science is no exception, and in fact, data science owes itself to statistics and probability and scientists
in this field. Many of the algorithms proposed in data mining and machine learning are based on statistics and probabilities. This can be a reason to say that statistics and probability science are
the mothers of data-related sciences.
Of course, the question that some students ask is how much statistics and probability should be learnable in the field of data science? The answer to this question depends on the students’ interest
in the area they need. Some students are more inclined to analyze data. These students naturally need to learn more about statistics and statistical analysis. But if a student is more focused on
implementation and data engineering behaviors, the need for statistical topics is less felt. However, it is expectable that all students, regardless of their field of work, will be familiar with the
basic topics of statistics and probability and its basic theories in data.
There are many resources for learning statistics and probabilities. For example, the book of statistics and engineering probabilities written by Dr. Nemat Elahi or the book of statistics and its
application in management, which has been compiled in two volumes, are some of the good academic books in this field. Of course, these books are mostly academic, but they can be useful in learning
statistics and reasonable probabilities because of the very good content. There are also various free online courses that you can use to learn statistics and probabilities.
۲. Learn a programming language
There is a lot of talk about the benefits of learning a programming language. Today, learning a programming language is a prerequisite for technological advancement in that discipline in many
engineering disciplines. Data science is not separate from these disciplines. And any data science expert should be familiar with programming languages such as Python, R, or Java, which can
implement machine learning algorithms and processes on an executable platform. There are also ready-made libraries in these programming languages that speed up the implementation of data mining
3. Learn the basics of matrices and linear algebra
Many data mining algorithms are based on Linear Algebra. They make extensive use of matrices and matrix operations in their processes. In this regard, learning the basics of matrices and linear
algebra helps understand the functionality of data algorithms. In some textbooks for teaching mining and machine learning, a chapter is usually devote to this topic, or they discuss matrices and
linear algebra while teaching topics. But if we want to introduce a book in this field, we can refer to Mr. Avar Nering’s book on linear algebra.
4. Learn the basics of data mining and machine learning
Basic data mining and machine learning algorithms can be a solution to this field’s basic and classic problems. These algorithms can teach students a good view of problems and how to solve them.
Algorithms and their diversity can contribute to the breadth of student knowledge and learning the basics with these algorithms. In learning data mining, the student should be familiar with different
methods and algorithms of classification and clustering and be able to solve various problems in this field with their help. It should also be able to prepare and clean data for these algorithms
according to its needs. In this section, the student should also be able to evaluate their models and compare different models and algorithms to find the best algorithm and model for their problem.
Read more:
5. Learn a variety of practical examples in the field of data mining
Learning does not stay in mind except through practice and repetition. If you want to become an expert in this field, you must test various algorithms on various datasets and see the results.
Observing different examples and how to solve them can deepen the pattern of data mining problem-solving in the student’s mind. To become an expert in this field, there are various companies and
institutions where you can do internships or solve their problems. For example, we can mention the Kaggle site, which has been able to be a good reference for real examples in data mining by holding
numerous competitions. By referring to and reading the real-world data on this site, the student’s mind can quickly think data-driven and solve the problem according to the existing structure.
6. Neural networks and deep learning
Neural Networks and Deep Learning have enhanced the quality of data mining outputs and attracted the attention of many people and scientists in the field. In data mining, students can solve far more
complex problems and improve the quality of different problems by using deep neural networks and various deep learning methods. These algorithms can learn more complex patterns in data and have
gradually become one of the mainstays in solving data mining problems.
7. Learning the specialized subfields of data science
There are several areas such as Text Mining, Image Mining, Video Mining, Voice Mining, Working on Economic Data, and two other subfields of data mining. After learning algorithms, students can select
one or more sub-domains as specialized sub-domains and focus on issues related to that sub-domain. Also, a data science specialist usually finds the necessary expertise in one of these sub-fields and
identifies and solves the more complex problems of each sub-field well.
8. Learning advanced algorithms and methods such as reinforcement learning and applied optimization methods
Reinforcement Learning, combined with deep learning techniques, can solve more advanced problems. Learning these techniques allows the student to solve more advanced problems in a dynamic
Text mining
In the following, to consolidate learning and create a vision of a career future in this field, we will deal with one of the sub-domains of data mining: text mining. Text mining or natural language
processing (NLP) is one of the data science and data mining subfields. Many companies in data mining have focused on text mining and extracting patterns from text. In-text mining, the focus is on
textual data: everyday texts composed of different words (such as words in Persian or English).
A large amount of data produced by modern man has been collected in the form of text, and this in itself has created valuable and rich content and, consequently, complex patterns among textual data.
But how can these valuable patterns be extracted from data using new tools, such as computers and supercomputers? The answer to this question gave rise to the field of text mining, and many
scientists began to work on textual data.
Different methods are presented in the field of text, each of which is used for one or more issues in this field. These algorithms are commonly implemented by popular programming languages such as
Python or Java, and some of them have been used in large businesses.
Analyze users’ feelings by text mining
For example, suppose a business such as Google Play (or its internal counterparts) can use text mining algorithms to evaluate user input in each application and the quality or poor quality of each
software (depending on the dynamic analysis of each Which comments) to be informed. This Sentiment Analysis can also be much more accurate or advanced. Suppose each of the texts contains comments
about a piece of software. For example, someone said that “this application looks good, but its speed is low.” Advanced algorithms and hybrid methods of emotion analysis can detect this separation in
particular software. Text-based algorithms and methods can analyze texts like humans.
Search through a multitude of texts by text mining
Another problem that text mining seeks to solve is searching through many texts. Building search engines like Google or Yandex are among these issues. Grouping different texts and pages and fetching
the right content from many texts can help in a very fast search through a multitude of contents and increase the quality of the search. These algorithms can analyze and understand the texts on a
page. For example, if the page is about “mobile games,” these search engines know that on this page, you can find content such as “games,” “mobile,” “software,” “iTee,” and so on, so search engines
put the page next to They place pages that work in the same fields and display these pages to the user when searching.
Set up an automated ticketing response system supported by text mining
Or, for example, suppose you have a system in which different people, using the support system, send different tickets and requests to different units of a company. These tickets must be sent to the
relevant unit. An intelligent system using text mining algorithms can automatically send a support ticket to the relevant unit. Also, in more advanced mode, generate an auto-reply and send it to the
user. Many companies have come up with valuable data in their Q&A. Given the relationships they have had with their customers over the years. For example, many customers chat with the operator by
text every day. These chats can be valuable data that the algorithm learns from these questions and answers. From now on, the algorithm can automatically give useful and instructive answers to users’
Investment risk management by text mining
Another application of text mining can be considered investment risk management. Large investment companies can analyze news and articles in the official newspapers of companies to gain important and
valuable points for investing. For example, the algorithm may be based on news it has learned in the past from news texts, that whenever it sees news about the import of a particular product, after a
week, the shares of a particular company increase. According to these patterns and trends that it recognizes, the algorithm gives the ability to offer investment in a particular company so that this
particular company will gain a lot of profit for the owners of capital.
Online crime detection by text mining
Text mining can also play an important role in detecting online crime. For example, thieves who hunt their prey through cyberspace can have special patterns in chats or comments on social networks.
Cyber security policy in any country can identify these patterns and deal with them legally by intelligently monitoring virtual networks.
Smart online advertising by text mining
Another area in which text analysis can play an effective role is smart online advertising. By analyzing the pages on which their ads are located, advertising companies can understand the content of
a web page and display an ad that is relevant to the topic on that page. For example, one page might contain information about an “electronic kit.” The smart advertising engines that are placed by
the site administrator and have access to that page try to display the most relevant ad to the user.
Steps are required to become a data science expert in data mining. This article has explained the eight steps to becoming a data science expert, but the path mentioned above is not the only path
available. And each student can go through different paths according to their interests and abilities, but the path and roadmap are stated. The above is one of the ways that seem to have attracted
the attention of many scientists in the field of science.
In the continuation of the article, we introduced one of the sub-domains of data mining, namely text mining, and explained some text mining applications. Of course, text mining and natural language
processing are not limited to the mentioned cases. We created and corrected texts, created analytical texts, created subtitles, composed texts to create new texts, and automatically sorted documents.
Discovering hidden relationships between articles, creating chatbots (chatbots), and many other things can be other text mining applications. If you have any questions or comments about this article,
please share them with SunLearn users in the comments section and us. | {"url":"https://ded9.com/data-science/","timestamp":"2024-11-04T09:19:57Z","content_type":"text/html","content_length":"189308","record_id":"<urn:uuid:c7f1b363-0c86-4d7f-b186-a2968914fb1c>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00540.warc.gz"} |
0.65 as a Fraction - Calculation Calculator
How to write 0.65 as a Fraction? A decimal to fraction conversion is the process of representing a decimal number as a fraction. In a fraction, the top number (numerator) represents the part of a
whole, and the bottom number (denominator) represents the whole. Converting a decimal to a fraction involves expressing the decimal as a ratio of two numbers, with the denominator being a power of
10. The resulting fraction is usually in its simplest form, which means that the numerator and denominator have no common factors other than 1. | {"url":"https://calculationcalculator.com/0.65-as-a-fraction","timestamp":"2024-11-05T04:25:57Z","content_type":"text/html","content_length":"97904","record_id":"<urn:uuid:a4c99443-3dea-47d5-bf9e-cfd90cb2c004>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00186.warc.gz"} |