content stringlengths 86 994k | meta stringlengths 288 619 |
|---|---|
Reset Password
Prof. Ben. Iyorzor,
Department of Physics, University of Benin, Nigeria.
Email: editor.jnamp@uniben.edu
Prof. Vincent Ele Asor, FNAMP
Department of Mathematics, Michael Okpara University of Agriculture,
Umudike, Abia State, Nigeria
Prof. Garba Babaji,
Dept. of Mathematics, Bayero Univeristy, Kano, Nigeria
Prof. M. O. Ibrahim
Department of Mathematics University of Ilorin
Prof. C. M. I. Okoye
Dept. of Physics, University f Nigeria, Nsukka, Nigeria.
Prof. A. A. Tijanni
Dept. of Mathematics, Ahmadu Bello Univ. Zaria, Nigeria.
Dr. G. A. Okeke
Dept. of Mathematics, Federal University of Technology, Owerri, Nigeria.
Prof. Lawan Sani Taura,
Department of Physics, Bayero University, Kano. Nigeria.
Prof. G. C. E. Mbah,
Dept. of Mathematics, Univrsity of Nigeria, Nsukka, Nigeria.
Prof. E. O. Oghre,
Department of Mathematics, University of Benin. Nigeria.
Prof. A. O. Popoola,
Department of Mathematics, Osun State University, Osogbo. Nigeria.
Prof. E. O. Aiyohuyin,
Department of Physics, University of Benin. Nigeria.
Prof. Akpan Ndem Ikot
Department of Physics, University of PortHarcourt, PortHarcourt, Rivers State.
Prof. Edison Aroghene Enaibe
Department of Physics, Federal University of Petroleum Resources, Effurun, Delta State.
Prof. E. Aghemenloh,
Department of Physics, University of Benin. Nigeria. | {"url":"https://nampjournals.org.ng/index.php/index/login/lostPassword","timestamp":"2024-11-07T01:21:13Z","content_type":"text/html","content_length":"9577","record_id":"<urn:uuid:941ab406-d524-4157-a4b9-b61dc04beca2>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00017.warc.gz"} |
Special Year 2022-23: Dynamics, Additive Number Theory and Algebraic Geometry - Seminar
The "intersectivity lemma" states that if a ∈ (0,1) and A_n, n ∈ N, are measurable sets in a probability space (X,m) satisfying m(A_n) ≥ a for all n, then there exist a subsequence n_k, k ∈ N, which
has positive upper density and such that the... | {"url":"https://www.ias.edu/mathematics/tags/special-year-2022-23-dynamics-additive-number-theory-and-algebraic-geometry-0?page=2","timestamp":"2024-11-02T04:54:56Z","content_type":"text/html","content_length":"63477","record_id":"<urn:uuid:55234a94-a0c4-46e6-a16a-810ac6a7303a>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00850.warc.gz"} |
MathFiction: The Distant Dead (Heather Young)
When a boy named Sal discovers the burned body of his middle school math teacher, two amateur sleuths try to determine who killed him. One of them is Jake, the volunteer fireman to whom Sal initially
reports the grisly crime. The other is Sal's history teacher, one of the few people in town who attempted to befriend the timid and anti-social math teacher since he arrived in this rural Nevada town
at the start of the school year.
I suppose I could have guessed from the title that this book would not be a light-hearted comedy, but it portrays a much more depressing view of the human condition than I expected. The author has an
amazing talent for creating tragic and painful tales of misery. Let me give you one example (summarized in my own words) so that you can see what I mean:
Jake was in love with Sal's mother since they were children together. It was love of the unrequited variety, but he was saving himself for her hoping that one day she would come around. Sadly, the
first time he went into her home was as an EMT after she had overdosed on heroin. (He cleaned up the scene, hid the drug paraphernalia and reported it as a heart attack.) And, while carrying her
coffin at the funeral, he thought about how many times he had imagined walking down the aisle with her in that church, but never in that way.
Wow, right? But, that's not the only one. Nearly every character in the book -- and there are many -- has an equally tragic story of woe. There are three fathers in the book who were responsible for
the deaths of their own sons and feel overwhelming guilt as a result...all in this same small area in rural Nevada. (I am rarely accused of optimism, but even I don't believe that unbearable sorrow
is quite that common.)
I am guessing that the author would also tell us that she is a keen observer of human behavior. The book is filled with details about lies that are revealed by body language, true motivations, and
things left unsaid. Sal (short for "Absalom") is described as being especially good at reading people, and some of the chapters are presented from his viewpoint, but this sense of deciphering what
people really think rather than what they say runs through the whole book.
Oh, there is also an archeological sub-theme concerning the bones of the first peoples in the Americas.
But, aside from the tragic life stories, the details about human behavior, and the archeology, there is also quite a lot about mathematics...and of course the mystery of how the math teacher came to
be lying dead beside a campfire with his legs bound by a jump rope.
Before I go into a detailed discussion of the mathematics in the book, let me point out that the mystery is what makes this a page turner. The reader will want to try to piece together all of the
information in an attempt to figure out what really happened. The clues are given out slowly, one by one, chapter by chapter, and the final answer will not be revealed until very close to the end of
the book. So, if you intend to read this book for enjoyment, you probably want to stop reading this website now.
Spoiler Alert: In the list and discussion below of mathematics in The Distant Dead, I will not be directly identifying the murderer, but I will be mentioning some of the clues.
• Adam Merkel, the murder victim, was an associate professor of mathematics in Reno before becoming a middle-school math teacher in rural Nevada. The book correctly indicates that this is an
unusual career trajectory. One of his university colleagues explains his failure to be promoted to full professor as being due to "the usual reasons. Not enough innovation. Not enough
• Merkel fits one of the stereotypes of mathematicians in fiction: he is timid, anti-social, nerdy. His middle-school students call him "Merkel the Turtle" and he tries to ignore it when they mock
him openly in class. His one great ability appears to be sharing his love of mathematics in a one-on-one situation. He is able to convey the beauty and especially the utility of math to students
in that situation. One of the students who likes to hear Merkel talk about math is Sal (who also signs up for Merkel's chess club out of pity). Another student who fell under his spell is his
ex-wife, who took his calculus class when she was in college to fulfill an "interdisciplinary requirement". Finally, there was a brilliant grad student named Lucas who found motivation in
Merkel's mathematical soliloquies.
• The reader is supposed to believe that Lucas was so brilliant that he was going to prove the Riemann Hypothesis. The only reason he didn't is that he was arrested for dealing drugs. The
university claimed the work he had done on the Riemann Hypothesis was their intellectual property and set some of their professors working on completing Lucas' proof while he was in prison. He
blames Merkel for the fact that he is not a mathematician, both because he claims that Merkel was trying to steal some of the glory (and perhaps the $1 million Clay Institute prize money) for
himself and because it was Merkel who turned him in to the police.
There is some interesting discussion of this idea that Merkel was trying to unfairly claim credit for the Riemann Hypothesis proof. Merkel would show up to talk with Lucas about the proof, and
Lucas saw this as selfish and unhelpful. Sal, despite his youth and inexperience, seems to realize that Lucas may not be giving Merkel credit for the role he played in helping with the proof. In
fact, it is not unusual for math researchers to have difficulty weighing the contributions that their collaborators are making and to end up with an impression that they themselves are doing all
of the important work while the others are just free-loading.
But, I have some serious problems with this aspect of the story.
For one thing, I can't get myself to believe that Lucas would have proved the Riemann Hypothesis if he hadn't been arrested. All of the characters in the book seem to really believe this without
irony or skepticism. In reality, there have been lots of smart people who have thought that they had an approach that could result in a proof but were not able to make it work. You can't be sure
that someone can prove it until they have. Either you have a proof or your don't, and Lucas didn't. Since the book gives me no other indication of Lucas' mathematical genius aside from this claim
that he could have proved the Riemann Hypothesis if only the law and the university hadn't gotten in his way, I am left with no reason to think that he was as smart as is claimed...but I don't
think that was the author's intention.
Moreover, this idea that he was somehow prevented from completing the proof by the university policy is ridiculous to me. I know that universities do sometimes claim that something is their
intellectual property, but I think that applies to patents or to data. I've never seen it applied to a mathematical proof and I personally don't think it would hold up in court if anyone tried.
(Note that mathematical ideas cannot be protected by either patent or copyright.) And, in any case, they could not have stopped him from working on it while in prison! Number theory research does
not depend on having any expensive equipment or data collected from experiments. If he was close to finishing a proof of the Riemann Hypothesis, I would think that a couple of years in prison
would be ideal and he'd have finished the proof by the time he got out. Then he would be telling people he had proved the Riemann Hypothesis instead of that he could have done so.
And, if we're supposed to think that the goal here was winning the fame, prestige and money of solving a Millennium Problem prize, then I don't think the university's policy on proprietary
research would matter. If the other professors put the finishing touches on Lucas' proof of the Riemann Hypothesis while he was in prison, then as far as the mathematical community and the Clay
Institute were concerned, I think the credit would still go (at least mostly) to Lucas!
• Lucas is yet another mathematical stereotype. He's a genius who is not only conceited and obnoxious but also immoral (or at best amoral). Not only was he a drug dealer while working on his PhD
thesis, he was also having an affair with his advisor's wife. And, he shows up in the small town where Merkel is working as a middle school teacher with the intention of making him suffer for
what he did to him. His plan involves using Sal, and he doesn't mind hurting a child to achieve this goal. It is for these reasons that I've tagged this work of mathematical fiction with the "
Evil Mathematicians" motif.
• The book mentions "Pi Day" and Merkel's habit of preparing pies for his students repeatedly, almost to the point that it becomes annoying. While baking pies to bring to school the next day,
Merkel defines π for Sal. He describes it in terms of a ratio made from circles which has the same value regardless of the size of the circle and then adds
(quoted from The Distant Dead)
That made it a "constant", which was a special kind of number that helped mathematicians solve equations.
(Note: Although the definition is correct, that last part is not quite right. All numbers are constants, not just the ones that are useful in solving equations.)
• The distinction between pure and applied math is a running theme in the book. It does a pretty good job of addressing that, despite not saying anything particularly novel or interesting. Merkel
is said to be mostly interested in the applications of math and the "math stories" he tells Sal mostly involve ways that math helps us understand natural phenomena or even why there are 60
minutes in an hour. In contrast, Lucas is interested in math for its own sake without caring about applications:
(quoted from The Distant Dead)
"No, it's not just a game." Lucas leaned forward, suddenly serious. "It's a beautiful hypothesis. Pure, wicked math, absolutely elegant. And it's true. Everybody knows it's true. But nobody's
been able to prove it's true, not even Riemann, even though they've been trying for a hundred and fifty years. How could you not want to prove it, if you could?"
Well, I'm not sure I agree that everyone "knows" the Riemann Hypothesis is true, but I understand his sentiment. (In fact, that's why I find it so hard to believe that Lucas wouldn't have proved
RH during his many months in prison if he'd really been able to do so. Read the quote in the box above and ask whether that person would have been deterred by the university's claim that his
prior research was proprietary.)
In summary, this is a very sad and dark murder mystery in which the victim is a former mathematics professor, one of the suspects is his former thesis student, and an incomplete proof of the Riemann
Hypothesis is one of the possible motives. | {"url":"https://kasmana.people.charleston.edu/MATHFICT/mfview.php?callnumber=mf1349","timestamp":"2024-11-11T04:21:13Z","content_type":"text/html","content_length":"19923","record_id":"<urn:uuid:1b6b66dc-209b-4e9a-8d9e-2693ee03d5ad>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00555.warc.gz"} |
Physics Questions
About me and why I created this physics website.
Physics Questions
On this page I put together a collection of physics questions to help you understand physics better. These questions are designed to challenge and inspire you to think about physics at a deeper
level. In addition to being challenging, these questions are fun and interesting. This page is a good resource for students who want good quality problems to practice with when studying for tests and
To see the questions click on the category you are interested in:
High School Physics Questions College and University Physics Questions Extra Challenging Physics Questions High School Physics Questions Problem # 1
Do heavier objects fall more slowly than lighter objects?
See solution Problem # 2
Why do objects float in liquids denser than themselves?
See solution Problem # 3
A particle is moving around in a circle and its position is given in polar coordinates as
, and
, where
is the radius of the circle, and
is in radians. From these equations derive the equation for centripetal acceleration.
See solution Problem # 4
How come in free fall you feel weightless even though gravity is pulling down on you? (ignore air resistance when answering this question).
See solution Problem # 5
What is the difference between centripetal acceleration and centrifugal force?
See solution Problem # 6
What is the difference between energy and power?
See solution Problem # 7
Two identical cars collide head on. Each car is traveling at 100 km/h. The impact force on each car is the same as hitting a solid wall at:
(a) 100 km/h
(b) 200 km/h
(c) 150 km/h
(d) 50 km/h
See solution Problem # 8
Why is it possible to drive a nail into a piece of wood with a hammer, but it is not possible to push a nail in by hand?
See solution Problem # 9
An archer pulls back 0.75 m on a bow which has a stiffness of 200 N/m. The arrow weighs 50 g. What is the velocity of the arrow immediately after release?
See solution Problem # 10
When a moving car encounters a patch of ice the brakes are applied. Why is it desirable to keep the wheels rolling on the ice without locking up?
See solution Solutions For High School Physics Questions Solution For Problem # 1
No. If an object is heavier the force of gravity is greater, but since it has greater mass the acceleration is the same, so it moves at the same speed (if we neglect air resistance). If we look at
Newton's second law,
F = ma
. The force of gravity is
, where
is the mass of the object and
is the acceleration due to gravity.
Equating, we have
mg = ma
. Therefore,
a = g
If there was no air resistance, a feather would fall at the same speed as an apple.
Solution For Problem # 2
If an object were completely immersed in a liquid denser than it, the resulting buoyant force would exceed the weight of the object. This is because the weight of the liquid displaced by the object
is greater than the weight of the object (since the liquid is denser). As a result, the object cannot remain completely submerged and it floats. The scientific name for this phenomenon is
Archimedes Principle
Solution For Problem # 3
Without loss of generality, we only need to look at the equation for the
-position, since we know that centripetal acceleration points towards the center of the circle. Thus, when
= 0, the second derivative of
with respect to time must be the centripetal acceleration.
The first derivative of
with respect to time
= -
The second derivative of
with respect to time
= -
In both of the above equations the chain rule of Calculus is used and by assumption
is a function of time. Therefore,
can be differentiated with respect to time.
Now, evaluate the second derivative at
= 0.
We have,
= -
The term d
is usually called the angular velocity, which is the rate of change of the angle
. It has units of radians/second.
For convenience we can set
≡ d
= -
This is the well-known form for the centripetal acceleration equation.
Solution For Problem # 4
The reason you feel weightless is because there is no force pushing against you, since you are not in contact with anything. Gravity is pulling equally on all the particles in your body. This creates
a sensation where no forces are acting on you and you feel weightless. It would be the same sensation as if you were floating in space.
Solution For Problem # 5
Centripetal acceleration is the acceleration an object experiences as it travels a certain velocity along an arc. The centripetal acceleration points towards the center of the arc.
Centrifugal force is the imaginary force an unrestrained object experiences as it moves around an arc. This force acts opposite to the direction of centripetal acceleration. For example, if a car
makes a sharp right turn the passengers would tend to slide in their seats away from the center of the turn, towards the left (if they are not wearing their seat belts, that is). The passengers would
feel as if they are experiencing a force. This is defined as centrifugal force.
Solution For Problem # 6
Power is the rate of energy being generated or consumed. For example, if an engine produces 1000 Watts of power (where Watts is Joules/second), then after an hour the total energy produced by the
engine is 1000 Joules/second × 3600 seconds = 3,600,000 Joules.
Solution For Problem # 7
The answer is (a).
Since the collision is head on and each car is identical and traveling at the same speed, the force of impact experienced by each car is equal and opposite. This means that the impact is the same as
hitting a solid wall at 100 km/h.
Solution For Problem # 8
When you swing a hammer you increase its kinetic energy, so that by the time it strikes the nail it imparts a large force which drives the nail into the wood.
The hammer is basically an energy reservoir to which you are adding energy during the course of the swing, and which is released all at once upon impact. This results in the impact force greatly
exceeding the maximum force you can exert by just pushing on the nail.
Solution For Problem # 9
This can be solved using an energy method.
We can solve this by equating the potential energy of the bow to the kinetic energy of the arrow.
The bow can be treated as a type of spring. The potential energy of a spring is:
, where
is the stiffness and
is the amount the spring is stretched, or compressed.
Therefore, the potential energy
of the bow is:
= (1/2)(200)(0.75)
= 56.25 J
The kinetic energy of a particle is:
, where
is the mass and
is the velocity.
The arrow can be treated as a particle since it is not rotating upon release.
Therefore, the kinetic energy
of the arrow is:
= (1/2)(0.05)
If we assume energy is conserved, then
Solving for the velocity of the arrow
we get
= 47.4 m/s
Solution For Problem # 10
Static friction is greater than kinetic friction.
Static friction exists if the wheels keep rolling on the ice without locking up, resulting in maximum braking force. However, if the wheels lock up then kinetic friction takes over since there is
relative slipping between wheel and ice. This reduces the braking force and the car takes longer to stop.
Anti-lock braking systems (ABS) on a vehicle prevent the wheels from locking up when the brakes are applied, thus minimizing the amount of time it takes for the vehicle to reach a complete stop.
Also, by preventing the wheels from locking up you have greater control of the vehicle.
College and University Physics Questions (mostly first year level) Density Problems Energy Problems Force Problems Friction Problems Inclined Plane Problems Kinematics Problems Kinetic Energy
Problems Mechanics Problems Momentum Problems Pulley Problems Statics Problems Thermodynamics Problems Torque Problems Extra Challenging Physics Questions
The 20 physics questions given below are both interesting and highly challenging. You will likely have to take some time to work through them. These questions go beyond the typical problems you can
expect to find in a physics textbook. Some of these physics questions make use of different concepts, so (for the most part) there is no single formula or set of equations that you can use to solve
them. These questions make use of concepts taught at the high school and college level (mostly first year).
It is recommended that you persist through these physics questions, even if you get stuck. It's not a race, so you can work through them at your own pace. The result is that you will be rewarded with
a greater understanding of physics.
Problem # 1
A crank drive mechanism is illustrated below. A uniform linkage
of length
connects a flywheel of radius
(rotating about fixed point
) to a piston at
that slides back and forth in a hollow shaft. A variable torque
is applied to the flywheel such that it rotates at a constant angular velocity. Show that for one full rotation of the flywheel, energy is conserved for the entire system; consisting of flywheel,
linkage, and piston (assuming no friction).
Note that gravity
is acting downwards, as shown.
Even though energy is conserved for the system, why is it a good idea to make the components of the drive mechanism as light as possible (with the exception of the flywheel)?
Problem # 2
An engine uses compression springs to open and close valves, using cams. Given a spring stiffness of 30,000 N/m, and a spring mass of 0.08 kg, what is the maximum engine speed to avoid “floating the
During the engine cycle the spring is compressed between 0.5 cm (valve fully closed) and 1.5 cm (valve fully open). Assume the camshaft rotates at the same speed as the engine.
Floating the valves occurs when the engine speed is high enough so that the spring begins to lose contact with the cam when the valve closes. In other words, the spring doesn’t extend quickly enough
to maintain contact with the cam, when the valve closes.
For simplicity, you may assume that Hooke’s Law applies to the spring, where the force acting on the spring is proportional to its amount of compression (regardless of dynamic effects).
You may ignore gravity in the calculations.
Problem # 3
An object is traveling in a straight line. Its acceleration is given by
is a constant,
is a real number, and
is time.
Find the general equations for the position and velocity of the object as a function of time.
Problem # 4
In archery, when an arrow is released it can oscillate during flight. If we know the location of the center of mass of the arrow (
) and the shape of the arrow at an instant as it oscillates (shown below), we can determine the location of the nodes. The nodes are the “stationary” points on the arrow as it oscillates.
Using a geometric argument (no equations), determine the location of the nodes.
Assume that the arrow oscillates in the horizontal plane, so that no external forces act on the arrow in the plane of oscillation.
Problem # 5
A gyroscope wheel is spinning at a constant angular velocity
while precessing about a vertical axis at a constant angular velocity
. The distance from the pivot to the center of the front face of the spinning gyroscope wheel is
, and the radius of the wheel is
. The rod connecting the pivot to the wheel makes a constant angle
with the vertical.
Determine the acceleration components normal to the wheel, at points A, B, C, D labeled as shown.
Problem # 6
When a vehicle makes a turn, the two front wheels trace out two arcs as shown in the figure below. The wheel facing towards the inside of the turn has a steering angle that is greater than that of
the outer wheel. This is necessary to ensure that both front wheels smoothly trace out two arcs, which have the same center, otherwise the front wheels will skid on the ground during the turn.
During a turn, do the rear wheels necessarily trace out the same arcs as the front wheels? Based on your answer, what are the implications for making a turn close to the curb?
Problem # 7
A horizontal turntable at an industrial plant is continuously fed parts into a slot (shown on the left). It then drops these parts into a basket (shown on the right). The turntable rotates 180°
between these two stages. The turntable briefly stops at each 1/8
of a turn in order to receive a new part into the slot on the left.
If the rotational speed of the turntable is
radians/second, and the outer radius of the turntable is
, what must be the inner radius
so that the parts fall out of the slot and into the basket, as shown?
• The angular speed
of the turntable can be treated as constant and continuous; which means you can ignore the brief stops the turntable makes at each 1/8
of a turn.
• The location of the basket is 180° from the feed location.
• The slots are very well lubricated so that there is no friction between the slot and part.
• The parts can be treated as particles, which means you can ignore their dimensions in the calculation.
• The slots are aligned with the radial direction of the turntable.
Problem # 8
A flywheel for a single piston engine rotates at an average speed of 1500 RPM. During half a rotation the flywheel has to absorb 1000 J of energy. If the maximum permissible speed fluctuation is ± 60
RPM, what is the minimum rotational inertia of the flywheel? Assume there is no friction.
Problem # 9
An aluminum extrusion process is simulated numerically with a computer. In this process, a punch pushes an aluminum billet of diameter
through a die of smaller diameter
. In the computer simulation, what is the maximum punch velocity
so that the net dynamic force (predicted by the simulation) acting on the aluminum during extrusion is at most 5% of the force due to deformation of the aluminum? Evaluate for a specific case where
= 0.10 m,
= 0.02 m, and the density of aluminum is
= 2700 kg/m
The force due to deformation of the aluminum during extrusion is given by
The extrusion of the aluminum through the die is analogous to fluid flowing through a pipe which transitions from a larger diameter to a smaller diameter (e.g. water flowing through a fireman’s
hose). The net dynamic force acting on the fluid is the net force required to accelerate the fluid, which occurs when the velocity of the fluid increases as it flows from the larger diameter section
to the smaller diameter section (due to conservation of mass).
Problem # 10
A child on a horizontal merry-go-round gives an initial velocity
to a ball. Find the initial direction and velocity
of the ball relative to the merry-go-round so that, relative to the child, the ball goes around in a perfect circle as he’s sitting on the merry-go-round. Assume there is no friction between
merry-go-round and ball.
The merry-go-round is rotating at a constant angular velocity of
radians/second, and the ball is released at a radius
from the center of the merry-go-round.
Problem # 11
A heavy pump casing with a mass
is to be lifted off the ground using a crane. For simplicity, the motion is assumed to be two-dimensional, and the pump casing is represented by a rectangle having side dimensions
(see figure). A cable of length
is attached to the crane (at point
) and the pump casing (at point
). The crane pulls up vertically on the cable with a constant velocity
The center of mass
of the pump casing is assumed to lie in the center of the rectangle. It is located at a distance
from point
. The right side of the pump casing is located at a horizontal distance
from the vertical line passing though point
Find the maximum cable tension during the lift, which includes the part of the lift before the pump casing loses contact with the ground, and after the pump casing loses contact with the ground (lift
off). In this stage the pump casing swings back and forth.
Evaluate for a specific case where:
= 0.4 m
= 0.6 m
= 0.2 m
= 3 m
= 200 kg
= 9 kg-m
(rotational inertia of pump casing about
• The friction between the pump casing and ground is high enough so that the pump casing does not slide along the ground (towards the right), before lift off occurs.
• Before lift off occurs, dynamic effects are negligible.
• The velocity
is fast enough so that the bottom of the pump casing swings clear of the ground after lift off occurs.
• For purposes of approximating the cable tension, you can model the system as a regular pendulum during swinging (you can ignore double pendulum effects).
• The mass of the cable can be neglected.
Problem # 12
A linkage arrangement is shown below. The pin joints
are attached to a stationary base and are separated by a distance
. The linkages of identical color have the same length. All linkages are pin jointed and allow for rotation. Determine the path traced by the end point
as the blue linkage of length
rotates back and forth.
Why is this result interesting?
Problem # 13
A conveyor belt carrying aggregate is illustrated in the figure below. A motor turns the top roller at a constant speed, and the remaining rollers are allowed to spin freely. The belt is inclined at
an angle
. To keep the belt in tension a weight of mass
is suspended from the belt, as shown.
Find the point of maximum tension in the belt. You don’t have to calculate it, just find the location and give a reason for it.
Problem # 14
A quality test has determined that a pump impeller is too heavy on one side by an amount equal to 0.0045 kg-m. To correct this imbalance it is recommended to cut out a groove around the outer
circumference of the impeller, using a milling machine, on the same side as the imbalance. This will remove material with the intent of correcting the imbalance. The dimension of the groove is 1 cm
wide and 1 cm deep. The groove will be symmetric with respect to the heavy spot. How far around the outer circumference of the impeller should the groove be? Specify the answer in terms of
. Hint: Treat the groove as a thin ring of material.
The outer radius of the impeller, at the location of the groove, is 15 cm.
The impeller material is steel, with a density of
= 7900 kg/m
Problem # 15
As part of a quality check, an axisymmetric container is placed over a very well lubricated fixed mandrel, as shown below. The container is then given an initial pure rotation
, with no initial translational motion. What do you expect to see if the center of mass of the container is offset from the geometric center
of the container?
Problem # 16
A stream of falling material hits the plate of an impact weigher and the horizontal force sensor allows the mass flow rate to be calculated from this. If the speed of the material just before it
strikes the plate is equal to the speed of the material just after it strikes the plate, determine an equation for the mass flow rate of the material, based on the horizontal force readout on the
sensor. Ignore friction with the plate.
Hint: This can be treated as a fluid flow problem.
Problem # 17
The SunCatcher is a Stirling engine that is powered by solar energy. It uses large parabolic mirrors to focus sunlight onto a central receiver, which powers a Stirling engine. In the parabolic mirror
you can see the reflection of the landscape. Why is the reflection upside down?
Source: http://www.stirlingenergy.com
Problem # 18
On a cold, dry winter day your glasses fog up when you go indoors after being outside for a while. Why is that?
And if you go back outside with your glasses still fogged up, they quickly clear up. Why is that?
Problem # 19
In an astronaut training exercise, an airplane at high altitude travels along a circular arc in order to simulate weightlessness for its passengers. Explain how this is possible.
Problem # 20
A rope is wrapped around a pole of radius
= 3 cm. If the tension on one end of the rope is
= 1000 N, and the coefficient of static friction between the rope and pole is
= 0.2, what is the minimum number of times the rope must be wrapped around the pole so that it doesn’t slip off?
Assume that the minimum number of times the rope must be wrapped around the pole corresponds to a tension of 1 N on the other end of the rope.
I created solutions for the 20 physics questions given above. The solutions are given in an ebook, in PDF format. They are available through
this link
Return to Real World Physics Problems home page | {"url":"https://www.real-world-physics-problems.com/physics-questions.html","timestamp":"2024-11-04T04:33:35Z","content_type":"text/html","content_length":"63251","record_id":"<urn:uuid:a9dcfb55-d37e-406e-93c7-6754c9a6f073>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00825.warc.gz"} |
KFUPM Bulletin
Basic concepts and principles of mechanics; equilibrium of particles in two dimensions; definition of moment and couple; reduction of systems forces; equilibrium of rigid bodies in two dimensions;
analysis of truss-type structures and internal forces; geometric properties of cross-section area; centroid and moments of inertia; shear and bending moment diagrams in beams; stress, Stress-strain
relationships; stress and deformation of axially loaded members; stress-concentration; thermal stresses; pressure-vessels; torsion-stress and deformation; elastic bending and shear stresses in beams;
compound stresses; stress transformation. Note: This course is for non-CE students only Not to be taken for credit with CE 201 or CE 203
Pre-requisites: PHYS101 | {"url":"https://bulletin.kfupm.edu.sa/course-detail?course_code=CE202","timestamp":"2024-11-08T01:07:01Z","content_type":"text/html","content_length":"11215","record_id":"<urn:uuid:ba153a02-999c-4d22-bd76-49345b94859b>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00783.warc.gz"} |
Odd Even Linked List
Given a singly linked list, group all odd nodes together followed by the even nodes. Please note here we are talking about the node number and not the value in the nodes.
You should try to do it in place. The program should run in O(1) space complexity and O(nodes) time complexity.
Given 1->2->3->4->5->NULL,
return 1->3->5->2->4->NULL.
The relative order inside both the even and odd groups should remain as it was in the input.
The first node is considered odd, the second node even and so on …
Result: Accepted Time: 4 ms
Here should be some explanations.
* Definition for singly-linked list.
* struct ListNode {
* int val;
* struct ListNode *next;
* };
struct ListNode* oddEvenList(struct ListNode* head) {
struct ListNode Odd,Even,*OLast,*ELast;
OLast = &Odd;
ELast = &Even;
Odd.next = NULL;
Even.next = NULL;
int i = 1;
OLast->next = head;
OLast = head;
ELast->next = head;
ELast = head;
i ^=1;
head = head->next;
OLast->next = Even.next;
ELast->next = NULL;
return Odd.next;
Complexity Analytics
• Time Complexity: $O(n)$
• Space Complexity: $O(1)$ | {"url":"https://imkean.com/leetcode/328-odd-even-linked-list/","timestamp":"2024-11-08T18:25:19Z","content_type":"text/html","content_length":"16729","record_id":"<urn:uuid:21109824-800f-4a2d-91c4-71ce76683ab4>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00369.warc.gz"} |
Thanks, it's working better then the original?
I don't know if it is better. It is more my style. It has a bit more control and a bit less atk power: +1 Rewind, +1 Oty, +1 Bloodsucker, +1 Pulvi, +1Deflag, +1 crusader, -1 steal, -1 graboid, -1
charger, -1 BH, -1 archangel, -1 hourglass. I really think the bloodsucker is a must.
Here an example of it taking a EQ/RW grabodeck down (the double effect poison/fire buckler killed all graboids)
And here a deadly immorush also controlled with a combination of maxwell (yet dead) and bloodsucker: | {"url":"https://elementscommunity.org/forum/the-arena/most-sensational-arena-matches/msg404311/","timestamp":"2024-11-09T19:13:05Z","content_type":"application/xhtml+xml","content_length":"144921","record_id":"<urn:uuid:661db333-f795-4680-b37d-4121fa399e4d>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00514.warc.gz"} |
Central Angle of Annulus Sector given Outer Arc Length and Breadth Calculator | Calculate Central Angle of Annulus Sector given Outer Arc Length and Breadth
What is an Annulus Sector?
An Annulus Sector, also known as the circular ring sector, is a cut piece from an Annulus that is joined by two straight lines from its center.
What is Annulus?
In mathematics, an Annulus (plural Annuli or Annuluses) is the region between two concentric circles. Informally, it is shaped like a ring or a hardware washer. The word "annulus" is borrowed from
the Latin word anulus or annulus meaning 'little ring'. The adjectival form is annular (as in annular eclipse).The area of an Annulus is the difference in the areas of the larger circle of radius R
and the smaller one of radius r.
How to Calculate Central Angle of Annulus Sector given Outer Arc Length and Breadth?
Central Angle of Annulus Sector given Outer Arc Length and Breadth calculator uses Central Angle of Annulus Sector = Outer Arc Length of Annulus Sector/(Inner Circle Radius of Annulus+Breadth of
Annulus) to calculate the Central Angle of Annulus Sector, The Central Angle of Annulus Sector given Outer Arc Length and Breadth formula is defined as the angle whose apex (vertex) is the center of
the concentric circles of Annulus and whose legs (sides) are radii intersecting the circles in four distinct points, calculated using outer arc length and breadth of Annulus Sector. Central Angle of
Annulus Sector is denoted by ∠[Central(Sector)] symbol.
How to calculate Central Angle of Annulus Sector given Outer Arc Length and Breadth using this online calculator? To use this online calculator for Central Angle of Annulus Sector given Outer Arc
Length and Breadth, enter Outer Arc Length of Annulus Sector (l[Outer Arc(Sector)]), Inner Circle Radius of Annulus (r[Inner]) & Breadth of Annulus (b) and hit the calculate button. Here is how the
Central Angle of Annulus Sector given Outer Arc Length and Breadth calculation can be explained with given input values -> 1641.403 = 5/(6+4). | {"url":"https://www.calculatoratoz.com/en/central-angle-of-annulus-sector-given-outer-arc-length-and-breadth-calculator/Calc-38684","timestamp":"2024-11-13T15:56:18Z","content_type":"application/xhtml+xml","content_length":"132861","record_id":"<urn:uuid:f24f7ac1-8770-4cb0-9c41-a89d6af644ec>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00882.warc.gz"} |
Senator adds “Every Sperm is Sacred” amendment onto Oklahoma personhood bill
(40,432 posts) Fri Feb 10, 2012, 12:25 PM Feb 2012
Senator adds “Every Sperm is Sacred” amendment onto Oklahoma personhood bill
Senator adds Every Sperm is Sacred amendment onto Oklahoma personhood bill
By Vanessa | Published: February 9, 2012
Amazing. After Oklahoma conservatives introduced a personhood bill to the state Senate on Monday, Sen. Constance Johnson decided to follow in Virginia Senator Janet Howell s footsteps and attach an
amendment in protest, which would add this language to the bill:
However, any action in which a man ejaculates or otherwise deposits semen anywhere but in a woman s vagina shall be interpreted and construed as an action against an unborn child.
Jezebel adds that another pro-choice senator added an amendment:
Another pro-choice legislator, Democrat Jim Wilson, attempted to add an amendment to the bill that would require the father of the child to be financially responsible for the woman s health care,
housing, transportation, and nourishment while she was pregnant.
17 replies
1. The Onion, ....right ?
Fri Feb 10, 2012, 12:27 PM
Feb 2012
Fri Feb 10, 2012, 12:29 PM
Feb 2012
4. good god..., holy shit, and I'll be damned. Incredible.
Fri Feb 10, 2012, 12:35 PM
Feb 2012
Fri Feb 10, 2012, 12:38 PM
Feb 2012
15. That's what I thought when I saw this thread!!
Fri Feb 10, 2012, 01:46 PM
Feb 2012
Fri Feb 10, 2012, 02:00 PM
Feb 2012
to find use of that term which preceeded Monty Python so we might as well call it there's by right.
3. Let me get this right. My daily act that keeps my prostate healthy and mind sane makes me
Fri Feb 10, 2012, 12:32 PM
Feb 2012
7. there must be something in the water in oklahoma
Fri Feb 10, 2012, 12:42 PM
Feb 2012
anyone who'd vote for the likes of inhofe (a guy who I think is nuttier than a fruit cake) and thinking the muslins are attempting to push sharia law on us-by god, gotta have an anti-sharia law. And
didn't they refuse the mortgage settlement money to help their fellow homeless oklahomans? Now, I'd see doing it, if they were going after the banks for criminal prosecution; but being as how they
love them some corporations and banksters, I don't think that's the case.
9. these amendments were attached in protest nt
Fri Feb 10, 2012, 12:51 PM
Feb 2012
12. Only if you think that monthly act that drives me bonkers, makes me grumpy,
Fri Feb 10, 2012, 01:20 PM
Feb 2012
causes me to lose my temper for no apparent reason and make those around me miserable contains a personhood in all that mess.
6. That could backfire: Some nutjobs may support it. n/t
Fri Feb 10, 2012, 12:39 PM
Feb 2012
8. I wish about 250 men would surround her house
Fri Feb 10, 2012, 12:48 PM
Feb 2012
and have the world's largest circle jerk. I can't even comprehend this insanity. So if I blow a load into my wife's back versus her vagina i could be violating the law in Oklahoma?
10. It's a pro-choice ammendment attachment.
Fri Feb 10, 2012, 01:05 PM
Feb 2012
Taking the 'personhood' bill to it's extreme.
13. It's meant to help defeat the anti-choice bill
Fri Feb 10, 2012, 01:21 PM
Feb 2012
Since men have no problem controling women's reproductive organs then why shouldn't we have the right to control men's reproductive organs
Fri Feb 10, 2012, 01:39 PM
Feb 2012
I read the other articles about this and realized that SHE was the one making sense and being satirical.
Fri Feb 10, 2012, 07:03 PM
Feb 2012
11. I think this bill does not go far enough!
Fri Feb 10, 2012, 01:15 PM
Feb 2012
My body makes sperm and stores them for future use. If they are not used, they die and are reabsorbed. So by this same logic every woman who refuses to have sex with me (BTW this group comprises BY
FAR the majority of women I know, damn it!) is in effect MURDERING those millions of unborn sperm cells that die a lonely death in my seminal vesicles or wherever. Therefore, declining to have sex
with me constitutes genocide! All of the above is sheer nonsense of course. It's not that much more ridiculous than the proposal put forth by these state senators however. | {"url":"https://upload.democraticunderground.com/1002293436","timestamp":"2024-11-13T18:22:30Z","content_type":"text/html","content_length":"101790","record_id":"<urn:uuid:57dd7fdc-958b-4594-8917-5369339d799b>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00517.warc.gz"} |
Arguments specify values used in the evaluation of a function. Arguments are specified in the function-identifier. These arguments can be specified as identifiers, as arithmetic expressions, or as
literals. The definition of a function specifies the number of arguments required, which can be zero, one, or more. For some functions, the number of arguments which can be specified is variable. The
order in which arguments are specified in a function-identifier determines the interpretation given to each value in arriving at the function value. Arguments may be required to have a certain class
or a subset of a certain class. The types of argument are:
1. Numeric. An arithmetic expression must be specified. The value of the arithmetic expression, including operational sign, is used in determining the value of the function.
2. Alphabetic. An elementary data item of the class alphabetic or a nonnumeric literal containing only alphabetic characters must be specified. The size associated with the argument can be used in
determining the value of the function.
3. Alphanumeric. A data item of the class alphabetic or alphanumeric or a nonnumeric literal must be specified. The size associated with the argument can be used in determining the value of the
4. National. An elementary data item of class national or a national literal must be specified. The size associated with the argument can be used in determining the value of the function.
5. Integer. An arithmetic expression which will always result in an integer value must be specified. The value of the arithmetic expression, including operational sign, is used in determining the
value of the function.
The rules for a function can place constraints on the permissible values for arguments, in order to accurately determine the function's value. If, at the time a function is referenced, the arguments
specified for that reference do not have values within the permissible range, the returned value for the function is undefined.
When the definition of a function permits an argument to be repeated a variable number of times, a table can be referenced by specifying the data-name and any qualifiers that identify the table,
followed immediately by subscripting where one or more of the subscripts is the word ALL.
When ALL is specified as a subscript, the effect is as if each table element associated with that subscript position were specified. The order of the implicit specification of each occurrence is from
left to right, with the first (or leftmost) specification being the identifier with each subscript specified by the word ALL replaced by one, the next specification being the same identifier with the
rightmost subscript specified by the word ALL incremented by one.
This process continues with the rightmost ALL subscript being incremented by one for each implicit specification until the rightmost ALL subscript has been incremented through its range of values. If
any additional ALL subscripts exist, the subscript immediately to the left of the rightmost ALL subscript is incremented by one, the rightmost ALL subscript is reset to one and the process of varying
the rightmost ALL subscript is repeated. The ALL subscript to the left of the rightmost ALL subscript is incremented through its range of values. For each additional ALL subscript this process is
repeated in turn until the leftmost ALL subscript has been incremented by one through its range of values.
If the ALL subscript is associated with an OCCURS DEPENDING ON clause, the range of values is determined by the object of the OCCURS DEPENDING ON clause. The evaluation of an ALL subscript must
result in at least one argument, otherwise the result of the reference to the function-identifier is undefined.
01 Test-Fields.
10 OT-Elem PIC 9(02).
10 Arr.
15 Ind occurs 5 times PIC 9(02).
compute OT-Elem = function sum (IND(ALL)).
is equivalent to specifying
compute OT-Elem = function sum (IND(1), IND(2), IND(3),
IND(4), IND(5)).
The ALL subscript can also be used with a table that has a variable number of elements as in the following example:
01 Test-Group.
03 OT-Elem pic 9(15) binary.
03 table-length pic s9(9) binary.
03 array.
05 Ind pic 9(2) occurs 1 to 200 times depending on
table length
move 100 to table-length.
compute OT-Elem = function sum (Ind(ALL))
These statements will sum the first 100 elements of the table. At the time the function is referenced, the number of elements specified by table-length determines that number of elements that are | {"url":"https://www.microfocus.com/documentation/visual-cobol/30pu12/VC-DevHub/HRLHLHPDF791.html","timestamp":"2024-11-11T00:52:55Z","content_type":"text/html","content_length":"15614","record_id":"<urn:uuid:78f5d1d3-618e-4873-b993-e50350a6d13d>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00158.warc.gz"} |
Axiom of union - Wikipedia Republished // WIKI 2
In axiomaticsettheory, the axiom of union is one of the axioms of Zermelo–Fraenkelsettheory. This axiom was introduced by ErnstZermelo.^[1]
Informally, the axiom states that for each set x there is a set y whose elements are precisely the elements of the elements of x.
YouTube Encyclopedic
• (Axiomatic Set Theory, 4) Pairing and Union Axioms
• 1.11.11 Set Theory Axioms: Video [Optional]
• Sets: Union and Intersection
Formal statement
In the formallanguage of the Zermelo–Fraenkel axioms, the axiom reads:
${\displaystyle \forall A\,\exists B\,\forall c\,(c\in B\iff \exists D\,(c\in D\land D\in A)\,)}$
or in words:
Givenany set A, thereis a set B such that, for any element c, c is a member of B ifandonlyif there is a set D such that c is a member of D and D is a member of A.
or, more simply:
For any set ${\displaystyle A}$, there is a set ${\displaystyle \bigcup A\ }$ which consists of just the elements of the elements of that set ${\displaystyle A}$.
Relation to Pairing
The axiom of union allows one to unpack a set of sets and thus create a flatter set. Together with the axiomofpairing, this implies that for any two sets, there is a set (called their union) that
contains exactly the elements of the two sets.
Relation to Replacement
The axiom of replacement allows one to form many unions, such as the union of two sets.
However, in its full generality, the axiom of union is independent from the rest of the ZFC-axioms: Replacement does not prove the existence of the union of a set of sets if the result contains an
unbounded number of cardinalities.
Together with the axiomschemaofreplacement, the axiom of union implies that one can form the union of a family of sets indexed by a set.
Relation to Separation
In the context of set theories which include the axiom of separation, the axiom of union is sometimes stated in a weaker form which only produces a superset of the union of a set. For example, Kunen^
[2] states the axiom as
${\displaystyle \forall {\mathcal {F}}\,\exists A\,\forall Y\,\forall x[(x\in Y\land Y\in {\mathcal {F}})\Rightarrow x\in A].}$
which is equivalent to
${\displaystyle \forall {\mathcal {F}}\,\exists A\forall x[[\exists Y(x\in Y\land Y\in {\mathcal {F}})]\Rightarrow x\in A].}$
Compared to the axiom stated at the top of this section, this variation asserts only one direction of the implication, rather than both directions.
Relation to Intersection
There is no corresponding axiom of intersection. If ${\displaystyle A}$ is a nonempty set containing ${\displaystyle E}$, it is possible to form the intersection ${\displaystyle \bigcap A}$ using the
axiomschemaofspecification as
${\displaystyle \bigcap A=\{c\in E:\forall D(D\in A\Rightarrow c\in D)\}}$,
so no separate axiom of intersection is necessary. (If A is the emptyset, then trying to form the intersection of A as
{c: for all D in A, c is in D}
is not permitted by the axioms. Moreover, if such a set existed, then it would contain every set in the "universe", but the notion of a universalset is antithetical to Zermelo–Fraenkel set theory.)
1. ^ Ernst Zermelo, 1908, "Untersuchungen über die Grundlagen der Mengenlehre I", Mathematische Annalen 65(2), pp. 261–281.
English translation: JeanvanHeijenoort, 1967, From Frege to Gödel: A Source Book in Mathematical Logic, pp. 199–215 ISBN 978-0-674-32449-7
2. ^ Kunen,Kenneth, 1980. Set Theory: An Introduction to Independence Proofs. Elsevier. ISBN 0-444-86839-9.
Further reading
• PaulHalmos, Naive set theory. Princeton, NJ: D. Van Nostrand Company, 1960. Reprinted by Springer-Verlag, New York, 1974. ISBN 0-387-90092-6 (Springer-Verlag edition).
• Jech,Thomas, 2003. Set Theory: The Third Millennium Edition, Revised and Expanded. Springer. ISBN 3-540-44085-2.
This page was last edited on 8 November 2023, at 19:50 | {"url":"https://wiki2.org/en/Axiom_of_union","timestamp":"2024-11-05T20:25:27Z","content_type":"application/xhtml+xml","content_length":"75500","record_id":"<urn:uuid:9bc54ce4-b5b5-42e5-8539-595a7455bc73>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00573.warc.gz"} |
Quasi-Periodic Pulsation in the decay phase of a flare on AB Dor
Oscillation periods versus damping times of monotonously-decaying quasi-harmonic quasi-periodic pulsation (QPP) occurring the decay phases of solar and stellar flares in the X-ray emission. Blue and
green color filled circles represent data for solar and stellar flares, respectively, as studied by Cho et al. (2016). Red filled circles represent data points for AB Dor QPPs studied here. Solid
color lines are the logarithmic linear regression fits to the data points, corresponding to the respective sets of flares. The grey shaded area around the solid lines of different colors is the 1
sigma confidence band for respective regression fits. The best fitting parameters are given by the text of the respective colors. The black solid line represents a linear fit of all the data points
combined. Histogram density plots for damping time and period are shown along the right and top borders of the plot. Dash colored lines on the histograms correspond to mean values of the damping time
and periods, respectively. Corresponding values of the mean damping time and mean period are printed near the dashed lines.
[Image Credit: Kalugodu et al. (2022)] | {"url":"https://www.subhajeet-karmakar.com/single-post/quasi-periodic-pulsation-in-the-decay-phase-of-a-flare-on-ab-dor","timestamp":"2024-11-08T01:50:46Z","content_type":"text/html","content_length":"1050489","record_id":"<urn:uuid:e159626e-746e-4627-a915-82c32c34ed3a>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00426.warc.gz"} |
Comparing and then multiplying several cells
My first post, please forgive me if this exact question has been asked 100 times before...
Here goes...
In my first Excel file I have the following information;
In my second Excel File I have the following information;
Amount of Volvo cars: 10
Amount of SAAB cars: 5
Amount of JEEP cars: 3
Amount of Mercedes cars: 2
In my third, and final, Excel file I want Excel to do the following;
Total sum of item A : (2*2) = 4
Total sum of item B : (4*10+3*8) = 64
Total sum of item C : (3*5) = 15
It would be great if it was possible for Excel to check one column for the item name ("A", "B") and then do the above mentioned calculation only when Excel finds this item name. This so that I don't
have to manually adjust each formula and enter "A", "B" etc. for each and every Item I want the sum of.
I hope my question makes sense and many thanks in advance.
Excel Facts
Using Function Arguments with nested formulas
If writing INDEX in Func. Arguments, type MATCH(. Use the mouse to click inside MATCH in the formula bar. Dialog switches to MATCH.
Hello and welcome to the board,
When you say in one Excel file, do you mean the data is in different files, or is it just on different sheets?
I think this will be a SUMPRODUCT formula.
Different files (*.xls), altough I guess I could smack them all together into one big Excel file, but I'd prefer to not, unless I have to...
I agree with you, it should be some kind of SUMPRODUCT, but I 've neen trying that all day without getting it right...
EDIT: Spelling
THis works if the data is on one sheet
in G1 enter this formula and copy down
A1:A4 = cars
B1:B4 = item
C1:C4 = Amount
D1:D4 = Amount of cars
F1:F3 = A, B and C
have a play with the formula if data in different files.
Thanks! I'll give it a shot and post my success/failure here...
Have come up with this
change file names as required.
Book 8 is the volvo, B, 4 etc
Book 7 is the volvo, 10 etc
Thanks, I figured that bit out myself. I had to rearrange my figures a bit, but after much trial and error I finally nailed it!
Many thanks! I wouldn't have made it without you! | {"url":"https://www.mrexcel.com/board/threads/comparing-and-then-multiplying-several-cells.171595/","timestamp":"2024-11-11T09:47:02Z","content_type":"text/html","content_length":"122569","record_id":"<urn:uuid:f245f33a-9106-41af-a133-342883512eac>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00281.warc.gz"} |
Math Tuition Classes - Vip Maths Tuition Classes -
Math Tuition Classes
Math’s Tuition Classes
Something about math
Mathematics is often considered the universal language of the universe, providing a framework for understanding and describing the world around us. Archimedes is called the father of mathematics. It
is believed that the earliest achievements of mathematical texts came from Mesopotamia and Egypt. In the blog of Math tuition classes in Pathankot, we will learn about some things related to
Mathematics and also why the subject of Mathematics is important.
Few lines about Magician of math
National tuition day is celebrated every year on 22 December in India. The great math Srinivasa Ramanujan was born on this day, the sad thing is that he died at the age of 33, but even at such a
young age he gave about 3500 math formulas to the world, this is a great achievement. It is said that knowing more about this great man born in such a small family has remained an inexplicable puzzle
for the scientists.
Why math’s tuition is necessary?
Taking math’s tutoring helps students prepare better for their exams, which further improves their grades. Many times the students do not understand what is taught in the school properly or some
questions are left out and due to the fear of the teacher, Students are not able to ask the answer of that question. In this the parents of the student advise the student to join the math tuition
classes. Students can ask all their questions multiple times without any fear and get all their doubts cleared. Students we also provide math tuition classes in Pathankot simple and pure language.
Easily understood by every student and if not understood, then the student can ask many times. You can also enroll in VIP Studies, best institute for math tuition in Pathankot, and brush up your
math’s skills.
6 tips to Understand Math
1. Try to understand math’s by giving time.
2. Practice math’s problems every day.
3. Clear all your doubts, for this you can enroll in a good math tuition classes in pathankot.
4. Remember math’s is not a rote subject try to understand it.
5. There are different formulas in mathematics, remember them in your own way.
6. Revise math as many times as possible.
Providing best math tuition classes in Pathankot
VIP Study Math tuition classes in Pathankot.
Best Institute for Maths tuition Classes in Pathankot, Here you get to play maths related games, which increases your maths technique, you get a chance to practice maths again and again, you get to
practice maths in your life. Here the teacher explains the sutras and teaches the answers to the questions in a simple way to all the students in simple language.
Leave a Comment | {"url":"https://vipstudies.in/best-math-tuition-classes-in-pathankot-easy-to-learn-math-classes-all-formula-of-math-related-tips-for-math-learning-formulas/","timestamp":"2024-11-11T20:32:39Z","content_type":"text/html","content_length":"139194","record_id":"<urn:uuid:a573c0c0-7bb0-49ea-8b81-59f49abc1e30>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00818.warc.gz"} |
MU Computer Graphics & Virtual Reality - May 2016 Exam Question Paper | Stupidsid
MU Information Technology (Semester 5)
Computer Graphics & Virtual Reality
May 2016
Total marks: --
Total time: --
(1) Assume appropriate data and state your reasons
(2) Marks are given to the right of every question
(3) Draw neat diagrams wherever necessary
1(a) Differentiate between Raster scan display and Random scan display.
5 M
1(b) Prove that two successive rotation transformations are additive
5 M
1(c) Show that the transformation matrix for a reflection about a line y=x is equivalent to reflection to x-axis followed by counter clockwise rotation of 90°
5 M
1(d) Explain 3D trackers & enumerate some important trackers characteristics
5 M
2(a) Specify highlights and drawbacks of Bezier curve. Construct the bezier curve of order three with control P1(0,0), P2(1,3), P3(4,2) and P4(2,1). Generate at least five points on the curve.
10 M
2(b) Write DDA Line drawing Algorithm Compare DDA with Bresenham's Line drawing Algorithm. Calculate the pixel co-ordinates of line Abusing DDA Algorithm 1 where A=(0,0) and B=(4,6).
10 M
3(a) Let ABCD be the rectangular window with A (20,20), B(90,20), C(90,70) and D(20,70). Find region codes for endpoints and use Cohen Sutherland algorithm to clip hte lines
P1 P2 with P1(10,30), P2(80,90)
10 M
3(b) With respect to 3D transformation, describe the steps to be-carried out when an object isto be rotated about an arbitarary axis. Specify all the required matrices. State your assumptions
10 M
4(a) Explain Flood Fill Algorithm for 4 connected and 8 connected. What are its advantages over Boundary Fill Algorithm.
10 M
4(b) Explain an algorithm which uses parametric equation of line clipping. Using same algorithm find the line segment A(10, 10) and B(70, 40) after it is clipped against the window of two vertices
(20, 20) and (40,50).
10 M
5(a) Consider a triangle ABD whose coordinates are A(10, 20) B(30, 40)and 8C(50, 20). Perform the following transformations (Specify the matrices that are used)
(i) Translate the given by 3 units in X direction and -2 units in Y direction.
(ii) Rotate the given triangle by 30
(iii) Reflect the given triangle about X=Y
(iv) Scale the given uniformly by 2 units..
10 M
5(b) What is the significance of modeling in virtual reality? Explain any modeling technique used in virtual reality.
10 M
Write a short note on (Any five)
6(a) Homogeneous Coordinates.
5 M
6(b) Text Clipping
5 M
6(c) Fractals
5 M
6(d) B-spline curve
5 M
6(e) Morphing and warping
5 M
More question papers from Computer Graphics & Virtual Reality | {"url":"https://stupidsid.com/previous-question-papers/download/computer-graphics-virtual-reality-14427","timestamp":"2024-11-04T14:25:39Z","content_type":"text/html","content_length":"61083","record_id":"<urn:uuid:f35b9e9c-427b-499d-8ec5-b63f30d38f93>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00542.warc.gz"} |
Lattice Boltzmann Method Computation of Turbulent
High-Temperature Plasma Jets
Lattice Boltzmann Method Computation of Turbulent High-Temperature Plasma Jets
Received Date: August 22, 2018; Published Date: September 20, 2018
This paper aims, firstly, to provide an investigation of plasma jets dynamics by the help of the Lattice Boltzmann method (LBM) and second to test the computational efficiency of the LB method by
comparison of its results to former measurements and numerical findings. The plasma gas is the argon injected at 520 m/s and 13500 K. In this study we accounted for highly variable thermophysical
properties dependence on temperature. The LB results using the proposed model seems to give an excellent compromise between former experimental and numerical results using classical methods (FDM,
FVM), while the deviations were found with other simulations attempts by the LB method.
Keywords: Lattice boltzmann method; Very high-temperature; Thermo-physical parameters; Simulation; Plasma jet
The Computational Fluid Dynamics (CFD) technique has been developed into a branch of fluid mechanics since 1960s and is mainly designed to solve the hydrodynamic equations based on the continuum
assumption. Afterwards, the molecular dynamic simulation (MDS) was used to understand micro/Nano scale phenomena due to the growing interest in science and technology. The MDS is too expensive and
its application has been limited to disadvantageous temporal and spatial scales. Researchers interest focused thereafter on multiscale flows. New techniques called ‘mesoscopic methods’ have met with
particular attention. The so popular Lattice Boltzmann Method (LBM) is one of the emerging mesoscopic methods and had a number of distinctive features.
The LB method has received exceptional attention the last two decades. A rapid rise was noted for the application of the LBM, including physical, chemical, biological, and sciences for academic,
research and engineering purpose. The number of published scientific articles marks the trend of an exponential increase. Moreover and unlike conventional methods based on a macroscopic continuum
equation, the LB method starts from mesoscopic kinetic equation of the Boltzmann equation, to determine macroscopic fluid dynamics. The macroscopic fluid dynamics has emerged from the causal dynamics
of a fictitious set of particles, whose motion and interactions are confined to a regular spacetime lattice. The kinetic nature brings certain advantages over conventional numerical methods, as
natural for parallel computing, easy handling of complex geometries, particle-based method, only density distribution function fk (x, t) as dependent variable, algebraic operation, a large range of
applicability from microscopic to macroscopic scales. The LBM has, also, the beneficial feature of simulating complex fluid flows such as multiphase flows [1], multicomponents [2] flows, porous media
flows, flows with suspensions and compressible flows [3-5].
Besides, the LB thermal models have been used to simulate free fluid flows, flows and heat transfer in porous media, jet flows for laminar and turbulent regimes. The models have also been used to
investigate droplet impingement and splashing on solid surface, droplet dynamics, droplets solidification, evaporation, deformation and break-up [6-9]. The temperature is varied to some hundreds of
Celsius degrees (°C) where the profiles can be interpolated or linearized by hundreds of intervals.
However, our work on plasma jets and plasma spraying processes presents new issue due to the extremely high-temperatures and the non-smoothed temperature dependence of all thermo-physical gas
properties near ionization and dissociation points (Figure 1).
However, for the simulation of plasma jets, the temperature may reach 20kk. In this context of complexity, we have established a general framework of conversion between LB and Ph units (spaces) for
the simulation of flows and transfers at extremely hightemperature with help of the Lattice Boltzmann method.
Along with the present study we will present the conversion principle to follow for the simulation of some complex problems. We will present also some results of validation for axisymmetric thermal
LB model and the results of physical temperature and velocity distributions.
Some basics of the LB method
The LBM method and its ancestor LGA (lattice gas automata), in contrast to conventional approaches, are mesoscopic approaches based on the kinetic theory. The Boltzmann equation solves mesoscopic
equations for the overall average of a fluid particle distribution in motion and interaction in a discreet lattice. A multiscale analysis is then performed to find or recover the macroscopic
quantities. Figure 2 describes the difference between the PDEs resolution procedures used by conventional approaches and Boltzmann method.
The Boltzmann equation is derived from the statistical physics and describes the probability of existence of a particle at the time t having a velocity
The so-popular BGK lattice Boltzmann model assumes that the distribution function f relaxes towards the Maxwellian distribution f^eq over an average time τ[f] which controls the rate toward
equilibrium from the non-equilibrium state and is related to the time between two particle-collisions,
The equilibrium distribution function f ^eq was chosen to be an expansion in the velocity and to ensure that the conservation laws were obeyed.
The discretization of Equation (1) along specific directions of linkages defining the velocities lattice leads to its discrete form in terms of functions f[k] as follows:
The indexes k and i describe the linkages directions and the Cartesian coordinates components, respectively. Following a time and space finite difference discretization and assuming Δx = Δt , the
Boltzmann equation general form accounting for forcing term k[F] , is written as:
The time evolution term of Equation (4) is solved in two steps known as the collision-streaming process:
The Navier-Stokes equations are second-order nonlinear equations in velocity; therefore, the quantity
Where ω[k]=4 / 9 for k=0, ω[k]=1/ 9 for k=1-4, ω[k]=1/ 36 for k=5-8,|
LB Modeling of Plasma Jet
In the present study, an axisymmetric thermal LB model for APS process has been developed based on the following assumptions for both the hot gas flow and the powder dynamics:
• The argon plasma is in the LTE and optically thin to radiation,
• The surrounding atmosphere is the argon plasma,
• The plasma flow is considered turbulent, transient, and axisymmetric. The turbulence is modeled using the LES model,
• The swirling component of jet velocity can be neglected comparison to the other components,
• Electric, magnetic and gravity forces and viscous heat dissipation are negligible.
Plasma jet dynamics
In this study argon plasma jet issuing into argon surrounding is simulated using a Fortran-based code developed for transient simulation of thermal plasma in axisymmetric coordinate system and
uniform meshing. The calculation domain (Figure 4) is a halfsection containing the centerline axis. The flow problem is solved in a (x, y) standard LB formulation. In the LB dynamics, fluid
particles’ moving differs from other particle based numerical methods such as dissipative pseudo particle dynamics, direct simulation Monte Carlo dynamics. In fact, particles occupying the discrete
lattice nodes move from one node to the next in a streaming phase, then, particles collide and get a new speed in a collision phase. The simulation progresses in an alternation between the collision
and the spread (streaming) of the particles. The present model consists of a 9-bits lattice for the hydrodynamic quantities and a 4-bits lattice for the thermal field to have finally a D2Q9-D2Q4 LB
thermal model. The velocity vectors of such moving directions (Figure 3) are expressed as follows:
Where ζ is a unit velocity vector of the lattice. Following the two selected lattices, two populations of the LB equations based on single relaxation times (SRT) are used in this study. The time
evolution equation of the density distribution function with source/ force term is written as [10]:
Where F[i]=-ρ(u[i]u[r]/r+2υu[i]/r^2δ[ir]), ω[0] = 4 / 9 , ϖ [k] = 1/ 9 for k=1-4 and ω[k]= 1/ 36 for k=5-8.
The Zhou’s revised-model is successfully extended here to thermal flow and then, similarly for the advection-diffusion equation of the scalar field, the LB equation is written as:
The new SRTs τ k and
The SRTs of the standard LB equations are linked to the diffusion parameters as:
The fluid density ρ, velocity component ui and temperature θ are computed locally from the distribution function fk and gk in the same manner as that in the standard LB method for Cartesian
coordinates which is the highest advantage of the Zhou’s revised model:
The continuity and Navier–Stokes equations can be recovered through the Chapman-Enskog expansion for the density distribution function. The final results of the Navier–Stokes equation and continuity
equation are recovered as below [10]:
Under the low Mach number condition,
Besides, the plasma jet is laminar in its core but turbulent in its fringes due to the high field gradients (200 K/mm and 10 m/s/mm). In LBM-LES modelling of turbulence, only the collision relaxation
time is locally readjusted, by adding the eddy viscosity to the molecular viscosity [11]. Note that in LB turbulence modeling, the local shear rate tensor ij S is available at each computational node
without recourse to classic finite differencing.
For the chosen D2Q9 lattice, the effective relaxation time obeys the following equation:
Note that in LB turbulence modeling, the local shear rate tensor
The computational domain is a half plane (Figure 4) sized in a 100mm 48mm in (z, r) cylindrical coordinates. The domain is mapped by uniform mesh in a 200 × 96 cells. The nozzle radius sizes 4 mm and
its exit are governed by the following parabolic inlet conditions:
The boundary conditions on the remaining parts are given in Figure 5. The nozzle front face AB of the torch sizes 12mm. The Jets & Poudres [12] results are among others used to validate the present
LB results either for the plasma jet or the powders dynamics and heating. In the solution procedure, the LB background, the turbulence modeling and the LB-physical conversion platform are considered
and the thermo-physical properties temperaturevariation is fully accounted-for in our model [13].
Dynamic framework of conversion to physical space
In the following sections we dealt with some mono-dimensional and two-dimensional cases involving PDEs resolution n LB and dimensionless spaces.
We, consequently, presented the conversion procedure to return to physical space. This was even for constant or variable thermophysical properties.
Axisymmetric high temperature plasma jet
Plasma jets simulation are of many practical applications and are of many complexities due to the high temperature and velocities gradients especially near the nozzle exit and due to the extra
nonlinearity of thermophysical properties with temperature as shown in Figure 1. The problem here is not to normalize and convert the physical variables (velocity, temperature, etc.) but to convert
the transport properties (thermophysical parameters) themselves to their LB corresponding values that insure the variability with temperature and the stability condition of the relaxation time
For the kinematic viscosity υ[Ph] case (same for α[Ph]) in D2Q9 lattice we have:
Using Equations (17) & (18), we can draw the conversion diagram. This procedure was applied to argon plasma jet. The boundary conditions may be found in Djebali [13]. The calculation chart is as
The simulated results of argon plasma jet at 13500K for the conditions free are depicted in Figure (6 & 7).
To show the validity of the proposed procedure of conversion and the novelty against others LB attempts to simulate this complex thermal flow, a validation analysis based on a free jet was performed.
The centerline velocity and temperature profiles were compared to former predictions and measurements [12, 14-16]. The predicted results for the centerline axial velocity and temperature field are
plotted in Fig. 7 and gathered with former works. As one can see, the LB results using the proposed conversion dynamic framework provide a good compromise between the Pfender and Jets & Poudres data,
which use FV and FD numerical methods and different turbulence models.
One can remark, also, that the axial temperature gradient near the inlet (interval 0-20mm) is close to 190 K/mm then close to 200 K/mm observed experimentally counter 136 K/mm and 152 K/mm for Jets&
Poudres and Pfender results respectively and the velocity gradient is close to 8.8 (m/s)/mm counter 10.48 (m/s)/mm and 9.48 (m/s)/mm for Jets&Poudres and Pfender results respectively which agrees
well with former experimental and numerical observations as noted here-above. It is also clear that our results go well with Jets&Poudres ones. The disparity between the two results in the potential
core of the plasma jet (hot zone) is probably due to the fact that ramps are used in Jets&Poudres code for the inlet temperature and velocity profiles instead of our parabolic ones. After that, in
the plasma jet core, the profiles become Gaussian and the two curves go together. However, the Zhang et al. [15] and Sun et al. [16] results using the Lattice Boltzmann method present significant
In the present study a framework-based lattice Boltzmann method for the simulation of turbulent argon plasma jets dynamics at very high temperatures was presented. The present case involves high
temperature dependence of the thermophysical properties. It has been concluded from this study that:
• The thermophysical properties of the plasma gas are hugely varying as function of temperature in the range ambient temperature – 13500 K.
• The LBM computation of turbulent properties is very simply compared to conventional discretization methods and the convergence is reached in about 20000 iteration resulting in a CPU time of about
five minutes for the chosen mesh resolution.
• The temperature and velocity gradients in the vicinity of the nozzle exit (0-20mm) are close to 190 K/mm and 8.8 (m/s)/ mm (respectively). Such a values are in excellent agreement with former
observations from numerical predictions and measurements.
• The LB method results using the proposed model have shown excellent agreement with experimental and numerical results of classical methods (FDM, FVM). However, some deviations were found with
others attempts of plasma jet simulations by the LB method as has been shown for the axial velocity and temperature profiles. | {"url":"https://irispublishers.com/gjes/fulltext/lattice-boltzmann-method-computation-of-turbulent-high-temperature-plasma-jets.ID.000503.php","timestamp":"2024-11-02T18:21:44Z","content_type":"text/html","content_length":"124468","record_id":"<urn:uuid:337f74bc-3729-4b56-9d86-3a27812bdf51>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00488.warc.gz"} |
How do I solve a magic square?
Magicians often include acts that are not strictly magic, but leave their audiences feeling that they have witnessed something impossible. Memory stunts, unusual scientific demonstrations, playing
chess blindfolded and rapid mental mathematics are some examples.
Successful stunts of this kind suggest the performer has an enhanced talent. In some cases, this is true – they may have an extraordinary memory or be a chess grandmaster. But in most cases the
performer is using a system. These systems can be simple, or can require tremendous skill – but they are still easier than doing the stunt without any system at all.
A popular mathematical stunt is to create a “magic square”. This is a grid, most commonly 3×3 or 4×4, filled with numbers. The numbers in every row add up to the same number. Here is an example:
As you can see all the rows add up to 15. Notice that each number from 1 to 9 is used once. If you could repeat numbers, many magic squares would become trivially easy, like a grid made entirely of
1s that added up to 3!
There’s nothing amazing about that.
Making a magic square before an audience, if done properly, can be an impressive demonstration of one’s apparent command of higher-level mathematics. I’ll show you how to do this, using a seemingly
more difficult 4×4 grid.
Take a business card and write this 4×4 magic square on the back:
This magic square adds up to 34. This is the smallest sum possible using the numbers 1 to 16. Keep this card and you’ll be able to perform this stunt any time you wish.
After dinner, say, turn the conversation towards numbers and bring out your business card. Explain the basic idea behind a magic square; that every column and row adds up to the same number. Let your
companions verify this if they wish by adding the columns and rows.
Tell them you’ll try and make a magic square just for them. Ask them to give you any two-digit number higher than 34.
Let’s assume they give you the number 87. Leave your business card where you can see it and bring out another piece of paper. Draw a 4×4 grid. From the target number that your guests named, subtract
the number 34. In our example 87 is the target number: 87 minus 34 = 53. You then divide 53 by 4, to the nearest whole number. That gives 13 with a remainder of 1 (13 x 4 = 52; 52 + 1 = 53).
Remember the number 13. We call that the quotient, and it’s one of two special numbers you’ll need. The other special number is the quotient plus the remainder. In our case, that’s 14 (13 + 1).
Now you’re set. Begin by stealing a glance at your business card. Look at the top row. Add your first special number to the 8 that appears in row 1, column 1 of your card: 13 + 8 = 21. Write this
down in the corresponding box on your new grid.
You will continue on like this for all 16 spaces in the grid, with four exceptions. When you get to the numbers 13, 14, 15 or 16 on your original card, you have to add the special number plus the
The correctly filled grid will look like this:
Notice that each result in this grid is simply the number from your original business card added to 13, with the exception of the four numbers in red. These are your original numbers plus 14.
You can now show that every row and column adds up to 87.
But wait, there’s more! You also managed to get the two diagonals to add up to 87, as well as the four numbers in each of the corner quadrants …
… and the top centre and lower centre quadrants!
Unfortunately, the two side quadrants will not add to 87 but the four corners of the large grid will!
The truth is, the diagonals, quadrants and the four corners added up to 34 on your original business card too, but don’t reveal this beforehand. Save that for the magic square that you make for your
Because 87 is an odd number, we had a remainder that we needed to use for the boxes with 13, 14, 15 and 16 in them. If your spectator names an even number (such as 54) you won’t have any remainder.
Remember the equation: 54 (the target number) minus 34 (our original magic square total) = 20. And then you divide 20 by 4 to get 5 with no remainder! All you have to do is add 5 to each of the 16
numbers in your new grid and it will work.
Incidentally, if your target number is even, then those two side quadrants will also add up to the target number.
If you want to make this look tougher, you can memorise your original magic square that adds up to 34. That way you don’t have to look at your crib sheet. Also, instead of working straight across the
rows one at a time, you can ask your spectator to point to any empty box and you can add your special number (or special number plus remainder) to whatever number belongs in that box.
You’re now well on your way to being a mathematical genius! BYO business card.
Related Reading: Maths puzzle will help plan your next party | {"url":"https://cosmosmagazine.com/science/mathematics/how-to-solve-a-magic-square/","timestamp":"2024-11-14T01:10:17Z","content_type":"text/html","content_length":"97559","record_id":"<urn:uuid:5d7c81b5-dccd-48e2-9163-44b4c81082b6>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00729.warc.gz"} |
When Einstein Met Lemaître | Evolution News
When Einstein Met Lemaître
Editor’s note: Discovery Institute Press is delighted to announce the publication of The Big Bang Revolutionaries: The Untold Story of Three Scientists Who Reenchanted Cosmology, by Jean-Pierre
Luminet. The book has received rave reviews including from three Nobel Prize winners. The following is an excerpt from Chapter 12.
Georges Lemaître’s work is notable for providing the first interpretation of cosmological redshifts as a natural effect of the expansion of the universe within the framework of general relativity,
instead of attributing it to the real motion of galaxies. As it is given in equation 23 — R′/R = v/cr — space is constantly expanding and consequently increases the apparent separations between
galaxies. This idea would prove to be one of the most profound discoveries of our time.
The proportional relation between the recession velocity v and the distance r is an approximation valid at not too large distances which can be used, he writes, “within the limits of the visible
spectrum.” Then, using the available astronomical data, Lemaître calculates the relation in equation 24, with a factor 625 or 575 km/s/Mpc (which means that galaxies that are 1 megaparsec away have a
recession speed of 625 or 575 km/sec). Depending on the choice of observations, this presented an enormous scatter:
Using the 42 extra-galactic nebulae in the Hubble and Strömberg lists, and taking into account the Sun’s own speed, we find an average distance of 0.95 million parsecs and a radial velocity of
600 km/s, or 625 km/s at 10^6 parsecs. We will therefore adopt R’/R = v/rc = 0.68´ 10^-27 cm^-1 (equation 24).
For this the Belgian scientist uses a list of forty-two radial velocities compiled by Gustav Strömberg, a Swedish astronomer at the Mount Wilson Observatory, and deduces the distance of the
corresponding extra-galactic nebulae from an empirical formula relating the distance and the absolute magnitude provided by Hubble, who himself took the magnitudes from Hopmann. This was the first
calculation of the so-called Hubble law and the Hubble constant, to be recognized only much later.
Mostly Unnoticed
The significance of Lemaître’s work remained mostly unnoticed for three years. A reason commonly given is that it was published not in one of the prestigious astronomical journals of the time but in
French and in a journal that has been characterized as obscure and inaccessible. There is a grain of truth in this explanation, but as D. Lambert has rightly pointed out, the journal in question,
Annales de la Société Scientifique de Bruxelles, published some articles in English, was of an excellent scientific level, and therefore was included in a large number of academic libraries and
observatories around the world. Moreover, a much larger scientific audience then than today could read French. Rather, the main obstacle to a larger diffusion of Lemaître’s article was that most of
the physicists of the time, such as Einstein and Hubble, could not accept the idea of a non-static universe. This was not the case with Eddington. Unfortunately, his former mentor, to whom Lemaître
had sent a copy, either forgot to read it in time, or failed to understand its importance.
The Fifth Solvay Conference
From October 24–29, 1927, the Fifth Solvay Conference in Physics took place in Brussels, one of the great meetings of world science. The Solvay Conference was devoted to the new discipline of quantum
mechanics, whose problems disturbed many physicists. Among them was Einstein. For Lemaître, it was the opportunity to meet and talk with the father of general relativity. He later reported on this
While walking in the alleys of the Parc Léopold, [Einstein] spoke to me about an article, little noticed, which I had written the previous year on the expansion of the universe and which a friend
had made him read. After some favorable technical remarks, he concluded by saying that from the physical point of view that appeared completely abominable to him. As I sought to prolong the
conversation, Auguste Piccard, who accompanied him, invited me to go up by taxi with Einstein, who was to visit his laboratory at the University of Brussels. In the taxi, I spoke about the speeds
of nebulae and I had the impression that Einstein was hardly aware of the astronomical facts.
André Deprit, a former student of Lemaître, gave a more picturesque and slightly different version of this encounter:
Einstein had been invited to discuss his deterministic conception of the world with the young pioneers of quantum mechanics; Lemaître was pacing up and down in front of the Institute, hoping to
hook him on the way. While it’s true that Professor Piccard, who was escorting Einstein that afternoon, picked Lemaître up in the taxi, the fact remains that Lemaître felt spurned. Yes, Einstein
had read the note that had just appeared in the Annales de la Société Scientifique; the mathematics were correct, but the physics of the article, what an abomination! Need one say more? To defend
himself, Lemaître mumbled a discreet allusion to Hubble’s observations in English, which he withdrew immediately so as not to embarrass Professor Piccard, as he understood that Einstein was not
aware of them. The conversation stopped for a moment, and Piccard resumed it with Einstein, but in German: Lemaître, who knew no German, had no choice but to remain silent.
Einstein’s response to Lemaître shows the same unwillingness to change his position that characterized his former response to Friedmann: he accepted the mathematics, but not a physically expanding
universe. According to D. Lambert, this reaction came from the fact that Einstein’s implicit philosophy was inspired by Spinoza. For the Dutch philosopher, “God” (Deus) was identified with “Nature” (
Natura): “Deus sive Natura.” Consequently, due to the immutability of God, one could not accept any motion or evolution of Nature itself. Einstein thus rejected the idea of an evolving universe,
i.e., a world with a real history. This “theological” prejudice led him also to criticize strongly the idea of expanding (and contracting) universes put forward by Friedmann and Lemaître.
In July 1928, Lemaître went to Leiden, where de Sitter presided over the third assembly of the International Astronomical Union, but did not meet him. The hour of the Big Bang had obviously not yet
The Hour Approaches
The same year H. P. Robertson published an article seeking to replace de Sitter’s metric with a “mathematically equivalent [metric] in which many of the apparent paradoxes inherent in [de Sitter’s
solution] were eliminated.” He got the formula v = cd/R where d is the distance of the nebula and R the radius of curvature of the universe, but in the framework of a static solution. Robertson used
the same set of observations that Lemaître used (though he did not know of Lemaître’s articles of 1925 and 1927) and that Hubble would later use. From this he calculated R = 2 ´ 10^27 cm and a
proportionality constant of 464 km/s/Mpc. In a trailblazing article the following year, Robertson related his detailed search for all the mathematical models satisfying a spatially homogeneous and
isotropic universe and that also imply strong symmetries in the solutions to Einstein’s equations.
In 1929, Hubble used the experimental data on the Doppler redshifts, mostly given by Slipher, and found a linear velocity-distance relation v = Hr with H = 465 ± 50 km/s/Mpc for twenty-four objects
and 513 ± 60 km/s/Mpc for nine groups. The law was strictly identical to Lemaître’s equation 24, with almost the same proportionality factor.
However, Hubble did not take the crucial step to expanding-universe models. He stated, “The outstanding feature, however, is the possibility that the velocity-distance relation may represent the de
Sitter effect.” In the introduction to his 1936 book, The Realm of Nebulae, Hubble discussed the interface between observation and theory, and honestly stated, “The author of this book is primarily
an observer.” In fact, out of the 202 pages in the book, he discusses the theoretical interpretation of his observations only on page 198, in a last paragraph entitled “Theories of Cosmology.” He
makes no mention of the work of Lemaître, but quotes Friedmann, Robertson, and Arthur Milne (who attempted a Newtonian explanation with his theory of “kinematic relativity”). Moreover, Hubble makes
the mistake of considering the spectral shift as a pure Doppler effect (due to the galaxies’ own recession velocity) and not as an expansion effect (increase of the space scale radius over time).
Life as a Skeptic
And all during his life he would remain skeptical about the general relativistic interpretation of his observations. As his biographer G. Christianson has pointed out, Hubble was chary of “all
theories of cosmic expansion long after most astronomers and physicists had been won over. When queried about the matter as late as 1937, he sounded like an incredulous schoolboy: ‘Well, perhaps the
nebulae are all receding in this peculiar manner. But the notion is rather startling.’”
Indeed, the idea that the expansion of the universe was discovered by Hubble is a myth that was first propagated by his collaborator Milton Humason as early as 1931 and by Hubble himself. Fiercely
territorial, Hubble wrote in a letter to de Sitter, dated August 21, 1930: “I consider the velocity-distance relation, its formulation, testing and confirmation, as a Mount Wilson contribution and I
am deeply concerned in its recognition as such.” | {"url":"https://evolutionnews.org/2024/05/when-einstein-met-lemaitre/","timestamp":"2024-11-09T08:04:03Z","content_type":"text/html","content_length":"197518","record_id":"<urn:uuid:65f30dd0-290d-403e-ac65-6afb1ca5435b>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00034.warc.gz"} |
A model for how long our coal resources will last is given by $$T=\frac{\ln (300 r+1)}{\ln (r+1)}$$ where \(r\) is the percent increase in consumption from current levels of use and \(T\) is the time
(in years) before the resource is depleted. a. Use a graphing utility to graph this equation. b. If our consumption of coal increases by \(3 \%\) per year, in how many years will we deplete our coal
resources? c. What percent increase in consumption of coal will deplete the resource in 100 years? Round to the nearest tenth of a percent.
Short Answer
Expert verified
The answer to part a requires a graphing utility and is therefore not provided. For part b, after evaluating the equation, one will get the number of years until coal resources are depleted at a 3%
yearly increase in consumption. For part c, the percentage increase in consumption that will deplete coal resources in 100 years should be calculated by solving the rearranged equation numerically
and converting to a percentage.
Step by step solution
Substitute given value into equation for part b
For part b, it asks for the time it will take to deplete our coal resources if our yearly consumption increases by 3%. Here, \(r=0.03\). Substituting this value into the equation will provide us with
the value of \(T\):\[T = \frac{\ln(300 \cdot 0.03 + 1)}{\ln(0.03 + 1)}\]
Evaluate the equation for part b
By evaluating the above equation we would get the value of \(T\), which is the number of years until the coal resource is depleted. This can be done with a scientific calculator or a software that
can handle logarithms.
Rearrange equation to solve for \(r\) for part c
For part c, it asks for the percent increase in consumption that will deplete our coal resources in 100 years. Here, \(T=100\). This requires us to solve for \(r\) in the model equation \(T = \frac{\
ln(300r + 1)}{\ln(r + 1)}\), therefore, this equation can be rearranged to \[\ln(300r + 1) = T \cdot \ln(r + 1)\]
Numerical solution for \(r\) for part c
With \(T=100\), we get a transcendental equation \[\ln(300r + 1) = 100 \cdot \ln(r + 1)\] which can be solved using numerical methods or software that can handle such equations to get a value for \(r
\), then multiply the result by 100 to get it in percentage.
Key Concepts
These are the key concepts you need to understand to accurately answer the question.
Logarithmic Equations
Logarithmic equations involve the logarithm function, which is the inverse of exponential functions. They are widely used in many fields, including in modeling growth and decay processes. A
logarithmic equation may look something like \(\log_b(x) = n\), and the goal is often to solve for the variable \(x\).
In the context of coal consumption, the logarithm helps to express the complex relationship between the percent increase in consumption and the time it will take for the resource to be depleted.
Solving logarithmic equations requires understanding log rules and sometimes utilizing a calculator for computation.
In the case of the coal depletion model \(T = \frac{\ln(300r + 1)}{\ln(r + 1)}\), to solve for \(r\), you would typically isolate \(r\) on one side of the equation and then exponentiate to remove the
logarithm. This could involve detailed steps but is simplified with modern computation tools.
Graphing Calculators
Graphing calculators are powerful tools that allow users to visually understand complex mathematical relationships. They can handle various mathematical functions, including logarithms, exponentials,
and trigonometric functions. To graph an equation like the coal consumption model, using a graphing calculator can illuminate how changes in the percent increase \(r\) affect the time \(T\) before
By entering the logarithmic equation into a graphing calculator, students can quickly see a visual representation of the problem they are trying to solve. In educational settings, this interaction
with technology can enhance understanding and problem-solving skills. Graphing calculators are often used not only to draw the graph but also to find numerical solutions, intercepts, and to perform
'What if?' analyses for different scenarios.
Percent Increase
The percent increase is a measure of how much a quantity grows over a specific period, expressed as a percentage. It's calculated by the formula \(\text{Percent Increase} = \frac{\text{New Amount} -
\text{Original Amount}}{\text{Original Amount}} \times 100\).
In economic, environmental, and social sciences, understanding percent increases is crucial to evaluate changes over time. As for our coal consumption problem, the percent increase applies to the
yearly growth of coal usage. A higher percent increase in consumption means our coal reserves will deplete faster. It's vital to accurately measure and understand this increase to model our resources
effectively and plan for sustainability.
Resource Depletion
Resource depletion refers to the exhaustion of natural resources as a result of overconsumption, population growth, and unsustainable practices. The concept is a fundamental component of
environmental science, as it illustrates the consequences of human activity on the Earth's reserves.
The coal consumption model in our exercise highlights resource depletion by estimating the time left until coal, a finite resource, is completely exhausted. This model quantifies the need for
sustainable use and can inspire strategies for conservation and alternative energy solutions. By understanding models of resource depletion and the factors that contribute to it, we can make more
informed decisions about our consumption patterns and their long-term impacts. | {"url":"https://www.vaia.com/en-us/textbooks/math/essentials-of-precalculus-1-edition/chapter-3/problem-65-a-model-for-how-long-our-coal-resources-will-last/","timestamp":"2024-11-09T01:22:26Z","content_type":"text/html","content_length":"245164","record_id":"<urn:uuid:045da162-cb5d-4643-9d3b-2a5ac5b4b1c5>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00182.warc.gz"} |
Value floating leg interest rate swap
The valuation of the swap is the sum of the discounted (and signed) future cash flows of each leg. As of June 30, 2015, the interest rate swap valuation is negative: -7,1 million EUR.
22 Jul 2019 Pricing Floating Legs of Interest Rate Swaps with R else { rate <- swap$strike } # Part 2: Calculate the value of the accrual swap.dates 4.1.1 Variants of Interest Rate Swap . Notional/
Principal Amount: This is the amount or value linked to the interest rate is called the “floating leg”, while 1 Sep 2019 The key interest rate swap products which are not Basis Swaps traded in the a
notional principal in exchange for a floating rate of interest (i.e.; fixed/floating). of $25,000 per basis point to the nearest million in face value. When trading EFPs for BAB/OIS, the futures leg
should be set off the offer of the capital market practitioners are increasingly using the interest rate swap curve as their flows of the fixed and floating legs of the swap are equal. The fixed
leg rate of 4% applying to a notional value of $10,000,000 and the actual interest rate on coupon swap, the interest rate on one leg is floating, and the interest rate.
15 Apr 2018 Interest rate swaps are certainly one of the most widely used type of but using different interest rates, generally a fixed rate and a floating rate. The nominal amount for each of these
two parts to the swap, called legs, are not the fixed rate is typically chosen in such a way as to make the present value of
4.1.1 Variants of Interest Rate Swap . Notional/Principal Amount: This is the amount or value linked to the interest rate is called the “floating leg”, while 1 Sep 2019 The key interest rate swap
products which are not Basis Swaps traded in the a notional principal in exchange for a floating rate of interest (i.e.; fixed/floating). of $25,000 per basis point to the nearest million in face
value. When trading EFPs for BAB/OIS, the futures leg should be set off the offer of the capital market practitioners are increasingly using the interest rate swap curve as their flows of the fixed
and floating legs of the swap are equal. The fixed leg rate of 4% applying to a notional value of $10,000,000 and the actual interest rate on coupon swap, the interest rate on one leg is floating,
and the interest rate. 9 Mar 2016 The fixed leg cash flows are set when the contract is initiated, whereas the The current market value of an interest rate swap is determined by the valuing interest
rate swaps – the overnight curve and the floating rate index
4.1.1 Variants of Interest Rate Swap . Notional/Principal Amount: This is the amount or value linked to the interest rate is called the “floating leg”, while
Cross-currency and basis swaps; % LIBOR swaps; Non-generic interest rate swaps; Fixed legs; Floating Rate Notes. To evaluate the FINCAD solutions to value 15 Jul 2019 Part 1 gave the general idea
behind tidy pricing interest rate swaps using a 7 lines The only part missing was calculating the accrual for the floating leg. " 500px") Table 1: Output from Quandl Date Value 2018-12-31 -0.237
28 Jan 2020 Floating Leg. As mentioned before, floating legs coupon values are dependent on market values. To demonstrate an example, we will use LIBOR
It represents the mid-price for interest rate swaps (the fixed leg), at particular times of ICE Swap Rate is used as the exercise value for cash-settled swaptions, for early terminations of interest
rate swaps, for some floating rate bonds and for 24 Mar 2019 Collateral Posting, FX Forward Rates, Present Value, Pricing, Par The floating basis spread over Libor on the trade leg in currency ccy,.
21 May 2019 The two payment streams are called fixed leg and floating leg of the IRS, respectively. The difference between the expected value of the
To price a swap, we need to determine the present value of cash flows of each leg of the transaction. In an interest rate swap, the fixed leg is fairly straightforward since the cash flows are
specified by the coupon rate set at the time of the agreement. Pricing the floating leg is more complex since, by definition, the cash flows change with future changes in the interest rates.
15 Jul 2019 Part 1 gave the general idea behind tidy pricing interest rate swaps using a 7 lines The only part missing was calculating the accrual for the floating leg. " 500px") Table 1: Output from
Quandl Date Value 2018-12-31 -0.237 Interest Rate Swap Product, Pricing and Valuation Introduction and Practical Guide for vanilla swaps that involve the exchange of a fixed interest rate for a
floating rate, or vice versa. There are two legs associated with each party: a f more of the reset cash flow should be added into the present value of the floating leg. In an interest rate swap,
parties are exchanging fixed interest rate payments for floating interest rate payments on some notional value. To define an interest rate This is the third in a series of articles that will go
from the basics about interest rate swaps, to how to value them and how to build a zero curve. interest rate swap. At the time of the swap agreement, the total value of the swap's fixed rate flows
will be equal to the value of expected floating rate payments implied by the forward | {"url":"https://bestoptionslcshmx.netlify.app/bruhn57766puti/value-floating-leg-interest-rate-swap-415.html","timestamp":"2024-11-11T11:25:13Z","content_type":"text/html","content_length":"31223","record_id":"<urn:uuid:085a5fbf-5a24-4b1d-8b93-1ba005b0fef1>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00686.warc.gz"} |
32 research outputs found
Let $N$ be a non-squarefree positive integer and let $\ell$ be an odd prime such that $\ell^2$ does not divide $N$. Consider the Hecke ring $\mathbb{T}(N)$ of weight $2$ for $\Gamma_0(N)$, and its
rational Eisenstein primes of $\mathbb{T}(N)$ containing $\ell$, defined in Section 3. If $\mathfrak{m}$ is such a rational Eisenstein prime, then we prove that $\mathfrak{m}$ is of the form $(\ell,
~\mathcal{I}^D_{M, N})$, where the ideal $\mathcal{I}^D_{M, N}$ of $\mathbb{T}(N)$ is also defined in Section 3. Furthermore, we prove that $\mathcal{C}(N)[\mathfrak{m}] eq 0$, where $\mathcal{C}(N)$
is the rational cuspidal group of $J_0(N)$. To do this, we compute the precise order of the cuspidal divisor $\mathcal{C}^D_{M, N}$, defined in Section 4, and the index of $\mathcal{I}^D_{M, N}$ in $
\mathbb{T}(N)\otimes \mathbb{Z}_\ell$.Comment: Many arguments are clarified, and many details are filled i
Let $p$ be a prime greater than 3. Consider the modular curve $X_0(3p)$ over $\mathbb{Q}$ and its Jacobian variety $J_0(3p)$ over $\mathbb{Q}$. Let $\mathcal{T}(3p)$ and $\mathcal{C}(3p)$ be the
group of rational torsion points on $J_0(3p)$ and the cuspidal group of $J_0(3p)$, respectively. We prove that the $3$-primary subgroups of $\mathcal{T}(3p)$ and $\mathcal{C}(3p)$ coincide unless $p\
equiv 1 \pmod 9$ and $3^{\frac{p-1}{3}} \equiv 1 \!\pmod {p}$
For any positive integer $N$, we completely determine the structure of the rational cuspidal divisor class group of $X_0(N)$, which is conjecturally equal to the rational torsion subgroup of $J_0(N)$
. More specifically, for a given prime $\ell$, we construct a rational cuspidal divisor $Z_\ell(d)$ for any non-trivial divisor $d$ of $N$. Also, we compute the order of the linear equivalence class
of the divisor $Z_\ell(d)$ and show that the $\ell$-primary subgroup of the rational cuspidal divisor class group of $X_0(N)$ is isomorphic to the direct sum of the cyclic subgroups generated by the
linear equivalence classes of the divisors $Z_\ell(d)$.Comment: Comments are welcom
Following the method of Seifert surfaces in knot theory, we define arithmetic linking numbers and height pairings of ideals using arithmetic duality theorems, and compute them in terms of n-th power
residue symbols. This formalism leads to a precise arithmetic analogue of a 'path-integral formula' for linking numbers
In this paper, we apply ideas of Dijkgraaf and Witten [6, 32] on 3 dimensional topological quantum field theory to arithmetic curves, that is, the spectra of rings of integers in algebraic number
fields. In the first three sections, we define classical Chernā Simons actions on spaces of Galois representations. In the subsequent sections, we give formulas for computation in a small class of
cases and point towards some arithmetic applications
Let l >= 5 be a prime and let N be a square-free integer prime to l. For each prime p dividing N, let ap be either 1 or -1. We give sufficient criteria for the existence of a newform f of weight 2
for G0( N) such that the mod l Galois representation attached to f is reducible and Upf = apf for primes p dividing N. The main techniques used are level raising methods based on an exact sequence
due to Ribet. c.2018 American Mathematical Societ | {"url":"https://core.ac.uk/search/?q=author%3A(Yoo%2C%20Hwajong)","timestamp":"2024-11-12T13:45:19Z","content_type":"text/html","content_length":"141927","record_id":"<urn:uuid:40b98827-dd0f-4299-9cd9-bfe0e04d889f>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00887.warc.gz"} |
Automated hydrograph separation
“Division of a hydrograph into direct and groundwater runoff as a basis for subsequent analysis is known as hydrograph separation or hydrograph analysis. Since there is no real basis for
distinguishing between direct and groundwater flow in a stream at any instant, and since definitions of these two components are relatively arbitrary, the method of separation is usually equally
arbitrary.” - Linsley et.al., 1975
The Idea
For hydrograph separation, it is generally assumed that total flow ($q$) at any particular time ($t$) of a streamflow hydrograph can be partitioned into two primary components:
1. The slow flow component $(b)$, which is itself composed of the gradual release of water from watershed stores in addition to groundwater discharging into streams, the “groundwater runoff” in
Linsley etal.). The slow flow component has been commonly referred as “baseflow.” and,
2. The quick flow component $(f)$, which originates from rainfall and/or snow melt events (i.e., “direct runoff” in Linsley et.al., 1975).
Together, the slow and quick flow components sum to total flow: $q=b+f$. Conceptually, after a period of time following a precipitation event, streamflow continues to decrease at a predictable rate
as it is composed entirely of slowflow $(f=0)$. Upon the onset of a heavy rain event, the hydrograph quickly rises, as quick flow is added to the slowflow signature. One could imagine that should
this rain event never occur, the underlying slowflow would have continued uninterrupted (such as in Reed et.al., 1975). The difference between total flow and this “underlying” slowflow is perceived
as quickflow.
The Need
Hydrologists found the need to separate the hydrograph into its constitutive components as it was found that runoff created from a precipitation event (i.e., rainfall and/or snow melt) tended to
correlate best with the quickflow component only, as opposed to the total flow hydrograph (Beven, 2012). Consequently, a number of automatic hydrograph separation routines were proposed, all being
“equally arbitrary” (Linsley et.al., 1975).
For many groundwater flow models in southern Ontario, it is assumed that the long-term rate of slowflow is predominantly groundwater discharge. Therefore, long-term average rates of slowflow serve as
an important constraint to groundwater flow models.
Slowflow Quantification
A number of metrics associated hydrograph separation that help describe the relationship between quick and slow flow.
Baseflow Index
The first is the baseflow index $(BFI)$, which is the ratio of long term baseflow discharge to total discharge:
\[\text{BFI}=\frac{\sum b}{\sum q}\]
Recession Coefficient $(k)$
The second is the slowflow recession coefficient $(k)$, which describes the withdrawal of water from storage within the watershed (Linsley et.al., 1975). The recession coefficient is a means of
determining the amount the rate of slowflow recedes after a given period of time, and is reasonably simulated by an exponential decay function:
where $b_{t-1}$ represents the slow flow calculated at one timestep prior to $b_t$. (Implied here is that flow measurements are reported at equal time intervals.)
Quickflow Cessation Time $(N)$
Linsley et.al. (1975) also offered an approximate means of determining the time (in days) after peak flow discharge to when quickflow ceases $(f\to0)$, making total flow entirely composed of the
slowflow component, whose behaviour can be predicted by the recession coefficient. As a “rule of thumb” (Linsley et.al., 1975) the number of days $(N)$ when quick flow terminates is approximated by:
where $A$ is the watershed area (km²).
The above empirical relation is included here as many automatic hydrograph separation algorithms discussed below utilize this approximation. Hydrograph components and “quickflow cessation” $(N)$ is
implicitly conceptualized when performing automatic hydrograph separation routines.
Note for reference to the image above: “direct runoff” = quickflow, and “ground-water runoff” = slowflow. Linsley and Franzini (1964)
Digital Filters
Digital filters represent a set of automatic hydrograph separation algorithms that require no input other than the measured stream flow signal $(q)$. Considering the streamflow hydrograph as a signal
is quite apt when dealing with digital filters, as they themselves were inspired from signal processing of Lyne and Hollick, 1979 (Nathan and McMahon, 1990). With respect to the quick and slow
hydrograph components, hydrograph separation is nothing more than the application of a low-pass filter to the total streamflow signal.
Another point to note is that many authors have applied these digital filters in multiple passes, either in two-passes (forward $\to$ backward) or three-passes (forward $\to$ backward $\to$ forward)
to increase the smoothing of the resulting slow flow signal (Chapman, 1991).
The General Form
With digital filters, there is no physical interpretation to the algorithm, it only produces a baseflow signal that resembles what one would expect. The general form of all digital filters used for
hydrograph separation follows:
\[b_t = \alpha b_{t-1} + \beta\left(q_t + \gamma q_{t-1}\right),\]
where $q_{t-1}$ represents the total flow measured at one timestep prior to $q_t$, and $\alpha$, $\beta$ and $\gamma$ are parameters. The above equation is a three-parameter equation, however most
implementations do not require every parameter be specified or, in other cases, two or more parameters can be specified as a function of another.
Lyne and Hollick
For example, the Lyne and Hollick (1979) equation (the earliest of digital filters used for hydrograph separation), is a one-parameter equation found by a single smoothing parameter $a$ suggested to
be set between the values of 0.9–0.95 (Nathan and McMahon, 1990), where:
\[\alpha = a \qquad \beta = \frac{1-a}{2} \qquad \gamma=1.0\]
Chapman (1991)
After noting some conceptual discrepancies with the Lyne and Hollick (1979) equation, Chapman (1991) modified the equation into a parameter-less form as a function of the recession coefficient $k$,
discussed above. The Chapman (1991) algorithm takes the form:
\[\alpha = \frac{3k-1}{3-k} \qquad \beta = \frac{1-k}{3-k} \qquad \gamma=1.0\]
Chapman and Maxwell
Chapman and Maxwell (1996) later simplified the above equation by assuming that slow flow is the weighted average of quick flow and the slow flow from the previous timestep (Chapman, 1999), that is
$b_t=kb_{t-1}+(1-k)f_t$, leading to:
\[\alpha = \frac{k}{2-k} \qquad \beta = \frac{1-k}{2-k} \qquad \gamma=0.0\]
Boughton & Eckhardt
Boughton (1993) used a similar approach to Chapman and Maxwell (1996), except added an adjustment parameter $C$, such that $b_t=kb_{t-1}+Cf_t$. The Boughton (1993) form of the digital filter thus
\[\alpha = \frac{k}{1+C} \qquad \beta = \frac{C}{1+C} \qquad \gamma=0.0\]
While also investigating the generalized digital filter, Eckhardt (2005) discovered an interpretation of the Boughton (1993) algorithm that eliminated the $C$ parameter and introduced the concept of
$\text{BFI}_\text{max}$: the maximum value of the baseflow index that can be achieved using the digital filter. The Eckhardt (2005) digital filter is found by:
\[\alpha = \frac{(1-\text{BFI}_\text{max})k}{1-k\text{BFI}_\text{max}} \qquad \beta = \frac{(1-k)\text{BFI}_\text{max}}{1-k\text{BFI}_\text{max}} \qquad \gamma=0.0\]
or made equivalent to Boughton (1993) by setting:
\[C = \frac{(1-k)\text{BFI}_\text{max}}{1-\text{BFI}_\text{max}}\]
Eckhardt (2005) suggests estimates for $\text{BFI}_\text{max}=0.8$ for perennial streams; $0.5$ for ephemeral streams; and $0.25$ for perennial stream over hard-rock aquifers.
Jakeman and Hornberger
The Jakeman and Hornberger (1993) algorithm closely follows that of Boughton (1993) and Chapman and Maxwell (1996), except it was formulated from a component of the IHACRES data-based model rather
than being intended strictly for hydrograph separation (Chapman, 1999). Nonetheless, the IHACRES model can be shown to fit the general digital filter of equation above, using 3 parameters, where:
\[\alpha = \frac{a}{1+C} \qquad \beta = \frac{C}{1+C} \qquad \gamma=\beta\alpha_s\]
Note that setting $\alpha_s<0$ is conceptually correct, as it implies that the rate of change of slow flow is positively correlated the rate of change of total flow (Chapman, 1999). Suggested
value for $\alpha_s=-exp(-1/k)$.
Tularam and Ilahee
Lastly, Tularam and Ilahee (2008) most recently presented an digital filter that also resembled that of Chapman and Maxwell (1996), with the slight difference of assuming that slow flow is the
weighted average of the slow flow of the previous timestep and total flow, not quick flow (i.e., $b_t=ab_{t-1}+(1-a)q_t$). This formulation is essentially the same as Lyne and Hollick (1979) with the
exception that Tularam and Ilahee (2008) does not average total flow of the current and previous timestep. The one-parameter Tularam and Ilahee (2008) form yields:
\[\alpha = a \qquad \beta = 1-a \qquad \gamma=0.0\]
Digital filter equations in their published form:
Lyne and Hollick (1979):
\[b_t = ab_{t-1} + \frac{1-a}{2}\left(q_t + q_{t-1}\right)\]
Chapman (1991):
\[b_t = \frac{3k-1}{3-k}b_{t-1} + \frac{1-k}{3-k}\left(q_t + q_{t-1}\right)\]
Chapman and Maxwell (1996):
\[b_t = \frac{k}{2-k}b_{t-1} + \frac{1-k}{2-k}q_t\]
Boughton (1993):
\[b_t = \frac{k}{1+C}b_{t-1} + \frac{C}{1+C}q_t\]
Eckhardt (2005):
\[b_t = \frac{(1-\text{BFI}_\text{max})kb_{t-1} + (1-k)\text{BFI}_\text{max}q_t}{1-k\text{BFI}_\text{max}}\]
Jakeman and Hornberger (1993):
\[b_t = \frac{a}{1+C}b_{t-1} + \frac{C}{1+C}\left(q_t + \alpha_s q_{t-1}\right)\]
Tularam and Ilahee (2008):
Moving-window methods
A second class of hydrograph separation schemes are here considered “moving window methods” also known as “manual separation techniques” in Arnold and Allen (1999). These methods do not follow an
equation, per se, rather a methodology based on the explicit/manual selection of discharge values assumed representative of slowflow discharge within a window of a set number of days.
In total, 10 estimates of slowflow discharge are computed using variants of 4 methods. Many of these methods are included in stand-alone software packages and have been re-coded here. The methods
UKIH (3)
The UKIH/Wallingford (Institute of Hydrology, 1980) method operates by locating minimum discharges in a (user specified) $N$-day window. This set of minimum discharge is then further screened,
automatically, for discharges that are considered representative of “baseflow,” which are deemed “turning points.” Linear interpolation is then conducted between subsequent turning points yielding
the final slowflow discharge. In a similar fashion to the digital filters, this method extracts a filtered/smoothed hydrograph of total flow minima, and is therefore often also referred to as the
“smoothed minima technique.”
Piggott et.al. (2005) discussed how the UKIH technique can yield alternate baseflow estimates depending on the origin of the $N$-day window. They proposed staggering $N$-sets of UHIK baseflow
estimates to create an overall aggregate baseflow hydrograph. Three versions of this modification are included here:
1. Sweeping minimum: returns the daily minimum of the staggered hydrographs;
2. Sweeping maximum: returns the daily maximum of the staggered hydrographs; and,
3. Sweeping median: returns the median of the $N$-staggered hydrographs.
HYSEP (3)
The HYSEP (Sloto and Crouse, 1996) method depends on the computed days of quick flow termination $N$. Like the UKIH method, the HYSEP techniques then proceed to determine minimum discharges within
the $2N^\ast$-day window, where “the interval $2N^\ast$ used for hydrograph separations is the odd integer between 3 and 11 nearest to $2N$ “ (Sloto and Crouse, 1996). Three methods of producing
baseflow estimates are computed in HYSEP and are reproduced here, they include:
1. Fixed interval: where baseflow is assumed to be the minimum discharge reported within sequential, non-overlapping $2N^* $-day windows. Like the UKIH method, results from the fixed interval method
is dependent on the (“fixed”) window origin;
2. Sliding interval: where baseflow is assumed to be the minimum discharge found within a moving $[(2N^*-1)/2]$-day window. In contrast, this method tends to yield a higher BFI; and,
3. Local minimum: linearly-interpolates total flow minima within a moving $[(2N^*-1)/2]$-day window.
PART (3)
The PART technique (Rutledge, 1998) aims to reproduce the conceptual hydrograph represented in the Figure above. Using quick flow termination estimates $(N)$, recession coefficients $(k)$, and the
concept of the “antecedent recession requirement,” a combination of forward and backward filtering techniques are used in producing the final hydrograph separation estimates. Three estimates using
the PART method are produced here, based on the suggested “antecedent recession requirement” choices offered by Rutledge (1998):
…once considering the requirement of antecedent recession to be the largest integer that is less than the result of N, and once for each of the next two larger integers.
Then, “linear interpolation is used to estimate ground-water discharge during periods of surface runoff.”
The Clarifica Inc., (2002) technique. This method separates the total flow hydrograph by performing two sweeps on the hydrograph. The first is a 6-day moving minimum, followed by a 5-day moving
average (3-days previous, 1-day ahead). This method was designed for use within southern Ontario watersheds and tends to produce higher estimates of baseflow during peak flow events.
Physically-based digital filters
Another class of hydrograph separation routines are those deemed “physically-based” (Furey and Gupta, 2001). These methods of separation allow for additional input, such as climate conditions, to
help guide the separation of the hydrograph; contrast this to the above methods where only parameters need tuning until the desired (“arbitrary”) slowflow signal is produced.
Furey and Gupta (2001) presented a digital filter that was formulated this way. At the moment, physically-based digital filters have not been applied.
Another honorable mention is the proceedure from the benchmark paper by Hewlett and Hibbert (1967). Here, quick flow is separated by identifying rising portions of the hydrograph that exceed a
pre-defined “separation” rate, defined by the authors as being 0.05 ft³/s/mi²/hr ( $\approx$ 0.00055 m³/s/km²/hr) and deemed suitable for “small forested watersheds in the Appalachian-Piedmont
This approach too, has not been applied.
Source code
The above algorithms are available using the following jupyter script, some edits may be required to accommodate data format. In its current state, it readily reads a hydrograph .csv file with the
header: "Date,Flow,Flag".
Boughton, W.C., 1993. A hydrograph-based model for estimating the water yield of ungauged catchments. Hydrology and Water Resources Symposium, Institution of Engineers Australia, Newcastle: 317-324.
Chapman, T.G. and A.I. Maxwell, 1996. Baseflow separation - comparison of numerical methods with tracer experiments.Institute Engineers Australia National Conference. Publ. 96/05, 539-545.
Chapman T.G., 1999. A comparison of algorithms for stream flow recession and baseflow separation. Hydrological Processes 13: 710-714.
Clarifica Inc., 2002. Water Budget in Urbanizing Watersheds: Duffins Creek Watershed. Report prepared for the Toronto and Region Conservation Authority.
Eckhardt, K., 2005. How to construct recursive digital filters for baseflow separation. Hydrological Processes 19, 507-515.
Furey, PR. and VK Gupta. 2001. A physically based filter for separating base flow from streamflow records. Water Resources Research, 37(11): 2709-2722.
Hewlett J.D. and A.R. Hibbert, 1967. Factor Affecting the Response of Small Watersheds to Precipitation in Humid Areas. in: W.E. Sopper and H.W. Lull (ed.), Forest Hydrology, Pergamon, New York,
N.Y., pp. 275-290.
Institute of Hydrology, 1980. Low Flow Studies report. Wallingford, UK.
Jakeman, A.J. and Hornberger G.M., 1993. How much complexity is warranted in a rainfall-runoff model? Water Resources Research 29: 2637-2649.
Lyne, V. and M. Hollick, 1979. Stochastic time-variable rainfall-runoff modelling. Hydrology and Water Resources Symposium, Institution of Engineers Australia, Perth: 89-92.
Piggott, A.R., S. Moin, C. Southam, 2005. A revised approach to the UKIH method for the calculation of baseflow. Hydrological Sciences Journal 50(5): 911-920.
Reed, D.W., P. Johnson, J.M. Firth, 1975. A Non-Linear Rainfall-Runoff Model, Providing for Variable Lag Time. Journal of Hydrology 25: 295–305.
Rutledge, A.T., 1998. Computer Programs for Describing the Recession of Ground-Water Discharge and for Estimating Mean Ground-Water Recharge and Discharge from Streamflow Records-Update,
Water-Resources Investigation Report 98-4148.
Sloto, R.A. and M.Y. Crouse, 1996. HYSEP: A Computer Program for Streamflow Hydrograph Separation and Analysis U.S. Geological Survey Water-Resources Investigations Report 96-4040.
Tularam, A.G., Ilahee, M., 2008. Exponential Smoothing Method of Base Flow Separation and its Impact on Continuous Loss Estimates. American Journal of Environmental Sciences 4(2):136-144. | {"url":"https://owrc.github.io/info/hydrographseparation/","timestamp":"2024-11-06T08:24:34Z","content_type":"text/html","content_length":"26368","record_id":"<urn:uuid:5f68ce50-1368-40e8-a70d-ac8587420410>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00125.warc.gz"} |
25.3: The Law of Refraction
Last updated
Page ID
\( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \)
\( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}} \)
\( \newcommand{\id}{\mathrm{id}}\) \( \newcommand{\Span}{\mathrm{span}}\)
( \newcommand{\kernel}{\mathrm{null}\,}\) \( \newcommand{\range}{\mathrm{range}\,}\)
\( \newcommand{\RealPart}{\mathrm{Re}}\) \( \newcommand{\ImaginaryPart}{\mathrm{Im}}\)
\( \newcommand{\Argument}{\mathrm{Arg}}\) \( \newcommand{\norm}[1]{\| #1 \|}\)
\( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\)
\( \newcommand{\Span}{\mathrm{span}}\)
\( \newcommand{\id}{\mathrm{id}}\)
\( \newcommand{\Span}{\mathrm{span}}\)
\( \newcommand{\kernel}{\mathrm{null}\,}\)
\( \newcommand{\range}{\mathrm{range}\,}\)
\( \newcommand{\RealPart}{\mathrm{Re}}\)
\( \newcommand{\ImaginaryPart}{\mathrm{Im}}\)
\( \newcommand{\Argument}{\mathrm{Arg}}\)
\( \newcommand{\norm}[1]{\| #1 \|}\)
\( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\)
\( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\AA}{\unicode[.8,0]{x212B}}\)
\( \newcommand{\vectorA}[1]{\vec{#1}} % arrow\)
\( \newcommand{\vectorAt}[1]{\vec{\text{#1}}} % arrow\)
\( \newcommand{\vectorB}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \)
\( \newcommand{\vectorC}[1]{\textbf{#1}} \)
\( \newcommand{\vectorD}[1]{\overrightarrow{#1}} \)
\( \newcommand{\vectorDt}[1]{\overrightarrow{\text{#1}}} \)
\( \newcommand{\vectE}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash{\mathbf {#1}}}} \)
\( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \)
\( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}} \)
\(\newcommand{\avec}{\mathbf a}\) \(\newcommand{\bvec}{\mathbf b}\) \(\newcommand{\cvec}{\mathbf c}\) \(\newcommand{\dvec}{\mathbf d}\) \(\newcommand{\dtil}{\widetilde{\mathbf d}}\) \(\newcommand{\
evec}{\mathbf e}\) \(\newcommand{\fvec}{\mathbf f}\) \(\newcommand{\nvec}{\mathbf n}\) \(\newcommand{\pvec}{\mathbf p}\) \(\newcommand{\qvec}{\mathbf q}\) \(\newcommand{\svec}{\mathbf s}\) \(\
newcommand{\tvec}{\mathbf t}\) \(\newcommand{\uvec}{\mathbf u}\) \(\newcommand{\vvec}{\mathbf v}\) \(\newcommand{\wvec}{\mathbf w}\) \(\newcommand{\xvec}{\mathbf x}\) \(\newcommand{\yvec}{\mathbf y}
\) \(\newcommand{\zvec}{\mathbf z}\) \(\newcommand{\rvec}{\mathbf r}\) \(\newcommand{\mvec}{\mathbf m}\) \(\newcommand{\zerovec}{\mathbf 0}\) \(\newcommand{\onevec}{\mathbf 1}\) \(\newcommand{\real}
{\mathbb R}\) \(\newcommand{\twovec}[2]{\left[\begin{array}{r}#1 \\ #2 \end{array}\right]}\) \(\newcommand{\ctwovec}[2]{\left[\begin{array}{c}#1 \\ #2 \end{array}\right]}\) \(\newcommand{\threevec}
[3]{\left[\begin{array}{r}#1 \\ #2 \\ #3 \end{array}\right]}\) \(\newcommand{\cthreevec}[3]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \end{array}\right]}\) \(\newcommand{\fourvec}[4]{\left[\begin{array}
{r}#1 \\ #2 \\ #3 \\ #4 \end{array}\right]}\) \(\newcommand{\cfourvec}[4]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \\ #4 \end{array}\right]}\) \(\newcommand{\fivevec}[5]{\left[\begin{array}{r}#1 \\ #2 \\
#3 \\ #4 \\ #5 \\ \end{array}\right]}\) \(\newcommand{\cfivevec}[5]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \\ #4 \\ #5 \\ \end{array}\right]}\) \(\newcommand{\mattwo}[4]{\left[\begin{array}{rr}#1 \amp
#2 \\ #3 \amp #4 \\ \end{array}\right]}\) \(\newcommand{\laspan}[1]{\text{Span}\{#1\}}\) \(\newcommand{\bcal}{\cal B}\) \(\newcommand{\ccal}{\cal C}\) \(\newcommand{\scal}{\cal S}\) \(\newcommand{\
wcal}{\cal W}\) \(\newcommand{\ecal}{\cal E}\) \(\newcommand{\coords}[2]{\left\{#1\right\}_{#2}}\) \(\newcommand{\gray}[1]{\color{gray}{#1}}\) \(\newcommand{\lgray}[1]{\color{lightgray}{#1}}\) \(\
newcommand{\rank}{\operatorname{rank}}\) \(\newcommand{\row}{\text{Row}}\) \(\newcommand{\col}{\text{Col}}\) \(\renewcommand{\row}{\text{Row}}\) \(\newcommand{\nul}{\text{Nul}}\) \(\newcommand{\var}
{\text{Var}}\) \(\newcommand{\corr}{\text{corr}}\) \(\newcommand{\len}[1]{\left|#1\right|}\) \(\newcommand{\bbar}{\overline{\bvec}}\) \(\newcommand{\bhat}{\widehat{\bvec}}\) \(\newcommand{\bperp}{\
bvec^\perp}\) \(\newcommand{\xhat}{\widehat{\xvec}}\) \(\newcommand{\vhat}{\widehat{\vvec}}\) \(\newcommand{\uhat}{\widehat{\uvec}}\) \(\newcommand{\what}{\widehat{\wvec}}\) \(\newcommand{\Sighat}{\
widehat{\Sigma}}\) \(\newcommand{\lt}{<}\) \(\newcommand{\gt}{>}\) \(\newcommand{\amp}{&}\) \(\definecolor{fillinmathshade}{gray}{0.9}\)
Learning Objectives
By the end of this section, you will be able to:
• Determine the index of refraction, given the speed of light in a medium.
It is easy to notice some odd things when looking into a fish tank. For example, you may see the same fish appearing to be in two different places (Figure \(\PageIndex{1}\)). This is because light
coming from the fish to us changes direction when it leaves the tank, and in this case, it can travel two different paths to get to our eyes. The changing of a light ray’s direction (loosely called
bending) when it passes through variations in matter is called refraction. Refraction is responsible for a tremendous range of optical phenomena, from the action of lenses to voice transmission
through optical fibers.
Definition: REFRACTION
The changing of a light ray’s direction (loosely called bending) when it passes through variations in matter is called refraction.
The speed of light \(c\) not only affects refraction, it is one of the central concepts of Einstein’s theory of relativity. As the accuracy of the measurements of the speed of light were improved, \
(c\) was found not to depend on the velocity of the source or the observer. However, the speed of light does vary in a precise manner with the material it traverses. These facts have far-reaching
implications, as we will see in "Special Relativity." It makes connections between space and time and alters our expectations that all observers measure the same time for the same event, for example.
The speed of light is so important that its value in a vacuum is one of the most fundamental constants in nature as well as being one of the four fundamental SI units.
Figure \(\PageIndex{1}\): Looking at the fish tank as shown, we can see the same fish in two different locations, because light changes directions when it passes from water to air. In this case, the
light can reach the observer by two different paths, and so the fish seems to be in two different places. This bending of light is called refraction and is responsible for many optical phenomena.
Why does light change direction when passing from one material (medium) to another? It is because light changes speed when going from one material to another. So before we study the law of
refraction, it is useful to discuss the speed of light and how it varies in different media.
The Speed of Light
Early attempts to measure the speed of light, such as those made by Galileo, determined that light moved extremely fast, perhaps instantaneously. The first real evidence that light traveled at a
finite speed came from the Danish astronomer Ole Roemer in the late 17th century. Roemer had noted that the average orbital period of one of Jupiter’s moons, as measured from Earth, varied depending
on whether Earth was moving toward or away from Jupiter. He correctly concluded that the apparent change in period was due to the change in distance between Earth and Jupiter and the time it took
light to travel this distance. From his 1676 data, a value of the speed of light was calculated to be \(2.26 \times 10^{8} m/s\) (only 25% different from today’s accepted value). In more recent
times, physicists have measured the speed of light in numerous ways and with increasing accuracy. One particularly direct method, used in 1887 by the American physicist Albert Michelson (1852–1931),
is illustrated in Figure \(\PageIndex{2}\). Light reflected from a rotating set of mirrors was reflected from a stationary mirror 35 km away and returned to the rotating mirrors. The time for the
light to travel can be determined by how fast the mirrors must rotate for the light to be returned to the observer’s eye.
Figure \(\PageIndex{2}\): A schematic of early apparatus used by Michelson and others to determine the speed of light. As the mirrors rotate, the reflected ray is only briefly directed at the
stationary mirror. The returning ray will be reflected into the observer's eye only if the next mirror has rotated into the correct position just as the ray returns. By measuring the correct rotation
rate, the time for the round trip can be measured and the speed of light calculated. Michelson’s calculated value of the speed of light was only 0.04% different from the value used today.
The speed of light is now known to great precision. In fact, the speed of light in a vacuum \(c\) is so important that it is accepted as one of the basic physical quantities and has the fixed value.
\[ c &\equiv 2.99792458 \times 10^{8} \\[5pt] &\sim 3.00 \times 10^{8} m/s \]
The approximate value of \(3.00 \times 10^{8} m/s\) is used whenever three-digit accuracy is sufficient. The speed of light through matter is less than it is in a vacuum, because light interacts with
atoms in a material. The speed of light depends strongly on the type of material, since its interaction with different atoms, crystal lattices, and other substructures varies.
Definition: INDEX OF REFRACTION
We define the index of refraction \(n\) of a material to be
\[n = \frac{c}{v}, \label{index}\]
where \(v\) is the observed speed of light in the material. Since the speed of light is always less than \(c\) in matter and equals \(c\) only in a vacuum, the index of refraction is always greater
than or equal to one. That is, \(n \gt 1\).
Table \(\PageIndex{1}\) gives the indices of refraction for some representative substances. The values are listed for a particular wavelength of light, because they vary slightly with wavelength.
(This can have important effects, such as colors produced by a prism.) Note that for gases, \(n\) is close to 1.0. This seems reasonable, since atoms in gases are widely separated and light travels
at \(c\) in the vacuum between atoms. It is common to take \(n = 1\) for gases unless great precision is needed. Although the speed of light \( v\) in a medium varies considerably from its value \( c
\) in a vacuum, it is still a large speed.
Table \(\PageIndex{1}\): Index
of Refraction in Various Media
Medium n
Gases at \(0ºC, 1 atm\)
Air 1.000293
Carbon dioxide 1.00045
Hydrogen 1.000139
Oxygen 1.000271
Liquids at 20ºC
Benzene 1.501
Carbon disulfide 1.628
Carbon tetrachloride 1.461
Ethanol 1.361
Glycerine 1.473
Water, fresh 1.333
Solids at 20ºC
Diamond 2.419
Fluorite 1.434
Glass, crown 1.52
Glass, flint 1.66
Ice at 20ºC 1.309
Polystyrene 1.49
Plexiglas 1.51
Quartz, crystalline 1.544
Quartz, fused 1.458
Sodium chloride 1.544
Zircon 1.923
Example \(\PageIndex{1}\): Speed of Light in Matter
Calculate the speed of light in zircon, a material used in jewelry to imitate diamond.
The speed of light in a material, \(v\), can be calculated from the index of refraction \(n\) of the material using the equation \(n = c/v\).
The equation for index of refraction (Equation \ref{index}) can be rearranged to determine \(v\)
\[v = \frac{c}{n}. \nonumber\]
The index of refraction for zircon is given as 1.923 in Table \(\PageIndex{1}\), and \(c\) is given in the equation for speed of light. Entering these values in the last expression gives
\[ \begin{align*} v &= \frac{3.00 \times 10^{8} m/s}{1.923} \\[5pt] &= 1.56 \times 10^{8} m/s. \end{align*}\]
This speed is slightly larger than half the speed of light in a vacuum and is still high compared with speeds we normally experience. The only substance listed in Table \(\PageIndex{1}\) that has a
greater index of refraction than zircon is diamond. We shall see later that the large index of refraction for zircon makes it sparkle more than glass, but less than diamond.
Law of Refraction
Figure \(\PageIndex{3}\) shows how a ray of light changes direction when it passes from one medium to another. As before, the angles are measured relative to a perpendicular to the surface at the
point where the light ray crosses it. (Some of the incident light will be reflected from the surface, but for now we will concentrate on the light that is transmitted.) The change in direction of the
light ray depends on how the speed of light changes. The change in the speed of light is related to the indices of refraction of the media involved. In the situations shown in Figure \(\PageIndex{3}
\), medium 2 has a greater index of refraction than medium 1. This means that the speed of light is less in medium 2 than in medium 1. Note that as shown in Figure \(\PageIndex{3a}\), the direction
of the ray moves closer to the perpendicular when it slows down. Conversely, as shown in Figure \(\PageIndex{3b}\), the direction of the ray moves away from the perpendicular when it speeds up. The
path is exactly reversible. In both cases, you can imagine what happens by thinking about pushing a lawn mower from a footpath onto grass, and vice versa. Going from the footpath to grass, the front
wheels are slowed and pulled to the side as shown. This is the same change in direction as for light when it goes from a fast medium to a slow one. When going from the grass to the footpath, the
front wheels can move faster and the mower changes direction as shown. This, too, is the same change in direction as for light going from slow to fast.
Figure \(\PageIndex{3}\): The change in direction of a light ray depends on how the speed of light changes when it crosses from one medium to another. The speed of light is greater in medium 1 than
in medium 2 in the situations shown here. (a) A ray of light moves closer to the perpendicular when it slows down. This is analogous to what happens when a lawn mower goes from a footpath to grass.
(b) A ray of light moves away from the perpendicular when it speeds up. This is analogous to what happens when a lawn mower goes from grass to footpath. The paths are exactly reversible.
The amount that a light ray changes its direction depends both on the incident angle and the amount that the speed changes. For a ray at a given incident angle, a large change in speed causes a large
change in direction, and thus a large change in angle. The exact mathematical relationship is the law of refraction, or "Snell's Law," which is stated in equation form as
THE LAW OF REFRACTION (Snell's Law)
\[n_{1} \sin \theta_{1} = n_{2} \sin \theta_{2}.\label{25.4.2}\]
Here, \(n_{1}\) and \(n_{2}\) are the indices of refraction for medium 1 and 2, and \(\theta_{1}\) and \(\theta_{2}\) are the angles between the rays and the perpendicular in medium 1 and 2, as shown
in Figure \(\PageIndex{3}\). The incoming ray is called the incident ray and the outgoing ray the refracted ray, and the associated angles the incident angle and the refracted angle. The law of
refraction is also called Snell’s law after the Dutch mathematician Willebrord Snell (1591–1626), who discovered it in 1621. Snell’s experiments showed that the law of refraction was obeyed and that
a characteristic index of refraction \(n\) could be assigned to a given medium. Snell was not aware that the speed of light varied in different media, but through experiments he was able to determine
indices of refraction from the way light rays changed direction.
A classic observation of refraction occurs when a pencil is placed in a glass half filled with water. Do this and observe the shape of the pencil when you look at the pencil sideways, that is,
through air, glass, water. Explain your observations. Draw ray diagrams for the situation.
Example \(\PageIndex{2}\): Determine the Index of Refraction from Refraction Data
Find the index of refraction for medium 2 in Figure \(\PageIndex{3a}\), assuming medium 1 is air and given the incident angle is \(30.0^{\circ}\) and the angle of refraction is \(22.0^{\circ}\).
The index of refraction for air is taken to be 1 in most cases (and up to four significant figures, it is 1.000). Thus \(n_{1} = 1.00\) here. From the given information, \(\theta_{1} = 30.0^{\circ}\)
and \(\theta_{2} = 22.0^{\circ}\) With this information, the only unknown in Snell’s law is \(n_{2}\), so that it can be used to find this unknown.
Snell's law (Equation \ref{25.4.2}) can be rearranged to isolate \(n_{2}\) gives
\[n_{2} = n_{1}\frac{\sin{\theta_{1}}}{\sin{\theta_{2}}}.\]
Entering known values,
\[ \begin{align*} n_{2} &= n_{1}\frac{\sin{30.0^{\circ}}}{\sin{22.0^{\circ}}} \\[5pt] &= \frac{0.500}{0.375} \\[5pt] &=1.33. \end{align*}\]
This is the index of refraction for water, and Snell could have determined it by measuring the angles and performing this calculation. He would then have found 1.33 to be the appropriate index of
refraction for water in all other situations, such as when a ray passes from water to glass. Today we can verify that the index of refraction is related to the speed of light in a medium by measuring
that speed directly.
Example \(\PageIndex{3}\): A Larger Change in Direction
Suppose that in a situation like that in the previous example, light goes from air to diamond and that the incident angle is \(30.0^{\circ}\). Calculate the angle of refraction \(\theta_{2}\) in the
Again the index of refraction for air is taken to be \(n_{1} = 1.00\), and we are given \(\theta_{1} = 30.0^{\circ}\). We can look up the index of refraction for diamond in Table \(\PageIndex{1}\),
finding \(n_{2} = 2.419\). The only unknown in Snell’s law is \(\theta_{2}\), which we wish to determine.
Solving Snell’s law (Equation \ref{25.4.2}) for \(\sin{\theta_{2}}\) yields
\[ \sin{\theta_{2}} = \frac{n_{1}}{n_{2}}\sin{\theta_{1}}.\]
Entering known values,
\[ \begin{align*} \sin{\theta_{2}} &= \frac{1.00}{2.419} \sin{30.0^{\circ}} \\[5pt] &= \left( 0.413 \right) \left( 0.500 \right) \\[5pt] &= 0.207. \end{align*}\]
The angle is thus
\[\theta_{2} = \sin{0.207}^{-1} = 11.9^{\circ}.\]
For the same \(30^{\circ}\) angle of incidence, the angle of refraction in diamond is significantly smaller than in water (\(11.9^{\circ}\) rather than \(22^{\circ}\) -- see the preceding example).
• The changing of a light ray’s direction when it passes through variations in matter is called refraction.
• The speed of light in vacuuum \(c = 2.99792458 \times 10^{8} \sim 3.00 \times 10^{8} m/s\)
• Index of refraction \(n = \frac{c}{v}\), where \(v\) is the speed of light in the material, \(c\) is the speed of light in vacuum, and \(n\) is the index of refraction.
• Snell’s law, the law of refraction, is stated in equation form as \(n_{1} \sin_{\theta_{1}} = n_{2} \sin_{\theta_{2}}\).
changing of a light ray’s direction when it passes through variations in matter
index of refraction
for a material, the ratio of the speed of light in vacuum to that in the material | {"url":"https://phys.libretexts.org/Bookshelves/College_Physics/College_Physics_1e_(OpenStax)/25%3A_Geometric_Optics/25.03%3A_The_Law_of_Refraction","timestamp":"2024-11-11T14:55:34Z","content_type":"text/html","content_length":"155120","record_id":"<urn:uuid:19132b75-23dd-458b-8a65-e78a9ef3c5ee>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00676.warc.gz"} |
Frontiers | Development and initial validation of a mathematics-specific spatial vocabulary scale
• ^1Department of Psychological Sciences, University of Missouri, Columbia, SC, United States
• ^2Jacobs Institute for Innovation in Education, University of San Diego, San Diego, CA, United States
This study describes the development and initial validation of a mathematics-specific spatial vocabulary measure for upper elementary school students. Reviews of spatial vocabulary items, mathematics
textbooks, and Mathematics Common Core State Standards identified 720 mathematical terms, 148 of which had spatial content (e.g., edge). In total, 29 of these items were appropriate for elementary
students, and a pilot study (59 fourth graders) indicated that nine of them were too difficult (< 50% correct) or too easy (> 95% correct). The remaining 20 items were retained as a spatial
vocabulary measure and administered to 181 (75 girls, mean age = 119.73 months, SD =4.01) fourth graders, along with measures of geometry, arithmetic, spatial abilities, verbal memory span, and
mathematics attitudes and anxiety. A Rasch model indicated that all 20 items assessed an underlying spatial vocabulary latent construct. The convergent and discriminant validity of the vocabulary
measure was supported by stronger correlations with theoretically related (i.e., geometry) than with more distantly related (i.e., arithmetic) mathematics content and stronger relations with spatial
abilities than with verbal memory span or mathematics attitudes and anxiety. Simultaneous regression analyses and structural equation models, including all measures, confirmed this pattern, whereby
spatial vocabulary was predicted by geometry knowledge and spatial abilities but not by verbal memory span, mathematics attitudes and anxiety. Thus, the measure developed in this study helps in
assessing upper elementary students' mathematics-specific spatial vocabulary.
The development of mathematical competencies is a critical part of children's schooling and sets the foundation for future educational and occupational opportunities and contributes to functioning
(e.g., financial decision-making) in other aspects of life in the modern world (National Mathematics Advisory Panel, 2008; Joensen and Nielsen, 2009; Kroedel and Tyhurst, 2012; Ritchie and Bates,
2013). There are many factors that influence children's mathematical development, including spatial abilities. In fact, the relation between some areas of mathematics and conceptions of space can be
traced back to the early emergence of mathematics as an academic discipline (Dantiz, 1954). Modern cognitive scientists define spatial abilities as the capacity to perceive, retain, retrieve, and
mentally transform the static and dynamic visual information of objects and their relationships (Wai et al., 2009; Uttal et al., 2013a; Verdine et al., 2014). Related studies confirm the relationship
between spatial abilities, various aspects of mathematical development (Lachance and Mazzocco, 2006; Li and Geary, 2013, 2017; Gilligan et al., 2017; Verdine et al., 2017; Zhang and Lin, 2017; Geer
et al., 2019; Mix, 2019; Hawes and Ansari, 2020; Attit et al., 2021; Geary et al., 2023), innovation in science, technology, engineering, and mathematics (STEM) fields (Wai et al., 2009; Kell et al.,
2013; Uttal et al., 2013b) and competence in technical–mechanical blue-collar occupations (Humphreys et al., 1993; Gohm et al., 1998).
Although the relation between general spatial abilities and mathematics is well established, the more specific relations between different aspects of spatial abilities and mathematical learning and
knowledge are not well understood. For example, mental rotation abilities predicted standardized mathematics achievement and accuracy of placing whole numbers on a number line for 6- and 7-year-olds
but not for older students (Gilligan et al., 2019). For older students, in contrast, visuospatial attention, not mental rotation skills, predicted the accuracy of fractions placements on a number
line task (Geary et al., 2021a). Other studies suggest that spatial abilities may be particularly important for learning some types of newly presented mathematical material and may become less
important as students become familiar with this material (Casey et al., 1997; Mix et al., 2016).
Most of what we know about these relations is based on measures of spatial abilities, with comparatively less known about the contributions of students' developing spatial vocabulary (below) to their
mathematical competencies. Mathematics-specific spatial vocabulary represents explicit statements about the intersection between spatial abilities and mathematical concepts. For instance, spatial
ability includes the inherent brain and cognitive systems for processing information about objects which is eventually applied to geometric shapes (Izard and Spelke, 2009); the intersection is
represented with, for instance, an understanding of the meaning of edge and face for geometric solids. A full understanding of the spatial-mathematics relation will require tracking developing the
spatial vocabulary of students and examining how vocabulary contributes to this relation. To facilitate the study of this relation, we developed and provided the initial validation of a
mathematics-focused spatial vocabulary measure for elementary school students.
Mathematics vocabulary and achievement
There is a misconception that early mathematical development largely involves learning symbolic arithmetic and associated concepts and procedural rules (Crosson et al., 2020). It does, of course,
involve these but also includes the development of a mathematical language, including a specific mathematics vocabulary (Toll and Van Luit, 2014; Purpura and Logan, 2015; Hornburg et al., 2018). Even
though there is no agreed-upon definition, in the most general sense, mathematical language is defined as keywords and concepts representing mathematical activities (for a review, see Turan et al.,
2022). Sistla and Feng (2014) highlighted that mathematical language often differs from general language, stating that “In Math, there are many words used for the same operation, for example, ‘add
them up, ‘the sum,' ‘the total,' ‘in all,' and ‘altogether' are phrases used to mean to use the addition operation, but these are not terms used in everyday language” (p. 4).
A recent meta-analysis, including 40 studies with 55 independent samples, revealed that mathematics vocabulary is moderately but consistently associated with mathematics achievement (Lin et al., 2021
). However, the association is nuanced, depending on students' age and achievement levels, the novelty of topics, and the domain of mathematics (Powell et al., 2017; Peng and Lin, 2019; Lin et al.,
2021; Ünal et al., 2021; Espinas and Fuchs, 2022). More specifically, mathematics vocabulary appears to play a more substantial role during the initial learning of mathematics subdomains (e.g.,
arithmetic) and needs to become increasingly nuanced with the introduction of more complex mathematics across grades (Lin et al., 2021; Ünal et al., 2021). Furthermore, depending on the topic, some
aspects of mathematics vocabulary seem more critical than others. For instance, Peng and Lin (2019) found that word problem performance was more strongly associated with measurement and
geometry-related vocabulary than with numerical operations-related vocabulary.
The importance of a strong mathematics vocabulary is illustrated by Hughes et al. (2020) finding that seventh-grade mathematics books contained over 450 mathematics vocabulary words. The measurement
of mathematics vocabulary is thus an essential component of tracking students' mathematical development, but the content of these measures varies across studies. Some measures combine different areas
(e.g., comparative terms, such as combine and take away, and spatial terms, such as near and far; Purpura et al., 2017), whereas others focus on specific areas (e.g., measurement vocabulary, such as
decimeter; geometry vocabulary, such as parallelogram; and numerical operations vocabulary, such as fraction) (Peng and Lin, 2019). Although general mathematics vocabulary measures are useful,
measures that assess content-specific vocabulary (e.g., geometry related) are important for tracking students' development in specific areas of mathematics (Peng and Lin, 2019).
Mathematics-specific spatial vocabulary is one such area. To be sure, there are mathematics vocabulary assessments that include spatial terms, and these are sometimes found to mediate the relation
between spatial abilities and mathematics outcomes for younger students (Purpura and Logan, 2015; Georges et al., 2021; Gilligan-Lee et al., 2021). For instance, Gilligan-Lee et al. (2021) showed
that spatial vocabulary was predictive of overall mathematics achievement, controlling spatial abilities, and general vocabulary. However, their measure was composed of items that were focused on
spatial direction (e.g., to the right) and location (e.g., above) and not spatial terms that have specific mathematical meanings (e.g., edge of a cube). Moreover, most of these studies have focused
on students in early elementary school, kindergarten, or preschool (e.g., Toll and Van Luit, 2014; Purpura and Logan, 2015; Powell and Nelson, 2017; Vanluydt et al., 2020), although there are a few
studies focusing on older students (e.g., Peng and Lin, 2019; Ünal et al., 2021).
Hence, there is a need for a mathematics vocabulary assessment explicitly focusing on mathematics-specific spatial terms for upper-elementary school students, hereafter, referred to as spatial
vocabulary. This is important because some aspects of spatial-related mathematics vocabulary are not typically included in mathematics vocabulary measures. Some of these measures include terms
associated with shape (e.g., cube and parallelogram), operation (e.g., quotient and sum), geometry (e.g., line, angle, and edge), or number (e.g., odd and even) (Powell et al., 2020), but less often
include more specific key spatial concepts. For example, “edge” may be a spatial term included in mathematical vocabulary scales; however, those scales may not include terms that represent
relationships between objects in space, such as “perpendicular,” “parallel,” “intersecting,” or “adjacent.” The same is true for geometry terms, which may include types of angles and lines and
properties of shapes but may be less likely to include words representing relationships between them.
Current study
This study aimed to develop an easy-to-administer measure of elementary students' mathematics-specific spatial vocabulary. We developed the measure by compiling items from multiple existing sources
and then assessed its convergent and discriminant validity (Campbell and Fiske, 1959). Convergent validity is established when spatial vocabulary scores are strongly correlated with mathematics and
cognitive measures that have a clear spatial component to them, specifically geometry and spatial abilities. Discriminant validity is established when the correlations between spatial vocabulary and
geometry and spatial abilities are significantly stronger than the correlations with mathematics and ability domains that do not have a clear spatial component to them, specifically arithmetic and
verbal memory span. We also assessed the relation between spatial vocabulary and mathematics attitudes and anxiety as a further control. The latter is often related to concurrent mathematics
achievement and longitudinal gains in achievement (Eccles and Wang, 2016; Geary et al., 2021b). Discriminant validity would be further supported when scores on the spatial vocabulary measure are not
strongly related to mathematics attitudes and anxiety.
Participants included 181 fourth graders (mean age = 119.73 months, SD =4.01). In total, 96 students identified as boys, with 75 identified as girls, 1 preferred not to identify their gender, and the
remaining did not complete this item. Students were asked whether they preferred to speak a language other than English at home, and 39 students indicated that they did (predominantly Spanish).
Students were recruited through advertisements and through schools in several large urban districts in California; specifically, teachers shared information on the project with students in their
classrooms, and students within these classrooms volunteered for the study.
Mathematics measures
The mathematics measures assessed fluency at solving whole number and fractions arithmetic problems, the accuracy of whole and fractions number line placements, accuracy at solving non-standard
arithmetic problems, and geometry. The tests were administered in small groups on the students' computers using Qualtrics (Qualtrics, Provo, UT).
Arithmetic fluency
The test included 24 whole-number addition (e.g., 87 + 5), subtraction (e.g., 35–8), and multiplication (e.g., 48 x 2) problems. The problems were presented with an answer, and the student responded
Yes (correct) or No (incorrect). Half the problems were incorrect, with the answer +1 or 2 from the correct answer. Students had 2 min to solve as many problems as possible. A composite arithmetic
fluency score was based on the correct answer selected across the three operations (M = 9.79, SD = 4.63; α = 0.90).
Fractions arithmetic
The test included 24 fractions addition (e.g., $\frac{1}{4}$ + 1/8 = 3/8) and fractions multiplication problems (e.g., 2$\frac{1}{2}$ x $\frac{1}{4}$ = 5/8). The problems were presented with an
answer, and the student responded Yes (correct) or No (incorrect). Half the problems were incorrect, with error foils based on common fractions errors (e.g., $\frac{1}{4}$ + 2/4 = 3/8). A composite
fractions arithmetic score was based on the correct number selected (M = 6.55, SD = 4.21; α = 0.80).
Whole number line
The student was asked to place 26 target numbers on a 0-1000 number line. The placements were made by moving a slider to the chosen location on the number line with 0 to 1000 endpoints. Following
Siegler and Booth (2004), the accuracy of number line estimation was determined by calculating their mean percent absolute error [PAE = (|Estimate – Target Number|)/1000, M = 7.98%, SD = 4.52%, α =
0.89]. For the analyses, these scores were multiplied by −1 so that positive scores represent better performance.
Fractions number line
The student was asked to place 10 target fractions on a 0–5 number line (10/3, 1/19, 7/5, 9/2, 13/9, 4/7, 8/3, 7/2, 17/4, and 11/4). The placements were made by moving a slider to the chosen location
on a number line with 0 to 5 endpoints. Following Siegler et al. (2011), accuracy was determined by calculating their mean percent absolute error [PAE = (|Estimate – Target Number|)/5, M = 27.17%, SD
= 10.70%, α = 0.67]. For the analyses, these scores were multiplied by −1 so that positive scores represent better performance.
Equality problems
Students' understanding of mathematical equality (i.e., the meaning of =) can be assessed using problems in non-standard formats, such as 8 = __ + 2 – 3 (Alibali et al., 2007; McNeil et al., 2019).
We used the 10-item measure developed by Scofield et al. (2021), where items are presented in a multiple-choice format (4 options). The score was the mean percent correct for the 10 items (M = 70.0,
SD = 28.71, α = 0.88).
In total, 20 items were from the released item pool from the 4th grade National Assessment of Educational Progress (NAEP; https://nces.ed.gov/nationsreportcard/). The items assess students' knowledge
of shapes and solids, including identification (e.g., rectangle and cylinder) and their properties (e.g., number of sides, faces, the diameter of a circle, and angles in a triangle), as well as
knowledge of lines (e.g., parallel). The students were given 10 min to complete the test.
The items were submitted to a Rasch model, grounded on an IRT analysis for the core sample of students (n = 170, scores for the remaining students were imputed, below), following Hughes et al. (2020)
. Three types of fit statistics were used: item difficulty, infit, and outfit statistics. The item difficulty metric provided information about whether the difficulty of each item is suitable to the
person's ability levels on the latent trait (Van Zile-Tamsen, 2017). The items within the range of−3.0 to 3.0 were kept in the measure. The infit statistics show unanticipated response patterns based
on items targeted to the individuals' imputed latent ability based on prior responses. The outfit statistics are more susceptible to guessing or mistakes, such as when the individual guesses
correctly on an item that is well above their imputed ability level or misses an item that should be relatively easy (Runnels, 2012). The acceptable range of mean-square values (MNSQ) is from 0.7 to
1.3 (Linacre, 2007); items with infit–outfit values within that range were retained.
The analyses were conducted using the mirt package in R (Chalmers, 2012; R Core Team, 2022). The results indicated that one item (Item 3) was not contributing to the measurement of geometry knowledge
and was dropped, leaving 19 items for the final measure. The IRT-based scores and the total correct from the 19 items were highly correlated (r = 0.99, p < 0.001), and thus total correct was used in
the analyses (M = 9.43, SD = 4.09, α = 0.88).
Spatial measures
The spatial measures assessed a range of competencies, including visuospatial attention, mental rotation abilities, and spatial visualization. The measures were administered on the students'
computers in small groups. In addition to the measures mentioned below, we also administered the Corsi Block Tapping Task (Corsi, 1972; Kessels et al., 2000), but the scores were not reliable for
this sample, and thus the measure was dropped.
Visual spatial attention
Visuospatial attention was assessed using the Judgment of Line Angle and Position test (Collaer and Nelson, 2002; Collaer et al., 2007; JLAP). The task requires students to match the angle of a
single line to one of the 15-line options in an array below the target line. There were 20 sequentially presented test items, with students selecting the item that matched the angle of the target.
Each trial began immediately after the student's response, or at the 10 s time limit. The score was the number of correct trials (M = 7.72, SD = 3.35, α = 0.88).
Mental rotation
Ganis and Kievit (2015) software was used to generate 24 mental rotation items. The items included a three-dimensional baseline object (constructed from cubes) and a target stimulus that was either
the same or different from the baseline object but rotated 0 to 150 degrees (the baseline and target objects were the same for 12 items and different for 12 items). The task was to determine whether
the objects were the same or different, and the score was the number of correct trials (M = 15.93, SD = 4.28, α = 0.94).
Spatial visualization
Ekstrom and Harman (1976) Paper Folding Test assessed visualization abilities. Students were asked to imagine a paper being folded and a hole punched through the folds. They were then asked to select
the image that represents what that same paper would look like if it were unfolded. Students were shown one example problem with an explanation of the correct answer. Students completed 10 items, and
the score was the total correct across items (M = 3.88, SD = 2.28, α = 0.70).
Spatial transformation
This measure was developed for this project and included items that required students to identify the shape corresponding to two-dimensional representations of the front, right, and top of a figure,
as shown in Figure 1. In total, 22 of these items were created and administered to 59 fourth graders in two classrooms. Performance on six items was poorly correlated (rs < 0.20) with performance on
the other items and was therefore dropped. The resulting 16-item measure was administered to the current sample, and the score was the number correct (M = 8.58, SD = 3.80, α = 0.72). The measure
loaded on the same spatial factor as the other spatial measures (below), confirming it is tapping spatial ability.
FIGURE 1
Visuomotor skills were assessed with the Beery-Buktenica Developmental Test of Visual-Motor Integration (Beery et al., 2010). The measure includes 30 geometric forms that are arranged from simple to
more complex. The task is to draw the figures, which are then scored as correct (1) or not (0) based on standard procedures (M = 23.65, SD = 4.09).
Memory span measures
Digit span
Both forward and backward verbal digit spans were assessed. The former started with three digits and the latter with two. For each trial, students heard a sequence of digits at 1 s intervals. The
task was to recall the digit list by tapping on a circle of digits displayed on the student's computer screen. The student advanced to the next level if the response was correct (in digits and
presentation order). If the response was incorrect, the same level was presented a second time. If a consecutive error occurred, the student regressed one level. Each direction (forward and then
backward) ended after 14 trials. The student's score was the highest digit span correctly recalled before making two consecutive errors at the same span length.
Mathematics attitudes
The 10 items were from the student attitudes assessment of the Trends in International Mathematics and Science Study (TIMSS; Martin et al., 2015). The items assessed interest in mathematics (e.g., “I
learn many interesting things in mathematics,” “I like mathematics”). The items were on a 1 (Disagree a lot) to 4 (Agree a lot) scale, with negatively worded items (e.g., “Mathematics is boring”)
reverse coded. The score was the mean across items (M = 3.12, SD = 0.85, α = 0.90).
The 9 items were from the student attitudes assessment of the TIMSS (Martin et al., 2015). The items assessed mathematics self-efficacy (e.g., “I usually do well in mathematics,” “I learn things
quickly in mathematics”). The items were on a 1 (Disagree a lot) to 4 (Agree a lot) scale, with negatively worded items (e.g., “I am just not good at mathematics”) reverse coded. The score was the
mean across items (M = 3.04, SD = 0.81, α = 0.72).
Ramirez et al. (2013) 8-item measure was used to assess students' mathematics anxiety (e.g., “How do you feel when taking a big test in math class?”, “How do you feel when you have to solve 27 + 15?
”). Students responded by clicking on one of the three options in Figure 2, and thus higher scores (1 to 3) reflected lower anxiety (M = 2.90, SD = 0.32, α = 0.83).
FIGURE 2
Spatial vocabulary
Scale development
We began with four main mathematics education resources: (1) Cannon et al. (2007) Spatial Language Coding Manual; (2) the Quantile Framework for Mathematics (a standardized measure of mathematical
skills and concepts based on the Lexile Framework for Reading; Cuberis, 2021); (3) the Mathematics Common Core State Standards (focusing on grades third through fifth; http://www.corestandards.org/
Math/); and (4) a mathematics vocabulary measure developed by Powell et al. (2017) based on three common third and fifth-grade mathematics textbooks.
A total of 720 mathematical terms were extracted from these resources, and three independent researchers determined that 148 of them were spatially relevant. Two independent researchers then assessed
whether the items were appropriate for elementary school children, which yielded 29 words for the initial version of the measure. This version contained seven parts that focused on position,
direction, pattern, dimension, orientation, action, and geometry-relevant vocabulary. An electronic version of the assessment was created using Qualtrics.
The assessment was piloted on 36 incoming 5th-grade students through a virtual STEAM course that provided hands-on learning experiences related to spatial reasoning and problem-solving through
origami. Students were asked to complete the Qualtrics version of the assessment before and after completion of the virtual course. An item-level analysis was conducted to determine internal
consistency and level of difficulty. Items were determined to be too easy if >95% of students answered correctly before the lessons. Words were considered too difficult if < 50% of students answered
correctly before the lessons. Based on these criteria, nine words were excluded.
The remaining 20 items were submitted to an IRT analysis, following the same procedures described for the geometry test for the core sample of students (n = 170, scores for the remaining students
were imputed, see below). The results indicated that all items contributed to the measurement of spatial vocabulary and were retained for the final measure. The items, along with an Item Person Map (
Supplementary Figure A1), are shown in the Supplementary material. The IRT-based scores and the total correct from the 20 items were highly correlated (r = 0.99, p < 0.001), and thus total correct
was used in the analyses (M = 12.72, SD = 4.16, α = 0.81).
After receiving parent consent and student assent, students completed a battery of assessments online on the students' computer, including the spatial vocabulary, mathematics, and spatial ability
measures. Students completed measures in virtual groups of 6–8 students that were proctored by trained researchers. Assessments were given once a week over the course of 3 weeks. Sessions were
approximately 1 h long. Students were scheduled to meet at the same time and day of the week over the 3 weeks with the same proctor. Most of the measures were assessed through a Qualtrics survey, but
the spatial and verbal memory span measures were administered using customized programs developed through Inquisit by Millisecond (https://www.millisecond.com).
During the first session, students were provided a Qualtrics link and were asked about their sex, preferred language, and attitudes toward math. After completing these assessments, they completed the
digit span and the JLAP, mental rotation, and Corsi measures on the Inquisit platform. During the second session, students completed the Beery assessment and a second battery of assessments on
Qualtrics. Each student was sent a Beery assessment to their homes. The assessment was sealed in a manila envelope with instructions not to open it until instructed to do so, along with a
pre-addressed mailer to return the test. Once students were ready to begin, the researcher gave explicit instructions on how to proceed. Once a student had completed the Beery form, the researcher
would watch as they placed the form into the pre-addressed mailer and sealed the envelope. Students were then sent a battery of assessments on Qualtrics. The assessments included arithmetic fluency,
fractions arithmetic, spatial transformation, and the two number line estimation tasks. At the end of the second session, the researchers then gave students instructions to leave the mailer with the
Beery assessment outside their homes for UPS pickup or to drop it off at their nearest post office. During the third session, students were provided a final Qualtrics link that included the spatial
vocabulary assessment, Paper Folding, geometry assessment, and equality problems.
The 11% of missing values were estimated using the multiple imputations procedure in SAS (2014). The imputations were based on all key variables and were the average across five imputations. Scores
were then standardized (M = 0, SD = 1). The first goal was to reduce the number of variables by creating composite measures. The five arithmetic measures were submitted to principal components factor
analyses with Promax rotation (allowing correlated factors) using proc factor (SAS, 2014), as were the seven cognitive (i.e., spatial, verbal memory span) measures and three attitude measures.
Factors with Eigenvalues > 1 were retained; the next lowest Eigenvalue was 0.77 for the arithmetic measures and cognitive measures and 0.38 for the attitudes measures. The composite measures were
then used to assess the convergent and discriminant validity of the spatial measure.
We then ran follow-up structural equation models (SEM) in Proc Calis (SAS, 2014). The goal was to isolate variance common to all measures (composites for arithmetic, spatial, verbal memory span, and
mathematics attitudes), which included general cognitive ability (e.g., top-down attentional control; Ünal et al., 2023) and any method variance (Campbell and Fiske, 1959). All variables defined a
general factor for the baseline model. For Model 2, paths from geometry and spatial abilities were added to the baseline model. For Model 3, paths from the alternative measures (i.e., arithmetic,
verbal memory span, and mathematics attitudes) were added to the baseline model. Convergent validity would be supported by the finding of significant geometry to spatial vocabulary and spatial
abilities to spatial vocabulary paths in Model 2, and discriminant validity by non-significant paths from alternative measures to spatial vocabulary in Model 3.
We estimated the fit of the various models using standard measures, that is, χ^2 (non-significant effects indicate better model fit), root mean square error approximation (RMSEA), standardized root
mean square residual (SRMR values < 0.06 indicate good model fit), and the comparative fit index (CFI). The χ^2 value varies directly with the sample size and thus is not always a good measure of
model fit. The combination of absolute (RMSEA, SRMR) and comparative (CFI) measures reduces the overall proportion of Type I and Type II errors (Hu and Bentler, 1999). Hu and Bentler suggested that
good fit is obtained when CFI > 0.95 and RMSEA < 0.06. However, others have recommended a more graded set of guidelines for RMSEA, such that an RMSEA < 0.05 is considered good, values between 0.05
and 0.08 are considered acceptable, and values between 0.08 and 0.10 are considered marginal (Fabrigar et al., 1999).
Mean unstandardized scores for all the measures are shown in Table 1.
TABLE 1
Factor structure
Two components emerged from the correlation matrix among the arithmetic measures (standardized loadings > 0.50). The first had an Eigenvalue of 2.01 and explained 40% of the covariance between
measures and the second had an Eigenvalue of 1.15 and explained 23% of the covariance. The standardized regressions from the rotated factor pattern are shown in the top section of Table 2. The first
factor, hereafter simple arithmetic, was defined by the mean of the arithmetic fluency, fractions arithmetic, and fractions number line measures. The second factor, hereafter complex arithmetic, was
defined by the mean of the equality and whole number line measures.
TABLE 2
As shown in the second section in Table 2, two components emerged for the cognitive measures. The first had an Eigenvalue of 3.26 and explained 47% of the covariance among measures, whereas the
second had an Eigenvalue of 1.02 and explained 15% of the covariance. The first factor, hereafter spatial abilities, was defined by means of paper folding, spatial transformation, JLAP, MRT, and
Berry measures. The second factor, hereafter memory span, was defined by the mean of the digit span forward and digit span backward measures.
As shown in the third section of Table 2, the mathematics attitudes measures defined a single factor that explained 80% of the covariance among them (Eigenvalue = 2.41). The score was defined by
means of the three attitude and anxiety measures. The spatial vocabulary and geometry measures were not included in the factors analyses because the former is the core dependent measure in the
analyses, and the latter is a core measure for the assessment of the convergent validity of the spatial vocabulary measure.
Convergent and discriminant validity
Correlational and regression analyses
As noted, the convergent and discriminant validity of the spatial vocabulary measure can be assessed by the pattern of correlations with mathematics measures that have a clear spatial component to
them (i.e., the geometry test) and those that do not (i.e., the arithmetic tests; Campbell and Fiske, 1959). Similarly, if the development of spatial vocabulary is influenced by spatial abilities,
then the measure should be more strongly correlated with spatial ability than memory span.
As shown in Table 3, both patterns emerged. The table presents correlations among the measures and reliabilities (alphas) on the diagonal. The key correlations are in bold, and all are higher than
other correlations in the matrix. Spatial vocabulary is more strongly related to geometry (r = 0.73, p < 0.001) than simple (r = 0.32, p < 0.001) or complex (r = 0.52, p < 0.001) arithmetic, and more
strongly related to spatial abilities (r =0.65, p < 0.001) than Memory Span (r =0.35, p < 0.001). Table 4 shows the results of a simultaneous regression analysis, whereby spatial vocabulary was
regressed on the geometry, simple arithmetic, complex arithmetic, spatial abilities, memory span, and mathematics attitudes measures. The results revealed that only geometry (p < 0.001) and spatial
abilities (p < 0.001) were significant predictors of spatial vocabulary (all other ps > 0.283); R^2 = 0.57, F[(6, 174)] = 39.04, p < 0.001.
TABLE 3
TABLE 4
Structural equation models
As noted, the baseline model involved estimating paths from a general factor to spatial vocabulary, spatial abilities, geometry, simple arithmetic, complex arithmetic, memory span, and mathematics
attitudes. As can be seen in Table 5, the fit statistics for the baseline model were acceptable for CFI, SRMR, and marginal for RMSEA. The standardized path estimates for this model are shown in
Figure 3, all of which were significant (ps < 0.001).
TABLE 5
FIGURE 3
Estimating paths from spatial abilities and geometry to spatial vocabulary (Model 2) resulted in an improvement in overall model fit, Δχ^2(2) = 5.56, p = 0.062, relative to the baseline model, and
improvements in all fit statistics. Examination of the paths from this model indicated that the path from the general factor to spatial vocabulary was no longer significant (p = 0.597) and thus was
dropped, creating Model 2b. The overall fit of Model 2b, Δχ^2(1) = 5.26, p = 0.022, was improved relative to the baseline model, and all fit indices were acceptable.
Estimating paths from simple and complex arithmetic, memory span, and mathematics attitudes to spatial vocabulary (Model 3) did not improve overall model fit, Δχ^2(4) = 7.26, p = 0.123, relative to
the baseline model. Moreover, only the path from mathematics attitudes to spatial vocabulary was significant, but the coefficient was negative, β = −0.11, se = 0.058, t = −1.98, p = 0.047.
The results indicate that Model 2b is the best representation of the covariance among the variables. The associated standardized path coefficients are shown in Figure 4.
FIGURE 4
The goal of this study was to develop and provide the initial validation for a mathematics-specific spatial vocabulary measure for late elementary school students. The goal stemmed from the
contribution of mathematics vocabulary to students' mathematical development (Toll and Van Luit, 2014; Purpura and Logan, 2015; Hornburg et al., 2018), and its correlation with mathematics
achievement (Lin et al., 2021). The goal was also based on the relationship between spatial abilities and mathematical development and innovation in STEM fields (Kell et al., 2013; Geary et al., 2023
), as well as its importance for performance in technical-mechanical blue-collar fields (Humphreys et al., 1993; Gohm et al., 1998). The latter is a critical but underappreciated occupation that are
particularly attractive to adolescent boys and men from blue-collar backgrounds (Stoet and Geary, 2022), and the cognitive abilities associated with success in them include spatial and mechanical
abilities (Gohm et al., 1998). In any case, the study builds on prior studies that have largely focused on younger students and typically include vocabulary items that cover different mathematics
topics (Toll and Van Luit, 2014; Purpura and Logan, 2015; Powell and Nelson, 2017; Vanluydt et al., 2020; e.g., measurement, number) or include spatial items that are not mathematics specific (
Gilligan-Lee et al., 2021).
As an example of the latter, Gilligan-Lee et al. (2021) developed a spatial vocabulary measure for elementary school students that focused on spatial-specific terms (e.g., under, over, to the right
of). Performance on this measure was correlated with spatial abilities and was predictive of overall mathematics achievement, controlling spatial abilities. Our focus, in contrast, was on spatial
terms that have a specific mathematics meaning and are frequently used in mathematics textbooks (Powell et al., 2017) and included in the Mathematics Common Core State Standards for upper elementary
school students (http://www.corestandards.org/Math/). The utility of our spatial vocabulary measure was evaluated following a combination of Hughes et al.'s (2020). Rasch model procedure for
developing a mathematic vocabulary measure and Campbell and Fiske's (1959) convergent and discriminant validity approach.
Convergent validity requires the measure to be more strongly related to conceptually similar than dissimilar measures. Thus, our inclusion of a geometry measure composed of items from the high-stakes
NAEP and standard spatial ability measures. Much of geometry has a spatial component to it (Clements and Battista, 1992), and prior research shows that the development of spatial abilities and
spatial vocabulary co-occurs (e.g., Gilligan-Lee et al., 2021). Although spatial abilities and spatial vocabulary are correlated with aspects of arithmetic performance and may contribute to
development in these areas (Geary and Burlingham-Dubree, 1989; Gilligan et al., 2019; Geary et al., 2021a; Gilligan-Lee et al., 2021), these correlations should, in theory, be weaker than those
between spatial vocabulary and geometry. This is what we found: a result that supports the convergent and discriminant validity of the measure within mathematics. If the spatial vocabulary measure is
simply a reflection of general cognitive ability, which is correlated with vocabulary and academic achievement broadly (Roth et al., 2015), then it should show similar relations to spatial abilities
and verbal memory span, but it did not. In keeping with the convergent and discriminant validity within the cognitive domain, spatial vocabulary was more strongly related to spatial abilities than to
verbal memory span.
Moreover, mathematics outcomes are often related to mathematics attitudes and anxiety (Eccles and Wang, 2016; Geary et al., 2021a), and they were significantly correlated with geometry and arithmetic
scores, as well as with spatial vocabulary, in this study (Table 3). The key finding here is that spatial vocabulary was unrelated to mathematics attitudes (combined attitudes and anxiety) once
spatial abilities and geometry performance were controlled. In total, the results suggest that our spatial vocabulary measure is capturing aspects of mathematical competencies that have a strong
spatial component to them (geometry in this case; Clements and Battista, 1992), and are related to spatial abilities, as expected (Gilligan-Lee et al., 2021) and, critically, is only weakly related
to performance in mathematical and cognitive domains that are not strongly spatial and is not influenced by students' mathematics attitudes and anxiety.
The primary limitation is the correlational nature of the data. In the regression analyses, we used mathematics, cognitive, and attitudes measures to predict spatial vocabulary scores but we could
have just as easily used spatial vocabulary to predict performance on these measures. The regressions, however, were not used to imply some type of causal relation between geometry and spatial
abilities and students' emerging spatial vocabulary but to show that the latter was not tapping individual differences in non-spatial arithmetic abilities, verbal memory span, or attitudes. In other
words, the regression results and the correlations show that spatial vocabulary is more strongly related to spatial-related mathematics and abilities than to alternative constructs that are related
to children's mathematical development.
Another potential limitation is that we did not have a more general mathematics vocabulary measure. The assessment of our spatial vocabulary measure would have been strengthened with a demonstration
that it is related to geometry and spatial abilities above and beyond the relation between general mathematics vocabulary and these constructs. Despite these limitations, this study provides a first
step in the development of a mathematics-specific spatial vocabulary measure for older elementary school students, adding to prior studies that have largely focused on younger students, general
mathematics, and spatial-specific vocabulary measures.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary material. Data and R codes are available on Open Science Framework (https://osf.io/f6phe/?view_only=
8a91da5d41304cc8b1f41e68c72596a8). Further inquiries can be directed to the corresponding authors.
Ethics statement
The studies involving human participants were reviewed and approved by the University of San Diego (IRB-2019-479). Written informed consent to participate in this study was provided by the
participants' legal guardian/next of kin.
Author contributions
LR, YL, CG, TR, and PM collected the data. ZÜ and DG analyzed the data and wrote the manuscript. All authors contributed to the article and approved the submitted version.
This study was supported by the grants DRL-1920546 and DRL-1659133 from the National Science Foundation.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the
reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Supplementary material
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/feduc.2023.1189674/full#supplementary-material
Alibali, M. W., Knuth, E. J., Hattikudur, S., McNeil, N. M., and Stephens, A. C. (2007). A longitudinal examination of middle school students' understanding of the equal sign and equivalent
equations. Mathematic Thinking Learn. 9, 221–247. doi: 10.1080/10986060701360902
Attit, K., Power, J. R., Pigott, T., Lee, J., Geer, E. A., Uttal, D. H., et al. (2021). Examining the relationships between spatial skills and mathematical performance: a meta-analysis. Psychonomic
Bullet. Rev. 1, 1–22. doi: 10.3758/s13423-021-02012-w
Beery, K. E., Beery, N. A., and Beery, V. M. I. (2010). The Beery-Buktenica Developmental Test of Visual-motor Integration with Supplemental Developmental Tests of Visual Perception and Motor
Coordination: And, Stepping Stones Age Norms from Birth to Age Six. Framingham, CT: Therapro.
Campbell, D. T., and Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychol. Bulletin 56, 81–105. doi: 10.1037/h0046016
Cannon, J., Levine, S., and Huttenlocher, J. (2007). A system for analyzing children and caregivers' language about space in structured and unstructured contexts. Ashburn, VC: Spatial Intelligence
and Learning Center (SILC) Technical Report.
Casey, M. B., Nuttall, R. L., and Pezaris, E. (1997). Mediators of gender differences in mathematics college entrance test scores: A comparison of spatial skills with internalized beliefs and
anxieties. Dev. Psychol. 33, 669–680. doi: 10.1037/0012-1649.33.4.669
Chalmers, R. P. (2012). Mirt: A multidimensional item response theory package for the R environment. J. Stat. Software 48, 1–29. doi: 10.18637/jss.v048.i06
Clements, D. H., and Battista, M. T. (1992). “Geometry and spatial reasoning,” in Handbook of Research on Mathematics Teaching and Learning, ed D. A. Grouws (New York, NY: Macmillan), 420–464.
Collaer, M. L., and Nelson, J. D. (2002). Large visuospatial sex difference in line judgment: Possible role of attentional factors. Brain Cognit. 49, 1–12. doi: 10.1006/brcg.2001.1321
Collaer, M. L., Reimers, S., and Manning, J. T. (2007). Visuospatial performance on an internet line judgment task and potential hormonal markers: Sex, sexual orientation, and 2D: 4D. Archives Sexual
Behav. 36, 177–192. doi: 10.1007/s10508-006-9152-1
Corsi, P. M. (1972). Human memory and the medial temporal region of the brain. Unpublished doctoral dissertation. McGill University, Montreal, Canada.
Crosson, A. C., Hughes, E. M., Blanchette, F., and Thomas, C. (2020). What's the point? Emergent bilinguals' understanding of multiple-meaning words that carry everyday and discipline-specific
mathematical meanings. Reading Writing Q. 36, 84–103. doi: 10.1080/10573569.2020.1715312
Cuberis. (2021). The Quantile Framework for Mathematics. Quantile. Available online at: https://www.quantiles.com/ (accessed July 1, 2021).
Dantiz, T. (1954). Number: The Language of Science, 4th Edn. NY, New York: Scribner.
Eccles, J. S., and Wang, M. T. (2016). What motivates females and males to pursue careers in mathematics and science? International J. Behav. Dev. 40, 100–106. doi: 10.1177/0165025415616201
Ekstrom, R. B., and Harman, H. H. (1976). Manual for Kit of Factor-Referenced Cognitive Tests. Princeton, NJ: Educational Testing Service.
Espinas, D. R., and Fuchs, L. S. (2022). The effects of language instruction on math development. Child Dev. Perspectives 16, 69–75. doi: 10.1111/cdep.12444
Fabrigar, L. R., Wegener, D. T., MacCallum, R. C., and Strahan, E. J. (1999). Evaluating the use of exploratory factor analysis in psychological research. Psychol. Methods 4, 272–299. doi: 10.1037/
Ganis, G., and Kievit, R. A. (2015). A new Set of three-dimensional shapes for investigating mental rotation processes: Validation data and stimulus set. J. Open Psychol. Data 3, e3. doi: 10.5334/
Geary, D. C., and Burlingham-Dubree, M. (1989). External validation of the strategy choice model for addition. J. Exp. Child Psychol. 47, 175–192. doi: 10.1016/0022-0965(89)90028-3
Geary, D. C., Hoard, M. K., Nugent, L., and Scofield, J. E. (2021a). In-class attentive behavior, spatial ability, and mathematics anxiety predict across-grade gains in adolescents' mathematics
achievement. J. Educ. Psychol. 113, 754–769. doi: 10.1037/edu0000487
Geary, D. C., Hoard, M. K., and Nugent, L. Ünal, Z. E. (2023). Sex differences in developmental pathways to mathematical competence. J. Educ. Psychol. 115, 212–228. doi: 10.1037/edu0000763
Geary, D. C., Scofield, J. E., Hoard, M. K., and Nugent, L. (2021b). Boys' advantage on the fractions number line is mediated by visuospatial attention: evidence for a parietal-spatial contribution
to number line learning. Dev. Sci. 24, e13063. doi: 10.1111/desc.13063
Geer, E. A., Quinn, J. M., and Ganley, C. M. (2019). Relations between spatial skills and math performance in elementary school children: a longitudinal investigation. Dev. Psychol. 55, 637–652. doi:
Georges, C., Cornu, V., and Schiltz, C. (2021). The importance of visuospatial abilities for verbal number skills in preschool: Adding spatial language to the equation. J. Exp. Child Psychol. 201,
104971. doi: 10.1016/j.jecp.2020.104971
Gilligan, K. A., Flouri, E., and Farran, E. K. (2017). The contribution of spatial ability to mathematics achievement in middle childhood. J. Exp. Child Psychol. 163, 107–125. doi: 10.1016/
Gilligan, K. A., Hodgkiss, A., Thomas, M. S. C., and Farran, E. K. (2019). The developmental relations between spatial cognition and mathematics in primary school children. Dev. Sci. 22, 786. doi:
Gilligan-Lee, K. A., Hodgkiss, A., Thomas, M. S., Patel, P. K., and Farran, E. K. (2021). Aged-based differences in spatial language skills from 6 to 10 years: Relations with spatial and mathematics
skills. Learning Instr. 73, 101417. doi: 10.1016/j.learninstruc.2020.101417
Gohm, C. L., Humphreys, L. G., and Yao, G. (1998). Underachievement among spatially gifted students. Am. Educ. Res. J. 35, 515–531. doi: 10.3102/00028312035003515
Hawes, Z., and Ansari, D. (2020). What explains the relationship between spatial and mathematical skills? A review of evidence from brain and behavior. Psycho. Bullet. Rev. 27, 465–482. doi: 10.3758/
Hornburg, C. B., Schmitt, S. A., and Purpura, D. J. (2018). Relations between preschoolers' mathematical language understanding and specific numeracy skills. J. Exp. Child Psychol. 176, 84–100. doi:
Hu, L. T., and Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives. Struc. Eq. Model. 6, 1–55. doi: 10.1080/
Hughes, E. M., Powell, S. R., and Lee, J. Y. (2020). Development and psychometric report of a middle-school mathematics vocabulary measure. Assessment Eff. Interv. 45, 226–234. doi: 10.1177/
Humphreys, L. G., Lubinski, D., and Yao, G. (1993). Utility of predicting group membership and the role of spatial visualization in becoming an engineer, physical scientist, or artist. J. Appl.
Psychol. 78, 250–261. doi: 10.1037/0021-9010.78.2.250
Izard, V., and Spelke, E. S. (2009). Development of sensitivity to geometry in visual forms. Human Evol. 23, 213–248.
Joensen, J. S., and Nielsen, H. S. (2009). Is there a causal effect of high school math on labor market outcomes? J Hum. Res. 44, 171–198. doi: 10.1353/jhr.2009.0004
Kell, H. J., Lubinski, D., Benbow, C. P., and Steiger, J. H. (2013). Creativity and technical innovation: spatial ability's unique role. Psychol. Sci. 24, 1831–1836. doi: 10.1177/0956797613478615
Kessels, R. P. C., van Zandvoort, M. J. E., Postma, A., Kappelle, L. J., and Haan, E. H. F. (2000). The corsi block-tapping task: standardization and normative data. Appl. Neuropsychol. 7, 252–258.
doi: 10.1207/S15324826AN0704_8
Kroedel, C., and Tyhurst, E. (2012). Math skills and labor-market outcomes: evidence from a resume-based field experiment. Econ. Educ. Rev. 31, 131–140. doi: 10.1016/j.econedurev.2011.09.006
Lachance, J. A., and Mazzocco, M. M. (2006). A longitudinal analysis of sex differences in math and spatial skills in primary school age children. Learn. Ind. Diff. 16, 195–216. doi: 10.1016/
Li, Y., and Geary, D. C. (2013). Developmental gains in visuospatial memory predict gains in mathematics achievement. PloS ONE 8, e70160. doi: 10.1371/journal.pone.0070160
Li, Y., and Geary, D. C. (2017). Children's visuospatial memory predicts mathematics achievement through early adolescence. PloS ONE 12, e0172046. doi: 10.1371/journal.pone.0172046
Lin, X., Peng, P., and Zeng, J. (2021). Understanding the relation between mathematics vocabulary and mathematics performance: A meta-analysis. Elementary School J. 121, 504–540. doi: 10.1086/712504
Linacre, J. M. (2007). A User's Guide to WINSTEPS-MINISTEP: Rasch-Model > Computer Programs. Chicago, IL: winsteps.com.
Martin, M. O., Mullis, I. V. S., Hooper, M., Yin, L., Foy, P., and Palazzo, L. (2015). “Creating and interpreting the TIMSS 2015 context questionnaire scales,” in Methods and Procedures in TIMSS 2015
,eds M. O. Martin, I. V. S. Mullis, and M. Hooper (Chestnut Hill, MA: Boston College), 558–869.
McNeil, N. M., Hornburg, C. B., Devlin, B. L., Carrazza, C., and McKeever, M. O. (2019). Consequences of individual differences in children's formal understanding of mathematical equivalence. Child
Dev. 90, 940–956. doi: 10.1111/cdev.12948
Mix, K. S. (2019). Why are spatial skill and mathematics related? Child Dev. Persp. 13, 121–126. doi: 10.1111/cdep.12323
Mix, K. S., Levine, S. C., Cheng, Y.-L., Young, C., Hambrick, D. Z., Ping, R., et al. (2016). Separate but correlated: The latent structure of space and mathematics across development. J. Exp.
Psychol. Gen. 145, 1206–1227. doi: 10.1037/xge0000182
National Mathematics Advisory Panel. (2008). Foundations for Success: Final Report of the National Mathematics Advisory Panel. Washington, DC: United States Department of Education.
Peng, P., and Lin, X. (2019). The relation between mathematics vocabulary and mathematics performance among fourth graders. Learning and Individual Differences 69, 11–21. doi: 10.1016/
Powell, S. R., Berry, K. A., and Tran, L. M. (2020). Performance differences on a measure of mathematics vocabulary for English Learners and non-English Learners with and without mathematics
difficulty. Reading Writing Q. Overcoming Learning Difficulties 36, 124–141. doi: 10.1080/10573569.2019.1677538
Powell, S. R., Driver, M. K., Roberts, G., and Fall, A. M. (2017). An analysis of the mathematics vocabulary knowledge of third-and fifth-grade students: Connections to general vocabulary and
mathematics computation. Learn. Ind. Diff. 57, 22–32. doi: 10.1016/j.lindif.2017.05.011
Powell, S. R., and Nelson, G. (2017). An investigation of the mathematics-vocabulary knowledge of first-grade students. Elementary School J. 117, 664–686. doi: 10.1086/691604
Purpura, D. J., and Logan, J. A. (2015). The nonlinear relations of the approximate number system and mathematical language to early mathematics development. Dev. Psychol. 51, 1717. doi: 10.1037/
Purpura, D. J., Logan, J. A., Hassinger-Das, B., and Napoli, A. R. (2017). Why do early mathematics skills predict later reading? The role of mathematical language. Dev. Psychol. 53, 1633. doi:
R Core Team. (2022). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing. Vienna, Austria.
Ramirez, G., Gunderson, E. A., Levine, S. C., and Beilock, S. L. (2013). Math anxiety, working memory, and math achievement in early elementary school. J. Cognit. Dev. 14, 187–202. doi: 10.1080/
Ritchie, S. J., and Bates, T. C. (2013). Enduring links from childhood mathematics and reading achievement to adult socioeconomic status. Psychol. Sci. 24, 1301–1308. doi: 10.1177/0956797612466268
Roth, B., Becker, N., Romeyke, S., Schäfer, S., Domnick, F., Spinath, F. M., et al. (2015). Intelligence and school grades: a meta-analysis. Intelligence 53, 118–137. doi: 10.1016/
Runnels, J. (2012). Using the Rasch model to validate a multiple choice English achievement test. Int. J. Lang. Studies 6, 141–155.
SAS. (2014). Statistical Analysis System 9, 2. Cary, NC: SAS.
Scofield, J. E., Hoard, M. K., Nugent, L., LaMendola, J. V., and Geary, D. C. (2021). Mathematics clusters reveal strengths and weaknesses in adolescents' mathematical competencies, spatial
abilities, and mathematics attitudes. J. Cognit. Dev. 22, 695–720. doi: 10.1080/15248372.2021.1939351
Siegler, R. S., and Booth, J. L. (2004). Development of numerical estimation in young children. Child Develop. 75, 428–444. doi: 10.1111/j.1467-8624.2004.00684.x
Siegler, R. S., Thompson, C. A., and Schneider, M. (2011). An integrated theory of whole number and fractions development. Cognitive Psychol. 62, 273–296. doi: 10.1016/j.cogpsych.2011.03.001
Sistla, M., and Feng, J. (2014). “More than numbers: Teaching ELLs mathematics language in primary grades,” in Proceedings of the Chinese American Educational Research & Development Association
Annual Conference (Philadelphia, PA).
Stoet, G., and Geary, D. C. (2022). Sex differences in adolescents' occupational aspirations: variations across time and place. Plos ONE 17, e0261438. doi: 10.1371/journal.pone.0261438
Toll, S. W., and Van Luit, J. E. (2014). The developmental relationship between language and low early numeracy skills throughout kindergarten. Excep. Children 81, 64–78. doi: 10.1177/
Turan, E., and Smedt, D. e. B. (2022). Mathematical language and mathematical abilities in preschool: A systematic literature review. Educ. Res. Rev. 4, 100457. doi: 10.1016/j.edurev.2022.100457
Ünal, Z. E., Greene, N. D., Lin, X., and Geary, D. C. (2023). What is the source of the correlation between reading and mathematics achievement? Two meta-analytic studies. Educ. Psychol. Rev. 35, 4.
doi: 10.1007/s10648-023-09717-5
Ünal, Z. E., Powell, S. R., Özel, S., Scofield, J. E., and Geary, D. C. (2021). Mathematics vocabulary differentially predicts mathematics achievement in eighth grade higher-versus lower-achieving
students: Comparisons across two countries. Learning Ind. Diff. 92, 102061. doi: 10.1016/j.lindif.2021.102061
Uttal, D. H., Meadow, N. G., Tipton, E., Hand, L. L., Alden, A. R., Warren, C., et al. (2013a). The malleability of spatial skills: A meta-analysis of training studies. Psychol. Bulletin 139,
352–402. doi: 10.1037/a0028446
Uttal, D. H., Miller, D. I., and Newcombe, N. S. (2013b). Exploring and enhancing spatial thinking. Curr. Direc. Psychol. Sci. 22, 367–373. doi: 10.1177/0963721413484756
Van Zile-Tamsen, C. (2017). Using Rasch analysis to inform rating scale development. Res. Higher Educ. 58, 922–933. doi: 10.1007/s11162-017-9448-0
Vanluydt, E., Supply, A. S., Verschaffel, L., and Van Dooren, W. (2020). (2021). The importance of specific mathematical language for early proportional reasoning. Early Childhood Res. Q. 55,
193–200. doi: 10.1016/j.ecresq.2020.12.003
Verdine, B. N., Golinkoff, R. M., Hirsh-Pasek, K., and Newcombe, N. S. (2017). I. Spatial skills, their development, and their links to mathematics. Monographs Soc. Res. Child Dev. 82, 7–30. doi:
Verdine, B. N., Golinkoff, R. M., Hirsh-Pasek, K., Newcombe, N. S., Filipowicz, A. T., and Chang, A. (2014). Deconstructing building blocks: preschoolers' spatial assembly performance relates to
early mathematical skills. Child Dev. 85, 1062–1076. doi: 10.1111/cdev.12165
Wai, J., Lubinski, D., and Benbow, C. P. (2009). Spatial ability for STEM domains: aligning over 50 years of cumulative psychological knowledge solidifies its importance. J. Educ. Psychol. 101,
817–835. doi: 10.1037/a0016127
Zhang, X., and Lin, D. (2017). Does growth rate in spatial ability matter in predicting early arithmetic competence? Learning Instr. 49, 232–241. doi: 10.1016/j.learninstruc.2017.02.003
Keywords: mathematics vocabulary, spatial vocabulary, mathematics achievement, elementary school, spatial abilities
Citation: Ünal ZE, Ridgley LM, Li Y, Graves C, Khatib L, Robertson T, Myers P and Geary DC (2023) Development and initial validation of a mathematics-specific spatial vocabulary scale. Front. Educ.
8:1189674. doi: 10.3389/feduc.2023.1189674
Received: 19 March 2023; Accepted: 26 May 2023;
Published: 20 June 2023.
Copyright © 2023 Ünal, Ridgley, Li, Graves, Khatib, Robertson, Myers and Geary. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The
use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in
accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Zehra E. Ünal, zeuqq6@mail.missouri.edu; zehrae.unal@icloud.com; David C. Geary, gearyd@missouri.edu | {"url":"https://www.frontiersin.org/journals/education/articles/10.3389/feduc.2023.1189674/full","timestamp":"2024-11-09T17:54:01Z","content_type":"text/html","content_length":"562395","record_id":"<urn:uuid:d60d4e0e-1897-4587-ae84-e01e3ff8fb6f>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00416.warc.gz"} |
RD Sharma Class 6 Maths Solutions - Free PDF Download
RD Sharma is a detailed Maths book that helps students to prepare comprehensively for their exams. RD Sharma provides various questions covering all aspects of the particular topics helping the
students in their preparation and understanding of the topic as well as the subject. Therefore, Vedantu acts as a true saviour by providing them with all the required solutions. Hence, all their
doubts are clarified with just a click.
We bring you the facility of online tutoring, and also offer the CBSE Class 6 Maths RD Sharma solutions. These PDFs are available for download absolutely free, giving every student access to expertly
solved answers. Teachers around the country often swear by RD Sharma's mathematics textbooks, and we believe our platform takes the books to the next level. Our NCERT Solutions are thorough, well
written, and designed specifically for young minds. Subjects like Science, Maths, English, Hindi will become easy to study if you have access to NCERT Solution for Class 6 Science, Maths solutions
and solutions of other subjects.
FAQs on RD Sharma Class 6 Solutions
1. Why Choose Vedantu for RD Sharma Class 6 Solutions?
Vedantu guarantees you 100% success if you follow the tips and all its study materials properly. It doesn't only clarifies your doubts regarding a chapter or topic but also provides accurate
solutions of reference books like RD Sharma.
• It provides RD Sharma Class 6 solutions in detail, so that you are left with no further confusion or doubts.
• All the solutions of RD Sharma for Class 6 are provided according to the chapters of the reference book.
• Vedantu teachers have shown the correct approach to solve each and every question precisely.
2. Does Vedantu have RD Sharma Class 6 Solutions According to the Latest Edition of the Book?
Yes, Vedantu makes sure that you always get the solutions to the latest questionnaires of RD Sharma. It also makes sure that it follows the latest CBSE prescribed syllabus. Therefore, to score better
marks, one can easily rely on Vedantu for the solutions of RD Sharma for Class 6. Hence, Vedantu is the place where you will get every source related to studies in one place. So students preparing
through Vedantu can be well versed with all the topics and clear with all the concepts.
3. Is R D Sharma the right book for CBSE Class 6 students?
The R D Sharma textbook has a lot of questions. The primary textbook of a CBSE student is the NCERT textbook; after solving that, Class 6 students can use the R D Sharma textbook for further
practice. This book will help in the development of concepts for the ones who are building their foundation in maths. It will also help to practise a lot of sums, which is very essential for any
student, to get a good grip on maths.
4. How to solve R D Sharma textbook’s questions?
Before solving the questions, students should make sure that their concepts are clear. To make their concepts clear, they can take help either from the NCERT textbook or from the RD Sharma textbook.
After clearing the concepts, students should go through the solved questions in the textbook, to have an idea of how the questions are solved. Then they should move on to the unsolved questions.
Following these steps will help them understand all the concepts and solve the questions too.
5. How to understand if the questions have been solved correctly?
Students might not be able to solve all the questions correctly, at first. That should not pull down the confidence of the students. Rather, they should strive more towards achieving perfection.
After solving all the questions, students should recheck all the questions twice, to make sure they haven’t made any mistakes. Then they can go through the Vedantu solutions to check their sums. If
they see that they have made mistakes, it is advisable to go through the concepts once again. | {"url":"http://eukaryote.org/rd-sharma-class-6-solutions.html","timestamp":"2024-11-06T20:54:15Z","content_type":"text/html","content_length":"341966","record_id":"<urn:uuid:a093d251-7e43-474a-a5ab-a0ee9a171261>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00882.warc.gz"} |
Approximate Structural Analysis
Engineers routinely need to analyze and design indeterminate structures. Today they would use any one of several analytical software packages commonly loaded on their computers. The programs are so
powerful that it does not take long to develop the input and perform the analysis, even for a major building. Sifting through the output and finding the desired answers might be the greatest effort
However, before the regular use of computers (sorry for the history lesson, but it was not that long ago), the analysis method of choice was the moment distribution method, developed in the 1930s by
Professor Hardy Cross of the University of Illinois. Besides being a clever analytical concept, it has an inherent simplicity and physical logic that is easy to grasp. It works like this for
analyzing a structure with continuous beams with multiple supports: 1) every beam-column joint in the structure is assumed to be fixed from rotation and appropriate fixed-end bending moments are
applied to the joints, and 2) then each fixed joint is released sequentially, and the fixed-end moments, which at the time of the first joint release are not in equilibrium, are distributed to
adjacent members. The process of fixing, releasing, and re-distributing bending moments is repeated many times until an equilibrium is reached or until the engineer decides that the amount of
undistributed moment is small enough to ignore.
Whether the engineer is working today or in the past, getting the right answer, or better yet, a “precise” answer (since there is never a “right answer”) to the problem, may involve more effort than
the engineer wants to exert or has the time or budget to spend. Often, an approximate answer to an indeterminate problem is good enough. Engineers appreciate that, with ever smaller powerful handheld
computers, the “right answer” is never that far away. However, there are many times when calculating an approximate answer, while at a job site, on the way home from work, or away from the office, is
something an engineer needs to do because that is all that is really needed at the time.
As is discussed later, these approximate analytical techniques are the foundations of structural engineering. Of course, today’s engineers can consider loading conditions that prior generations knew
about but lacked the tools (or the computational desire) to evaluate. They can also study the post-elastic response of a structure subjected to a suite of actual earthquake ground motions, with the
computer churning away while they take a long lunch.
When it comes to hiring entry-level engineering staff, many employers look for candidates having the very advanced knowledge discussed above. But more importantly, they want engineers with solid
educational backgrounds who have the ability to think on their feet and quickly assess and solve problems without needing to perform a computer analysis, which, as is commonly known, is just an
approximation. Being able to cut through the clutter and get an answer that is in the “ballpark” is highly valued by employers but hard to teach. To underscore the point, how many times has an
engineer gotten a confusing answer from a computer model and then needed to go back to first principles to figure out what was wrong and how to fix it? Like troubleshooting a finicky automobile, an
engineer needs to assess what is working and what is not using handy and trustworthy tools.
This article discusses four different indeterminate structures that are encountered regularly in engineering practice and that firms also use in their entry-level employee interviews, namely:
1. Lateral analysis of a one-story, one-bay frames.
2. Gravity analysis of a one-story, two-bay frame with a pinned base.
3. Lateral analysis of a one-story, two-bay frame with a fixed base.
4. Lateral analysis of a multi-bay, multi-story, slender high-rise frame.
It would be fair to say that many older, experienced engineers believe that solving these issues quickly and approximately is part and parcel of being an engineer and would wonder why an interviewer
would ask an applicant engineer about these situations. But for those not involved in hiring, it would be surprising to know how many engineers, many educated at our most prestigious universities,
have trouble with these concepts, even after accounting for some degree of nervousness and the pressure of a job interview. Maybe these concepts are not taught anymore, or the amount of time allotted
to teaching them is too short. In either case, this does a disservice to our engineering graduates. Most firms have the capacity to train engineers in more advanced analytical and design techniques.
However, if the new hire’s foundation in statics and mechanics is lacking or weak, more advanced tasks are more difficult to learn.
Analyzing the Indeterminate Structure
Solving statically determinate structures is straightforward because basic statics can be employed, namely the summing of forces in the x and y directions (for 2-D systems) and the summing of
rotational moments (caused by the applied forces) about a point. Solving statically indeterminate structures is mainly the task of turning the indeterminate structure into a determinate structure.
This is done by making simplifying assumptions about the location of inflection points (also known as points of counter-flexure) in structural elements that are bending under load (either due to
gravity or lateral loads) and judging whether structures subjected to lateral loads resist loads primarily from a lateral shearing action or primarily from a lateral bending response. Once this is
done, the rest is just math.
Lateral Analysis of One-Story, One-Bay Frames
There are three variations of this simple structure: a) a pinned base with beam and columns of approximately equal stiffness, b) a fixed base with beam and columns of approximately equal stiffness,
and c) a fixed base with a rigid beam and two flexible columns of approximately equal stiffness. Assume there is a lateral load applied at the beam level, there is no gravity load, the members have
infinite axial stiffness, the shear in the columns is equal, and the members have no mass.
For all three conditions, the first step is to identify the counter-flexure points in the beams and columns. Start this process by drawing the deflected shape for each structure. Going back to the
job interview process, many applicants have difficulty with the rotation of the beam-column joint and the curvatures of the beam and column at the joint. Rather than applying the forces to determine
the moment and rotation, they guess and get it all backward. However once the curvatures are drawn correctly, the counter-flexure points can be located. This creates a determinate structure, and the
shears and bending moments in the beams and columns can then be determined. The pinned-base structure produces the largest bending loads because the base (say the ground) helps the least. The
fixed-base, rigid-beam structure has equal top and bottom bending moments and the least column bending. The fixed-base, flexible-beam structure falls somewhere in the middle. The inflection points in
the columns are near (or slightly above) mid-height, creating the potential for slightly higher bending moments in the columns than that for the fixed-fixed structure, depending on the relative
stiffnesses of the beams and columns. Since the analysis is approximate, it is also acceptable to assume the inflection point is at mid-height.
One might wonder – why spend so much time on such a simple structure? The answer is that these are the potential conditions for the first story columns in many kinds of buildings: a structure without
a basement or any base rigidity, a structure with a basement and “normal” second-floor framing, and a structure with a basement and very rigid beams at the second floor. The required sizes of the
columns, and the beams but to a lesser extent, can vary greatly.
Gravity Analysis of a One-Story, Two-Bay Frame with a Pinned Base
The structure is a one-story, two-bay frame with unequal beam spans. Assume there is gravity load applied at the beam level, there is no lateral load, the members have infinite axial stiffnesses, and
the members have no mass.
It would take a long time to analyze this structure with hand calculations, and it would take a fair amount of time (allowing for a few modeling errors), even with a computer. As with a one-bay
frame, the first step is to identify the counter-flexure points in the beams. Start this process by drawing the deflected shape of the beams. Once the beam curvatures are drawn, the counter-flexure
points can be located. The beam design aids in the American Society of Steel Construction’s (AISC) Steel Construction Manual show the inflection points for fixed-fixed beams. The inflection points
near the center column are more similar to the fixed-fixed condition. Since the exterior beam-column joints rotate to some extent, the inflection points are closer to the columns. The columns do not
have inflection points due to the pinned base condition. One should exercise some judgment here, remembering this is an approximate analysis. Adding the inflection points creates a determinate
structure, and the shears and bending moments in the beams and the shears, bending moments, and axial loads in the columns can then be determined.
Lateral Analysis of a One-Story, Two-Bay Frame with a Fixed Base
A variation of the previous structure is a one-story, two-bay frame with equal beam spans with only lateral loads due to earthquake loads at the floor level. As with the one-bay frame, assume there
is no gravity load applied at the beam level, the members have infinite axial stiffnesses, and the members have no mass.
The interior beam-column joint is roughly twice as stiff rotationally as the exterior joints (two beams compared to one beam), so assume that the interior column resists twice as much shear as the
exterior columns. This is the basic assumption in the portal frame method.
As with the other structures discussed above, the next step is to identify the counter-flexure points in the beams and columns. Start this process by drawing the deflected shape of the structure
using the same logic as for the one-bay frames. Assume the inflection points are at mid-length of the beams and mid-height of the columns. This sets the counter-flexure points. This creates a
determinate structure, and the shears and bending moments in the beams and the shears, bending moments, and axial loads in the columns can then be determined. A fixed-base structure replicates a
building with a basement where the columns extend downward to the basement floor level or a building without a basement but with a rigid grade beam system near the surface grade intended to provide
column base fixity. A pinned base would replicate a building without a basement or one with a minimal flexible grade beam system.
Lateral Analysis of a Multi-Bay, Multi-Story, Slender High-Rise Frame
For the last structure, a tall, slender building of indeterminate height is examined. The height does not really matter so long as the building is considerably taller than it is wide. The predominant
response to lateral loading is flexural bending of the tower as opposed to shearing action.
Assume lateral loads due to earthquakes are applied at each floor level in a triangular shape with the centroid at ⅔ of the height, H; there is no gravity load applied at the beam levels; the beams
have infinite axial rigidity; the columns have equal axial stiffnesses; and, the members have no mass.
Rather than identifying the counter-flexure points in the beams and columns as with the other structures whose response is predominately a shearing action, assume the tower bends like a cantilevered
pole extending from the ground. Due to the lateral loads only, the columns on one face of the tower are in tension, and the columns on the other face are in compression. For the example building with
three columns, sum the moments about the center column (the neutral axis) and determine the axial column’s loads. This simplification is the essence of the cantilever method. Statics dictate that the
center column gets no axial load from the lateral load condition. Suppose the structure has more columns, assuming that plane sections remain plane in bending. In that case, the columns will sustain
axial loads proportional to their distance from the neutral axis at the centerline of the building.
If the inflection points are assumed at the mid-lengths of the beams and mid-heights of the columns, the beam and column bending moments and shears can then be determined. Tall buildings have large
column axial loads; it would be fair to simply add those to the column’s loads already determined based on tributary area.
The ability to determine approximate answers to complicated problems is handy in the real world of engineering. Engineers will be amazed at how much respect they earn when they can provide a
contractor with a quick answer in the field without having to go back to the office to figure it out, or when they can, in just a few minutes, help another engineer troubleshoot a computer model that
has been frustrating them for several hours. The basics never go out of style and are reliable tools if learned early and well.■ | {"url":"https://www.structuremag.org/fr/article/approximate-structural-analysis/","timestamp":"2024-11-07T23:35:09Z","content_type":"text/html","content_length":"99134","record_id":"<urn:uuid:df66f83b-e048-4f87-9032-b2bf569daa8b>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00166.warc.gz"} |
New Proof Threads the Needle on a Sticky Geometry Problem | Quanta Magazine
Samuel Velasco/Quanta Magazine
In 1917, the Japanese mathematician Sōichi Kakeya posed what at first seemed like nothing more than a fun exercise in geometry. Lay an infinitely thin, inch-long needle on a flat surface, then rotate
it so that it points in every direction in turn. What’s the smallest area the needle can sweep out?
If you simply spin it around its center, you’ll get a circle. But it’s possible to move the needle in inventive ways, so that you carve out a much smaller amount of space. Mathematicians have since
posed a related version of this question, called the Kakeya conjecture. In their attempts to solve it, they have uncovered surprising connections to harmonic analysis, number theory and even physics.
“Somehow, this geometry of lines pointing in many different directions is ubiquitous in a large portion of mathematics,” said Jonathan Hickman of the University of Edinburgh.
But it’s also something that mathematicians still don’t fully understand. In the past few years, they’ve proved variations of the Kakeya conjecture in easier settings, but the question remains
unsolved in normal, three-dimensional space. For some time, it seemed as if all progress had stalled on that version of the conjecture, even though it has numerous mathematical consequences.
Now, two mathematicians have moved the needle, so to speak. Their new proof strikes down a major obstacle that has stood for decades — rekindling hope that a solution might finally be in sight.
What’s the Small Deal?
Kakeya was interested in sets in the plane that contain a line segment of length 1 in every direction. There are many examples of such sets, the simplest being a disk with a diameter of 1. Kakeya
wanted to know what the smallest such set would look like.
He proposed a triangle with slightly caved-in sides, called a deltoid, which has half the area of the disk. It turned out, however, that it’s possible to do much, much better.
Merrill Sherman/Quanta Magazine
In 1919, just a couple of years after Kakeya posed his problem, the Russian mathematician Abram Besicovitch showed that if you arrange your needles in a very particular way, you can construct a
thorny-looking set that has an arbitrarily small area. (Due to World War I and the Russian Revolution, his result wouldn’t reach the rest of the mathematical world for a number of years.)
To see how this might work, take a triangle and split it along its base into thinner triangular pieces. Then slide those pieces around so that they overlap as much as possible but protrude in
slightly different directions. By repeating the process over and over again — subdividing your triangle into thinner and thinner fragments and carefully rearranging them in space — you can make your
set as small as you want. In the infinite limit, you can obtain a set that mathematically has no area but can still, paradoxically, accommodate a needle pointing in any direction.
“That’s kind of surprising and counterintuitive,” said Ruixiang Zhang of the University of California, Berkeley. “It’s a set that’s very pathological.”
This result can be generalized to higher dimensions: It’s possible to construct a set with arbitrarily small volume that contains a unit line segment pointing in every direction in n-dimensional
Besicovitch seemed to have solved Kakeya’s question completely. But decades later, mathematicians started to work on another version of the problem in which they replaced area (or volume, in the
higher-dimensional case) with a different notion of size.
To understand this reframing of the question, first take each line segment in a Kakeya set and fatten it up a little bit — as if you were using an actual needle, rather than an idealized one. In the
plane, your set will consist of extremely thin rectangles; in three-dimensional space, you will have a collection of extremely thin tubes.
These fattened sets always have some area (or volume, but we’ll stick to the two-dimensional case for now). As you change the width of your needle, this area will change. In the 1970s, the
mathematician Roy Davies (who died last month) showed that if the total area changes by a small amount, the width of each needle must change drastically. For instance, if you want a fattened version
of Besicovitch’s set to have an area of 1/10 of a square inch, each needle needs to have a thickness of around 0.000045 inch: e^−10 of an inch, to be precise. But if you wanted to make the total area
1/100 of a square inch — 10 times smaller — the needle would have to be e^−100 of an inch thick. (Forty-three zeros follow the decimal point before you get to the other digits.)
“If you tell me how small you want the area to be, then I have to demand a needle which is just unbelievably thin,” said Charles Fefferman of Princeton University.
Mathematicians measure the “size” of the Kakeya set using a quantity called the Minkowski dimension, which is related to but not quite the same as an ordinary dimension (defined as the number of
independent directions you need to describe a space).
Merrill Sherman/Quanta Magazine
Here’s one way to think about the Minkowski dimension: Take your set and cover it with tiny balls that each have a diameter of one-millionth of your preferred unit. If your set is a line segment of
length 1, you’ll need at least 1 million balls to cover it. If your set is a square of area 1, you’ll need many, many more: a million squared, or a trillion. For a sphere of volume 1, it’s about 1
million cubed (a quintillion), and so on. The Minkowski dimension is the value of this exponent. It measures the rate at which the number of balls you need to cover your set grows as the diameter of
each ball gets smaller. A line segment has dimension 1, a square has dimension 2, and a cube has dimension 3.
These dimensions are familiar. But using Minkowski’s definition, it becomes possible to construct a set that has a dimension of, say, 2.7. Though such a set doesn’t fill up three-dimensional space,
it’s in some sense “bigger” than a two-dimensional surface.
When you cover a set with balls of a given diameter, you’re approximating the volume of the fattened-up version of the set. The more slowly the volume of the set decreases with the size of your
needle, the more balls you need to cover it. You can therefore rewrite Davies’ result — which states that the area of a Kakeya set in the plane decreases slowly — to show that the set must have a
Minkowski dimension of 2. The Kakeya conjecture generalizes this claim to higher dimensions: A Kakeya set must always have the same dimension as the space it inhabits.
That simple statement has been surprisingly difficult to prove.
A Tower of Conjectures
Until Fefferman made a startling discovery in 1971, the conjecture was viewed as a curiosity.
He was working on an entirely different problem at the time. He wanted to understand the Fourier transform, a powerful tool that allows mathematicians to study functions by writing them as sums of
sine waves. Think of a musical note, which is made up of lots of overlapping frequencies. (That’s why a middle C on a piano sounds different from a middle C on a violin.) The Fourier transform allows
mathematicians to calculate the constituent frequencies of a particular note. The same principle works for sounds as complicated as human speech.
Mathematicians also want to know whether they can rebuild the original function if they’re given just some of its infinitely many constituent frequencies. They have a good understanding of how to do
this in one dimension. But in higher dimensions, they can make different choices about which frequencies to use and which to ignore. Fefferman proved, to his colleagues’ surprise, that you might fail
to rebuild your function when relying on a particularly well-known way of choosing frequencies.
His proof hinged on constructing a function by modifying Besicovitch’s Kakeya set. This later inspired mathematicians to develop a hierarchy of conjectures about the higher-dimensional behavior of
the Fourier transform. Today, the hierarchy even includes conjectures about the behavior of important partial differential equations in physics, like the Schrödinger equation. Each conjecture in the
hierarchy automatically implies the one below it.
The Kakeya conjecture lies at the very base of this tower. If it is false, then so are the statements higher in the hierarchy. On the other hand, proving it true wouldn’t immediately imply the truth
of the conjectures located above it, but it might provide tools and insights for attacking them.
“The amazing thing about the Kakeya conjecture is that it’s not just a fun problem; it’s a real theoretical bottleneck,” Hickman said. “We don’t understand a lot of these phenomena in partial
differential equations and Fourier analysis because we don’t understand these Kakeya sets.”
Hatching a Plan
Fefferman’s proof — along with subsequently discovered connections to number theory, combinatorics and other areas — revived interest in the Kakeya problem among top mathematicians.
In 1995, Thomas Wolff proved that the Minkowski dimension of a Kakeya set in 3D space has to be at least 2.5. That lower bound turned out to be difficult to increase. Then, in 1999, the
mathematicians Nets Katz, Izabella Łaba and Terence Tao managed to beat it. Their new bound: 2.500000001. Despite how small the improvement was, it overcame a massive theoretical barrier. Their paper
was published in the Annals of Mathematics, the field’s most prestigious journal.
Katz and Tao later hoped to apply some of the ideas from that work to attack the 3D Kakeya conjecture in a different way. They hypothesized that any counterexample must have three particular
properties, and that the coexistence of those properties must lead to a contradiction. If they could prove this, it would mean that the Kakeya conjecture was true in three dimensions.
They couldn’t go all the way, but they did make some progress. In particular, they (along with other mathematicians) showed that any counterexample must have two of the three properties. It must be
“plany,” which means that whenever line segments intersect at a point, those segments also lie nearly in the same plane. It must also be “grainy,” which requires that the planes of nearby points of
intersection be similarly oriented.
That left the third property. In a “sticky” set, line segments that point in nearly the same direction also have to be located close to each other in space. Katz and Tao couldn’t prove that all
counterexamples must be sticky. But intuitively, a sticky set seems like the best way to force a lot of overlap among the line segments, thereby making the set as small as possible — precisely what
you need to create a counterexample. If someone could show that a sticky Kakeya set had a Minkowski dimension of less than 3, it would disprove the 3D Kakeya conjecture. “It sounds like ‘sticky’
would be the most worrisome case,” said Larry Guth of the Massachusetts Institute of Technology.
It’s no longer a worry.
The Sticking Point
In 2014 — more than a decade after Katz and Tao attempted to prove the Kakeya conjecture — Tao posted an outline of their approach on his blog, giving other mathematicians the chance to try it out
for themselves.
In 2021, Hong Wang, a mathematician at New York University, and Joshua Zahl of the University of British Columbia decided to pick up where Tao and Katz had left off.
They started by assuming the existence of a sticky counterexample with a Minkowski dimension of less than 3. They knew from previous work that such a counterexample had to be plany and grainy. “So we
were in the kind of world that Terry Tao and Nets Katz were thinking about,” Zahl said. Now they needed to show that the plany, grainy and sticky properties played off each other and led to a
contradiction, which would mean that this counterexample couldn’t actually exist.
To get that contradiction, however, Wang and Zahl turned their attention in a direction that Katz and Tao hadn’t anticipated — toward an area known as projection theory.
They started out by analyzing the structure of their sticky counterexample in more detail. If you consider the idealized version of the set, it has an infinite number of line segments pointing in
every direction. But in this problem, remember that you’re dealing with fattened-up versions of those line segments — a bunch of needles. Each of those needles can contain many of the idealized line
segments, meaning that you can encode the entire infinite set with a finite number of needles. Depending on how thick the needles are, your fattened set might look very different.
If the set is sticky, it will look more or less the same no matter how thick the needles are.
Wang and Zahl used this property to show that as the needles get thinner, the set becomes more and more plany. Through this process, they could “extract an even more pathological object,” Zahl said —
something that seemed to have impossible qualities.
That’s what they showed next. They proved that this pathological object had to look one of two ways, both of which led to contradictions. Either you’d be able to project it down into 2D space in a
way that made it much smaller in many directions — something that Wang and her colleagues had just shown to be impossible. Or, in the second case, the needles in the set would be organized according
to a very specific kind of function, which Zahl and his collaborators had recently proved couldn’t exist, because it would lead to other kinds of projections that didn’t make sense.
Wang and Zahl now had their contradiction — meaning that there are no sticky counterexamples to the Kakeya conjecture. (They showed this not only for the Minkowski dimension, but also for a related
quantity called the Hausdorff dimension.) “The result rules out this entire class of counterexamples,” Zahl said — the exact type of set mathematicians had considered most likely to disprove the
The new work “is strong support for the Kakeya conjecture being true,” said Pablo Shmerkin of the University of British Columbia. While it only applies to the three-dimensional case, some of its
techniques might be useful in higher dimensions. After spending years making progress on the conjecture in other number systems, mathematicians are excited by this return to the problem’s original
domain of real numbers.
“It’s remarkable that they solved this case completely,” Zhang said. “In the real setting, that’s extremely rare.” And if anyone can prove that a counterexample must be sticky, the new result will
imply the full conjecture in three dimensions. The hierarchy of conjectures built above it will then remain safe, its foundation stable.
“Somehow, these two different problems in projection theory, which on the face of it don’t have a lot to do with each other, fit together quite nicely to give exactly what was needed for Kakeya,”
Zahl said. | {"url":"https://www.quantamagazine.org/new-proof-threads-the-needle-on-a-sticky-geometry-problem-20230711/","timestamp":"2024-11-11T08:14:09Z","content_type":"text/html","content_length":"220414","record_id":"<urn:uuid:f65caa06-0ffa-436b-86d6-29f7fdc4a4cb>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00558.warc.gz"} |
Kwek, Leong Chuan
Natural Sciences & Science Education (NSSE)
Personal Site(s)
Dataverse Researcher Profile
• Rigorous noise reduction with quantum autoencoders
American Institute of Physics, 2024
Mok, Wai Keong
Zhang, Hui
Haug, Tobias
Luo, Xianshu
Lo, Guo Qiang
Li, Zhenyu
Cai, Hong
Kim, M. S.
Liu, Ai Qun
Reducing noise in quantum systems is a significant challenge in advancing quantum technologies. We propose and demonstrate a noise reduction scheme utilizing a quantum autoencoder, which offers
rigorous performance guarantees. The quantum autoencoder is trained to compress noisy quantum states into a latent subspace and eliminate noise through projective measurements. We identify
various noise models in which the noiseless state can be perfectly reconstructed, even at high noise levels. We apply the autoencoder to cool thermal states to the ground state and reduce the
cost of magic state distillation by several orders of magnitude. Our autoencoder can be implemented using only unitary transformations without the need for ancillas, making it immediately
compatible with state-of-the-art quantum technologies. We experimentally validate our noise reduction methods in a photonic integrated circuit. Our results have direct applications in enhancing
the robustness of quantum technologies against noise.
• NISQ algorithm for Hamiltonian simulation via truncated Taylor series
SciPost, 2022
Lau, Jonathan Wei Zhong
Haug, Tobias
; ;
Kishor Bharti
Simulating the dynamics of many-body quantum systems is believed to be one of the first fields that quantum computers can show a quantum advantage over classical computers. Noisy
intermediate-scale quantum (NISQ) algorithms aim at effectively using the currently available quantum hardware. For quantum simulation, various types of NISQ algorithms have been proposed with
individual advantages as well as challenges. In this work, we propose a new algorithm, truncated Taylor quantum simulator (TQS), that shares the advantages of existing algorithms and alleviates
some of the shortcomings. Our algorithm does not have any classical-quantum feedback loop and bypasses the barren plateau problem by construction. The classical part in our hybrid
quantum-classical algorithm corresponds to a quadratically constrained quadratic program (QCQP) with a single quadratic equality constraint, which admits a semidefinite relaxation. The QCQP based
classical optimization was recently introduced as the classical step in quantum assisted eigensolver (QAE), a NISQ algorithm for the Hamiltonian ground state problem. Thus, our work provides a
conceptual unification between the NISQ algorithms for the Hamiltonian ground state problem and the Hamiltonian simulation. We recover differential equation-based NISQ algorithms for Hamiltonian
simulation such as quantum assisted simulator (QAS) and variational quantum simulator (VQS) as particular cases of our algorithm. We test our algorithm on some toy examples on current cloud
quantum computers. We also provide a systematic approach to improve the accuracy of our algorithm.
WOS© Citations 6 60
• Atomtronic multiterminal Aharonov-Bohm interferometer
American Physical Society, 2023
Lau, Jonathan Wei Zhong
Gan, Koon Siang
Dumke, Rainer
Amico, Luigi
; ;
Haug, Tobias
We study a multifunctional device for cold atoms consisting of a three-terminal ring circuit pierced by a synthetic magnetic flux, where the ring can be continuous or discretized. The flux
controls the atomic current through the ring via the Aharonov-Bohm effect. Our device shows a flux-induced transition of reflections from an Andreev-like negative density to positive density.
Further, the flux can direct the atomic current into specific output ports, realizing a flexible nonreciprocal switch to connect multiple atomic systems or sense rotations. By changing the flux
linearly in time, we convert constant matter wave currents into an ac modulated current. This effect can be used to realize an atomic frequency generator and study fundamental problems related to
the Aharonov-Bohm effect. We experimentally demonstrate Bose-Einstein condensation into the light-shaped optical potential of the three-terminal ring. Our work opens up the possibility of
atomtronic devices for practical applications in quantum technologies.
• Resource-efficient high-dimensional subspace teleportation with a quantum autoencoder
American Association for the Advancement of Science, 2022
Zhang, Hui
Wan, Lingxiao
Haug, Tobias
Mok, Wai Keong
Paesani, Stefano
Shi, Yuzhi
Cai, Hong
Chin, Lip Ket
Muhammad Faeyz Karim
Xiao, Limin
Luo, Xianshu
Gao, Feng
Dong, Bin
Syed Assad
Kim, M. S.
Laing, Anthony
; ;
Liu, Ai Qun
Quantum autoencoders serve as efficient means for quantum data compression. Here, we propose and demonstrate their use to reduce resource costs for quantum teleportation of subspaces in
high-dimensional systems. We use a quantum autoencoder in a compress-teleport-decompress manner and report the first demonstration with qutrits using an integrated photonic platform for future
scalability. The key strategy is to compress the dimensionality of input states by erasing redundant information and recover the initial states after chip-to-chip teleportation. Unsupervised
machine learning is applied to train the on-chip autoencoder, enabling the compression and teleportation of any state from a high-dimensional subspace. Unknown states are decompressed at a high
fidelity (~0.971), obtaining a total teleportation fidelity of ~0.894. Subspace encodings hold great potential as they support enhanced noise robustness and increased coherence. Laying the
groundwork for machine learning techniques in quantum systems, our scheme opens previously unidentified paths toward high-dimensional quantum computing and networking.
WOS© Citations 5Scopus© Citations 17 57 74
• Fast-forwarding with NISQ processors without feedback loop
IOP Publishing, 2021
Lim, Kian Hwee
Haug, Tobias
; ;
Kishor Bharti
Simulating quantum dynamics is expected to be performed more easily on a quantum computer than on a classical computer. However, the currently available quantum devices lack the capability to
implement fault-tolerant quantum algorithms for quantum simulation. Hybrid classical quantum algorithms such as the variational quantum algorithms have been proposed to effectively use current
term quantum devices. One promising approach to quantum simulation in the noisy intermediate-scale quantum (NISQ) era is the diagonalisation based approach, with some of the promising examples
being the subspace variational quantum simulator (SVQS), variational fast forwarding (VFF), fixed-state variational fast forwarding (fs-VFF), and the variational Hamiltonian diagonalisation (VHD)
algorithms. However, these algorithms require a feedback loop between the classical and quantum computers, which can be a crucial bottleneck in practical application. Here, we present the
classical quantum fast forwarding (CQFF) as an alternative diagonalisation based algorithm for quantum simulation. CQFF shares some similarities with SVQS, VFF, fs-VFF and VHD but removes the
need for a classical-quantum feedback loop and controlled multi-qubit unitaries. The CQFF algorithm does not suffer from the barren plateau problem and the accuracy can be systematically
increased. Furthermore, if the Hamiltonian to be simulated is expressed as a linear combination of tensored-Pauli matrices, the CQFF algorithm reduces to the task of sampling some many-body
quantum state in a set of Pauli-rotated bases, which is easy to do in the NISQ era. We run the CQFF algorithm on existing quantum processors and demonstrate the promise of the CQFF algorithm for
current-term quantum hardware. We compare CQFF with Trotterization for a XY spin chain model Hamiltonian and find that the CQFF algorithm can simulate the dynamics more than 105 times longer than
Trotterization on current-term quantum hardware. This provides a 104 times improvement over the previous record.
WOS© Citations 6Scopus© Citations 8 64 134
• Nonclassical states in strongly correlated bosonic ring ladders
American Physical Society, 2019
Victorin, Nicolas
Haug, Tobias
; ;
Amico, Luigi
Minguzzi, Anna
We study the ground state of a bosonic ring ladder under a gauge flux in the vortex phase, corresponding to the case where the single-particle dispersion relation has two degenerate minima. By
combining exact diagonalization and an approximate fermionization approach we show that the ground state of the system evolves from a fragmented state of two single-particle states at weak
interparticle interactions to a fragmented state of two Fermi seas at large interactions. Fragmentation is inferred from the study of the eigenvalues of the reduced single-particle density matrix
as well as from the calculation of the fidelity of the states. We characterize these nonclassical states by the momentum distribution, the chiral currents, and the current-current correlations.
WOS© Citations 5Scopus© Citations 6 106 76
• Readout of the atomtronic quantum interference device
American Physical Society, 2018
Haug, Tobias
Tan, Joel
Theng, Mark
Dumke, Rainer
; ;
Amico, Luigi
A Bose-Einstein condensate confined in ring shaped lattices interrupted by a weak link and pierced by an effective magnetic flux defines the atomic counterpart of the superconducting quantum
interference device: the atomtronic quantum interference device (AQUID). In this paper, we report on the detection of current states in the system through a self-heterodyne protocol. Following
the original proposal of the NIST and Paris groups, the ring-condensate many-body wave function interferes with a reference condensate expanding from the center of the ring. We focus on the rf
AQUID which realizes effective qubit dynamics. Both the Bose-Hubbard and Gross-Pitaevskii dynamics are studied. For the Bose-Hubbard dynamics, we demonstrate that the self-heterodyne protocol can
be applied, but higher-order correlations in the evolution of the interfering condensates are measured to readout of the current states of the system. We study how states with macroscopic quantum
coherence can be told apart analyzing the noise in the time of flight of the ring condensate.
WOS© Citations 24Scopus© Citations 26 260 157
• Noisy intermediate-scale quantum algorithms
American Physical Society, 2022
Kishor Bharti
Cervera-Lierta, Alba
Kyaw, Thi Ha
Haug, Tobias
Alperin-Lea, Sumner
Abhinav Anand
Degroote, Matthias
Heimonen, Hermanni
Kottmann, Jakob S.
Menke, Tim
Mok, Wai Keong
Sim, Sukin
; ;
Aspuru-Guzik, Alan
A universal fault-tolerant quantum computer that can efficiently solve problems such as integer factorization and unstructured database search requires millions of qubits with low error rates and
long coherence times. While the experimental advancement toward realizing such devices will potentially take decades of research, noisy intermediate-scale quantum (NISQ) computers already exist.
These computers are composed of hundreds of noisy qubits, i.e., qubits that are not error corrected, and therefore perform imperfect operations within a limited coherence time. In the search for
achieving quantum advantage with these devices, algorithms have been proposed for applications in various disciplines spanning physics, machine learning, quantum chemistry, and combinatorial
optimization. The overarching goal of such algorithms is to leverage the limited available resources to perform classically challenging tasks. In this review, a thorough summary of NISQ
computational paradigms and algorithms is provided. The key structure of these algorithms and their limitations and advantages are discussed. A comprehensive overview of various benchmarking and
software tools useful for programming and testing NISQ devices is additionally provided.
WOS© Citations 406Scopus© Citations 700 115 1052
• Stroboscopic Hamiltonian engineering in the low-frequency regime with a one-dimensional quantum processor
American Physical Society, 2022
Bastidas, Victor M.
Haug, Tobias
Gravel, Claude
; ;
Munro, W. J.
Nemoto, Kae
We propose a scheme to perform stroboscopic Hamiltonian engineering in the low frequency regime using a quantum system with one-dimensional nearest-neighbor coupling that are commonly available
in the NISQ era. Computational problems are encoded in the effective Hamiltonian of the quantum systems under the effect of external driving. Our approach is nonperturbative and it does not rely
on high-frequency expansions, which are a common tool in Floquet engineering. In our paper, the effective Hamiltonian that we want to engineer is fully tailored through designing the periodic
driving. We illustrate how this quantum computation proceeds with two examples, an instance from the 3-SAT problem and the LiH molecule quantum chemistry simulation. In the case of the 3-SAT
Hamiltonian, we show that by starting from the ground state of the trivial Hamiltonian, the quantum systems go through an adiabatic process in the stroboscopic picture towards the target
Hamiltonian of the problem.
WOS© Citations 1Scopus© Citations 2 152 166 | {"url":"https://repository.nie.edu.sg/entities/person/3faadf48-4129-4ff2-871a-00cf3855c752/publications?spc.page=1&f.author=3faadf48-4129-4ff2-871a-00cf3855c752,authority&f.author=Haug,%20Tobias,equals","timestamp":"2024-11-04T23:57:07Z","content_type":"text/html","content_length":"1050345","record_id":"<urn:uuid:208e527b-e9a2-4e28-8006-aae6456fbfe8>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00161.warc.gz"} |
Systems and methods of updating a multi-level data structure for controlling an agent. The method may include: accessing a data structure defining one or more nodes. A non-leaf node of the one or
more nodes may be associated with one or more edges for traversing to a subsequent node. An edge of the one or more edges may be associated with a visit count and a softmax state-action value
estimation. The method may include identifying a node trajectory including a series of nodes based on an asymptotically converging sampling policy, where the node trajectory includes a root node and
a leaf node of the data structure, determining a reward indication associated with the node trajectory; and for at least one non-leaf node, updating the visit count and the softmax state-action value
estimate associated with one or more edges of the non-leaf node based on the determined reward indication.
This application claims priority from U.S. provisional patent application No. 62/795,970, filed on Jan. 23, 2019, the entire contents of which are hereby incorporated by reference herein.
Embodiments of the present disclosure generally relate to the field of machine learning, and in particular to systems and methods of controlling a machine learning agent.
Reinforcement learning is associated with decision making or action selection. An agent may conduct actions in an environment and may receive observations or feedback form the environment, including
a reward indicator and/or new state information. The agent may utilize the reward indicator to evaluate the taken action and to determine how best to subsequently make action selections in the
In one aspect, the present application may provide a method of updating a multi-level data structure for controlling an agent. The method may include: accessing a data structure defining one or more
nodes, wherein a non-leaf node of the one or more nodes is associated with one or more edges for traversing to a subsequent node, and wherein an edge of the one or more edges is associated with a
visit count and a softmax state-action value estimation; for each of a plurality of rounds, identifying a node trajectory including a series of nodes based on an asymptotically converging sampling
policy, wherein the node trajectory includes a root node and a leaf node of the data structure; determining a reward indication associated with the node trajectory; and for at least one non-leaf node
in the node trajectory, updating the visit count and the softmax state-action value estimate associated with one or more edges of the non-leaf node based on the determined reward indication
associated with the node trajectory
In another aspect, the present application may provide a system for updating a multi-level data structure for controlling an agent. The system may include a processor; and a memory coupled to the
processor. The processor may store processor-executable instructions that, when executed, configure the processor to: access a data structure defining one or more nodes, wherein a non-leaf node of
the one or more nodes is associated with one or more edges for traversing to a subsequent node, and wherein an edge of the one or more edges is associated with a visit count and a softmax
state-action value estimation; for each of a plurality of rounds, identify a node trajectory including a series of nodes based on an asymptotically converging sampling policy, wherein the node
trajectory includes a root node and a leaf node of the data structure; determine a reward indication associated with the node trajectory; and for at least one non-leaf node in the node trajectory,
update the visit count and the softmax state-action value estimate associated with one or more edges of the non-leaf node based on the determined reward indication associated with the node
In another aspect, the present application may provide a non-transitory computer-readable medium or media having stored thereon machine interpretable instructions which, when executed by a processor,
cause the processor to perform a computer implemented method of updating a multi-level data structure for controlling an agent, the method including: accessing a data structure defining one or more
nodes, wherein a non-leaf node of the one or more nodes is associated with one or more edges for traversing to a subsequent node, and wherein an edge of the one or more edges is associated with a
visit count and a softmax state-action value estimation; for each of a plurality of rounds, identifying a node trajectory including a series of nodes based on an asymptotically converging sampling
policy, wherein the node trajectory includes a root node and a leaf node of the data structure; determining a reward indication associated with the node trajectory; and for at least one non-leaf node
in the node trajectory, updating the visit count and the softmax state-action value estimate associated with one or more edges of the non-leaf node based on the determined reward indication
associated with the node trajectory.
In another aspect, the present application may provide a multi-level data structure for controlling an agent comprising a plurality of nodes including a non-leaf node associated with one or more
edges for traversing to a subsequent node, wherein an edge of the one or more edges is associated with a visit count and a softmax state-action value estimation, wherein the multi-level data
structure was updated by a method comprising: accessing the data structure; for each of a plurality of rounds, identifying a node trajectory including a series of nodes based on an asymptotically
converging sampling policy, wherein the node trajectory includes a root node and a leaf node of the data structure; determining a reward indication associated with the node trajectory; and for at
least one non-leaf node in the node trajectory, updating the visit count and the softmax state-action value estimate associated with one or more edges of the non-leaf node based on the determined
reward indication associated with the node trajectory.
In another aspect, a non-transitory computer-readable medium or media having stored thereon machine interpretable instructions which, when executed by a processor may cause the processor to perform
one or more methods described herein.
In various further aspects, the disclosure provides corresponding systems and devices, and logic structures such as machine-executable coded instruction sets for implementing such systems, devices,
and methods.
In this respect, before explaining at least one embodiment in detail, it is to be understood that the embodiments are not limited in application to the details of construction and to the arrangements
of the components set forth in the following description or illustrated in the drawings. Also, it is to be understood that the phraseology and terminology employed herein are for the purpose of
description and should not be regarded as limiting.
Many further features and combinations thereof concerning embodiments described herein will appear to those skilled in the art following a reading of the present disclosure.
In the figures, embodiments are illustrated by way of example. It is to be expressly understood that the description and figures are only for the purpose of illustration and as an aid to
Embodiments will now be described, by way of example only, with reference to the attached figures, wherein in the figures:
FIG. 1 illustrates a system, in accordance with an embodiment of the present application;
FIG. 2 illustrates graphical results of an evaluation of softmax value estimation in a synthetic tree environment, in accordance with an embodiment of the present application;
FIG. 3 illustrates graphical results of an evaluation of online planning in a synthetic tree environment, in accordance with an embodiment of the present application;
FIG. 4 illustrates a flowchart of a method of updating a multi-level data structure for controlling an agent, in accordance with embodiments of the present application; and
FIG. 5 illustrates a block diagram of a computing device, in accordance with an embodiment of the present application.
Multi-level data structures may be configured for sequential decision making or selection problems. In some examples, Monte Carlo planning processes, such as the Monte Carlo Tree Search (MCTS)
process, may be configured for controlling a computing device to conduct sequential decision operations. The Monte Carlo Tree Search process includes conducting a search tree of states that may be
evaluated by averaging over outcomes based on simulations. MCTS operations may configure a computing device to simulate state-action trajectories using domain generative models, and to build a search
tree based on incrementally gathered data during simulations conducted by the computing device. In some embodiments, the search tree may be combined with domain knowledge, such as function
approximations based on online [17] or offline [12, 13] data. MCTS operations may be selective, where a bandit algorithm may be applied for balancing between exploring uncertain search tree branches
and exploiting known/promising search tree branches [9]. In some examples, MCTS operations may converge to an optimal bandit algorithm policy when exploitation and exploration may be balanced [9,7].
MCTS operations may be reliant on state value estimations. At individual nodes of the search tree, the value estimation may be used to calculate a value of an action leading to that node. The
convergence rate of the state value estimation may influence the convergence rate for states further up in the search tree.
In some scenarios, Monte Carlo value estimates (e.g., average over simulation outcomes) may not result in convergence when a value is back-propagated in the search tree. For a given search tree node,
the sampling policy a subtree may change and payoff sequences may drift over time. Compounding error based at least on the search tree data structure and/or uncertainty of operations relating to
Monte Carlo estimation may result in operations exhibiting a polynomial convergence rate for selecting a preferred action at a root node of the search tree. Systems and methods to more efficiently
conduct decision making or selections based on the multi-level data structure may be desirable.
As non-limiting illustrative examples, embodiments of the present application may relate to reinforcement learning systems, where an agent (e.g., learner) and an environment may sequentially
interact. A multi-level data structure, such as a tree data structure, may include a plurality of nodes representing states. One or more nodes may be associated with one or more edges (e.g.,
associated with an action) to traverse to a subsequent node. In each round of interaction, the agent may select an action and the environment may sample a reward for revelation to the agent. With
each round, statistics associated with the one or more edges may be updated such that selection of an action during a subsequent round may be based on the statistics associated with the one or more
In another non-illustrative example, a round may be associated with a simulation of a trajectory of nodes from a root node to a leaf node of the tree data structure. It may be desirable to provide
systems and methods of updating multi-level data structures storing statistics for identifying an optimal sequence of actions from a root node in a sequential decision making or selection
For example, embodiments of the present application may be implemented in data center cooling control systems for identifying cooling requirements at the respective data center regions. In some
examples, embodiments of the present application may be used for order execution, portfolio optimization, option pricing, or the like in the capital markets of the financial industry. Further,
embodiments of the present application may be used in the health care decision making for patient-specific treatment decisions or selection of adaptive treatment strategies. In some examples,
embodiments of the present application may include selection making processes associated with electricity or utility service delivery. In some examples, embodiments of the present application may be
used for decision making processes during execution of games, such as Go, Atari-based games, or the like. In some embodiments, features of decision making or selection operations may be used for
determining transportation scheduling decisions for minimizing passenger wait times or delays. Other example applications of embodiment features described herein may be contemplated.
Systems and methods described in the present application may provide operations for updating a multi-level data structure for controlling an agent, such as a computing device or decision making
operations. The operations for updating a multi-level data structure may include softmax value estimate in MCTS based on a maximum entropy policy optimization framework.
For example, features of embodiments described in the present application may include operations of a stochastic softmax bandit framework for single-step decision problems, where softmax values may
be sequentially estimated at an optimal convergence rate in terms of mean squared error. Further, features of embodiments described in the present application may include back-propagating softmax
state values in a multi-level data structure, leading to an increased convergence rate of finding an optimal or more desirable/beneficial action associated with a root node of the multi-level data
structure. In some examples, the convergence rate associated with embodiments of the present application may be exponential, in comparison to a polynomial convergence rate of other methods and
Reference is made to FIG. 1, which illustrates a system 100 for controlling an agent, in accordance with an embodiment of the present application. The system 100 may transmit and/or receive data
messages to/from a client device 110 via a network 150. The network 150 may include a wired or wireless wide area network (WAN), local area network (LAN), a combination thereof, or the like.
The system 100 includes a processor 102 configured to implement processor readable instructions that, when executed, configure the processor 102 to conduct operations described herein. For example,
the system 100 may be configured to conduct operations for generating and/or updating a multi-level data structure. In some embodiments, the multi-level data structure may be a data tree structure
associated with operations of a Monte Carlo Tree search, or other types of data structure searches.
In some embodiments, the system 100 may be configured to conduct operations for updating the multi-level data structure for controlling an agent. In some embodiments, the agent may be the client
device 110. In some other embodiments, the agent may be at the system 100. In some embodiments, the agent may be associated with operations for decision making or selection conducted at the system
100. Other operations of the system 100 may be contemplated.
The system 100 may include a communication interface 104 to communicate with other computing devices, to access or connect to network resources, or to perform other computing applications by
connecting to a network (or multiple networks) capable of carrying data. In some embodiments, the network 150 may include the Internet, Ethernet, plain old telephone service (POTS) line, public
switch telephone network (PSTN), integrated services digital network (ISDN), digital subscriber line (DSL), coaxial cable, fiber optics, satellite, mobile, wireless (e.g. Wi-Fi, WMAX), SS7 signaling
network, fixed line, local area network, wide area network, and others, including combination of these. In some examples, the communication interface 104 may include one or more busses,
interconnects, wires, circuits, and/or any other connection and/or control circuit, or combination thereof. The communication interface 104 may provide an interface for communicating data between
components of a single device or circuit.
The system may include memory 106. The memory 106 may include one or a combination of computer memory, such as static random-access memory (SRAM), random-access memory (RAM), read-only memory (ROM),
electro-optical memory, magneto-optical memory, erasable programmable read-only memory (EPROM), and electrically-erasable programmable read-only memory (EEPROM), Ferroelectric RAM (FRAM) or the like.
The memory 106 may store a decision selection application 112 including processor readable instructions for conducting operations described herein. In some examples, the decision selection
application 112 may include operations for updating a multi-level data structure for controlling an agent. Other operations of the decision selection application 112 may be contemplated.
The system 100 may include a data storage 114. In some embodiments, the data storage 114 may be a secure data store. In some embodiments, the data storage 114 may store multi-level data structures,
data structure state values, decision/selection simulation outcome data, softmax state-action values, or other types of data.
The client device 110 may be a computing device including a processor, memory, and a communication interface. In some embodiments, the client device 110 may be a computing device associated with a
local area network. The client device 110 may be connected to the local area network and may transmit one or more data sets to the system 100. For example, the client device 110 may transmit data or
receive data from the system 100, and the system 100 may be configured to conduct operations to update multi-level data structures for controlling the client device 110 operations. Other operations
may be contemplated.
In some embodiments, sequential decision making or selection may be based on Markov Decision processes. A Markov Decision Process may be defined as a 5-tuple {,,P,R,H}, where may be the state space,
may be the action space. Further, H may be the maximum number of steps at each episode, and P and R may be the transition and reward functions, such that P(⋅|s, a) and R(s, a) give the next state
distribution and reward of taking action a at state s. In some embodiments, transition and reward operations may be deterministic. Example operations may include stochastic transitions and/or
rewards, and may include dependencies on variances of transition and/or reward distributions. In some examples, a solution of a Markov Decision Process may be based on a policy π that maps a state s
to a probability distribution over actions. An optimal policy may maximize, on expectation, the cumulative sum of rewards, which may be defined as:
$G t = ∑ k = 0 H + 1 R t + k , R t = { R ( s t , a t ) , t ≤ H v ( s H + 1 ) , t = H + 1$
In the above, an oracle function v may assign stochastic evaluations for states at the end of episode. In some examples, the above described definition may be a general formulation for planning
algorithms in infinite horizon Markov Decision Processes, as H can be considered as the maximum search depth and a stochastic evaluation function may be applied. In some examples, v may be
subgaussian and has variance σ^2.
For policy π, the state value function V^π(s) may be defined to be the expected sum of rewards from s as V^π(s)=^π[G[t]|s[t]=s]. The state-action value function, also known as the Q-value, may be
defined as Q^π(s, a)=^π[G[t]|s[t]=s, a[t]=a]. The optimal value functions may be the maximum value achievable by any policy, V*(s)=max[π ]V^π(s), Q*(s, a)=max[π ]Q^π(s, a). The optimal policy may be
defined by the greedy policy with respect to Q*, π*(s)=argmax[a ]Q*(s, a). In some embodiments, optimal values may be recursively defined by the Bellman optimality equation,
$Q * ( s , a ) = R ( s , a ) + s ′ s , a [ V * ( s ′ ) ] , V * ( s ) = max a Q * ( s , a ) .$
The present application may describe systems and methods directed to online planning problems that may utilize a generative model of the Markov Decision Process to compute an optimal policy at a
given input state, given a fixed sampling budget. The generative model may be a randomized algorithm that may provide as output a reward R(s, a) and may sample a next state s′ from P(⋅|s, a), given a
state-action pair (s, a) as the input. For example, in the game of Go, if the rules of the game are known, the next board state may be predicted after a move. To solve the online planning problem,
decision making or selection operations may utilize the generative model to sample an episode at each round and propose an action for the input state after the sampling budget is expended. The
performance of an online planning algorithm may be measured by its probability of proposing the optimal action for the state of interest.
In some embodiments, the system 100 may conduct operations associated with a Monte Carlo Tree Search (MCTS). MCTS may incrementally generate a look-ahead tree online and may evaluate states based on
Monte Carlo simulations [3]. Each node in may be labeled by a state s and may store a value estimate Q(s, a) and visit count N(s, a) for each action a. The estimate Q(s, a) may be the mean return of
all simulations starting from s and a. The root of may be labeled by the state of interest.
At each iteration of operations including the MCTS operations, one simulation starts from the root of the search tree and may proceed in two stages: a tree policy may be used to select actions while
within the tree until a leaf of is reached. An evaluation function may be conducted at the leaf to obtain a simulation return. In some embodiments, typical choices of the evaluation function may
include function approximation with a neural network, and Monte Carlo simulations using a roll-out policy. The return may be propagated upwards to all nodes along the path to the root. may be
expanded by expanding the leaf reached during the simulation.
The system 100 may conduct operations of Bandit algorithms to balance between exploring uncertain tree structure branches and exploiting promising tree structure branches. Operations of upper
confidence bounds applied to trees (UCT) may apply UCB1 as its tree policy to balance expansion of the search tree [9]. At each node of , a tree policy may select an action with the maximum upper
confidence bound
$UCB ( s , a ) = Q ( s , a ) + c log N ( s ) N ( s , a ) ,$
where N(s)=Σ[a ]N(s, a), and c is a parameter controlling exploration. The UCT algorithm may be desirable in example practical problems. For example, the above described operations may be used for
conducting operations associated with AlphaGo [12, 13]. UCT may be asymptotically optimal: the value estimated by UCT may converge in probability to the optimal value,
$Q ( s , a ) → p Q * ( s , a ) , ∀ s ∈ , ∀ a ∈ .$
The probability of finding a suboptimal action at the root converges to zero at a rate of
$O ( 1 t ) ,$
where t may be the simulation budget [9].
In some embodiments, the system 100 may conduct operations associated with maximum entropy policy optimization, which may augment standard expected reward objectives of an entropy regularizer. Given
K actions and the corresponding K-dimensional reward vector r∈^K, the entropy regularized policy optimization problem may identify a policy by solving
$max π { π · r + τℋ ( π ) }$
where τ≥0 is a user-specified temperature parameter controlling a degree of exploration. The above-described optimization problem may be a closed form solution. In some embodiments, the softmax [τ]
and the soft indmax f[τ] functions may be:
f[τ](r)=exp {(r−[τ](r))/τ} [τ](r)=τ log Σ[a]exp(r(a)/τ)
In some embodiments, the softmax [τ] outputs a scalar while the soft indmax f[τ] maps a reward vector r to a Boltzmann policy. [τ](r), f[τ](r) and
$ℱ τ ( r ) = max π { π · r + τℋ ( π ) } = f τ ( r ) · r + τℋ ( f τ ( r ) ) .$
The above relation may suggest that the softmax value is an upper bound on the maximum value, and a gap may be upper bounded by the product of τ and the maximum entropy. As τ→0, the entropy
regularized policy optimization problem described above may approach the standard expected reward objective, where the optimal solution may be the hard-max policy. The entropy regularized
optimization may be used to define the softmax value functions by replacing the hard-max operator in with the softmax operators [4, 11],
Q[sft]*(s,a)=R(s,a)+[s′═s,a][V[sft]*(s′)], V[sft]*(s)=τ log Σ[a ]exp {Q[sft]*(s,a)/τ}.
In some embodiments, an optimal softmax policy may be characterized by:
π[sft]*(a|s)=exp {(Q[sft]*(s,a)−V[sft]*(s))/τ}.
Embodiments of the present application may include operations to combine the maximum entropy policy optimization framework with MCTS by estimating softmax values back propagated from simulations. In
some scenarios, softmax values may be backpropagated in a search tree, and providing an increased convergence rate to the optimal policy at a root node.
Embodiments of the present application include systems and methods associated with a stochastic softmax bandit problem, a bandit algorithm associated with the stochastic softmax bandit problem, and
operations to identify a tight upper bound on convergence rate. The upper bound may match the lower bound in order and in coefficient of a dominating term.
In some embodiments, a reinforcement learning system may include an agent (or a learner) and an environment. A stochastic bandit setting may include arms set . At each round t, an agent or learner
may choose an action A[t]∈. An environment may sample a random reward R[t ]and may reveals it to the learner. Let r(a) be the expected value of the reward distribution of action a∈. In some examples,
r(a)∈[0,1], and that all reward distributions may be σ^2-subgaussian.
In some examples, the probability space carries the tuple of random variables S[T]={A[0], R[0], . . . , A[T], R[T]}. For every time step t−1 the historical observation may defines a σ-algebra [t−1 ]
and A[t ]is [t−1]-measurable, the conditional distribution of A[t ]is our policy at time π[t], and the conditional distribution of the reward R[A][t]−r(A[t]) is a martingale difference sequence.
For round t, N[t](a) may be defined as the number of times a is chosen so far, and {circumflex over (r)}[t](a) may be defined as the empirical estimate of r(a),
N[t](a)=Σ[i=1]^t{A[t]=a} {circumflex over (r)}[t](a)=Σ[i=1]^t{A[t]=a}R[i]/N[t](a),
where {⋅} may be the indicator function. Let r∈[0,1]^K be the vector of expected rewards, and {circumflex over (r)}[t ]be the empirical estimates of r at round t. π[sft]*=f[τ](r) may be denoted as
the optimal soft indmax policy defined by the mean reward vector r. The stochastic bandit setting may be considered as a special case of an episodic MDP with H=1.
Theorem 1:
In a stochastic softmax bandit problem, instead of finding the policy with maximum expected reward as in original stochastic bandits [10], embodiments of the present application may estimate the
softmax value V[sft]*=[τ](r) for some τ>0. Operations may define U*=Σ[a ]exp {r(a)/τ} and U[t]=Σ[a ]exp {{circumflex over (r)}[t](a)/τ}, and may propose to use the estimator V[t]=[τ]({circumflex over
(r)}[t])=τ log U[t]. In some embodiments, the goal is to find a sequential sampling algorithm that can minimize the mean squared error, [t]=[(U*−U[t])^2]. The randomness in [t ]may be associated with
both the sampling algorithm and the observed rewards. Our first result gives a lower bound on [t].
In some embodiments, in the stochastic softmax bandit problem, for any algorithm that achieves
$ɛ t = O ( 1 t ) ,$
there exists a problem setting such that
$lim t → ∞ t ɛ t ≥ σ 2 τ 2 ( Σ a exp ( r ( a ) / τ ) ) 2 .$
To achieve the lower bound, there must be for any a∈, lim[t→∞]N[t](a)/t=π[sft]/(a). In the above,
$ɛ t = O ( 1 t )$
is presumed, but not that the algorithm achieves (asymptotically) unbiased estimates for each arm. The lower bound may also reflect the consistency between the softmax value and the soft indmax
policy. To achieve the lower bound on the mean squared error, the sampling policy may converge to π[sft]* asymptotically.
In some embodiments, systems and methods may include operations of Empirical Exponential Weight (E2W) (or alternately Regression-based Exponential Weight (REW) operations) for the stochastic softmax
bandit problem described herein. In some embodiments, operations may enforce exploration to guarantee good estimation of {circumflex over (r)}, and make the policy converge to π* asymptotically, as
suggested by the lower bound. In some embodiments, the lower bound may be a mean squared error lower bound.
As a non-limiting illustrating example, pseudocode for operations of Empirical Exponential Weight (or alternately Regression-based Exponential Weight) may include:
1: ∀a, {circumflex over (r)}(a) = 0, N(a) = 0 2: for t = 1, 2, . . . do 3: λ[t ]= . . . (TODO) 4: ∀a, {circumflex over (π)}[t](a) = f[τ] ({circumflex over (r)}) (a) 5: $∀ a , π t ( a ) = (
1 - λ t ) π ^ ( a ) + λ t 1 K$ 6: Draw action A[t ]by sampling from π[t ]and play it. 7: Observe R[t ]from the environment. 8: N(A[t]) = N(A[t]) + 1 9: $r ^ t ( A t ) = r ^ t ( A t
) + R t - r ^ t ( A t ) N ( A t )$ 10: end for
In some embodiments, at round t, the algorithm may select an action by sampling from the distribution:
$π t ( a ) = ( 1 - λ t ) f τ ( r ^ ) ( a ) + λ t 1 ,$
where λ[t]=ε||/log(t+1) is a decay rate for exploration, with exploration parameter ε>0. Our next theorem provides an exact convergence rate for E2 W.
Theorem 2:
For the softmax stochastic bandit problem, E2 W may provide:
$lim t → ∞ t ɛ t = σ 2 τ 2 ( Σ a exp ( r ( a ) / τ ) ) 2 .$
Theorem 2 illustrates that E2 W may be an asymptotically optimal sequential sampling strategy for estimating the softmax value in stochastic multi-armed bandits. Softmax bandits may be used for
estimating each state's softmax value and balancing expansion or growth of a search tree.
Maximum Entropy MCTS:
In some embodiments of the present application, systems and methods may conduct operations including a combination of a maximum entropy policy optimization with Monte Carlo Tree Search operations.
The operations may be referred to herein as a Maximum Entropy for Tree Search (MENTS). MENTS may include operations of Empirical Exponential Weight (E2W) described herein as a tree policy and may
include operations for evaluating search nodes based on softmax values back-propagated from simulations.
In some embodiments, let be a generated look-ahead search tree. Each node n(s)∈ may be labeled by a state s, and may contain a softmax value estimate Q[sft](s, a) and a visit count N(s, a) for each
action a. Q[sft](s) may denote a ||-dimensional vector with components Q[sft](s, a). Let N(s)=Σ[a ]N(s, a) and V[sft](s)=[τ](Q[sft](s)). During an in-tree phase of a simulation, the tree policy may
select an action based on:
$π t ( a s ) = ( 1 - λ s ) f τ ( Q sft ( s ) ) ( a ) + λ s 1 $
where λ=ε||/log(Σ[a ]N(s, a)+1).
In some embodiments, let {s[0], a[0], s[1], a[1], . . . , s[T]} be the state action trajectory in the simulation, where n(s[T]) is a leaf node of . Operations of an evaluation function may be called
on s[T ]and may return an estimate R. In some examples, R[t ]may be replaced by a sample from an evaluation function, and a martingale assumption may be extended to the selection policy and the
evaluation function on leaves.
In some embodiments, may be expanded by expanding n(s[T]). Statistics may initialized by Q[sft](s[T], a)=0 and N (s[T], a)=0 for all actions a. For all nodes in the trajectory, the visiting counts
may be updated by N (s[t], a[t])=N (s[t], a[t])+1, and the Q-values may be updated using a softmax backup,
$Q sft ( s t , a t ) = { r ( s t , a t ) + R t = T - 1 r ( s t , a t ) + ℱ τ ( Q sft ( s t + 1 ) ) t < T - 1$
In some embodiments, operations associated with MENTS may be extended to utilize domain knowledge, such as function approximations learned offline. For example, suppose that a policy network {tilde
over (π)}(⋅|s) is available. Then the statistics may be initialized by Q[sft](s[T], a)=log {tilde over (π)}(a|s[T]) and N(s[T], a)=0 for all actions a during the expansion. Finally, at each time step
t, MENTS proposes the action with the maximum estimated softmax value at the root s[0]; i.e. a[t]=argmax[a ]Q[sft ](s[0], a).
To illustrate features of embodiments of the present application, in some examples, a method may include operations of a max search problem in a tree with branching factor (action) K of depth D. Let
⊂ be the set of the leaves of the tree. For a leaf node s, a stochastic value function may be assigned. In some examples, the stochastic value may have subgaussian error. The value of s may be the
expectation of the stochastic value function. The operations of the max search problem may be for identifying a path from a root node to the optimal leaf (a leaf with a largest value) as quickly as
possible. In some examples, the max search problem may be a model for one or more planning algorithms, where K may be the number of actions, D may be the maximum search depth, and a stochastic value
estimation may be applied.
Embodiment methods of the present application may be called a Regression-based Exponential Weight for Tree (REWT). In some embodiments, methods described herein may utilize operations of a
regression-based exponential weight (REW) method for in-tree selection and may conduct operations to apply softmax function for state value evaluations.
In the present example, a node s in the search tree may correspond to a specific node ξ(s)∈. Each node s in the search tree, where ξ(s)∈, may maintain two statistics {N(s), {circumflex over (V)}Z
(s)}, where N(s) may be a visit count and {circumflex over (V)}(s) may be a mean state value.
A node s that ξ(s)∈\ may contain edges (s, a) for all actions a∈(s) and may store a set of statistics, including a softmax state value estimation V[sft](s). In some embodiments, for each edge {N(s,
a), {circumflex over (Q)}[sft](s, a)}, N(s,a) may be the visit count and Q[sft]*(s, a) may be the softmax state-action value estimation.
In some embodiments, operations of REWT may include iterations of 4 operations, including a select operation, an evaluate operation, an expand operation, and a backup operation.
At the select operation, the method may include generating a trajectory of nodes based on the REW method. Generating the trajectory of nodes may include a series of nodes including a root node of the
search tree, so, and ending with a leaf node, s[L], of the search tree. In some embodiments, a probability of selecting an action a at node s may be defined by:
$π ( a s ) = ( 1 - λ s ) exp ( Q ^ sft ( s , a ) / τ ) Σ b exp ( Q ^ sft ( s , b ) / τ ) + λ s 1 K$
where the discount factor λ[s ]may be defined based on N(s)=Σ[a ]N(s, a).
At the evaluate operation, an evaluation function may be used to generate a value estimation for s[L]. In an example of the Monte Carlo evaluation, the method may simulate a sequence of nodes from s
[L ]to a leaf node of the tree and may utilize a stochastic value function to observe a return R.
At the expand operation, if ξ(s[L])∈\, the node s[L ]may be expanded by adding one or more children to the search tree. The statistics stored in s[L ]may be initialized by:
{circumflex over (V)}[sft](s[L])←exp(R/τ)
∀a∈(s[L])N(s[L],a),{acute over (Q)}(s[L],a)←0
When ξ(s[L])∈, the method may initialize N(s[L]) and {circumflex over (V)}(s[L]) by 0.
At the backup operation, if ξ(s[L])∈, the method may update node statistics as follows:
$N ( s L ) ← N ( s L ) + 1$ $V ^ ( s L ) ← V ^ ( s L ) + R - V ^ ( s L ) N ( s L )$
In some examples, the edge statistics may be updated in a backward pass at each step t≤L−1, as follows:
{circumflex over (Q)}[sft](s[t],a[t])←{circumflex over (V)}[sft](s[t+1])
{circumflex over (V)}[sft](s[t])←({circumflex over (Q)}[sft](s[t]))
where {circumflex over (Q)}[sft](s[t]) is a K dimensional vector, and each component of which is {circumflex over (Q)}[sft](s[t], a) for action a. As an non-limiting illustrating example, pseudocode
for operations of REWT may include:
1: for t = 1, 2, . . . do 2: Select a trajectory of nodes {s[0], a[0], . . . s[L−1], s[L]}. 3: Evaluate at s[L ]and observe R. 4: if ξ(s[L]) ϵ \ then 5: Expand s[L ]and initialize
according to (5) 6: end if 7: Backup according to (6) and (7) 8: end for
As in operations associated with UCT, respective iterations of operations of REWT may expand one node and add children to the current search tree.
In another embodiment of generating a tree, the select operation and the expand operation may differ. For example, instead of finishing when a leaf node of the current search tree is reached, the
method may expand the leaf and initialize statistics as follows:
{circumflex over (V)}[sft](s[L])←exp(R/τ)
∀a∈(s[L])N(s[L],a),{circumflex over (Q)}[sft](s[L],a)←0.
Further, in the present embodiment, the method may include selecting actions to take until a node s is identified, where ξ(s)∈. The method may include utilizing the stochastic value function at a
node s and a result of the stochastic value function may be used to update the search tree as in the above-described in the backup operation. In the present example, because each iteration may add a
separate path of nodes from the root to the leaf of to the current search tree, the present example may be called a path-REWT.
To illustrate a theoretical analysis of convergence property for operations associated with MENTS, for any node in a search tree, after its subtree has been explored, the estimated softmax value may
converge to an optimal value at an exponential rate. Recall that in Theorem 1, an optimal sampling algorithm for the softmax stochastic bandit problem may guarantee lim[t→∞]N[t](a)/t=π[sft]*(a) for
any action a. This may be shown in E2 W with high probability and may be based on the proof of Theorem 2.
Theorem 3:
E2 W may be applied to the stochastic softmax bandit problem. Let N[t]*(a)=π[sft]*(a)·t. Then there exists some constants C and {tilde over (C)} such that,
$ℙ ( N t ( a ) - N t * ( a ) > Ct log t ) ≤ C ~ t exp { - t ( log t ) 3 } .$
In an example of the bandit case, reward distribution of each arm may be assumed to be subgaussian. When applying bandit algorithms at internal nodes of a search tree, the payoff sequence experienced
from each action may drift over time, as the sampling probability of the actions in the subtree may be changing. In embodiments described herein, even under such example drift over time conditions, a
softmax value may be efficiently estimated according to back-propagation operations, where Q-values may use a softmax backup (as described above).
Theorem 4:
For any node n(s)∈, define the event:
$E s = { ∀ a ∈ , N ( s , a ) - N * ( s , a ) < N * ( s , a ) 2 }$
where N*(s, a)=π[sft]*(a|s)·N(s). For ϵ∈[0,1), there exist some constant C and {tilde over (C)} such that for sufficiently large t,
$ℙ ( V sft ( s ) - V sft * ≥ ϵ E s ) ≤ C ~ exp { - N ( s ) τ 2 ϵ 2 C σ 2 } .$
Without loss of generality, Q*(s, 1)≥Q*(s, 2)≥ . . . ≥Q*(s, ||) may be assumed for any n(s)∈, and define Δ=Q*(s, 1)−Q*(s, 2). As described in some embodiments of the present application, the gap
between the softmax and maximum value is upper bounded by τ times the maximum of entropy. As long as τ is chosen small enough such that this gap is smaller than Δ, the best action may also have the
largest softmax value. To illustrate the probability that the above-described operations (or algorithm) fail to find the optimal arm at the root, Theorem 5 is provided.
Theorem 5:
Let a[t ]be the action returned by MENTS at iteration t. Then for large enough t with some constant C,
$ℙ ( a t ≠ a * ) ≤ Ct exp { - t ( log t ) 3 } .$
In some embodiments, operations associated with MENTS may provide faster convergence rate than operations associated with UCT. For example, MENTS may apply operations associated with E2 W (as
described herein) as a tree policy during simulations. The softmax value functions used in MENTS may be estimated in an optimal rate and the tree policy may converge to the optimal softmax policy π
[sft]* asymptotically, as suggested by Theorem 1 and/or Theorem 2 described herein. Further, Theorem 4 may illustrate that the softmax value may be efficiently back-propagated in the search tree.
Accordingly, the probability of MENTS failing to identify the best decision at the root may decay exponentially, thereby improving upon the polynomial rate of decay of operations associated with UCT.
In some embodiments described herein, a maximum entropy formulation may provide improvements in exploration and robustness based on adapting a smoothed optimization objective and acquiring diverse
policy behaviour. Embodiments of systems and methods described herein directed to the MENTS operations may be associated with the softmax Bellman operator, which may be used as the value propagation
formula in MCTS. It may be appreciated that operations associated with UCT may take a long time to discover an optimal branch that initially may look undesirable.
In some examples, other example value back-propagation strategies, such as maximum backup may not be desirable because the Monte-Carlo estimation may be too noisy when the number of simulations is
low, which may misguide the algorithm, particularly at the beginning of search of the search tree. As compared to other example value back-propagation strategies, embodiments of systems and methods
of the present application may include operations to exploit softmax backup for faster convergence rate of value estimation.
To illustrate embodiments of the present application, experiments may be conducted. As an example, a test domain may be a Synthetic Tree environment. A tree may have branching factor (number of
actions) k of depth d. At each leaf of the tree, a standard Gaussian distribution may be assigned as an evaluation function. That is, each time a leaf is visited, the distribution may be used to
sample a stochastic return. The mean of the Gaussian distribution may be determined. When initializing the environment, each edge of the tree may be assigned a random value and the mean of the
Gaussian distribution at a leaf is the sum of values along the path from the root to the leaf. The environment may be similar to a P-game tree environment used to model two player minimax games. In
the present example, the single (max) player version is considered. All means may be normalized in [0,1].
To illustrate embodiments of the present application, methods conducting operations associated with MENTS for five Atari games (e.g., BeamRider, Breakout, Q*bert, Seaquest, and SpaceInvaders) were
tested. For the games, a vanilla Deep Q-network (DQN) was trained and used as an evaluation function for the tree search as discussed in the AlphaGo [12,13]. The softmax of Q-values may be used as
the state value estimate, and the Boltzmann distribution over the Q-values may be used as the policy network to assign a probability prior to each action when expanding a node. A temperature may be
set to 0.1. In illustrating experiments, the UCT algorithm adopts the tree-policy introduced in AlphaGo [13]:
$PUCT ( s , a ) = Q ( s , a ) + cP ( s , a ) Σ b N ( s , b ) 1 + N ( s , a )$
where P(s, a) is the prior probability. Operations associated with MENTS may apply the same evaluation function. The prior probability is used to initialize the Q[sft ]as described in some examples
of the present application. The DQN may be trained using a hard-max target. Training a neural network using softmax targets such as soft Q-learning or PCL may be more suitable for operations
associated with MENTS. In the illustrating experiments, DQN may be used in MENTS to present a useful comparison with UCT, as both sets of operations or algorithms may apply the same evaluation
Value Estimation in Synthetic Tree:
In embodiments of the present application, softmax values may advantageously be efficiently estimated and back-propagated in a data search tree structure. To verify these observations, the value
estimation error of operations associated with MENTS and operations associated with UCT in both the bandit and tree search setting may be compared. For operations associated with MENTS, the error may
be measured by the absolute difference between the estimated softmax value V[sft](s[0]) and the true softmax state value V[sft]*(s[0]) of the root s[0]. For operations associated with UCT, the error
may be measured by the absolute difference between the Monte Carlo value estimation V(s[0]) and the optimal state value V*(s[0]) at the root.
Reference is made to FIG. 2, which illustrates graphical results 200 of an evaluation of softmax value estimation in a synthetic tree environment, in accordance with an embodiment of the present
application. In FIG. 2, the x-axis illustrates a number of simulations and the y-axis illustrates a value estimation error. A shaded area illustrates a standard error. In some scenarios, the softmax
value may be efficiently estimated based on features of embodiments described herein.
In illustrating experiments, each data point can be averaged over 5×5 independent experiment (5 runs on 5 randomly initialized environment). In all of the test environments, observations illustrated
that operations associated with MENTS estimates the softmax values efficiently. By comparison, observations illustrated that the Monte Carlo estimation used in operations associated with UCT converge
more slowly to the optimal state value, even in the bandit setting (d=1).
Online Planning in Synthetic Tree:
A comparison of operations associated with MENTS and operations associated with UCT for online planning in a synthetic tree environment may be compared based on experiments. Both sets of operations
may utilize Monte Carlo simulation with a uniform rollout policy as the evaluation function. The error may be evaluated by V*(s[0])−Q*(s[0], a[t]), where a[t ]is the action proposed by the algorithm
at simulation step t, and s[0 ]is the root of the synthetic tree. Optimal values Q* and V* may be computed by back-propagating the true values from the leaves when the environment is initialized.
Reference is made to FIG. 3, which illustrates graphical results 300 of an evaluation of online planning in a synthetic tree environment, in accordance with an embodiment of the present application.
In FIG. 3, the x-axis illustrates the number of simulations and the y-axis illustrates the planning error. The shaded area illustrates the standard area. From the illustration in FIG. 3, it may be
observed that when conducting operations of MENTS described in the present application, there may be smaller errors as compared to when conducting operations associated with UCT. The contrast may be
observed in larger domains.
As in the previous experiment described above, each data point can be averaged over 5×5 independent experiment (5 runs on 5 randomly initialized environment). Operations associated with UCT may
converge faster than operations associated with MENTS in the bandit environment (d=1). This may be attributed to an advantage of operations associated with MENTS, being the usage of softmax state
values, which can be efficiently estimated and back-propagated in the search tree. In the bandit case, such an advantage may not exist.
In the tree case (d>0), illustrating examples may show that operations associated with MENTS outperforms operations associated with UCT, especially in a large domain. For example, in synthetic tree
with k=8 d=5, operations associated with UCT may fail to identify an optimal action at the root in some of the random environments, resulting in a large regret given the simulation budgets. However,
operations associated with MENTS may continuously make progress towards the optimal solution in random environments, thereby illustrating that operations associated with MENTS may scale with larger
tree depth.
In illustrating examples, operations associated with MENTS and operations associated with UCT may be compared based on running Atari games. In illustrating examples, each time step may utilize 500
simulations to generate a move. Results may be illustrated in Table 1 below:
TABLE 1 Performance comparison of Atari games playing. Agent BeamRider Breakout Q*bert Seaquest SpaceInvaders DQN 19280 345 14558 1142 625 UCT 21952 367 16010 1129 656 MENTS 18576 386 18336 1161 1503
Table 1 also illustrates results when operations associated with DQN were conducted. In the Atari games Breakout, Q*bert, and SpaceInvaders, the operations associated with MENTS outperforms
operations associated UCT and the operations associated with DQN, as conducted by an agent. In the Atari games BeamRider and Seaquest, the operations of the agents performed similarly, as search
algorithms may only use the DQN as the evaluation function and 500 simulations were applied to generate a move. It may be appreciated that there may be greater illustrated performance for operations
associated with MENTS when larger simulation budgets are used.
Reference is made to FIG. 4, which illustrates a flowchart of a method 400 of updating a multi-level data structure for controlling an agent, in accordance with embodiments of the present
application. The method 400 may be conducted by the processor 102 of the system 100 (FIG. 1). Processor-readable instructions may be stored in memory 106 and may be associated with the decision
selection application 112 or other processor readable applications not illustrated in FIG. 1.
In some embodiments, the agent be a computing device or may be a control module within a computing device. In an example reinforcement learning system, the agent may interact with an environment and
may receive reinforcement learning feedback from the environment. The multi-level data structure may be a representation of the agent interacting with or within the environment, and may be a
structure for storing state data or for storing statistics associated with one or more nodes. The state data or statistics may be associated with basis for an agent to conduct actions or to predict a
next state. In some examples, the state data or statistics may be used for identifying a probability of next successor states, given a current state and one or more actions taken by an agent.
In some embodiments, the multi-level data structure may store data associated with a plurality of nodes, and the plurality of nodes may respectively include data associated with edges. In some
embodiments, the plurality of nodes may be associated with combinations or permutations of decision making paths for sequential actions. In some embodiments, the data associated with nodes and/or
edges may be associated with statistics, such as visit count information, mean state values, softmax state-action value estimates, or similar data. Embodiments of methods described herein may conduct
operations to identify or select subsequent nodes along a node trajectory represented by the data structure.
At operation 402, the processor may access a data structure defining one or more nodes. In some embodiments, the data structure may be stored in the data storage 114 (FIG. 1) of a system 100 and the
processor, at operation 402, may retrieve an instance or copy of the data structure from memory or a network resource. For example, the data structure may be stored as a data set and may be loaded
into random access memory or local memory of the system 100, such that the processor may read the data set or manipulate/update/expanding the data set.
In some embodiments, the processor, at operation 402, may be initializing a new data structure and, thus, may be allocating memory in the data storage 114 for the data structure.
In some embodiments, the data structure may be a data tree structure. In some embodiments, the processor may access the data tree structure such that tree search operations may be conducted. As an
illustrating example, the processor may access the data tree structure for conducting operations associated with a Monte Carlo tree search.
The one or more nodes may include a root node, one or more leaf nodes, and/or one or more non-leaf nodes. A non-leaf node of the one or more nodes may be associated with one or more edges for
traversing to a subsequent node. For example, in a data tree structure, an edge may define a traversal to a node to a subsequent node. The one or more edges may be associated with statistics, such as
a visit count and a softmax state-action value estimate.
As a non-limiting example, the one or more nodes may be associated with a state of the agent and the one or more edges for traversing to a subsequent node may be associated with an action that the
agent may select. To illustrate, in a tree data structure, a given node (e.g., a geographical position of an agent) may be associated with two edges (or branches), where each edge may be connected to
a subsequent node. Each edge may be associated with a decision by the agent to move (e.g., move left or move right). The respective subsequent node may be a subsequent geographical position upon
selecting to move left or to move right. In some embodiments, the visit count may be associated with a number of times that the movement has been selected, and the softmax-state-action value estimate
may be associated with an outcome or a reward. Continuing with the present example, the outcome or the reward may be an indication of proximal distance the agent may be to a desired geographical
position. The outcome may be associated with a particular action conducted by the agent or may be associated with a series of actions conducted by the agent.
In some embodiments, the softmax state-action value estimation may be based on a softmax function defined by:
[τ](r)=τ log Σ[a ]exp(r(a)/τ)
where τ≥0 is a user-specified temperature parameter controlling a degree of exploration, and r(a) is a reward for an action a.
In some embodiments, the processor may conduct one or more operations for updating the multi-level data structure for controlling the agent. In some embodiments, the one or more operations for
updating the multi-level data structure may be conducted for each of a plurality of rounds. For example, the data structure may be associated with a reinforcement learning system, and the
reinforcement learning system may be provided with a budget setting a set number of operations for updating the multi-level data structures. Accordingly, in some embodiments, a round may be
associated with a simulation of a sequence of decisions, which may be associated with a sequence of nodes in the multi-level data structure.
At operation 404, the processor may identify a node trajectory including a series of nodes based on an asymptotically converging sampling policy. The node trajectory may include a root node and a
leaf node of the data structure. In some embodiments, the asymptotically converging sampling policy may be associated with a mean squared error lower bound. Continuing with the above-described
example, as a reinforcement learning system may be provided with a budget for updating the data structure, in some embodiments, the processor may determine a set of node trajectories based on the
asymptotically converging sampling policy for optimizing the number of simulation outcomes that may most efficiently update the data structure.
In some embodiments, node trajectories including two or more nodes may be associated with a sequence of interrelated actions, where an outcome of the sequence of interrelated actions may be of
significance or of interest to the agent. In some embodiments, the node trajectories may be associated with sequences of actions for updating the data structure, and the updated data structure may be
analyzed for pre-emptive decision making or planning operations (e.g., optimizing utility delivery services to minimize transportation costs, optimizing public transit bus schedules to minimize
passenger wait times, or the like).
In some embodiments, the asymptotically converging sampling policy may be based on a distribution defined by:)
$π t ( a ) = ( 1 - λ s ) f τ ( r ^ ) ( a ) + λ t 1 $
where λ[t]=ε/log(t+1) is a decay rate for exploration and the exploration parameter is ε>0.
At operation 406, the processor may determine a reward indication associated with the node trajectory. In some embodiments, determining the reward indication associated with the node trajectory may
be based on a Monte Carlo evaluation simulating the node trajectory from the root node to the leaf node of the data structure.
In some embodiments, the reward indication may be an indication of an outcome associated with a given action or a series of actions. For example, in the example of actions representing movement to a
subsequent position, the reward indication may be an indication of how close the agent may be to the desired geographical position.
In some embodiments, the processor may have previously generated the data structure and stored the data structure in the data storage 114. Thus, in some embodiments, the processor, may at operation
406 determine, based on at least the visit count and the softmax state-action value estimation associated with edges of the data structure, reward indications of one or more node trajectories.
For at least one non-leaf node in the node trajectory, the processor, at operation 408, may update the visit count and the softmax state-action value estimate associated with one or more edges of the
non-leaf node based on the determined value estimation associated with the node trajectory. At operation 408, the processor updates the data structure, such that subsequent analysis or traversal of
the data structure by the agent may assist the agent with identifying a subsequent node traversal or with identifying a node trajectory that may maximize a desired outcome or reward indication.
In some embodiments, updating the visit count and the softmax state-action value estimate associated with one or more edges of the non-leaf node may include operations defined by:
{circumflex over (Q)}[sft](s[t],a[t])←{circumflex over (V)}[sft](s[t+1])
{circumflex over (V)}[sft](s[t])←[τ]({circumflex over (Q)}[sft](s[t]))
where {circumflex over (Q)}[sft](s[t]) is a K dimensional vector, each component of which is {circumflex over (Q)}[sft](s[t], a) for action a, and where [τ] is a softmax function, and N(s[t], a[t])
is the state-action visit count.
In some embodiments, the accessed data structure (from operation 402) may have been generated by the processor using a policy or operations that may not rely on features of softmax state-action value
estimation, described in some examples of the present application. Accordingly, in some embodiments, the processor, at operation 408, may be configured to back-propagate softmax values to update
statistics (e.g., visit count, softmax values, or the like) based on features of embodiments of the present application to increase efficiency or effectiveness of sequential decision or selection
At operation 410, the processor may identify an action for controlling the agent based on the updated multi-level data structure. In some embodiments, the processor may identify the action for
controlling the agent based on statistics that may be stored in the one or more edges associated with a node. In some embodiments, the processor may identify an action based on two or more statistics
associated with the one or more edges. For example, the processor may determine an action for reaching a subsequent node based on a weighted calculation of visit count, softmax state-action value
estimation, and/or other statistics associated with the one or more edges.
In some embodiments, the processor may, for each of the plurality of rounds, determine an action for controlling the agent based on the maximum estimate softmax value at the root node s[0]. For
example, the action for controlling the agent may be defined by:
a[t]=argmax[a ]Q[sft](s[0],a)
In some embodiments, for the leaf node in the node trajectory, the processor may update statistics associated with the leaf node based on operations defined by:
$N ( s L ) ← N ( s L ) + 1$ $V ^ ( s L ) ← V ^ ( s L ) + R - V ^ ( s L ) N ( s L ) ,$
where N(s[L]) may be a visit count associated with the leaf node, and where R may be the value estimation, and where {circumflex over (V)}(s[L]) may be the mean state value.
In some embodiments, the method may further include operations to expand the data structure. For example, the method may include operations to generate a further leaf node in the data structure. In
some embodiments, the one or more edges associated with nodes of the data structure may include statistics, such as visit count and softmax state-action value estimation, that may be initialized with
zero value. For example, the initialization of statistics may be defined by:
∀a∈(s[L])N(s[L],a),{circumflex over (Q)}[sft](s[L],a)←0.
As described herein, embodiments of the present application may include operations for updating a multi-level data structure for controlling an agent based on evaluating statistics associated with a
given node in a search tree based on softmax values back-propagated from simulations. In some examples, softmax values may be estimated at an optimal convergence rate in terms of mean square error.
Further softmax values may be more efficiently back-propagated from simulations in a tree search, such that the probability of the operations described herein failing to identify the best decision at
a given node may decay exponentially.
Reference is made to FIG. 5, which illustrates a block diagram of a computing device 500, in accordance with an embodiment of the present application. As an example, the system 100 or the client
device 110 of FIG. 1 may be implemented using the example computing device 500 of FIG. 5.
The computing device 500 includes at least one processor 502, memory 504, at least one I/O interface 506, and at least one network communication interface 508.
The processor 502 may be a microprocessor or microcontroller, a digital signal processing (DSP) processor, an integrated circuit, a field programmable gate array (FPGA), a reconfigurable processor, a
programmable read-only memory (PROM), or combinations thereof.
The memory 504 may include a computer memory that is located either internally or externally such as, for example, random-access memory (RAM), read-only memory (ROM), compact disc read-only memory
(CDROM), electro-optical memory, magneto-optical memory, erasable programmable read-only memory (EPROM), and electrically-erasable programmable read-only memory (EEPROM), Ferroelectric RAM (FRAM).
The I/O interface 506 may enable the computing device 500 to interconnect with one or more input devices, such as a keyboard, mouse, camera, touch screen and a microphone, or with one or more output
devices such as a display screen and a speaker.
The networking interface 508 may be configured to receive and transmit data sets representative of the machine learning models, for example, to a target data storage or data structures. The target
data storage or data structure may, in some embodiments, reside on a computing device or system such as a mobile device.
In some embodiments, there may be processor-executable instructions or software that, when executed, by a processor converts the computing device 500 into a special purpose computing device to
perform particular operations pursuant to instructions of the processor-executable instructions.
The term “connected” or “coupled to” may include both direct coupling (in which two elements that are coupled to each other contact each other) and indirect coupling (in which at least one additional
element is located between the two elements).
Although the embodiments have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein without departing from the scope. Moreover, the
scope of the present application is not intended to be limited to the particular embodiments of the process, machine, manufacture, composition of matter, means, methods and steps described in the
As one of ordinary skill in the art will readily appreciate from the disclosure, processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be
developed, that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized. Accordingly, the appended claims
are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.
The description provides many example embodiments of the inventive subject matter. Although each embodiment represents a single combination of inventive elements, the inventive subject matter is
considered to include all possible combinations of the disclosed elements. Thus if one embodiment comprises elements A, B, and C, and a second embodiment comprises elements B and D, then the
inventive subject matter is also considered to include other remaining combinations of A, B, C, or D, even if not explicitly disclosed.
The embodiments of the devices, systems and methods described herein may be implemented in a combination of both hardware and software. These embodiments may be implemented on programmable computers,
each computer including at least one processor, a data storage system (including volatile memory or non-volatile memory or other data storage elements or a combination thereof), and at least one
communication interface.
Program code is applied to input data to perform the functions described herein and to generate output information. The output information is applied to one or more output devices. In some
embodiments, the communication interface may be a network communication interface. In embodiments in which elements may be combined, the communication interface may be a software communication
interface, such as those for inter-process communication. In still other embodiments, there may be a combination of communication interfaces implemented as hardware, software, and combination
Throughout the foregoing discussion, numerous references will be made regarding servers, services, interfaces, portals, platforms, or other systems formed from computing devices. It should be
appreciated that the use of such terms is deemed to represent one or more computing devices having at least one processor configured to execute software instructions stored on a computer readable
tangible, non-transitory medium. For example, a server can include one or more computers operating as a web server, database server, or other type of computer server in a manner to fulfill described
roles, responsibilities, or functions.
The technical solution of embodiments may be in the form of a software product. The software product may be stored in a non-volatile or non-transitory storage medium, which can be a compact disk
read-only memory (CD-ROM), a USB flash disk, or a removable hard disk. The software product includes a number of instructions that enable a computer device (personal computer, server, or network
device) to execute the methods provided by the embodiments.
The embodiments described herein are implemented by physical computer hardware, including computing devices, servers, receivers, transmitters, processors, memory, displays, and networks. The
embodiments described herein provide useful physical machines and particularly configured computer hardware arrangements.
As can be understood, the examples described above and illustrated are intended to be exemplary only.
Applicant notes that the described embodiments and examples are illustrative and non-limiting. Practical implementation of the features may incorporate a combination of some or all of the aspects,
and features described herein should not be taken as indications of future or existing product plans. Applicant partakes in both foundational and applied research, and in some cases, the features
described are developed on an exploratory basis.
• [1] Tristan Cazenave. Sequential halving applied to trees. IEEE Transactions on Computational Intelligence and AI in Games, 7(1):102-105, 2015.
• [2] Pierre-Arnaud Coquelin and Rémi Munos. Bandit algorithms for tree search. In Uncertainty in Artificial Intelligence, 2007.
• [3] Rémi Coulom. Efficient selectivity and backup operators in monte-carlo tree search. In International conference on computers and games, pages 72-83. Springer, 2006.
• [4] Tuomas Haarnoja, Haoran Tang, Pieter Abbeel, and Sergey Levine. Reinforcement learning with deep energy-based policies. arXiv preprint arXiv:1702.08165, 2017.
• [5] Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. arXiv preprint arXiv:
1801.01290, 2018.
• [6] Zohar Karnin, Tomer Koren, and Oren Somekh. Almost optimal exploration in multi-armed bandits. In International Conference on Machine Learning, pages 1238-1246, 2013.
• [7] Michael Kearns, Yishay Mansour, and Andrew Y Ng. A sparse sampling algorithm for near-optimal planning in large markov decision processes. Machine learning, 49(2-3):193-208, 2002.
• [8] Piyush Khandelwal, Elad Liebman, Scott Niekum, and Peter Stone. On the analysis of complex backup strategies in monte carlo tree search. In International Conference on Machine Learning, pages
1319-1328, 2016.
• [9] Levente Kocsis and Csaba Szepesvári. Bandit based monte-carlo planning. In European conference on machine learning, pages 282-293. Springer, 2006.
• [10] Tor Lattimore and Csaba Szepesvári. Bandit algorithms. 2018.
• [11] Ofir Nachum, Mohammad Norouzi, Kelvin Xu, and Dale Schuurmans. Bridging the gap between value and policy based reinforcement learning. In Advances in Neural Information Processing Systems,
pages 2775-2785, 2017.
• [12] David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, loannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering
the game of go with deep neural networks and tree search. nature, 529(7587):484, 2016.
• [13] David Silver, Julian Schrittwieser, Karen Simonyan, loannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Mastering the game of go
without human knowledge. Nature, 550(7676):354, 2017.
• [14] Stephen J J Smith and Dana S Nau. An analysis of forward pruning. In AAAI, 1994.
• [15] Richard S Sutton, Andrew G Barto, et al. Introduction to reinforcement learning, volume 135. MIT press Cambridge, 1998.
• [16] David Tolpin and Solomon Eyal Shimony. Mcts based on simple regret. In Twenty-Sixth AAAI Conference on Artificial Intelligence, 2012.
• [17] Chenjun Xiao, Jincheng Mei, and Martin Müller. Memory-augmented monte carlo tree search. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
A. Proofs for Multi-Armed Bandits
We first introduce a Lemma that approximates the exponential function of empirical estimator using delta method [Bickel & Doksum (2015) Bickel and Doksum]. This Lemma will be used for both lower
bound and upper bound analysis.
Lemma 3
Let X[1], . . . , X[n ]be i.i.d. random variables, such that [X[i]]=μ and [X[i]]=σ^2<∞, X[n]=Σ[i=1]^n X[i]/n. The first two moment of exp (X[n]/τ) could be approximated by,
$ [ exp ( X _ n τ ) ] = e μ / τ + σ 2 2 n ( e μ / τ τ 2 ) + R ( n ) ( 5 ) [ exp ( X _ n τ ) ] = σ 2 n ( e μ / τ τ ) 2 + R ′ ( n ) ( 6 )$
where |R(n)|<O(n^−2), |R′(n)|≤O(n^−2).
By Taylor's expansion,
$exp ( X _ n τ ) = e μ / τ + e μ / τ τ 2 ( X _ n - μ ) + e μ / τ 2 τ 2 ( X _ n - μ ) 2 + e ξ / τ 6 τ 3 ( X _ n - μ ) 3$
for some ξ between μ and X[n]. Taking the expectation on both sides,
$ [ exp ( X _ n τ ) ] = e μ / τ + 0 + e μ / τ 2 τ 2 [ X _ n ] + e ξ / τ 6 τ 3 [ ( X _ n - μ ) 3 ] .$
$R ( n ) = e ξ / τ 6 τ 3 [ ( X _ n - μ ) 3 ] .$
By Lemma 5.3.1 of [Bickel & Doksum(2015)Bickel and Doksum], |R(n)|≤O(n^−2), which gives Fq. (5).
Furthermore, note that
$( [ exp ( X _ n τ ) ] ) 2 = ( e μ / τ + σ 2 2 n ( e μ / τ τ 2 ) + R ( n ) ) 2 = e 2 μ / τ + σ 2 n ( e μ / τ τ ) 2 + C 1 n 2 + C 2 R ( n ) + C 3 R ( n ) n$
for some constant C[1], C[2], C[3]. On the other hand, following the same idea of deriving Eq. (5),
$ [ ( exp ( X _ n τ ) ) 2 ] = e 2 μ / τ + 2 σ 2 n ( e μ / τ τ ) 2 + R ~ ( n )$
where |R(n)|≤O(n^−2). The proof of Eq. (6) ends by taking the difference of the above two equations.
A.1 Proof of Theorem 1
We consider the learning problem in a Bayesian setting, and prove that
$lim t → ∞ [ t ( U - U ^ t ] 2 - σ 2 τ 2 ( ∑ a π _ ( a ) e r ( a ) / τ ) 2 ] ≥ 0 ,$
where the expectation is taken on the randomness of the algorithm, the rewards r[1:K], and the observation X[k,i ]given r[1:K]. Thus the existence of r[1:K ]is guaranteed since r[1:K ]satisfies the
property in expectation.
The benefit of considering the problem in a Bayesian setting is that the Bayes estimate of U, Ū[t ]for the squared loss [(U−Û[t])^2] is the posterior mean, i.e. the conditional expectation of U given
the observations X[k,i]. Such Bayes estimator can further be decomposed into the Bayes estimator of each arm, even without the assumption that Û[t ]is decomposable or Û[t ]has (asymptotic) unbiased
estimator for each arm.
We still need some preparation before proving the main result. The following lemma shows that for an algorithm that performs well on all possible environments, it must pull each arm at least in an
order of log(t) for t rounds.
Lemma 4
For any algorithm A such that
$ℰ t = O ( 1 t ) ,$
it holds that N[t](k)=Ω(log(t)) for any arm k.
Unlike in the regret analysis for stochastic multi-armed bandits, where one only cares about how many times the suboptimal arms should be pulled, the log(t) lower bound on N[t](k) for suboptimal arms
is not strong enough to provides a tight lower bound of .
We define the learning problem in the Bayesian setting, as follows. Assume that the reward of each arm r[k ]is independently sampled from a Gaussian prior (0, 1/τ[0]). At time step t, conditioned on
the action A[t], we define the problem such that the reward X[t ]at time step t is sampled from (τ[A][t]σ^2) independent to all the previous observations. Without loss of generality, we assume that σ
=1 and τ=1. We can also assume that π(a)=1 for any a. To see why, note that
Thus, one can always define the surrogate reward function as {tilde over (r)}(a)=r(a)+log(π(a)) for any fixed π.
By the tower rule and the fact that Bayes estimator is the minimizer of the mean squared error,
$ [ t ( U - U ^ t ) 2 ] ≥ [ t ( U - U ~ t ) 2 ] = [ [ t ( U - U ~ t ) 2 r ] ] ,$
where Ũ[t ]is the Bayesian estimator of U at time step t. It then suffices to prove that
$lim t → ∞ [ t ( U - U ~ t ) 2 r ] ≥ - ( ∑ a e r ( a ) ) 2$
for any r. The rest of the proof is always conditioned on r. Note that the Bayes estimator of U can be decomposed as
$U ~ t = [ U X i , i = 1 , … , K ] = ∑ j [ exp ( r j ) X i , i = 1 , … , K ] = ∑ j [ exp ( r j ) X j ] .$
Further noting that exp(X[i,k]) has a log-normal distribution with a Gaussian prior, its posterior distribution is still log-normal with the mean
$exp ( ∑ k = 1 N t ( j ) X j , k + 1 / 2 τ 0 + N t ( j ) ) .$
Therefore, the Bayesian estimator of U is
$U ~ t = ∑ j exp ( ∑ k = 1 N t ( j ) X j , k + 1 / 2 τ 0 + N t ( j ) ) .$
It remains to bound (U−Ũ[t])^2 conditioned on r. Note that
$( U - U ~ t ) 2 = ( ∑ j exp ( r j ) - exp ( ∑ k = 1 N t ( j ) X j , k + 1 / 2 τ 0 + N t ( j ) ) ) 2 = ∑ j Δ j 2 + ∑ i ≠ j Δ j Δ i ,$
$Δ j = exp ( r j ) - exp ( ∑ k = 1 N t ( j ) X j , k + 1 / 2 τ 0 + N t ( j ) ) .$
Further Let
$ Φ j = ∑ k = 1 N t ( j ) X j , k + 1 / 2 τ 0 + N t ( j ) = N t ( j ) τ 0 + N t ( j ) r ^ j + 1 / 2 τ 0 + N t ( j ) , and$ $Γ j = Φ j - r j = ∑ k = 1 N t ( j ) X j , k +
1 / 2 τ 0 + N t ( j ) - r j = N t ( j ) N t ( j ) + τ 0 ( r ^ j - r j ) + 1 / 2 - τ 0 r j τ 0 + N t ( j ) .$
By Lemma 6, we have
$ [ Δ j N t ( j ) , r ] = O ( 1 N t ( j ) )$ $and$ $ [ Δ j 2 N t ( j ) , r ] = exp ( 2 r ( j ) ) ( N t ( j ) ( N t ( j ) + τ 0 ) 2 + O ( 1 N t 2 ( j ) ) ) .$
Finally, define P[t](j)=N[t](j)/t and let τ[0]→0,
$lim t → ∞ t [ ( U - U ~ t ) 2 r ] = lim t → ∞ t [ [ ( U - U ~ t ) 2 N t ( 1 ) , … , N t ( k ) , r ] ] = lim t → ∞ [ ∑ j exp ( 2 r ( j ) ) + O
( 1 N t ( j ) ) P t ( j ) ] ≥ ( ∑ a e r ( a ) ) 2$
where the last inequality follows by Cauchy-Schwarz inequality and Lemma 4. Note that for the inequality to hold there must be for all action k∈[K], N[t](k)=N[t]*(k).
For the general case, where σ,τ≠1, we can simply scale the reward by τ, then the variance of X[j,k], is
$σ 2 τ 2 .$
The proof still holds and we obtain the following inequality,
$lim t → ∞ t [ ( U - U ~ t ) 2 r ] ≥ σ 2 τ 2 ( ∑ a π _ ( a ) exp ( r ( a ) / τ ) ) 2 .$
A.2 Concentration of N[t](a) in Bandit
Define Ñ[t](k)=Σ[s ]τ[s](k), where τ[s ]is the policy followed by REW at time step s. By Theorem 2.3 in [Wainwright (2015)] or [Wainwright (2017)], we have the following concentration result.
$ℙ ( N t ( k ) - N ~ t ( k ) > ϵ ) ≤ 2 exp - ( ϵ 2 2 ∑ s = 1 t σ s 2 ) ≤ 2 exp ( - 2 ϵ 2 t ) ,$
where σ[s]^2≤¼ is the variance of Benoulli distribution with p=τ[s](k) at time step s. Denote the event
{tilde over (E)}[e]={∀k∈[K],|Ñ[t](k)−N[t](k)|<ϵ}.
Thus we have
$ℙ ( E ~ ϵ c ) ≤ 2 K exp ( - 2 ϵ 2 t ) .$
It remains to bound (|Ñ[t](k)−N[t]*(k)|≥ϵ).
Lemma 5
$λ s = K log ( 1 + s ) ,$
Algorithm 1 guarantees that, for t≥4.
$ℙ ( r - r ^ t ∞ ≥ 2 σ log ( 2 + t ) ) ≤ 4 K exp ( - t ( log ( 2 + t ) ) 3 )$
By the choice of
$λ s = K log ( 1 + s ) ,$
it follows that for all a and t≥4,
$N ~ t ( a ) = ∑ s = 1 t π s ( a ) ≥ ∑ s = 1 t 1 log ( 1 + s ) ≥ ∑ s = 1 t 1 log ( 1 + s ) - s / ( s + 1 ) ( log ( 1 + s ) ) 2 ≥ ∫ 1 1 + t 1 log ( 1 + s ) - s / ( s + 1 ) (
log ( 1 + s ) ) 2 ds = 1 + t log ( 2 + t ) - 1 log 2 ≥ t 2 log ( 2 + t )$
Conditioned on the event {tilde over (E)}[e ]where we set
$ϵ = t 4 log ( 2 + t ) ,$
it follows that
$N t ( a ) ≥ t 4 log ( 2 + t ) .$
Then, for any fixed k by the definition of sub-Gaussian,
$ℙ ( r ( k ) - r ^ t ( k ) > 8 σ 2 log ( 2 δ ) log ( 2 + t ) t ) ≤ ℙ ( r ( k ) - r ^ t ( k ) > 2 σ 2 log ( 2 δ ) N t ( k ) ) ≤ δ .$
Let δ satisfy that
$log ( 2 / δ ) = t ( log ( 2 + t ) ) 3 , ℙ ( r ( k ) - r ^ t ( k ) > 2 σ log ( 2 + t ) ) ≤ 2 exp ( - t ( log ( 2 + t ) ) 3 )$
Therefore for t≥2
$ℙ ( r t - r ^ t ∞ ≥ 2 σ log ( 2 + t ) ) ≤ ℙ ( r t - r t ∞ ≥ 2 σ log ( 2 + t ) E ~ ϵ ) + ℙ ( E ~ ϵ C ) ≤ ∑ k ℙ ( r ( k ) - r ^ t ( k ) > 2 σ log ( 2 + t ) E
~ ϵ ) + ℙ ( E ~ ϵ C ) ≤ 2 K exp ( - t ( log ( 2 + t ) ) 3 ) + 2 K exp ( - t 2 ( log ( t + 2 ) ) 2 ) ≤ 4 K exp ( - t ( log ( 2 + t ) ) 3 )$
Theorem 6
$λ s = K log ( 1 + s ) ,$
Algorithm 1 guarantees that, there exists some constant C only depending on K, τ and σ, such that,
$ℙ ( N t ( a ) - N t * ( a ) > Ct log ( 2 + t ) ) ≤ O ( Kt 2 exp ( - t ( log t ) 3 ) ) .$
We denote the event that
${ r - r ^ t ∞ < 2 σ log ( 2 + t ) }$
by E[r][t]. Note that by the definition of π[s](k),
|π[s](k)−π*(k)|≤|{circumflex over (π)}[s](k)−π*(k)|+λ[s].
Thus, to bound |N[t](k)−N[t]*(k)|, conditioned on the event ∩[i=1]^tE[r][t ]and for t≥4 there is,
$ N ~ t ( k ) - N t * ( k ) ≤ ∑ s = 1 t π ^ s ( k ) - π * ( k ) + ∑ s = 1 t λ s ≤ ( 1 + 1 τ ) ∑ s = 1 t r ^ s - r ∞ + ∑ s = 1 t λ s ( by Lemma 7 ) ≤ ( 1
+ 1 τ ) ∑ s = 1 t 2 σ log ( 2 + s ) + ∑ s = 1 t λ s ( by Lemma 5 ) ≤ ( 1 + 1 τ ) ∫ s = 0 t 2 σ log ( 2 + s ) ds + ∫ s = 0 t K log ( 1 + s ) ds ≤ Ct log ( 2
+ t )$
for some constant C depending on K, σ and τ. Finally,
$ℙ ( N ~ t ( k ) - N t * ( k ) ≥ Ct log ( 2 + t ) ) ≤ ∑ i = 1 t ℙ ( ξ r i c ) = ∑ i = 1 t 4 K exp ( - t ( log ( 2 + t ) ) 3 ) ≤ 4 Kt exp ( - t ( log ( 2 + t ) )
3 ) .$
$ℙ ( N t ( k ) - N t * ( k ) ≥ ( 1 + C ) t log ( 2 + t ) ) ≤ ℙ ( N ~ t ( k ) - N t * ( k ) ≥ Ct log ( 2 + t ) ) + ℙ ( N t ( k ) - N ~ t ( k ) > t log ( 2 + t ) )
≤ 4 Kt exp ( - t log ( 2 + t ) 3 ) + 2 K exp ( - 2 t log ( 2 + t ) 2 ) ≤ O ( Kt 2 exp ( - t ( log t ) 3 ) )$
A.3 Proof of Theorem 2
[Proof of Theorem 2] Let
$δ t = t 5 6$
with some constant C. Define the following set
and its complementary set G[t]^c={1, 2, . . . , t}\G[t].
By Theorem 6, ∀a∈{1, . . . , K}, with probability at least 1−O(t exp(−C[2]t^1/3)), N[t](a)∈G[t]. By law of total expectation and Lemma 3,
$ [ exp ( r ^ t ( a ) τ ) ] = ∑ s = 1 t ℙ ( N t ( a ) = s ) · [ exp ( r ^ t ( a ) τ ) N t ( a ) = s ] = ∑ s = 1 t ℙ ( N t ( a ) = s ) · ( e r ( a ) / τ + σ 2 2
s ( e r ( a ) / τ τ 2 ) + O ( s - 3 2 ) ) = e r ( a ) / τ + ∑ s = 1 t ℙ ( N t ( a ) = s ) · ( σ 2 2 s ( e r ( a / τ ) τ 2 ) + O ( s - 3 2 ) ) ( 7 )$
We divide the summation in two parts. For s∈G[t]^c, by Theorem 6,
$∑ s ∈ t c ℙ ( N t ( a ) = s ) · ( σ 2 2 s ( e r ( a ) / t τ 2 ) + O ( s - 3 2 ) ) ≤ O ( 1 t ) ( 8 )$
For s∈G[t],
$∑ s ∈ t c ℙ ( N t ( a ) = s ) · ( σ 2 2 s ( e r ( a ) / τ τ 2 ) + O ( s - 3 2 ) ) ≤ O ( ( N t * ( a ) - δ t ) - 1 ) ( 9 )$
Combine the above together,
$lim t → ∞ t ( U - [ U ^ t ] ) 2 = lim t → ∞ t ( ∑ a π _ ( a ) ( [ exp ( r ^ t ( a ) τ ) ] - exp ( r t ( a ) τ ) ) ) 2 = lim τ → ∞ t ( ∑ a π _ ( a ) ( O ( 1
t ) + O ( ( N t * ( a ) - δ t ) - 1 ) ) ) 2 = 0 ( 10 )$
To bound the variance, by the law of total variance,
$ [ exp ( r ^ t ( a ) τ ) ] = [ [ exp ( r ^ t ( a ) τ ) N t ( a ) ] ] + [ [ exp ( r ^ t ( a ) τ ) N t ( a ) ] ] ( 10 )$
Note that by Lemma 3, the first term is
$ [ [ exp ( r ^ t ( a ) τ N t ( a ) ] ] = ∑ s = 1 t ℙ ( N t ( a ) = s ) [ exp ( r ^ t ( a ) τ ) N t ( a ) ] = ∑ s = 1 t ℙ ( N t ( a ) = s ) ( σ 2 s ( e r (
a ) / τ τ ) 2 + O ( s - 3 2 ) )$
Using the same idea in Eq. (8) and Eq. (9), we consider the summation in two parts. For s∈G[t]^c,
$∑ s ∈ t c ℙ ( N t ( a ) = s ) · ( σ 2 s ( e r ( a ) / τ τ ) 2 + O ( s - 3 2 ) ) ≤ O ( 1 t )$
For s∈G[t],
$∑ s ∈ t ℙ ( N t ( a ) = s ) · ( σ 2 s ( e r ( a ) / τ τ ) 2 + O ( s - 3 2 ) ) ≤ σ 2 τ 2 · e 2 r ( a ) / τ N t * ( a ) - δ t + O ( ( N t * ( a ) - δ t ) - 3 2 )$
Put these together we have,
$ [ [ exp ( r ^ t ( a ) τ ) N t ( a ) ] ] ≤ O ( 1 t ) + σ 2 τ 2 · e 2 r ( a ) / τ N t * ( a ) - δ t + O ( ( N t * ( a ) - δ t ) - 3 2 ) ( 11 )$
For the second term of Eq. (10) we have,
$ [ [ exp ( r ^ t ( a ) τ ) N t ( a ) ] ] = [ ( [ exp ( r ^ t ( a ) τ ) N t ( a ) ] ) 2 ] - ( [ exp ( r ^ t ( a ) τ ) ] ) 2$
For the first term, by Lemma 3,
$ [ ( [ exp ( r ^ t ( a ) τ ) N t ( a ) ] ) 2 ] = ∑ s = 1 t ℙ ( N t ( a ) = s ) ( [ exp ( r ^ t ( a ) τ ) N t ( a ) ] ) 2 = ∑ s = 1 t ℙ ( N t ( a ) = s ) (
e 2 r ( a ) / τ + σ 2 s ( e r ( a ) / τ τ ) 2 + O ( s - 3 / 2 ) ) ≤ e 2 r ( a ) / τ + O ( 1 t ) + σ 2 τ 2 · e 2 r ( a ) / τ N t * ( a ) - δ t + O ( ( N t * ( a ) - δ t ) - 3
2 )$
where the last inequality follows by the same idea of proving Eq. (11). For the second term of Eq. (12), by Eq. (7),
$( [ exp ( r ^ t ( a ) τ ) ] ) 2 ≥ exp ( 2 r ( a ) τ )$
Then we have,
$ [ [ exp ( r ^ t ( a ) τ ) N t ( a ) ] ] ≤ O ( 1 t ) + σ 2 τ 2 · e 2 r ( a ) / τ N t * ( a ) - δ t + O ( ( N t * ( a ) - δ t ) - 3 2 ) ( 12 )$
Note that
$lim t → ∞ t · σ 2 τ 2 · e 2 r ( a ) / τ N t * ( a ) - δ t = lim t → ∞ σ 2 τ 2 · e 2 r ( a ) / τ π * ( a ) - δ t t = σ 2 τ 2 · e r ( a ) / τ π _ ( a ) · ( ∑ a π _ ( a ) exp
( r ( a ) / τ ) ) ( 13 )$
Combine Eq. (11), Eq. (12) and Eq. (13) together,
$lim t → ∞ t [ U ^ t ] = lim t → ∞ t ( ∑ a π _ 2 ( a ) [ exp ( r ^ t ( a ) τ ) ] ) ≤ lim t → ∞ t ( ∑ a π _ 2 ( a ) ( O ( 1 t ) + σ 2 τ 2 · e 2 r (
a ) / τ N i * ( a ) - δ t + O ( ( N t * ( a ) - δ t ) - 3 2 ) ) ) = σ 2 τ 2 ( ∑ a π _ ( a ) exp ( r ( a ) / τ ) ) 2$
A.4 Technical Lemmas Proof:
[Proof of Lemma 4] Consider two gaussian environments v[1 ]and v[2 ]with unit variance. The vector of means of the reward per arm in v[1 ]is (r(1), . . . , r(K)) and (r(1)+2ϵ, r(2), . . . , r(K)) in
v[2]. Define
U[1]=Σ[i=1]^Ke^r^i, U[2]=e^r^1^+2c+Σ[i=2]^Ke^r^1
Let [1 ]and [2 ]be the distribution induced by v[1 ]and v[2 ]respectively. Denote the event,
By definition, the error [t,v][1 ]under v[1 ]satisfies
and the error [t,v][2 ]under v[2 ]satisfies
Therefore, under the assumption that the algorithm suffers O(1/t) error in both environments.
$O ( 1 t ) = ɛ t , P 1 + ɛ t , P 2 ≥ ℙ 1 ( E ) e 2 r 1 ϵ 2 + ℙ 2 ( E c ) e 2 r 1 ϵ 2 = e 2 r 1 ϵ 2 ( ℙ 1 ( E ) + ℙ 2 ( E c ) ) ≥ 1 2 e 2 r 1 ϵ 2 e - 2 N t ( k
) ϵ 2 .$
where the last inequality follows by Pinsker's inequality and Divergence decomposition Lemma [Lattimore & Szepesvári (2018) Lattimore and Szepesvári]. Thus,
Lemma 6
$Δ j = exp ( r j ) - exp ( ∑ k = 1 N t ( j ) X j , k + 1 / 2 τ 0 + N t ( j ) )$ $and$ $Γ j = N t ( j ) N t ( j ) + τ 0 ( r ^ j - r j ) + 1 / 2 - τ 0 r j τ 0 + N t ( j
) . [ Δ j N t ( j ) , r ] = e r j - [ e Φ j N t ( j ) , r ] = e r j ( 1 - [ e Γ j N t ( j ) , r ] ) = O ( 1 N t ( j ) )$ $and$ $ [ Δ j 2 N t ( j ) , r ] =
exp ( 2 r ( j ) ) ( N t ( j ) ( N t ( j ) + σ 0 ) 2 + O ( 1 N t 2 ( j ) ) ) .$
By the fact that the variance of X[j,k ]given r is 1,
$ [ Γ N t ( j ) , r ] = 1 / 2 - τ 0 r j τ 0 + N t ( j ) . [ Γ 2 N t ( j ) , r ] = N t ( j ) ( N t ( j ) + τ 0 ) 2 + O ( 1 N t 2 ( j ) ) ,$
Lemma 7
Given two soft indmax policies, π^(1)=f[τ](r^(1)) and π^(2)=f[τ](r^(2)), we have
$ π ( 1 ) - π ( 2 ) ∞ ≤ ( 1 + 1 τ ) r ( 1 ) - r ( 2 ) ∞$
[Proof of Lemma 7] Note that
$ π ( 1 ) - π ( 2 ) ∞ ≤ log π ( 1 ) - log π ( 2 ) ∞ ≤ 1 τ r ( 1 ) - r ( 2 ) ∞ + ℱ τ ( r ( 1 ) ) - ℱ τ ( r ( 2 ) ) $
The proof ends by using the fact |[τ](r^(1))−[τ](r^(2))|≤∥r^(1)−r^(2)∥[∞], which follows Lemma 8 of [Nachum et al. (2017) Nachum, Norouzi, Xu, and Schuurmans].
B Proofs for Tree
This section contains the detailed proof for Theorem 4 and Theorem 5.
Proof: Theorem 4
We prove this using induction on the depth of tree.
Base Case:
For s∈,
$ℙ ( U ^ ( s ) - U ( s ) ≥ ϵ U ( s ) E ) ≤ ℙ ( e V ^ ( s ) / τ ≥ ( 1 + ϵ ) e V ( s ) / τ ξ N ) + ℙ ( e V ^ ( s ) / τ ≤ ( 1 - ϵ ) e V ( s ) / τ E
) = ℙ ( V ^ ( s ) ≥ τ log ( 1 + ϵ ) + V ( s ) E ) + ℙ ( V ^ ( s ) ≤ τ log ( 1 - ϵ ) + V ( s ) E ) ≤ 2 exp ( - ϵ 2 N t ( s ) 16 σ 2 ) .$
where the last inequality follows by the subgaussian assumption of {circumflex over (V)}(s) on the leaf.
For some internal node s, assume that the values of its children, {U(1), . . . . , U(K)}, can be estimated with the concentration property. We prove,
$ℙ ( U ^ ( s ) - U ( s ) ≥ ϵ U ( s ) ) ≤ C ~ exp ( - ϵ 2 t C ~ σ 2 )$
for some constant {tilde over (C)}. Note that by the definition of U, we have Û(s)=Σ[k ]Û(k) and U(s)=Σ[k ]U(k). For any α[i ]such that Σ[k ]α[k]U(k)=ϵU(s),
$ℙ ( U ^ ( s ) - U ( s ) ≥ ϵ U ( s ) E ) ≤ ∑ k ℙ ( U ^ ( k ) - U ( k ) ≥ α k U ( k ) E ) ≤ ∑ k C exp ( - α k 2 π k * t 2 C σ 2 ) ,$
where the last inequality is by the induction hypothesis. Let α[k]^2π*(k)=M where
$M = ϵ U ( s ) Σ k U ( k ) / π * ( k ) .$
One can verify that Σ[k ]α[k]U(k)=ϵU(s). Therefore,
$ℙ ( U ^ ( s ) - U ( s ) ≥ ϵ U ( s ) ) ≤ ∑ k C exp ( - ϵ 2 t C σ 2 ( U ( s ) Σ k U ( k ) / π * ( k ) ) 2 ) ≤ KC exp ( - ϵ 2 t C σ 2 U ( s )
2 ( Σ k U ( s ) U ( k ) ) 2 ) ≤ KC exp ( - ϵ 2 t C σ 2 U ( s ) 2 ( Σ k U ( k ) ) 2 ) ≤ KC exp ( - 1 2 K ϵ 2 t C σ 2 ) ≤ C ~ 1 exp ( - ϵ 2 t C ~
2 σ 2 ) .$
Picking {tilde over (C)}=max {{tilde over (C)}[1], {tilde over (C)}[2]} leads to the conclusion.
Proof: Theorem 5
Let {1, . . . , K} be the arms of the root s[0]. Without of loss of generality, we assume that 1 is the arm with largest softmax value. The probability of SBT choosing a wrong arm at s[0 ]is,
$ℙ ( ∃ k ∈ [ K ] , U ^ ( k ) > U ^ ( 1 ) ) ≤ ℙ ( ∃ k ∈ [ K ] , U ^ ( k ) > U ^ ( 1 ) E ) + ℙ ( E c ) ≤ ∑ k ℙ ( U ^ ( k ) > U ^ ( 1 ) E ) + ℙ ( E c ) .$
Since we have developed an upper bound on (E^c), it remains to bound (Û(k)>Û(1)|E). Note that
$ℙ ( U ^ ( k ) > U ^ ( 1 ) E ) = ℙ ( U ^ ( k ) - U ^ ( 1 ) - U ( k ) - U ( 1 ) > U ( 1 ) - U ( k ) E ) ≤ ℙ ( U ^ ( 1 ) - U ( 1 ) > α 1 U ( 1 ) E ) + ℙ ( U
^ ( k ) - U ( k ) > α k U ( k ) E ) ≤ C exp ( - N t * ( 1 ) α 1 2 2 C σ 2 ) + C exp ( - N t * ( k ) α k 2 2 C σ 2 )$
where α[k]U(k)+α[1]U(1)=U(1)−U(k). Picking
$α = ( U ( 1 ) - U ( k ) ) π * ( 1 ) π * ( 1 ) U ( 1 ) + π * ( k ) U ( k ) and β = ( U ( 1 ) - U ( k ) ) π * ( 1 ) π * ( 1 ) U ( 1 ) + π * ( k ) U ( k
) ,$
we have (Û(k)>Û(1)) is upper bounded by
$2 C exp ( - t 2 C σ 2 ( U ( 1 ) - U ( k ) ) 2 Σ b U ( b ) ) .$
$ℙ ( ∃ k ∈ [ K ] , U ^ ( k ) > U ^ ( 1 ) ) ≤ ∑ k ( U ^ ( k ) > U ^ ( 1 ) E ) + ℙ ( E c ) ≤ 2 C ∑ k exp ( - t 2 C σ 2 ( U 1 - U k ) 2 Σ b U b ) + Ct 2 exp ( - t
( log t ) 3 ) ,$
for some constant C not depending on t.
1. A method of updating a multi-level data structure for controlling an agent, the method comprising:
accessing a data structure defining one or more nodes, wherein a non-leaf node of the one or more nodes is associated with one or more edges for traversing to a subsequent node, and wherein an
edge of the one or more edges is associated with a visit count and a softmax state-action value estimation;
for each of a plurality of rounds,
identifying a node trajectory including a series of nodes based on an asymptotically converging sampling policy, wherein the node trajectory includes a root node and a leaf node of the data
determining a reward indication associated with the node trajectory; and
for at least one non-leaf node in the node trajectory, updating the visit count and the softmax state-action value estimate associated with one or more edges of the non-leaf node based on the
determined reward indication associated with the node trajectory.
2. The method of claim 1, comprising determining an action for controlling the agent based on the maximum softmax state-action value estimation at a given node.
3. The method of claim 1, wherein the asymptotically converging sampling policy is based on a distribution defined by: π t ( a ) = ( 1 - λ s ) f τ ( r ^ ) ( a ) + λ t 1 where λi=ε/log
(t+1) is a decay rate for exploration and the exploration parameter is e>0.
4. The method of claim 1, wherein the asymptotically converging sampling policy is associated with a mean squared error lower bound.
5. The method of claim 1, wherein the softmax state-action value estimation is based on a softmax function defined by: where τ≥0 is a user-specified temperature parameter controlling a degree of
exploration, and r(a) is a reward for an action a.
τ(r)=τ log Σa exp(r(a)/τ)
6. The method of claim 1, wherein updating the visit count and the softmax state-action value estimate associated with one or more edges of the non-leaf node includes: wherein {circumflex over (Q)}
sft(st) is a K dimensional vector, each component of which is {circumflex over (Q)}sft(st, a) for action a, and wherein τ is a softmax function, and N(st,at) is the state-action visit count.
{circumflex over (Q)}sft(st,at)←{circumflex over (V)}sft(st+1)
{circumflex over (V)}sft(st)←τ({circumflex over (Q)}sft(st))
7. The method of claim 1, wherein determining a reward indication associated with the node trajectory is based on a Monte Carlo evaluation simulating the node trajectory from the root node to the
leaf node of the data structure.
8. The method of claim 1, comprising: for the leaf node in the node trajectory, updating statistics associated with the leaf node by: N ( s L ) ← N ( s L ) + 1 V ^ ( s L ) ← V ^ ( s L ) + R
- V ^ ( s L ) N ( s L ) wherein N(sL) is a visit count associated with the leaf node, and wherein R is the reward indication, and wherein {circumflex over (V)}(sL) is the mean state value.
9. The method of claim 1, comprising expanding the data structure by generating a further leaf node in the data structure, and wherein for at least one edge associated with the data structure,
initialize visit count and softmax state-action value estimation to zero as defined by:
∀a∈(sL)N(sL,a),{acute over (Q)}(sL,a)←0
10. The method of claim 1, wherein the data structure is a tree data structure.
11. A system for updating a multi-level data structure for controlling an agent, the system comprising:
a processor; and
a memory coupled to the processor and storing processor-executable instructions that, when executed, configure the processor to: access a data structure defining one or more nodes, wherein a
non-leaf node of the one or more nodes is associated with one or more edges for traversing to a subsequent node, and wherein an edge of the one or more edges is associated with a visit count and
a softmax state-action value estimation;
for each of a plurality of rounds, identify a node trajectory including a series of nodes based on an asymptotically converging sampling policy, wherein the node trajectory includes a root node
and a leaf node of the data structure; determine a reward indication associated with the node trajectory; and for at least one non-leaf node in the node trajectory, update the visit count and the
softmax state-action value estimate associated with one or more edges of the non-leaf node based on the determined reward indication associated with the node trajectory.
12. The system of claim 11, comprising processor-executable instructions that, when executed, configure the processor to determine an action for controlling the agent based on the maximum softmax
state-action value estimation at a given node.
13. The system of claim 11, wherein the asymptotically converging sampling policy is based on a distribution defined by: π t ( a ) = ( 1 - λ s ) f τ ( r ^ ) ( a ) + λ t 1 where λt=ε||
/log(t+1) is a decay rate for exploration and the exploration parameter is ε>0.
14. The system of claim 11, wherein the asymptotically converging sampling policy is associated with a mean squared error lower bound.
15. The system of claim 11, wherein the softmax state-action value estimation is based on a softmax function defined by: where τ≥0 is a user-specified temperature parameter controlling a degree of
exploration, and r(a) is a reward for an action a.
τ(r)=τ log Σa exp(r(a)/τ)
16. The system of claim 11, wherein updating the visit count and the softmax state-action value estimate associated with the one or more edges of the non-leaf node includes: wherein {circumflex over
(Q)}sft(st) is a K dimensional vector, each component of which is {circumflex over (Q)}sft(st, a) for action a, and wherein τ is a softmax function, and N(st, at) is the state-action visit count.
{circumflex over (Q)}sft(st,at)←{circumflex over (V)}sft(st+1)
{circumflex over (V)}sft(st)←τ({circumflex over (Q)}sft(st))
17. The system of claim 11, wherein determining a reward indication associated with the node trajectory is based on a Monte Carlo evaluation simulating the node trajectory from the root node to the
leaf node of the data structure.
18. The system of claim 11, comprising processor-executable instructions that, when executed, configure the processor to: for the leaf node in the node trajectory, updating statistics associated with
the leaf node by: N ( s L ) ← N ( s L ) + 1 V ^ ( s L ) ← V ^ ( s L ) + R - V ^ ( s L ) N ( s L ) wherein N(sL) is a visit count associated with the leaf node, and wherein R is the
reward indication, and wherein {circumflex over (V)}(sL) is the mean state value.
19. A non-transitory computer-readable medium or media having stored thereon machine interpretable instructions which, when executed by a processor, cause the processor to perform a computer
implemented method of updating a multi-level data structure for controlling an agent, the method comprising:
accessing a data structure defining one or more nodes, wherein a non-leaf node of the one or more nodes is associated with one or more edges for traversing to a subsequent node, and wherein an
edge of the one or more edges is associated with a visit count and a softmax state-action value estimation;
for each of a plurality of rounds, identifying a node trajectory including a series of nodes based on an asymptotically converging sampling policy, wherein the node trajectory includes a root
node and a leaf node of the data structure; determining a reward indication associated with the node trajectory; and for at least one non-leaf node in the node trajectory, updating the visit
count and the softmax state-action value estimate associated with one or more edges of the non-leaf node based on the determined reward indication associated with the node trajectory.
20. A multi-level data structure for controlling an agent comprising a plurality of nodes including a non-leaf node associated with one or more edges for traversing to a subsequent node, wherein an
edge of the one or more edges is associated with a visit count and a softmax state-action value estimation, wherein the multi-level data structure was updated by a method comprising:
accessing the data structure;
for each of a plurality of rounds, identifying a node trajectory including a series of nodes based on an asymptotically converging sampling policy, wherein the node trajectory includes a root
node and a leaf node of the data structure; determining a reward indication associated with the node trajectory; and for at least one non-leaf node in the node trajectory, updating the visit
count and the softmax state-action value estimate associated with one or more edges of the non-leaf node based on the determined reward indication associated with the node trajectory.
Patent History
Publication number
: 20200234167
: Jan 23, 2020
Publication Date
: Jul 23, 2020
Patent Grant number
11593693 Inventors
Chenjun XIAO
Ruitong HUANG
Application Number
: 16/751,203
International Classification: G06N 7/00 (20060101); G06F 17/11 (20060101); G06K 9/62 (20060101); G06F 16/901 (20060101); | {"url":"https://patents.justia.com/patent/20200234167","timestamp":"2024-11-05T08:09:00Z","content_type":"text/html","content_length":"315454","record_id":"<urn:uuid:15382714-a49b-43a7-8131-d96d1cc4d482>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00769.warc.gz"} |
[EM] ICT definition. Presumed Kemeny definition.
Kevin Venzke stepjak at yahoo.fr
Mon Apr 23 15:34:23 PDT 2012
Hi Mike,
>ICT definition:
>(as described by Chris Benham, unless I've made an error)
>Iff the number of voters ranking X over Y, plus the number of voters equal-top-rating X and Y, is greater than the
>number of voters ranking Y over X, then X "beats" Y.
>Of course that's a very weak meaning for "beat", and it's possible for X and Y to both beat eachother in that sense. Of course, when
>I say "beat" (with or without the quotes), I mean it in the above-defined sense.
>If there's exactly one beats-all candidate (candidate who beats all of the others), then s/he wins.
>If not, then the winner is the beats-all candidatate who is ranked in 1st place on the most ballots.
>[end of ICT definition]
>As I said, ICT meets FBC, and is defection resistant. Maybe so defection-resistant as to be called defection-proof.
>If Kevin &/or Chis are listening right now, I have a question:
>What if, instead of defining "beat" as above, I said:
>X is unbeaten by Y iff the number of voters ranking X over Y, plus the number of voters equal-top-ranking X and Y, is at least equal
>to the number of voters ranking Y over X.
>If there is exactly one candidate not beaten by anyone, then s/he wins.
>Otherwise, the winner is the unbeaten candidate who is ranked in 1st place on the most ballots
>[end of questioned alternative definition of ICT]
>Would that still meet FBC and be defection-resistant? Would it lack some other desirable property,
>or acquire some undesirable property?
I haven't looked at ICT really. But your revised wording is how ICA is worded. It directs you to look
for a single unbeaten candidate. I think it sounds better to say you're picking the winner from among the
undefeated candidates, than from among the candidates who defeated everybody including each other.
I'm pretty sure your definitions are identical... Unless I'm missing something really obvious.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.electorama.com/pipermail/election-methods-electorama.com/attachments/20120423/6b236ee0/attachment-0004.htm>
More information about the Election-Methods mailing list | {"url":"http://lists.electorama.com/pipermail/election-methods-electorama.com/2012-April/128307.html","timestamp":"2024-11-10T02:42:31Z","content_type":"text/html","content_length":"5612","record_id":"<urn:uuid:7c711fde-8de9-4a24-83c7-eee3a5e78fa4>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00376.warc.gz"} |
Convert millimeters to centimeters
Conversion from mm to cm (millimeters to centimeters) is a piece of cake with our mm to cm length converter.
Input your value in the box with millimeter selected and get your answer instantly. You can change the unit by selecting from the dropdown list.
The millimeter is represented by the symbol "mm."
Millimeter is an SI (International System of Units) unit of length. It is usually the smallest measure in rulers of the metric system.The word "milli" is derived from the Latin word mille, which
means one thousandth. Therefore, a meter is equivalent to a thousand times a millimeter.
The international definition of meter which is the SI base unit of length, is the length of path that light takes in the vacuum over a period of 1/299 792 458 of a second. Thus, 1 millimeter is
equivalent to 1/1000 meter. That implies that a millimeter is the distance that light travels in the vacuum in 1/299792458000 seconds.
Following that, there are 10 millimeters in a centimeter and 100 millimeters in a decimeter. In the other direction, a millimeter is equivalent to 1/10 centimeters. A millimeter is also equivalent to
1000 micrometers.
Interestingly the smallest dimension human eyes can detect is about 0.02 to 0.04 mm. This is about the thickness of human hair!
The term millimeter was first used in the late 1700s. It was introduced during the development and adoption of the metric system of weights and measures in France in 1795.
The first recorded use of the millimeter was in the Journal of Natural Philosophy in 1797, according to OED (Oxford English Dictionary).
Millimeter is a very small unit of measure. Therefore it is usually used for measuring minute dimensions.
The Centimeter is represented by the symbol “cm.”
Centimeter is a unit of length in the metric system which is also the International System of Units (SI). It was the base unit of length in the now deprecated CGS (centimeter-gram-second) system of
Centi is the SI prefix for a factor of 1/100. A centimeter approximately corresponds to the average adult person's fingernail width. This unit is equal to one-hundredth of a meter, the base of the
SI unit of length. So, a centimeter is equal to 1/100 meters. Though measurement units frequently choose SI prefixes for factors of 10^3, such as milli- and kilo-, for many physical values, however,
the centimeter remains a practical unit of length for many everyday measurements.
Centimeters can be changed to other SI units using prefix multipliers based on powers of ten. For example, a millimeter equals 1/10 centimeters. So, a centimeter is equivalent to 10 millimeters.
If you are converting cm to inch, then the calculation is different. One inch is equal to 2.54 centimeters. Therefore, one centimeter is equivalent to 0.393701 inches.
The term centimeter was first used in the late 1700s. The centimeter is a part of the metric system. The metric system of measurement is primarily credited to the French. The French government
officially adopted this system on April 7, 1795.
The centimeter as mentioned before is unit of length. It is more commonly used for dimensions for which meter measure is unrealistically large.
How to convert millimeters to centimeters?
How many millimeters in centimeter? Exactly 10. Convert millimeters to centimeters, multiply mm values by 0.1 or divide by 10.
1 mm = 0.1 cm
10 cm = 1 cm
Millimeter to Centimeter conversion table
Millimeter Centimeter
0.01 mm 0.001 cm
0.1 mm 0.01 cm
1 mm 0.1 cm
2 mm 0.2 cm
3 mm 0.3 cm
5 mm 0.5 cm
10 mm 1 cm
20 mm 2 cm
100 mm 10 cm
1000 mm 100 cm | {"url":"https://www.measurementof.com/length-converter/mm-to-cm","timestamp":"2024-11-01T23:57:59Z","content_type":"text/html","content_length":"471805","record_id":"<urn:uuid:fe0f78b1-46de-4a31-824e-b0958ffb1238>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00281.warc.gz"} |
Safe Lifting Assessment Tool: The NIOSH Lifting Equation
NIOSH Lifting Equation
The NIOSH Lifting Equation for Safe Lifting
– Are You Lifting too Heavy a Load? Find the Recommended Weight Limit For Your Lifting Task at Work!
Boiled down to the essence, the NIOSH Lifting Equation for safe lifting can help you answer two questions:
How heavy a load is safe for me to lift in this particular lifting task?
(Here you get the RWL, the Recommended Weight Limit!)
How high a physical risk do I face when I lift the load that I need to lift?
(Here you get a number on the LI, the Lifting Index)
Further down this article we’ll explain how you can go about finding the RWL and LI.
First though, we’d like to explain a bit more about the NIOSH Lifting Equation – what characterizes it and how it came about.
What is the NIOSH Lifting Equation?
The NIOSH Lifting Equation is a numeric tool (a clever form of calculator, you could say) that safety and health practitioners can use to evaluate the level of risk associated with a certain type of
manual handling action, more specifically the lifting and lowering of loads.
As you are about to discover, the NIOSH Lifting Equation is a rather technical piece of ergonomic risk assessment formula (at least for the common guy, if you’re an engineer, it’s no big deal)
because in order to be able to use it correctly, you need to know the specifics of the lifting action in detail, and you shouldn’t be too scared of number crunching either.
Before we get into the lifting equation itself, let’s quickly take a look at how the NIOSH Lifting Equation came about, when it applies and when it doesn’t apply.
The Birth of the Original NIOSH Lifting Equation
The NIOSH Lifting Equation was born in 1981 by NIOSH (The National Institute for Occupational Safety and Health in the U.S.
The purpose of the 1981 NIOSH Lifting Equation was to identify dangerous lifting jobs that could potentially constitute a risk to the musculoskeletal system and thus if ignored, lead to injuries,
impairment and / or low back pain (LBP).
What this equation for safe lifting could do was to provide a method for finding safe weight limits for the manual handling of loads. The type of lifting tasks for which it could measure weight
limits was fairly simple, namely straight trunk lifting operations known as sagittal lifting tasks.
All this was very well, but the reality of lifting was often more varied and complex, for instance sometimes you’d need to twist the upper body when lifting. Thus the lifting equation was considered
too limited as it didn’t embrace enough types of lifting scenarios.
The Revised NIOSH Lifting Equation
Thus 10 years later in 1991, a revised and more comprehensive NIOSH Lifting Equation was developed, and in 1994 this revised edition for safe lifting was made available for use to the general public.
This revised NIOSH Lifting Equation was perceived to be a step up in protecting workers because:
• The maximum recommended load weight had been reduced from 90 pounds (app. 41 kilo) to 51 pounds (app. 23 kilo).
• It applies to a wider variety of lifting operations. The updated equation for safe lifting embraces more lifting variables such as asymmetrical lifting (twisting the trunk) and the quality of
hand-holds (the lifting of objects with less than optimal hand gripping possibilities).
However, despite it’s renewed sophistication, the NIOSH Lifting Equation is still only for lifting and not for lifts that include e.g.:
• using only one hand
• work-shifts that last more than 8 hours
• more than minimal carrying / walking (no more than a few steps)
• pushing and / or pulling
• only seated or kneeling positions
• loads considered unstable, e.g. containers with liquid
• more than minimal holding time (no more than a few seconds)
• a high degree of repetitiveness that require significant energy expenditure
• manual handling in ‘poor’ environments e.g. characterized by restricted access, slippery floorings or high / low temperatures
What Exactly Do You Get with the NIOSH Lifting Equation
You get a Recommended Weight Limit (RWL) for:
• a very specific two hand lifting task
• that defines the maximum weight of a load
• that is considered safe to lift for 75% of female workers and 90% of male workers
You also get a number on the Lifting Index (LI) that informs you of the level of risk for MSD (Musculoskeletal Disorders) associated with lifting the load you need to lift. The higher the number, the
higher the risk. The goal here is to only have safe lifting tasks with a number score that is less than 1.0.
The NIOSH Lifting Equation Formula
Okay, here is the formula for finding the Recommended Weight Limit for your lifting task:
RWL = LC x HM x VM x DM x AM x FM x CM
In order to find the Recommended Weight Limit (RWL) you need to know the number of seven different factors pertaining to the lifting operation in question.
The first one, LC (Load Constant) is easy. It’s always the same. It’s 51. 51 pounds (app. 23 kilo) represents the maximum load that can be lifted under ideal circumstances.
All the 6 other factors or variables represent complexities that ‘pull’ the recommended weight of the load down in order to ensure a safe lifting operation.
So what are these 6 other variables?
HM (Horizontal Multiplying factor) = The horizontal distance from hands to the centre of the body (defined as mid-point between ankles).
VM (Vertical Multiplying factor) = The vertical distance between the hands (holding the load in the starting position) and the floor.
DM (Distance Multiplying factor) = The vertical distance the load is lifted, measured from starting point to end point.
AM (Asymmetric Multiplying factor) = The angle of asymmetry to the lift. The degree required to twist the body during the lifting task.
FM (Frequency Multiplying factor) = The time between lifts and the overall duration of lifts during a work-shift.
CM (Coupling Multiplying factor) = The quality of the grasp on the load as defined by ‘good’, ‘fair’ or ‘poor’.
The difficult task here is not so much the measuring as it is to understand exactly what is meant by the variables. If you need more information to be able to decipher some of the variables,
Ergo-Plus has a fairly good guide here and The Canadian Centre for Occupational Health and Safety also present more in-depth explanations on the factors here.
Finding the LI:
LI = Weight ÷ RWL =
In order to find the LI (Lifting Index) you need to know the weight of the load being lifted and you need the RWL for the lifting task. The LI will give you an indication of the level of risk
associated with the lifting task. The higher the LI, the higher the risk of MSD or injuries. Numbers below l.0 are considered safe for most workers.
Ready to Use the Safe Lifting Calculator?
– Online NIOSH Lifting Equation Calculators to Find the RWL
If you have the above variables for the lifting equations, you are now ready to find the RWL.
There are two ways to go about this:
1. You can either press the numbers yourself on a calculator
2. Or you can choose to use some online NIOSH Lifting Equation calculators.
Here are some online calculators you can try:
The NIOSH Lifting Equation calculator from The Canadian Centre for Occupational Health and Safety.
The NIOSH Lifting Equation calculator from Ergo-Plus.
If You Feel Like Playing with the Equation!
– Examples to Help You Understand How the Different Variables May Affect the RWL
If you’re curious to learn more about how the different variables in the lifting equation affect the recommended weight limit, The Canadian Centre for Occupational Health and Safety has made some
great examples for you to follow.
Follow the links below to discover:
Other Important Resources:
Busy and Want Simpler Calculators?
If the NIOSH Lifting Equation is a bit too data intensive for you, there are several other free safe lifting calculators online, that are both easy and quick to use for busy employer or employee. Se
our page on online calculators for more information.
Other Related Pages
Country Specific Pages with Ergonomic Guidelines and Risk Assessment Tools
Go to Ergonomics for an overview of rules and regulations worldwide.
Go to our Homepage. | {"url":"https://2lift.com/safe-lifting/","timestamp":"2024-11-11T10:02:51Z","content_type":"text/html","content_length":"125145","record_id":"<urn:uuid:080ce621-e933-47fd-b9e4-d97010392c18>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00825.warc.gz"} |
Calculation of a Power Price Equilibrium under Risk Averse Trading
Monday, October 26, 2015 - 1:30pm for 1 hour (actually 50 minutes)
Raphael Hauser – Mathematical Institute, University of Oxford
Heinrich Matzinger
We propose a term structure power price model that, in contrast to widely accepted no-arbitrage based approaches, accounts for the non-storable nature of power. It belongs to a class of equilibrium
game theoretic models with players divided into producers and consumers. The consumers' goal is to maximize a mean-variance utility function subject to satisfying an inelastic demand of their own
clients (e.g households, businesses etc.) to whom they sell the power. The producers, who own a portfolio of power plants each defined by a running fuel (e.g. gas, coal, oil...) and physical
characteristics (e.g. efficiency, capacity, ramp up/down times...), similarly, seek to maximize a mean-variance utility function consisting of power, fuel, and emission prices subject to production
constraints. Our goal is to determine the term structure of the power price at which production matches consumption. We show that in such a setting the equilibrium price exists and discuss the
conditions for its uniqueness. The model is then extended to account for transaction costs and liquidity considerations in actual trading. Our numerical simulations examine the properties of the term
structure and its dependence on various model parameters. We then further extend the model to account for the startup costs of power plants. In contrast to other approaches presented in the
literature, we incorporate the startup costs in a mathematically rigorous manner without relying on ad hoc heuristics. Through numerical simulations applied to the entire UK power grid, we
demonstrate that the inclusion of startup costs is necessary for the modeling of electricity prices in realistic power systems. Numerical results show that startup costs make electricity prices very
spiky. In a final refinement of the model, we include a grid operator responsible for managing the grid. Numerical simulations demonstrate that robust decision making of the grid operator can
significantly decrease the number and severity of spikes in the electricity price and improve the reliability of the power grid. | {"url":"https://math.gatech.edu/seminars-colloquia/series/other-talks/raphael-hauser-20151026","timestamp":"2024-11-10T05:17:14Z","content_type":"text/html","content_length":"33124","record_id":"<urn:uuid:7f8b9762-468f-4675-ba19-f0d3439837ab>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00719.warc.gz"} |
Class 9 Maths Notes Exercise 7.3 PDF - FBISE Solved Past Papers
Federal Board Class 9 Maths Notes Exercise 7.3 PDF is solved for the students. For more notes visit the Class 9 Maths Notes Page.
Federal Board Class 9 Maths Notes Exercise 7.2 PDF is solved for the students. For more notes visit the Class 9 Maths Notes Page. Class 9 Maths Notes Exercise 7.2 See also Math Book of Class 9
Federal & Punjab Board PDF Download
Federal Board Class 9 Maths Notes Exercise 2.6 Chapter 2 Real and Complex Numbers PDF are solved for the students. For more notes visit the Class 9 Maths Notes Page. Class 9 Maths Notes Exercise 2.6
See also Class 9 Maths Notes Exercise 16.2 PDF
Federal Board, FBISE Class 9 Physics Notes of Chapter 1 Exercise are given below. All the excise questions are solved. Chapter 1 Exercise Short questions 1.2 What is difference between base
quantities and derived quantities? Give three examples in each case. Ans. Base Quantities Derived Quantities Base quantities are the quantities on the basis of…
Federal Board Class 9 Maths Notes Exercise 3.1 Logarithms PDF is solved for the students. For more notes visit the Class 9 Maths Notes Page. Class 9 Maths Notes Exercise 3.1 See also Class 9 Maths
Notes Exercise 5.2 PDF
Federal Board Class 9 Maths Notes Review Exercise 8 PDF is solved for the students. For more notes visit the Class 9 Maths Notes Page. Class 9 Maths Notes Review Exercise 8 See also Class 9 Maths
Notes Review Exercise 5 PDF
Federal Board, FBISE Class 9 Physics Notes of Chapter 1 Physical Quantities Measurement Numerical Problems are given below. All the Numerical Problems are solved. Physical Quantities Measurement
Numerical Problems Please visit & like our Facebook page. See also Chapter 8 Exercise Class 9 Physics Notes | {"url":"https://fbisesolvedpastpapers.com/class-9-maths-notes-exercise-7-3/","timestamp":"2024-11-04T04:06:36Z","content_type":"text/html","content_length":"120600","record_id":"<urn:uuid:6914f457-d5f5-4268-b912-c4d7841cf63e>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00793.warc.gz"} |
Google Bard Fakes Math Skills Despite Being Incompetent
While ChatGPT, although slightly more proficient, still struggles to identify errors when prompted.
Today I was preparing for a year 7 math lesson, and decided to test Google Bard’s math skills by asking it the questions my class is doing:
I have a cage with cats and birds, there are 12 heads and 32 feet, how many birds and cats?
Bard replied:
Let x be the number of birds and y be the number of cats. x+y=12. 2x+4y=32. Adding the two equations, we get 6x=44. Dividing both sides by 6, we get x=7. Subtracting 7 from 12, we get y=5. So there
are 7 birds and 5 cats.
Bard’s response to the question was riddled with errors:
1. Bard incorrectly added the two equations and arrived at an incorrect value of 6x=44;
2. Dividing both sides by 6 would not yield x=7.
So I decided to ask ChatGPT the same question.
In contrast, ChatGPT provided a correct solution with a clear explanation. | {"url":"https://0ce.medium.com/google-bard-fakes-math-skills-despite-being-incompetent-70f3efbb3ba0","timestamp":"2024-11-12T07:10:58Z","content_type":"text/html","content_length":"92418","record_id":"<urn:uuid:32e60d9e-ce8a-49e3-b734-c09fd879999b>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00045.warc.gz"} |
NEERC 2009, Eastern subregional contest
At last, Vasechkin had graduated from the university and it was the time to choose his future. Vasechkin recalled all the inadequate outcomes, unsolvable problems, and incomprehensible problem
statements that he encountered at programming contests, so he decided to join a program committee. Soon he was asked to prepare a problem for the forthcoming student contest, which would be dedicated
to binary alphabets. The problem had to fall under that topic. However, Vasechkin wanted the participants to remember his problem for a long time, so he decided to give the problem an unusual and
complicated name.
Vasechkin decided that the name had to consist of the letters “a” and “b” only and contain exactly n letters. In addition, the name had to be as complex as possible. The complexity of a name is
defined as the minimal number of palindromes into which it can be decomposed. Help Vasechkin to invent the most complex name for his problem.
The only line contains an integer n (1 ≤ n ≤ 1000).
Output the required name of length n consisting of the letters “a” and “b” only. If there are several such names, output any of them.
Problem Author: Igor Chevdar
Problem Source: NEERC 2009, Eastern subregional contest | {"url":"https://timus.online/problem.aspx?space=74&num=4","timestamp":"2024-11-14T18:45:37Z","content_type":"text/html","content_length":"6424","record_id":"<urn:uuid:795391e9-aa58-43cf-936e-ebcd915a89c8>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00881.warc.gz"} |
RE: st: RE: trajectory graph can not come out line with my code
Notice: On April 23, 2014, Statalist moved from an email list to a forum, based at statalist.org.
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
RE: st: RE: trajectory graph can not come out line with my code
From Nick Cox <[email protected]>
To "'[email protected]'" <[email protected]>
Subject RE: st: RE: trajectory graph can not come out line with my code
Date Thu, 2 Dec 2010 17:33:09 +0000
Cecilia Ann:
I think you are seriously misreading both this and the posting by Austin Nichols.
You accuse us of "verbal attacks", and much else besides, but we replied to your questions making specific positive suggestions and identifying specific problems with your apparent understanding of Stata and with your formulation of questions. The fact is that "doesn't work" means almost nothing even to Stata experts.
Sorry it hasn't worked out in terms of your understanding of your problems, but we tried. I think you just reduced the number of people on this list who will want to answer your questions if this is how you respond to people who are giving up their time on your behalf.
[email protected]
Cecilia Ann
Nick's comment is meant to be verbal attack !
I copy the code simply because I am learning the logic. This procedure hurts nothing! It has nothing to do with peril!
I think you are confused about the difference of plagiarism and learning!
My suggestion to you is trying all your best to learn the basics!
> From: [email protected]
> I don't understand either why you need to calculate ranks. I imagine you don't. That is just what someone else did for their purposes. This shows the perils of copying code that you don't completely understand, although in some sense almost everybody does precisely that some of the time.
> My eye is drawn to the condition
> (rankpopgrowth==`i'*.1)
> Within a loop i = 1/7 you are testing whether rankpopgrowth is, so far as Stata is concerned, exactly equal to 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7.
> Very likely there are no such values.
> Also, this kind of computation seems to be assuming that Stata is being run on a computer that deep down works with decimal, but that is not the case. Computers work with binary.
> For further reading, type
> . search precision
> in Stata.
> In any case, look at the data you think you are plotting using e.g. -list- or -edit-. My guess is that you have lots of missing values.
> Nick
> [email protected]
> Cecilia
> But on last command, "twoway () () (line x y)" there is no line connecting the dots on my graph. I can not figure out why it happens. My dataset is strongly balanced panel data with years 1960,1970,1980,1990,2000.I want to tract the pattern of population over these five year. So, the population growth rate are all distributed on 1970,1980,1990,2000.
> My code is as follows.
> replace popgrowth=. if popgrowth>23
> egen newid=group(geo_id2)
> sum newid
> drop geo_id2
> rename newid id
> order id year
> sort id popgrowth
> egen popgrowthmean=mean(popgrowth),by(id)
> list id popgrowth popgrowthmean in 1/10
> sort id
> quietly by id:replace popgrowthmean=. if (_n>1)
> egen rankpopgrowth=rank(popgrowthmean)
> local i=1
> while `i'<=7{
> gen sub`i'=(rankpopgrowth==`i'*.1)
> sort id year
> quietly by id:replace sub`i'=sub`i'[1]
> gen pop1`i'=popgrowth if (sub`i')
> drop sub`i'
> local i=`i'+1
> }
> ksm popgrowth year,lowess gen(popgrowthsmth)
> twoway (scatter popgrowth year) (scatter popgrowthsmth year) (line pop11-pop17 year)
> I think the number of pattern of population growth may be seven. So, I set `i'<=7. But, I do not understand why wee need to generate rank of populationgrowthmean.
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"https://www.stata.com/statalist/archive/2010-12/msg00091.html","timestamp":"2024-11-10T03:06:42Z","content_type":"text/html","content_length":"13965","record_id":"<urn:uuid:f892e85e-9771-4cf0-a542-dd2b8ec84cf2>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00681.warc.gz"} |
Teaching Statistics with World of Warcraft
In an earlier
I proposed an economics course built around
World of Warcraft
. I have much less experience teaching statistics than teaching economics and I suspect the game is less suited for the former than the latter purpose. But it does occur to me that it provides quite
a lot of opportunities for observing data and trying to infer patterns from it and so could be used to both explain and apply statistical inference. And I suspect that, as in the case of economics,
application to a world with which the student was familiar and involved and to problems of actual interest to him would have a significant positive effect on attention and understanding.
Consider the question of whether a process is actually random. Human beings have very sensitive pattern recognition software—so sensitive that it often sees patterns that are not there. There is a
tradeoff, as any statistician knows, between type 1 and type 2 errors, between seeing something that isn't there and failing to see something that is. In the environment humans evolved in, there were
good reasons to prefer the first sort of error to the second. Mistaking a tree branch for a lurking predator is a less costly mistake than misidentifying a lurking predator as a tree branch. One
result is that gamblers routinely see patterns in random events—"hot dice," a "loose" slot machine, or the like.
Players in
World of Warcraft
see such patterns too. But in that case, the situation is made more complicated and more interesting by the fact that the "random" events might not be random, might be the deliberate result of
programming. In the real world it is usually safe to assume that the dice which you have used in the past will continue to produce the same results, about a 1/6 chance of each of the numbers 1-6, in
the future. But in the game it is always possible that the odds have changed, that the latest update increased the drop rate for the items you are questing for from one in four to one in two, even
one in one. It is even possible, although not I think likely, that some mischievous programmer has introduced serial correlation into otherwise random events, that the dice really are sometimes hot
and sometimes cold.
A few days ago I was on a quest which required me to acquire five copies of an item. The item was dropped by a particular sort of creature. Past experience suggested a drop rate of about one in four.
I killed four creatures, got four drops, and began to wonder if something had changed.
It occurred to me that the question was one to which statistics, specifically Bayesian statistics, was applicable. Many students, indeed many people who use statistics, have a very imperfect idea of
what statistical results mean, a point that recently came up in the comment thread to a post here when someone quoted the report of the IPCC explaining the meaning of its confidence results and
getting it wrong. My recent experience in
World of Warcraft
provided a nice example of how one should go about getting the information that people mistakenly believe a confidence result provides.
The null hypothesis is that the drop rate has not changed—each creature I kill has one chance in four of dropping what I want. The alternative hypothesis is that the latest update has raised the rate
to one in one. A confidence result tells us how likely it is that, if the null hypothesis is true, the evidence for the alternative hypothesis will be at least as good as it is. Elementary
probability theory tells us that, if the null hypothesis is correct, the chance of getting four drops out of four is only one in 256. Hence my experiment confirms the alternative hypothesis at
(better than ) the .01 level.
Does that mean that the odds that the drop rate has been raised to one in one are better than 99 to 1? That is how, in my experience, people commonly interpret such results—as when the IPCC report
explained that "very high confidence represents at least a 9 out of 10 chance of being correct; high confidence represents about an 8 out of 10 chance of being correct."
It does not. 1/256 is not the probability that the drop rate has changed, it is the probability that I would get four drops out of four if it had not changed. To get from there to the probability
that it had—the probability that would be relevant if, for example, I wanted to bet someone that the fifth kill would give me my final drop—I need some additional information. I need to know how
likely it is, prior to my doing the experiment, that the drop rate has been changed. That prior probability, plus the result of my experiment, plus Bayes Theorem, gives me the posterior probability
that I want.
Suppose we determine by reading the patch notes of past patches or by getting a Blizzard programmer drunk and interrogating him, that any particular drop rate has a one in ten thousand chance of
being changed in any particular patch. The probability of getting my result via a change in the drop rate is then .0001 (the probability of the change) times 1 (the probability of the result if the
changed occurred--for simplicity I am assuming that if there was a change it raised the drop rate to 1). The probability of getting it without a change by random chance is .9999 (the probability that
there was no change) x 1/256 (the probability of the result if there was no change). The second number is about forty times as large as the first, so the odds that the drop rate is still the same are
about forty to one.
And I suspect, although I may be mistaken, that the odds that a student who spent his spare time playing
World of Warcraft
would find the explanation interesting and manage to follow it are higher than if I were making the same argument in the context of an imaginary series of coin tosses, as I usually do.
16 comments:
I'm actually taking a statistics class now, and I had forgotten how that whole type I/II error worked. Now I think I will remember - thanks.
David wrote: "... The second number is about forty times as large as the first, so the odds that the drop rate is still the same are about forty to one."
Well, as I understand it, the odds that the drop rate is still the same *compared to changing to that particular changed rate* (1:1, or 1/2) is about forty to one, but there are many rates that
it could have changed to. Unfortunately they have a low probability individually of being chosen by the programmers (1/10,000).
If confronted with the four drops in a row, I wouldn't be comparing a particular changed rate to the current one, but maybe for example a range, such as "those rates from 1/4 to 1/2 with some
fixed increments between them").
I've never really understood Bayesian statistics, and I think this business of prior probabilities is a big reason for that. What's an objective basis for assigning prior probabilities? If we can
just assign any subjective assumption we like, then, for example, Pascal's wager looks a lot better: He had a very high prior probability that Catholicism was true, and a very low prior
probability that Judaism, Islam, Lutheranism, Calvinism, or Mithraism was true, so a Bayesian argument might support betting on the Catholic God . . . for Pascal . . . and not be vulnerable to
the classic criticism that it provides equally strong proofs of the desirability of worshiping many incompatible gods. But I take Pascal's wager to be an intellectual sucker bet, and any
methodology that seems to legitimize it strikes me as suspect. Have I gotten a completely wrong impression about Bayesian methodology in some way, or does it actually lead down this road?
I'd have to say that in the WOW example given, the prior probability argument is really screwed up. Simple reason: one thing you know has changed: you've gone on a quest for that particular item.
What are the chances that there was already something in the program that makes creatures more likely to drop something you're questing for? Probably pretty good.
What are the chances that there may be a source of change that you didn't consider when calculating prior probabilities? As the previous paragraph demonstrates, 1:1.
So what are the chances that your prior probability number is accurate? Must be 0:1.
So how useful is this sample calculation, really?
This comment has been removed by the author.
David, it seems to me that there is also a selection bias here in that you wouldn't have considered the possibility of a new drop rate if you hadn't gotten four items in a row.
The probabilities you gave are correct if you now go out and kill four monsters to test the drop rate.
If you only take notice of unusual events then, with probability 1, you will find yourself contemplating an unusual event.
Bryan Eastin
What are the chances that there was already something in the program that makes creatures more likely to drop something you're questing for? Probably pretty good.
As a former WoW player, I don't remember a single instance of this ever happening (except for items that drop only when on quests). So, I think it is low enough in prior probability to ignore it.
David, it seems to me that there is also a selection bias here in that you wouldn't have considered the possibility of a new drop rate if you hadn't gotten four items in a row.
A failure to consider this might lead someone to overestimate the prior. But it does not actually affect the Bayesian calculation so long as the prior is correct.
BTW, if the hypothesis "the drop rate is one in four" and the hypothesis "the drop rate has not changed" are not the same hypothesis.
Your analysis assumes the former hypothesis as the null hypothesis. The latter hypothesis might be more appropriate, though. Also, it might be better to compare it to the hypothesis "there is a
new constant drop rate" rather than the hypothesis "the drop rate is now 1".
To test these, you would need a prior distribution over the constant drop rates (probably the same one would work for all). Then for the null hypothesis modify the distribution based on all data
received (before and after the possible adjustment) and determine the probability of receiving all that data based on the modified distribution. For the change hypothesis, do that separately both
before and after the supposed change, and combine with the penalty to the prior from the unlikeliness of Blizzard changing the drop rate.
... and then, I realize that the hypothesis you are interested in is probably not that the drop rate was changed, but that it was raised. Which means that the assumption that the drop rates
before and after the change were independent, if it ever was a good one, is now bad. The most general (but not very helpful) approach would be to just have some distribution over the
(before,after) drop rate pairs.
...but it won't change the outcome much to just use the "change" hypothesis instead of the "increase" hypothesis, because p(change)=p(increase)+p(decrease) and p(decrease) is low.
Quick pointer to outsiders: presumably the quest required to loot 4 items that are quest_only and after getting the 4th the quest completed, preventing any more from dropping. Perhaps, he was
comparing drop rate on that quest with drop rate he used to get on a previous character doing the same quest.
Increasing drop rates is not that unusual. Some quests that are notoriously "out of line" with the rest in terms of drop rate get a bump to appeal to casual players. I can think of five separate
quests in Lotro that went from <1/8 to >1/2 over a span of several major patches (several months). Quests that are "in line" are unlikely to be fixed. The bigger the deviation, the more likely
the programmers are to step in (whether they are playing the game themselves or are tired of bug reports).
Just as a followup:
The quest was a daily, so I had done it lots of times before, giving me a pretty good estimate of the drop rate. I actually needed five drops, and the fifth try didn't yield one, which eliminates
the hypothesis that the rate is now one in one--but I was describing the calculation as it would have been done before that.
After making the post, it occurred to me that there was another explanation that I should have considered. The daily quest is done in order to get reputation with a particular group. Perhaps when
your reputation level goes up from honored to revered, the drop rate on that quest goes up too.
I'm not sure if my four out of four result was just after my reputation went up or not, since that possibility hadn't occurred to me at that point. I'll be watching drop rates for a while to see
if they have indeed sharply increased for that quest.
William Stoddard asks about how you get your priors. In the post I suggested some possibilities. There isn't a general answer--the point is that without a prior you can't get a posterior
probability from the experiment.
If there's no general answer on how to get a prior... and if you can't compute a posterior without a prior... then it seems you can never be sure you have a meaningful posterior, except perhaps
in very limited cases.
(In the above example, someone who was not familiar with WOW might easily compute a prior several order of magnitude different from someone who was familiar with WOW.)
What does this say about the use of "statistically significant" in scientific research? So far, my takeaway message is that you have to know how likely the null hypothesis is before you can tell
whether you've found significant evidence for a deviation from it.
Or in other words, "Extraordinary claims require extraordinary evidence."
"So far, my takeaway message is that you have to know how likely the null hypothesis is before you can tell whether you've found significant evidence for a deviation from it."
Sort of.
The significance level tells you how good the evidence is. But to reach a conclusion, you need to know both how strong the evidence is and how strong the evidence has to be to make you accept the
The prior is something of a problem, and gets you deep into Knight's uncertainty versus risk. If you want to convince [sensible, mathematically literate] people of something, you need them to
have "reasonable" priors -- ones that aren't too close to zero anywhere you need them not to be too close to zero -- and/or you need a lot of data to overwhelm low prior values.
With many precise relationships among uncertain variables, I often try to solve for the one I have the least confidence in. In this case, if, as a practical matter, what I want to know is whether
the probability that it's still 1/4 is more or less than 60%, I can figure out that that requires a prior of around .997. In some situations, you'll find that this is unreasonably high or low,
and it won't matter what its exact value is. (If you want a precise value, though, you're subject to a different set of biases when you operate this way.)
clarification of post#2:
If confronted with the four drops in a row, I wouldn't compare a particular changed rate to the current rate, but would compare maybe for example a range, such as "those rates from 1/4 to 1/2
with some fixed increments between them") to the current rate.
Wow! (non pun intended.)
First Off, Mr. Friedman, i have just discovered your blog by the glorious "unschooling" method you talk about: I was watching a lecture by Dr. Murray Rothbard at Mises.org (for fun!), and through
a string of links found a wiki about you and your Anarcho-capitalist theories. What a blessing it is to see an intelligent professor who understands how MMOs are fantastic teaching tools!
I have always wanted to read a study that compares the economy of WoW to that of the United States. Do you know of any such studies? its amazing how "free Market" people tend to be in a
simulation, compared to their actual political stance on economic policy in the "real world". I always wanted to see a study showing differing behavior in economic choices in WoW compared to real
I wonder how "scarcity" and money inflation play into effect in WoW (since the only scarce resource seems to be personal time, and labor. All gold, loot, mobs, etc. respawn faster than
Here is a great example of how an oppressive monetary policy fails without a monopolization of force:
I played Warhammer Online for a spell, and in the game guilds could collect a "tax" from its members. well the guild i joined had a 100% tax! after a few kills and loots, i asked why the guild
bank (The central government) got all of the loot i worked for. They replied that they would distribute all the gold evenly, so i left!
Its amazing to see how coercive, non-voluntary taxes can only last with a monopolization on the legal use of force. The guild had no way of punishing me for seceding. Imagine if i didn't want to
pay the government the tax they want. I would be coerced into complying, or suffer some consequence.
I know this was long, but i would like to thank you (and this community) for proving to me that MMO players can be logical intelligent people, and that education CAN be achieved outside the
status quo.
I would be happy to hear any response or get links to economic research done comparing video games to real life.
I may be wrong, I sometimes am, but I don't see how the analysis works out:
P(four drops in a row) =
P(four drops in a row | change in software) * P(change in software)
P(four drops in a row | no change in software) * P(no change in software)
= (1)*(1/10,000) +
which is approximately 0.004
which is about 1/256.
Another way to look at the phenomenon is to ask the following probabilistic question:
If the monster is killed 100 times what is the probability that you never get 4 drops of the object in a row?
It reminds me of Ramsey theory. The more times you do the action the more likely it is that you'll get what looks apparently to be an unlikely string of consecutive occurrences -- although this
is in fact not that unlikely. | {"url":"https://daviddfriedman.blogspot.com/2009/02/teaching-statistics-with-world-of.html","timestamp":"2024-11-03T02:57:16Z","content_type":"text/html","content_length":"167770","record_id":"<urn:uuid:0a71209a-2084-4ab1-a9ac-0d5b290642d2>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00403.warc.gz"} |
Solving Sequencing Problems Using Excel Solver Solution (2 Suitable Examples) - ExcelDemy
Example 1 – Solving the Traveling Salesman Sequencing Problem Using the Solver
The sample dataset contains 4 Cities and the Distance between them. We’ll find out a sequence of the cities for a traveling salesman so that the total distance covered will be minimal.
• We’ll set up a random order in the I column as shown below.
• Select cell J6.
• Insert the formula:
• Press Enter. The INDEX function extracts the distance between the cities specified in I6 and I5.
• Use AutoFill to get the other outputs.
• Select cell J5.
• Input the formula:
• Use the AutoSum feature in cell J9 to calculate the total distance.
• Go to File and select Options.
• The Excel Options box will pop out.
• In the Add-ins tab, select Excel Add-ins and choose Go.
• Check the box for Solver Add-in.
• Press OK.
• Click Data and select Solver.
• In the Solver Parameters dialog box, select Set Objective and put E15.
• Choose Min.
• Select B11:B14 from the data for the By Changing Variable Cells
• We’ll add the constraints by clicking the Add button.
• Our constraints are:
B11:B14 <= 4
B11:B14 = AllDifferent
B11:B14 = integer
B11:B14 >= 1
• Select Evolutionary as the solving method.
• Press Solve.
• This’ll return a new sequence where the total distance is the lowest value possible.
• In this way, we can solve sequencing problems using the Solver tool.
Read More: How to Use Excel Solver for Linear Programming
Example 2 – Apply the Solver to Solve the Job Schedule Sequencing Problem
The following dataset has 4 types of Jobs, Days Needed, and Due Dates. We’ll make an order of the jobs where the Tardiness will be as little as possible.
• Create a random sequence.
• As a starting day, 0 is placed in cell C11.
• Select cell C12.
• Insert the formula:
• Press Enter to get the result. The MATCH function looks for the C10 cell value in the range B5:B8 and returns the position. The INDEX function extracts the days needed to complete the specified
• The starting date is added to the function output to get the finishing date.
• Select cell D11 and input the formula:
• Press Enter to get the starting date of the next task.
• Use AutoFill to complete the starting and ending series.
• Choose cell C13 to insert the formula:
The MATCH function extracts the position of a specific task. The INDEX function returns the due date for that task.
• Select cell C14.
• Input the formula:
• Click Enter. The MAX function compares the values and returns the largest.
• Use AutoFill.
• To calculate the total tardiness, click cell C16 and insert:
• Go to Data and select Solver. To enable it, see the process in Example 1.
• In the pop-out dialog box, set C16 as the objective.
• Choose Min.
• We’ll change C10:F10 as the variables.
• The constraints in this problem are:
C10:F10 = AllDifferent
C10:F10 = integer
• Choose Evolutionary as the solving method.
• Click Solve.
• You’ll get the best sequence with as little as possible tardiness.
Read More: Example with Excel Solver to Minimize Cost
Download the Practice Workbook
Related Articles
<< Go Back to Excel Solver Examples | Solver in Excel | Learn Excel
Get FREE Advanced Excel Exercises with Solutions!
7 Comments
1. Thank you Professor for the nice and useful illustration. I teach subjects like Operations Management ad Decision Science and use Excel for solving the problems. Your methods are very helpful.
Can you kindly explain how to solve sequencing problem using Johnson’s algorithm of scheduling n-jobs on 2-machines?
□ Greetings, Raja.
I appreciate you asking this question. You can find it here “Sequencing problem using Johnson’s algorithm of scheduling n-jobs on 2-machines”
□ I will pass your request to the author, Jagadeesh!
Glad to know it helped you!
2. Thank you for the very informative tutorial.
I would like some advice though to take the shop scheduling problem 1 step further.
How can I do this at a time/hour level – rather than day?
For example – if we have x amount of jobs in queue; varying in duration
We can only turn around jobs during standard work hours (7 AM and 4 PM).
We also need to meet delivery dates.
How do I arrange jobs in a way that maximizes machine utilization?
□ Greetings, LOIACONO.
I appreciate you asking this question. You have to follow the “3 Shop Scheduling Problem” example from this article. If you wish to do the calculation at a time/hour level instead of a day
level, you need to input hours instead of days in the Days needed column. You must then follow the example’s rest of the process to arrange jobs in a way that maximizes machine utilization.
3. Nice model that can be simplified a bit because the help file for solver stats that:
“A constraint such as A1:A5 = alldifferent, where A1:A5 are decision variable cells, requires that these cells must be integers in the range 1 to N (N = 5 in this example), with each variable
different from all the others at the solution.”
So constraints G3:G8=1 and G3:G8 = integer are not needed
□ Greetings, ALF.
It was helpful to receive the information you provided.
Leave a reply | {"url":"https://www.exceldemy.com/solving-sequencing-problems-using-excel-solver-solution/","timestamp":"2024-11-05T04:20:04Z","content_type":"text/html","content_length":"206179","record_id":"<urn:uuid:70f28803-99e6-4ad0-8485-6b66702da23e>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00281.warc.gz"} |
ADVM Archives - Reduce Flooding
The river gage at US59 and the West Fork of the San Jacinto River is being upgraded. A new Acoustic Doppler Velocity Meter (ADVM) should be operational by Monday, May 14, 2018, according to Jeff
Lindner of Harris County Flood Control.
Concerns About Previous Gage
Some flood forecasting professionals suspected that the previous meter was not accurately predicting flood height because of several factors at this location. They felt sedimentation, scouring at the
base of the bridge, backwater from Lake Houston, and unsteady stream flow all affected the “rating curve.”
How Rating Curves Work
Rating curves show the correlation between the river discharge (flow volume as measured in cubic feet per second) and the river stage (height). Velocity multiplied by the area of the cross section of
the river for any given height equals the discharge rate. Such rates are expressed on a mathematical curve that correlates height and discharge.
These curves can change frequently, especially on a river where both sedimentation and scour can result in a change in the amount of flow at a location, as it does at US59.
Need for More Accurate Gage
One meteorologist suspected that – because the rating curve had shifted – we were seeing higher river levels with less water than pre-Harvey. That impacted the forecast accuracy for the Humble gage.
A river forecaster noted that in the last two flood events, initial forecasts were “underdone” another indicator of a shifted rating curve.
According to the U.S. Geological Service (USGS), ADVMs are indispensable for backwater-influenced gages. The USGS in recent years has built hundreds of index-velocity gages with an ADVM for the
measurement of streamflow. They are especially valuable in reaches where unsteady (varied, nonuniform) streamflow is prevalent that prevents the development of a conventional stage-discharge rating.
ADVMs deliver real-time flow data that would help us better understand what is going on with the rating curve and have more accurate forecasts for the Humble/Kingwood area.
An ADVM measures water velocity by using the Doppler principle applied to sound transmitted under water. Acoustic Doppler systems rely on SONAR, which uses sound waves to determine the distance to
targets. They bounce acoustic signals of a known frequency off sediment in the water and measure the shift in frequency when the signals return. By measuring the time between the original pulse and
the return signal, forecasters can compute the velocity of flow.
Velocity is crucial in computing the volume of water flowing past a measurement station. River forecasters use the formula:
• Q = quantity (cubic feet/second)
• V = average velocity for the cross section (ft/s)
• A = the area of the cross section of the river.
The faster the velocity for any given cross section, the more water that is flowing past that point.
Lindner cautions that although the new gage will begin collecting data immediately, it will take the river rising and falling several times to get enough data to accurately predict flow rates at
different levels.
“A river is always in flux,” says Lindner. “The USGS has already published a new post-Harvey rating which takes into account sedimentation from Harvey near this location.”
Dredging along the West Fork would likely change that rating curve again. “The rating will have frequent changes over the next several months, as the shape of the river evolves both from natural and
man-made causes,” said Lindner.
USGS has already sent the new post-Harvey rating for US59 at the West Fork to the West Gulf River Forecast Center to incorporate into their modeling efforts.
For those interested in learning more about gages and flood forecasting, the USGS has an excellent high-level, non-technical intro to measuring stream discharge. Scientists and engineers may be
interested in a more technical discussion of the advantage of ADVM’s in developing more reliable real-time discharge estimates.
Posted by Bob Rehak, May 13, 2018
257 Days Since Hurricane Harvey
0 0 admin admin2018-05-12 23:59:282018-05-13 00:13:48New River Gage at I-69 Will Measure Flow Accurately Despite Sediment | {"url":"https://reduceflooding.com/tag/advm/","timestamp":"2024-11-07T07:08:36Z","content_type":"text/html","content_length":"54035","record_id":"<urn:uuid:89ddb637-ce26-44ec-9af5-a9fa8eaa759f>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00338.warc.gz"} |
[78b] Hyp-Chart, the Missing Link Between P-values and Bayes Factors - Data Colada
Just two steps are needed to go from computing p-values to computing Bayes factors. This post explains both steps and introduces Hyp-Chart, the missing link we arrive at if we take only the first
step. Hyp-Chart is a graph that shows how well the data fit the null vs. every possible alternative hypothesis [1].
Hyp-Chart may be useful for understanding evidence.
Hyp-Chart is certainly useful for understanding Bayes factors.
Hyp-Chart is also useful for conveying why I do not use Bayes factors in my research.
Hyp-Chart, the graph, is not new, I suspect many people have drawn it when teaching Bayes factors, but it did not seem to have a name, and it needed one for this post [2].
R Code to reproduce all figures in this post
Starting point: the p-value
To compute a p-value we ask: if the null hypothesis were true, how likely is an outcome at least as extreme as the one we observe? To compute that, we need the likelihood of every possible outcome if
the null were true (d[true]=0). Figure 1, which you’ve seen a million times, depicts those likelihoods.
Unfamiliar with d as measure of effect size? See this footnote [3].
Fig 1. If null is true, d[true]=0, what’s the likelihood of each estimate?
So, say you run a study and get d̂ =.5. The likelihood of that estimate under the null of d[true]=0, is found in that distribution. Specifically, it’s represented by the small vertical red line in
Figure 2.
Fig 2. If null is true, d[true]=0, how likely is d̂ =.5?
The p-value is the probability of getting an effect at least as extreme, so the area to the right of that red line (for a one-sided test). But for this post we will just focus on the red line itself.
With this review behind us, let’s now get to the two steps required to get a Bayes factor instead of a p-value.
Step 1: Compute likelihood of alternative hypotheses
A sensible criticism of how some people use p-values, is that they, the people, focus exclusively on the null and not on alternative hypotheses. If the null is false, what is true instead?
Figures 1 & 2 showed us what to expect if d[true]=0. Let’s now make new figures with what to expect if d[true ]≠ 0, that is, if the null is false. Let’s start with the alternative that the effect is
“Medium” (d[true]=.5).
Fig 3. How likely is =.5 under d[true]=0 vs d[true]=.5
[the left chart is Figure 2 again]
We see that if d[true]=.5, getting d̂=.5 is more likely than if d[true]=0. That makes sense. If the effect is indeed bigger, it is more likely that we will estimate it as bigger.
We can quantify how much more likely d̂=.5 is under the alternative, by dividing the long vertical line in Figure 3, by the short vertical line. We get 0.392/0.019=21. That number, 21, is called the
“likelihood ratio”, because it is a ratio, of likelihoods: it tells you that d̂=.5 is 21 times more likely if d[true]=.5 than if d[true]=0
Ok. But, what if the effect were Small (d[true]=.2) or Large (d[true]=.8) instead of Medium?
Fig 4. How likely is d̂ =.5 if d[true]=0 vs if it is Small, Medium or Large?
We can do this for every possible hypothesis in a graph that I will (affectionately) refer to as ‘Hyp-Chart’.
Fig 5. How much more likely is d̂ =.5, for every possible effect, than for d[true]=0?
Getting a feel for Hyp-Chart
The motivation for going beyond p-values, remember, was to look at alternative hypotheses, and not just the null hypothesis. To illustrate how Hyp-Chart achieves this, Figure 6 depicts Hyp-Charts
for the same p-value, p=.02, but different sample sizes (n=100 vs n=500), and thus different observed effect sizes ( d̂ =.33 & d̂ =.15).
Fig 6. Hyp-Charts for p=.02 results, with n=100 vs n=500
Both studies reject the null equally strongly, p=.02, but they tell us different things about the alternative. For example, on top, the data are 7.6 times more consistent with a Medium effect than
with the null, but in the bottom the data are not at all compatible with a Medium effect.
This first step from p-value to Bayes factors, Hyp-Chart, is neat and seems to convey interesting information about alternative hypotheses. I personally prefer confidence intervals, but reasonable
people could disagree with me and prefer Hyp-Chart over, or in addition to, confidence intervals (one could add the confidence interval to the x-axis of the Hyp-Chart).
In contrast, it does not seem reasonable for social scientists at least, to go further, to take the second step, and compute a Bayes factor. OK. Let's see what that second step is.
Step 2: The Bayes factor is a (bad) summary of Hyp-Chart
Like Hyp-Charts, Bayes factors compare the likelihood of the data under the null vs “the alternative”. What's different is that instead of reporting results for each possible effect, the Bayes factor
reports only one number, a single (weighted) average across all the likelihoods in Hyp-Chart. [4].
Figure 7 below illustrates how default Bayes factors average a Hyp-Chart to evaluate “the alternative”.
Fig 7. A Bayes factor vs its underlying Hyp-Chart
The figure is just like the previous Figure 6, but the size of the dots is proportional to the weight given by the default Bayes factor.
The weighted average is BF=1. The Bayes factor, then, summarizes the above Hyp-Chart by saying that the null and ‘the alternative’ are equally consistent with the data. That d̂ =.15 is just as likely
if d[true]=0 as if "the alternative" is true.
But the Hyp-Chart shows that the data are 11 times more consistent with a small effect, d[true]=.2, than with the null of d[true]=0. 15 times more consistent with the observed effect of d[true] =
.15 than with the null. That is relatively strong support for true effects that in many contexts are large enough to be of interest.
I doubt many (any?) readers of a paper that reports “BF=1”, that reads “the null is as consistent with the data as with ‘the alternative’”, or that reads “the data are non-diagnostic from a Bayesian
standpoint”, would imagine that the data are 11 times more consistent with a small effect than with zero.
Bayes factors change their mind about what ‘the alternative’ is.
Hyp-Chart helps convey another reason why I don’t use Bayes factors
With default Bayes Factors: every effect size estimate that ‘supports the null’ for one sample size, ‘supports the alternative’ for a larger sample size [5].
This is not something Bayesian advocates don’t know. But it is something that does not bother them [6], and that I think would bother most social scientists.
To illustrate, let’s consider the observed d̂=.15 we have used a few times.
If you run a study with n=150, and get d̂=.15, the Bayes factor will favor the null
If you run a study with n=750, and get d̂=.15, the Bayes factor will favor the alternative
This is NOT the same as d̂= .15 being statistically significant only for big n.
It is one thing to go from “don’t know” to “x is false", as p-values do. It is another to go from “x is true” to “x is false”, as Bayes factors do.
Let’s dive deeper into these two scenarios, this time aided by Hyp-Charts.
Fig. 8. Bayes Factor is BF < 1/3, so we conclude d̂ =.15 supports the null
Getting d̂=.15 with just n=150 is an inconclusive result, for sure, note that p=.19 after all. But that result is 2.1 times more consistent with a small effect of d=.2 than with the null. The Bayes
factor, however, does not call this study inconclusive. After averaging that 2.1 with lots of 0s from hypotheses with the opposite sign, and hypotheses for quite large effects – hypotheses we have no
specific reason to expect or predict or care about – it computes a low average, a low Bayes factor, BF<1/3, so d̂=.15 “supports the null”.
The conclusion reverses, for the same result, d̂=.15, with a larger sample,
Fig. 9. Bayes Factor is BF > 3, so we conclude =.15 supports ‘the alternative’
The Bayes factor does now give a qualitatively sensible summary of Hyp-Chart. But as a quantitative summary, the Bayes factor seems misleading. The observed value, BF=3.7, means the data are 3.7
times more consistent with ‘the alternative’. But the data are 42 times more compatible with a small effect than with a null effect, and 67 times more compatible with the observed effect than with a
null effect.
Run these two lines of R Code to make your own Hyp-Charts
source("http://webstimate.org/hyp-chart/Hyp-Chart_2019_09_05.R") #Load the Hyp-Chart function
hyp.chart(d.obs=.4, n=50, show.bayes=TRUE) #Set n (per-cell sample) and either observed effect (d.obs) or p-value (p.obs)
Take home.
Reporting a Bayes factor seems like an unnecessary reduction, and distortion, of the available information contained in a Hyp-Chart. If you review a paper that reports a Bayes factor, I recommend you
ask the authors to include a Hyp-Chart alongside it. This will allow readers to interpret the data using their actual prior beliefs and actual theories of interest, instead of using the (irrelevant)
average prediction, made by a default hypothesis they don't hold (or understand).
This is the second post in the Bayes factors series, see the first: Colada[78A]
I shared a draft of this post with Felix Schönbrodt (.htm), and Leonhard Held (.htm) who have written on Bayes factors in the context of hypothesis testing, and with Bence Palfi (@bence_palfi), a PhD
student from Essex who had identified an imprecision/error in my previous post (Colada[78a]). A parallel conversation with Zoltan Dienes (.htm), following that post was also useful. They were all
polite, respectful, and helpful, despite having different summary evaluations of the usefulness of Bayes factors. Thanks.
Subscribe to Blog via Email
1. Technically speaking, every alternative hypothesis involving a "degenerate distribution", which just means hypotheses over single values, e.g., that d[true]=.45 [↩]
2. Alex Etz (.htm) complained that I'd propose a new name, "Hyp-Chart", for what is 'just' a re-scaled likelihood function. Hyp-Chart is indeed a likelihood function, but where we know what's in the
x-axis, the candidate hypotheses fed into a Bayes factor, and more importantly, the y-axis, the ratio with respect to the likelihood of the null. You can see Alex's tweet (.htm), and his previous
relevant writing on likelihoods published in AMPPS .htm and his previous blog post with similar content .htm [↩]
3. Throughout I use Cohen’s d as a measure of effect size. When comparing two means, d is the difference in means divided by the standard deviation. d=(M[1]-M[2])/SD. I use d[true] to symbolize the
different in population means, and d̂ for the estimated difference [↩]
4. In principle Bayes factors can combine any subset of alternative effects (including just one), and in principle Bayes factors can weigh them in any way. But, in practice, Bayes factors are
computed using hard-coded default “alternative hypotheses” which give a pre-specified weight to every effect, of both signs, giving much more weight to d[true]’s near zero. The defaults are
either that d~N(0,.7) or a JZS prior. The distinction is not relevant for this post. Indeed, I plot the Hyp-Charts using the former but compute Bayes factors using the latter [↩]
5. The sole exception is d̂=0. An exact zero estimate, never supports the alternative. Then again, exact zeros are impossible [↩]
6. Bayesian advocates probably would not even agree with my characterization that the same point estimate supporting the null and then supporting the alternative involves a "change of mind", they
would just see it as the natural consequence of a likelihood function for two different sample sizes. But my guess is that for most social scientists, an observed d̂ =.15 cannot simultaneously
support and oppose the same theory. The math behind the Bayes factor is correct, but, in my view, it poorly captures how researchers think and want to think about evidence. [↩] | {"url":"https://datacolada.org/78b","timestamp":"2024-11-14T15:15:13Z","content_type":"text/html","content_length":"83948","record_id":"<urn:uuid:af35059a-206e-42a7-bec9-7e9b95320a73>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00760.warc.gz"} |
Scalar, Matrix and "No solution was found".
Auto-suggest helps you quickly narrow down your search results by suggesting possible matches as you type.
Community Tip - Want the oppurtunity to discuss enhancements to PTC products? Join a working group! X
The question is : solve, matrix X with matrix equation ?
1) Check my example with x=+-1 above. You see that Mathcad is doing wrong and thats why it gives you -.1806/359. Also in your example Mathcad simply is doing wrong, thats all. Additionally Mathcad
thinks that a 1x1 matrix equals a scalar, which is wrong, too..
2) What do you think is 1/X, when X is a 1x1 matrix? You can't rely on Mathcad with this question!
Mathcads symbolics will solve for scalar variables but not for matrix variables.
Furthermore the program is quite buggy and you can get quite astonishing wrong results when you put your expressions in 1x1 matrices:
What is the "result" in your opinion. How do you define 1 divided by a matrix or even a heavily nested matrix like yours??
While being mathematical nonsense, if you divide a scalar by a matrix, Mathcad will switch to vectorization automatically. But is this really what you intend? | {"url":"https://community.ptc.com/t5/Mathcad/Scalar-Matrix-and-quot-No-solution-was-found-quot/td-p/310514","timestamp":"2024-11-14T14:51:17Z","content_type":"text/html","content_length":"334468","record_id":"<urn:uuid:646042a5-3e91-413d-acc0-9a73eb60bb5a>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00217.warc.gz"} |
Printable Figure Drawings
Relations And Functions Worksheet
Relations And Functions Worksheet - Decide whether the graph represents y as a function of x. Print the notes on transparencies or display them on a whiteboard. 1), (2, 2), (3, 5), (4, 10), (5, 15)}
function. Mathematics is all about functions, operations, and relations between variables. Does a vertical line represent a function? Systems of two linear inequalities.
Checking if a table represents a function. Both a relation and a function. These math worksheets should be practiced regularly and are free to download in pdf formats. Then state the domain and
range. Neither a relation nor a function.
Web determine if a relation is a function. Web functions and relations worksheet. The relation is a function. Print the notes on transparencies or display them on a whiteboard. Web relations &
functions worksheet.
The set p is _____ and the remaining elements of p × p__________ (b) the function f (x) is defined as x2 x 2. Web relations and functions worksheet | live worksheets. If it is a function, give the
domain and range. Print the notes on transparencies or display them on a whiteboard. In fact, there wouldn’t be the concept.
In fact, there wouldn’t be the concept of mathematics if there were no functions or. Nothing really special about it. Web relations and functions worksheet | live worksheets. Determine whether the
relation is a function, b. They are a huge part of mathematics;
Using the vertical line test, determine if the graph above shows a relation, a function, both a relation and a function, or neither a relation nor a function. Web relations & functions worksheet.
Find the domain of the relation, and c. Notes, worksheets, and all answer keys are included! Determine whether the relation is a function, b.
The relation is not a function. Let’s start by saying that a relation is simply a set or collection of ordered pairs. Total pages in each workbook: Web linear relations and functions. The criterion
for a relation to be a function is that each input should have only one output.
Then state the domain and range. In fact, there wouldn’t be the concept of mathematics if there were no functions or. Print and staple all pages together to make a book. Web feel free to download and
enjoy these free worksheets on functions and relations.each one has model problems worked out step by step, practice problems, as well as challenge.
The criterion for a relation to be a function is that each input should have only one output. Web functions and relations worksheet. If it is a function, give the domain and range. They are a huge
part of mathematics; Does a vertical line represent a function?
Web relations and functions worksheets. Checking if a table represents a function. Then state the domain and range. In the following exercises, use the set of ordered pairs to a. Print and staple all
pages together to make a book.
1), (2, 2), (3, 5), (4, 10), (5, 15)} function. Print the notes on transparencies or display them on a whiteboard. Cover, fill in the blank notes and worksheets. The relation is a function. Web
functions and relations worksheet.
Web linear relations and functions. Systems of two linear inequalities. Functions and relations in math. Cover, filled notes and worksheets. Mathematics is all about functions, operations, and
relations between variables.
Free worksheet (pdf) and answer key on distinguishing functions from relations, stating domain and range and more. Web relations & functions worksheet. Cover, filled notes and worksheets. The
relation is not a function. Mathematics is all about functions, operations, and relations between variables.
Relations And Functions Worksheet - Then state the domain and range. Web understanding basic concepts of relations and functions worksheets. Free trial available at kutasoftware.com. Web relations
and functions | live worksheets. The relation is a function. Web relations & functions worksheet. Web feel free to download and enjoy these free worksheets on functions and relations.each one has
model problems worked out step by step, practice problems, as well as challenge questions at the sheets end. Print the notes on transparencies or display them on a whiteboard. Web determine if a
relation is a function. Web relations and functions worksheets with solutions.
Systems of two linear inequalities. Free trial available at kutasoftware.com. Web relations & functions worksheet. Determine whether the relation is a function, b. An ordered pair, commonly known as
a point, has two components which are the [latex]x [/latex] and [latex]y [/latex] coordinates.
Determine if the following relation is a function. Web relations and functions worksheets. Decide whether the graph represents y as a function of x. The relation is a function.
Web relations & functions worksheet. Free worksheet (pdf) and answer key on distinguishing functions from relations, stating domain and range and more. For each input, there is exactly one output.
Web determine if a relation is a function. Cover, fill in the blank notes and worksheets. The relation is not a function.
Evaluate The Function When X = 3, X = 0, And X =.
Print and staple all pages together to make a book. Web worksheet 4.1 relations and functions relations expressed as ordered pairs determine if the following relations are functions. Create your own
worksheets like this one with infinite algebra 1. Web functions and relations worksheet.
Then State The Domain And Range.
Web linear relations and functions. The relation is not a function. Free trial available at kutasoftware.com. 1), (2, 2), (3, 5), (4, 10), (5, 15)} function.
Web Relations And Function Worksheets Help Students To Understand Concepts Of Variable Functions, Calculus, Probability And Connect Them To The Reasoning Part Of Mathematics.
The set p is _____ and the remaining elements of p × p__________ (b) the function f (x) is defined as x2 x 2. The collection of all output values. Checking if an equation represents a function. Web
relations and functions worksheets with solutions.
Web Relations And Functions Worksheet | Live Worksheets.
Decide whether the relation is a function. Web relations and functions notes and worksheets for high school algebra 1. Both a relation and a function. Find the range of the relation. | {"url":"https://tunxis.commnet.edu/view/relations-and-functions-worksheet.html","timestamp":"2024-11-05T17:19:40Z","content_type":"text/html","content_length":"33977","record_id":"<urn:uuid:a4da403d-b52b-4c86-9208-b12c070dbf06>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00396.warc.gz"} |
When models are wrong, but useful
Applied mathematics provides a collection of methods to allow scientists and engineers to make the most of experimental data, in order to answer their scientific questions and make predictions. The
key link between experiments, understanding, and predictions is a mathematical model: you can find many examples in our case-studies. Experimental data can be used to calibrate a model by inferring
the parameters of a real-world system from its observed behaviour. The calibrated model then enables scientists to make quantifiable predictions and quantify the uncertainty in those predictions.
There are two important caveats. The first is that, for any one scientific phenomenon, there can be as many models as there are scientists (or, probably, more). So, which should you choose? The
second is that “all models are wrong, but some are useful". Often, those that include a high level of detail, and have the potential to make accurate quantitative predictions, are far too complicated
to efficiently simulate, or for mathematicians to analyse. On the other hand, simple models are often amenable to analysis, but may not include all the important mechanisms and so cannot make
accurate predictions. So how does one go from experimental observations to an understanding of the underlying science, when many different models of variable accuracy and tractability are available
to reason with?
In a forthcoming paper, accepted for publication in the SIAM/ASA Journal of Uncertainty Quantification, Oxford Mathematicians Thomas Prescott and Ruth Baker have presented a multifidelity approach to
model calibration. The model calibration procedure often requires a very large number of model simulations to ensure that the parameter estimate is reliable (i.e. has a low variance). Due to the
large number of simulations required, it may not be feasible to calibrate the accurate model within a reasonable timeframe. Suppose there exists a second model which is much quicker to simulate, but
inaccurate. The increased simulation speed means that it is more practical to calibrate this second model to the data. However, although the inaccurate model can be calibrated more quickly, its
inaccuracy means that the resulting estimate of the system’s parameters is likely to be biased (see Fig. 1).
The model calibration task aims to achieve an unbiased estimate of the model’s parameters that balances between two conflicting aims: ensuring that the resulting estimates have reasonably small
variance, but also that they are produced in a reasonably quick time. By combining each model’s strengths (the short simulation time of one, and the accuracy of the other), the key result of this
project deals with the following question: how can the inaccurate model be used to calibrate the accurate model? In particular, how much simulation time should be given to simulating each of the
models, and how should those simulations be combined? The result is a model calibration algorithm, tuned according to a formula that determines how to optimally share the computation effort between
the two models. This algorithm enables the accurate model to be calibrated with an unbiased parameter estimate and with a significantly improved trade-off between variance and speed.
The results can be applied to speed up the calibration of many types of complicated mathematical models against experimental data, whenever a simpler alternative model can be used to help. These
applications exist in fields as varied as ecology, physiology, biochemistry, genetics, engineering, physics, and mathematical finance.
Fig 1. In blue are 500 estimates of a parameter n, each of which was generated using 10,000 simulations of a slow, accurate model, and taking around 40 minutes each. In orange are 500 estimates that
each took only around 7 minutes to generate by simulating a fast, inaccurate model instead. These estimates are biased (i.e. centred around the wrong value). The estimates in green are where, for
every 10 simulations of the inaccurate model, we also produced one simulation of the accurate model. This allows us to remove the bias. But here we see the effect of the trade-off: while the total
simulation time is greatly reduced relative to the accurate model (to 10 minutes), this is at the cost of an increased variance (i.e. spread) of the estimate. | {"url":"https://www.maths.ox.ac.uk/node/34245","timestamp":"2024-11-08T13:46:12Z","content_type":"text/html","content_length":"54230","record_id":"<urn:uuid:306c1a9e-0a55-4763-8a9f-64a57c7891b8>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00129.warc.gz"} |
(PDF) Sarawak Wildlife Corridors
Author content
All content in this area was uploaded by Jedediah Brodie on Feb 26, 2016
Content may be subject to copyright.
Sarawak Wildlife
Wildlife corridors are essential for long-term conservation. Most national parks and other
protected areas are too small on their own to support viable populations of large animals,
so it is important that animals are able to disperse among parks in order to find food and
mates, avoid inbreeding, and shift locations in response to climate change. Wildlife
corridors facilitate this dispersal, and can prevent wildlife populations from becoming
increasingly isolated in national parks, which could lead to extinction and the loss of
some of Sarawak’s world-renowned biodiversity. We have analyzed the habitat selection
of several Protected and Totally Protected wildlife species in Sarawak and surrounding
areas using camera trap data and metapopulation models. Our goal was to identify the
most important wildlife corridors in Sarawak –the ones that contribute the most to
preventing the extinction. Our analysis shows that the most important dispersal route is
between Kayan Mentarang and Betung Kerihun. Protecting intact forest habitat along that
route would greatly enhance wildlife persistence. Our camera trapping shows that the
Hose Mountains are a very important wildlife area, with mammal diversity and
abundance among the highest anywhere in Malaysian Borneo. Gazetting the Hose
Mountains and Batu Laga National Parks remains a critical priority. Other important
corridors in Sarawak are those connecting the Hose Mountains to Betung Kerihun and
connecting Gunung Mulu, Ulu Temburong, and Usun Apau to Kayan Mentarang.
Koridor habitat adalah penting bagi pemuliharaan jangka panjang hidupan liar.
Kebanyakan saiz taman negara dan kawasan terlindung yang sedia ada adalah agak kecil
untuk menampung populasi mamalia besar dan sesetengah spesies burung. Koridor
habitat yang meyambung dua kawasan terlindung adalah penting bagi haiwan untuk
bergerak antara taman negara bagi mencari makanan dan pasangan. Koridor habitat ini
juga memudahkan proses penyebaran dan dapat mencegah populasi hidupan liar
daripada menjadi semakin terpencil di dalam sesebuah taman negara, di mana ianya
boleh mengakibatkan pembiakan dalam populasi yang kecil dan mungkin mengakibatkan
kepupusan. Kami telah menganalisa habitat-habitat terpilih bagi beberapa spesis hidupan
liar yang dilindungi dan dilindungi sepenuhnya khususnya di di Sarawak dengan
menggunakan kamera infra merah dan menganalisa data tersebut dengan menggunakan
model metapopulasi. Objektif kajian ini adalah untuk mengenalpasti koridor habitat yang
paling sesuai, penting dan strategik di Sarawak yang dapat mencegah atau
mengurangkan kepupusan mamalia-mamalia tersebut. Kajian ini menunjukkan bahawa
laluan yang paling penting adalah di antara Taman Negara Kayan Mentarang dan Betung
Kerihun. Melindungi habitat diantara Taman Negara tersebut akan mengurangkan
kebarangkalian kepupusan hidup hidupan liar yang terancam. Hasil dari kamera infra
merah juga menunjukkan Pergunungan Hose merupakan kawasan hidupan liar yang
sangat penting, dengan kepelbagaian mamalia yang tinggi jika dibanding dengan tempat-
tempat lain di Sabah dan Sarawak. Mewartakan Pergunungan Hose dan Taman Negara
Batu Laga sebagai kawasan terlindung adalah kritikal dan penting untuk hidupan liar.
Koridor lain yang penting dan harus diberi perhatian di Sarawak termasuk kawasan yang
menghubungkan Pergunungan Hose ke Betung Kerihun dan habitat yang
menghubungkan Gunung Mulu, Ulu Temburong, dan Usun Apau ke arah Kayan
I. GOAL
To protect wildlife corridors to help ensure that protected and charismatic animals in
Sarawak are not reduced to isolated populations with reduced gene pools or driven
extinct. Wildlife corridors are areas of forest that are long and wide enough, and are
properly managed, to link national parks and other protected areas into a network. This
will enable animals to disperse amongst these protected areas to find habitat, food, and
Sarawak has some of the highest biodiversity on earth, including many rare, beautiful,
and iconic species. The state also has several important protected areas where
threatened wildlife can live. However, many mammals and birds in tropical rainforests
live at low population density –meaning that even in pristine habitat, the abundance of
the animals will be relatively low. For example, the Sunda clouded leopard (Neofelis
diardi), the largest predator in Sarawak and a species only found in Borneo and Sumatra,
is so rare that only around 2 individual leopards occur per 100 km2 (Brodie and Giordano
2012, IUCN 2015). Based on these estimated population density, this means that even
large protected areas such as Gunung Mulu National Park and Lanjak Entimau Wildlife
Sanctuary could only support about 10 and 36 clouded leopards, respectively.
Orangutans are similarly limited –their average population density is higher than that of
clouded leopards but their distribution is much smaller (Husson et al. 2008). Decades of
research in conservation biology have shown that it is generally necessary to have
hundreds or thousands of individuals of a given species to ensure their long term survival
–any fewer and the species has a high probability of going extinct. Therefore, even
though Sarawak has several important, large protected areas, each of these places is
too small on its own to support populations of large wildlife species into the future.
Animals need to move around to find new spaces to live in, find mates, and to have
“escape routes” from certain areas that become unsuitable, for example too hot, from
climate change. If the protected areas in Sarawak could be linked (with each other
and with those in Kalimantan and Brunei) into a network, wildlife could move among
the various parks and would be much less likely to go extinct than if the animals
become confined to parks that are becoming increasingly isolated by conversion of
surrounding lands. Proper management is also essential to ensure that protected areas
and also wildlife corridors are safe from overhunting and illegal encroachment.
We have used camera traps to study wildlife in Sarawak, to determine the habitat
needs of different mammal species. We used these data in cutting edge analyses to
estimate animal dispersal from every protected area in Sarawak to every other through
the complex, heterogeneous landscapes in between.
The 43 protected area complexes in Sarawak have approximately 900 possible
wildlife corridors between them (Appendix A). It is probably infeasible to protect all of
those wildlife corridors, so it is important to prioritize them. We used newly developed,
cutting-edge analysis to determine which corridors are the most important to wildlife
persistence. We present here the prioritization of wildlife corridors, focusing on the most
important corridors. We propose that legal protection measures are needed to protect
these critical corridors. This will help ensure that wildlife in Sarawak does not go extinct
or become isolated populations.
Protecting landscape connectivity in these areas will also support Sarawak’s
commitment to:
(i) The Malaysian National Policy on Biological Diversity (Goal 3 Target 8 and Action 8.2;
(ii) The Convention on Biological Diversity (Target 11;
https://www.cbd.int/sp/targets/rationale/target-11/), to which Malaysia is a signatory.
(iii) The Heart of Borneo Tri-National agreement, to which Malaysia is a signatory.
(iv) The Sarawak Government’s target of achieving 1 million hectares of Totally Protected
Area in the State.
We used motion-triggered “camera traps” to study the ecology of wildlife in 15 study
areas across Sarawak (as well as Sabah), inside and outside of protected areas, in logged
and unlogged forest, in hunted and unhunted areas, and across elevations from sea level
to 1800 m. This took many years; altogether the data provide an unparalleled picture of
how different wildlife species respond to natural and human-caused variation in habitat
We then used new algorithms to estimate animal dispersal across complex
landscapes. Recently, ecologists have begun to use mathematical models developed for
electrical circuit theory to predict the flow of dispersing animals. Such models were
developed to understand the path that electrical current would take as it flows across a
complex surface from a battery to a ground. These models can predict the relative
proportion of current that would follow one possible route versus another, and how
changing the resistance of the surface would affect the current flow (e.g. Brodie et al.
2015a). Clearly animals are not electrons, and an individual animal might behave in
complex and unpredictable ways. But when considering hundreds or thousands of
individuals together (for jungle mammals and even people), current flow models can
predict average behaviour very well. These models have been verified with genetic data
for both mammals and plants (McRae and Beier 2007). The approach is analogous to
economics –individual people might make irrational choices, but on average people’s
financial decisions tend to follow predictable patterns. So these models allow us to
quantify the numerous and various paths that animals leaving one national park might
take to get to another national park. They also help us understand how changing the
quality of the habitat in between the parks will affect the total number of dispersing
animals that can successfully complete the trip. We mapped all of the protected areas in
Sarawak >100km2 in size and also the large parks on the Sarawak border in Indonesia
and Brunei. We used the circuit-based algorithms discussed above to estimate the routes
that animals leaving each of these protected areas would travel to get to every other
protected area (Figure 1; more details on these methods are provided in the Appendix).
But how do we decide which of the possible linkages between protected areas are
most important? It is probably infeasible to legally protect all of the possible wildlife
corridors among large protected areas in Sarawak, and it might not even be necessary to
do so. Some of the wildlife corridors are probably much more important than others. We
developed a method to prioritize wildlife corridors based on how much each corridor
would contribute to the long-term persistence of the Protected and Totally Protected
mammal species in Sarawak. To do this, we used a branch of ecology called
“Metapopulation Theory”. A metapopulation is a group of populations; for example, all of
the clouded leopards living in Gunung Mulu National Park would be one population, and
those living in Ulu Temburong another population. The metapopulation includes those
two populations, plus all the others in the state (and nearby parks on the Sarawak
border), and dispersal among these populations. We used our field-data-based dispersal
models to look at how much the protection of each individual wildlife corridor would
reduce the probability of extinction of the entire metapopulation. Then we did this for
every species. We ended up with an ordered list of wildlife corridors, ranked in terms of
how much each corridor will reduce the loss of wildlife from Sarawak.
Protecting the corridors that we identify as the most important will greatly
reduce the chances of extinction for wildlife, and showcase Sarawak’s commitment to
long-term conservation based on the best-available science.
Land use change affects every species differently. Some species rely very heavily
on undisturbed forest, such as many carnivores (Brodie et al. 2015b). If wildlife corridors
are not established, the dispersal of these species may be severely reduced. Other species
such as orangutan might still disperse through selectively logged forest (Husson et al.
2008), but their dispersal rates are much lower in disturbed habitat, so corridors of intact
forest could greatly improve their chances of long-term persistence in Sarawak. Other
species persist perfectly well in disturbed habitat (Brodie et al. 2015b), and so do not
need wildlife corridors. Importantly, our analysis integrates across all of these species, and
so our recommendations are based on management options that will have the best outcome
for the most Protected and Totally Protected species in Sarawak.
Based on the analyses described above (with more details provided in the Appendix), we
ranked the potential wildlife corridors across Sarawak (Figure 2). Our analysis showed
that the most important areas for wildlife connectivity are:
(1) Linking Lanjak Entimau and Betung Kerihun to Kayan Mentarang
(2) Linking the Hose Mountains to Betung Kerihun
(3) Linking Pulong Tau and Ulu Temburong to Kayan Mentarang
(4) Linking Usun Apau to Kayan Mentarang
Habitat conditions on the ground in these proposed corridors are variable. Most
have already been selectively logged. However, our previous results demonstrate that if
logged forests are allowed to recover without additional logging, they provide suitable
habitat for many mammal species (including some Protected and Totally Protected
species) relatively quickly.
(1) Gazette the proposed Hose Mountains and Batu Laga National Parks, as these are key
areas for connectivity and wildlife conservation at the state-wide scale. Our camera
trapping work demonstrates that even though portions of these parks have been heavily
logged, the areas support some of the highest mammal diversity and abundance in all of
Malaysian Borneo –on par with the famous Danum Valley and Maliau Basin Conservation
Areas in Sabah. Habitat protection here is essential, and could also help provide a refuge
for animals affected by the Bakun Dam impoundment. Combined with the spectacular
scenery, these parks have a vast and untapped ecotourism potential (Figure 3).
(2) The most urgent connectivity need in Sarawak –indeed in all of Borneo- is for habitat
connection that link Kayan Mentarang to the transboundary protected area complex
composed of Betung Kerihun and Lanjak Entimau (Finding 1 above). This could be
accomplished in several ways (Figure 4):
(Option 2.1 –Most ambitious) Establishment of a new protected area (e.g. National
Park or Conservation Area) between Hose Mountains, Betung Kerihun, and Kayan
Mentarang National Parks. This would address both Findings 1 and 2, above. Ideally
this protected area would include both ridgelines and river courses; this is important
because many dispersing animals follow ridges for travel while other species
(particularly riparian specialists) avoid ridges but instead prefer to move along rivers.
A park such as this would add ~6000 km2 of protected area to the state.
(Option 2.2 –Ambitious) The new national park that has been proposed by the
Sarawak government, Baleh, could be expanded to provide linkage between Kayan
Mentarang, Betung Kerihun, and the Hose Mountains, but provide less direct linkage
from the Hose Mountains to Kayan Mentarang. This would still address Findings 1 and
2 but require less land area than Option 1.1, adding 2400 km2 of protected area to the
state (in addition to the 670 km2 of the proposed Baleh National Park).
(Option 2.3 –Less Ambitious) Establishment of wildlife corridors linking Kayan
Mentarang to Betung Kerihun and, separately, Hose Mountains to Betung Kerihun. This
would still address Findings 1 and 2 but require still less land area than Option 1.2.
Narrow corridors would be less effective for sustaining dispersal be multiple species
because it will be difficult for a single corridor to encompass both ridgelines and
riparian areas. This would add 750 km2 of protected area to the state (in addition to
the 670 km2 of the proposed Baleh National Park).
(3) Establish legally protected wildlife corridors in northern Sarawak that link Gunung
Mulu, Ulu Temburong, Pulong Tau, and Kayan Mentarang National Parks. This addresses
Finding 3 above. WWF has undertaken a corridor to link Mulu, Pulong Tau, and Kayan
Mentarang via Layun and the Kubaan Puak Forest Management Unit –if this could be
extended to also include a corridor between Mulu and Ulu Temburong, it would achieve
this proposed action (Figure 5). Surrounding these corridors are sustainably managed
forests. The corridors, alongside the surrounding sustainably managed landscapes, also
help to conserve ecosystem services.
(4) Establish a legally protected wildlife corridor that links Usun Apau and Kayan
Mentarang National Parks. This addresses Finding 4 above. Usun Apau is a high plateau,
and so likely does not contain many riparian specialists –therefore the corridor that we
propose follows a ridgeline running between the two parks rather than river courses
(Figure 6).
The habitat connectivity proposals made here should be viewed as part of the broader
Systematic Conservation Planning (SCP) exercise led by WWF-Malaysia in collaboration
with UNIMAS, and the Sarawak Forest Department. The SCP has considered a suite of
attributes and used MARXAN algorithms to identify priority areas for conservation. The
wildlife corridors proposed here are implemented to the SCP via a post-MARXAN
analysis. As discussed above, most of the priority corridors identified here have also been
identified by the SCP analysis as areas important for conservation based on species
representation and other ecological attributes.
University of British Columbia
Jedediah Brodie, PhD
Departments of Zoology & Botany
and Biodiversity Research Centre
3529-6270 University Blvd
Vancouver, BC, V6T 1Z4, Canada
+1 (604) 827-0066
Universiti Malaysia Sarawak
Mohd Azlan Jayasilan Abd. Gulam Azad, PhD
Department of Zoology, Faculty of Resource
Science and Technology, Universiti Malaysia
Sarawak, 94300 Kota Samarahan, Sarawak,
082 582-938
World Wildlife Fund, Malaysia
Jason Hon, PhD
Program Leader
Sarawak Conservation Program
Sarawak Office, 7th Floor
Bangunan Binamas, Lot 138,
Section 54, Jalan Padungan
93100 Kuching, Sarawak
APPENDIX A –Technical details of the analytical methods
A.I. Justification (taken from Brodie et al., “Connecting science, policy, and
implementation for landscape-scale habitat connectivity”, In Press, Conservation Biology)
A.I.a. Which potential wildlife corridors are most important?
The fragmented nature of many landscapes means that numerous habitat patches exist,
and the number of possible wildlife corridors between patches becomes vast as the
number of patches increases. The 43 protected area complexes in Sarawak, for example,
have 903 possible corridors between them. (Connecting each of N patches to every other
patch in a network requires (N*(N-1))/2 links.) Given so many options for corridors, and
limited funding and political capital available to provide them all with legal protection
and on-site management, we need to prioritize which potential corridors are most
In some cases the patches that need to be connected are determined politically. In
Sabah, for example, forested habitat between the two large parks in the west, Mt.
Kinabalu (754km2) and the Crocker Range (1,399km2), was lost decades ago, leaving
them effectively isolated. The Sabah Parks department instigated the EcoLinc project
(Table 1) to reestablish connectivity. Likewise, the Sabah Forest Department wanted to
maintain connections between the three flagship conservation areas of central Sabah
(Imbak Canyon, Maliau Basin, and Danum Valley). Although not specifically stated as the
driver for this decision, scientists involved with Danum Valley Conservation Area had
emphasized the ecological importance of the elevational gradient represented by the
Silam-Danum-Maliau-Imbak forest complex (spanning 0 – 1600 m elevation) to support
possible range shifts in response to climatic changes. In Singapore, the hourglass-shaped
Eco-Link wildlife bridge was constructed across a major expressway to re-connect two
nature reserves that were fragmented in 1985 (Chong et al. 2010).
But in other cases, determining which habitat patches warrant connection by
protected forests is not as easy. Planning for the Central Forest Spine (CFS) Masterplan in
West Malaysia revealed 6,119 forest fragments in Peninsular Malaysia. Prioritization of
linkages between these patches was done with expert opinion, based on fragment size,
elevation, and known wildlife habitats. In Sarawak, there is less direction as to how to
prioritize linkages -many protected areas still have forest habitat between them (Gaveau
et al. 2014), and it is not clear which linkages are most important to metacommunity
The problem of corridor prioritization has received substantial attention, usually
in terms of each corridor’s contribution to overall connectivity of the landscape –the
proximate goal of the connectivity strategy. Prioritizations often employ graph theory, a
branch of mathematics based on the analysis of information flow across networks of
nodes (ecologically analogous to patches) and links between the nodes (i.e., corridors;
e.g., Urban et al. 2009, Rayfield et al. 2011). Using graph theory, corridors can be ranked
in terms of the contribution of each to overall connectivity (Urban et al. 2009, Rayfield et
al. 2011) or gene flow (Rozenfeld et al. 2008). However, several problems with these
approaches limit their utility. For example, rankings based on the contribution of each
patch or corridor to landscape connectivity are very sensitive to the connectivity metric
used (Laita et al. 2011, Ziolkowska et al. 2014), and many of the connectivity measures
have divergent and counterintuitive model behaviors (Laita et al. 2011). Overall,
connectivity measures derived from graph theory tend to focus on the dynamics of
immigration and local extinction and not on regional population size or persistence
(Moilanen 2011).
Corridors could also be prioritized based on their relative contributions to the
long-term persistence of metapopulations of the focal species (Nicholson et al. 2006,
Webb and Padgham 2013), thereby addressing the ultimate goal of the connectivity
strategy. This can be problematic, however, due to inconsistencies and difficulties in
estimating metapopulation persistence. Spatially-explicit population models are data and
computation intensive, making optimization across multiple species difficult (Burgman et
al. 2001). Instead, many studies use surrogates of metapopulation persistence rather
than direct estimations of persistence itself (Webb and Padgham 2013). Such surrogates
include species occurrence probabilities (Williams and Araujo 2000) or the proportion of
habitat occupied (Urban and Keitt 2001). Rankings based on the contribution of each link
to overall connectivity in a metapopulation context are also highly sensitive to the
extinction and colonization parameters (Gilarranz and Bascompte 2012), so their utility
may be limited for focal species whose demography is poorly known.
A.I.b. Where should the corridors be located?
Once we determine which habitat patches are to be connected, we need to determine
where exactly the wildlife corridors between them should go. The science is well
advanced for this issue and powerful modeling tools are available for determining
optimal corridor locations. For example, some models estimate the “least-cost path”
between two patches, which is a measure of potential connectivity (Beier et al. 2008).
Other models use electrical circuit algorithms to determine the paths of maximum
dispersal from one patch to another (McRae et al. 2008); these simulate random-walk
dispersal by numerous individuals of the focal species and determine how many
dispersers pass through each landscape pixel, thereby providing information on
functional connectivity. These models are often data intensive, and the necessary habitat
selection information may or may not be available at the outset of a corridor designation
process. The ongoing connectivity planning in Sarawak is based on camera-trapping-
based assessments of habitat quality for the various focal species (Brodie et al. 2015a,
Brodie et al. 2015b). The CFS Masterplan did not have explicit maps of habitat quality, but
accumulated a number of different proxy datasets (e.g., known wildlife habitats, human-
wildlife conflicts, fragment size) and then the final designation of corridor locations was
determined via a multi-criteria prioritization process and fine-tuned by expert opinion
(FDTCP 2010). In this case, a major focus was to reconnect fragmented major forest
blocks; hence, rough locations for the linkages were largely clear.
Expert estimation may be used where direct habitat selection information is
unavailable. In Singapore, least-cost path analysis has been carried out based on
vegetation structural analysis and expert estimation of habitat requirements of moderate
specialist small mammal, amphibians and reptiles, bird and butterfly species. The
proposed maps have been validated for presence or absence of species at selected
patches (Abdul Hamid and Tan 2014). However, it has to be kept in mind that specialist
and generalist species require different solutions for connectivity with short range
corridors for specialists to habitat and resource stepping stones for generalists (Dennis et
al. 2013).
A.II. Analysis (taken from Brodie, Mohd-Azlan, and Schnell, “How individual links affect
network stability in a large-scale, heterogeneous metacommunity”, In Review)
We studied an incipient metacommunity on the island of Borneo (~743,000 km2), which
straddles the equator and was, until the mid-twentieth century, almost entirely covered
by humid tropical rainforest (Bradshaw et al. 2009). Past and ongoing forest clearance
mean that the protected areas (e.g., national parks, wildlife reserves) on Borneo, as in
many parts of the world, are becoming increasingly isolated (Gaveau et al. 2014). We
examined two scenarios for the ongoing formation of a metacommunity from a
previously continuous community consisting of the 30 forest mammal species detected
through previously published, intensive, and wide-scale camera-trapping efforts (Brodie
et al. 2015a, Brodie et al. 2015b). In the first (“deforested matrix” scenario), we assumed
that protected areas (hereafter “patches”) would remain, or recover to, tall rainforest
while the intervening landscape matrix would be deforested, strongly reducing dispersal
permeability for all of the mammal species. In the second (“heterogeneous matrix”
scenario), we assumed that the landscape matrix would remain a heterogeneous mixture
of selectively logged forests and cleared areas (Gaveau et al. 2014), with impacts on
dispersal that varied among the mammal species (see below for how we estimated
species-specific dispersal).
As we do not account for species interactions in determining patch composition,
ours is a neutral metacommunity model (Logue et al. 2011). Neutral models are not
meant to imply that species interactions do not occur, but the models provide
interaction-less approximations that can estimate diversity patterns and dynamics
relatively precisely in a range of systems (Hubbell 2001, Kalyuzhny et al. 2015). In some
of the few mammal-mammal interactions studied in Borneo, trophic impacts of large
predators do not appear to influence occurrence or local abundance of prey or meso-
predators (Brodie and Giordano 2013).
Habitat patches in our system ranged from ~100 – 39,000 km2. Long-term
persistence of large-bodied species in the smaller patches is almost inconceivable
without emigration due to low population densities (Brodie and Giordano 2012). Though
the metacommunity is still forming via ongoing habitat loss and degradation in the matrix
and increasing isolation of the protected areas, we model dynamics of the
metacommunity once it has reached colonization-extinction equilibrium (sensu Hanski
1994). We note that such equilibrium assumptions are the rule rather than the exception
in many types of modeling studies (Williams et al. 2011).
The links in our network are potential dispersal routes between pairs of adjacent
patches. The strength of each link, proportional to the number of dispersers able to
successfully move between the patches, varied depending on whether there was a strip of
intact tall forest (i.e., a “corridor”) between the patches versus only matrix habitat. We
estimated link locations by determining the least-cost paths between each pair of
adjacent patches using Linkage Mapper software (McRae and Kavanagh 2011). We
buffered these paths by 2.5 km on either side and assumed that the 5 km wide wildlife
corridors would remain, or recover to, tall rainforest. Our previous research in the
system suggests that mammal diversity recovers rapidly after the cessation of logging
(Brodie et al. 2015b), and that 5 km wide corridors substantially enhance dispersal for
multiple mammal species (Brodie et al. 2015a).
We then employed circuit-theoretical analysis (McRae et al. 2008) to estimate the
resistance distance between each pair of patches in both of the landscape matrix
scenarios, for all species, with and without wildlife corridors, using Circuitscape software
(McRae and Shah 2009). The resistance distance between any two patches is the
minimum movement costs between them, accounting for multiple dispersal pathways
including through the wildlife corridor itself (when present) and also through the
surrounding landscape matrix (McRae et al. 2008). We set the resistance of a given 1 km2
landscape pixel to 1 in forest habitat (in patches as well as corridors) and to higher values
in selectively logged forest based on species-specific responses to selective logging from
their standardized regression coefficients in a previously published measure of logging
impacts (Brodie et al. 2015b). We set the resistance in deforested matrix to 100; this
value is higher than, but within the same order of magnitude as, the resistance value in
selectively logged forest for the most logging-sensitive species.
We used metapopulation capacity models to assess how much the presence of a
wildlife corridor at a given link affected long-term metapopulation persistence. The
importance of link i to long-term metacommunity stability (Ii) in an assemblage with S
species was:
=∑, −
Eq. 1
where ms,i and ms are the metapopulation capacities for species s with and without
(respectively) a corridor at link i. Metapopulation capacity is the leading eigenvalue of the
matrix M, calculated as f(Dij)AjAi0.5 for all patches where j ≠ i, and a 0 value for j = i. A is the
areas of patches j providing colonists to i, and distances Dij with a colonization ‘survival’
function f applied to it; this is a proxy for the persistence probability of the overall
metapopulation (Hanski and Ovaskainen 2000). In its original formulation,
metapopulation capacity led to illogical values for large-scale landscapes, giving 0
capacity values to single patch scenarios, no matter how large the patch, and even
penalizing the connecting of patches in networks (Schnell et al. 2013). We accounted for
this by incorporating into our models the within-patch recolonization term of Schnell et
al. (2013), i.e., f(Dij) = 1 when j = i.
To generate matrices of between-patch disperser survival probabilities for each
species, needed as input for the metapopulation capacity models, we assumed a decay in
the survival of dispersing individuals with increasing resistance distance. To determine
where survival became zero, we used a power law function for maximum dispersal
distances based on body size in mammals (Santini et al. 2013), and then determined
average resistance distances corresponding to these Euclidian distances.
Dispersal rates and disperser survival are poorly known (at best) for most species,
and unknown for the taxa in our system. Therefore, to assess whether potential errors in
our estimated dispersal matrix biased our conclusions, we performed another analysis
with a 5-fold increase in maximum dispersal distance for all species and determined the
correlation between these and the original results. We also assessed link importance via
removing individual corridors from a network where all adjacent patches were joined
(“removal model”) and compared estimated link importance values to those generated
using the “addition” method outlined in Equation 1.
Finally we examined how traits of the links related to their importance to
metacommunity stability. We used linear models to assess how the importance of links
varied with their length (measured as Euclidean distance between the two patches at
either end, cost-weighed distance, least-cost path length, or resistance distance),
centrality to the network (measured as current-flow centrality in circuit-theoretical
models; McRae 2012), the sizes of the patches they connected, and interactions among
these factors. We generated a set of models corresponding to all possible combinations of
the factors (each standardized to mean = 0 and variance = 1) and ranked them using AIC-
based model selection. Traits or interaction terms were considered to significantly affect
link importance if the 95% confidence intervals of their model-averaged regression
coefficients did not include zero.
APPENDIX B –Literature Cited
Abdul Hamid, A. R., and P. Y. Tan. 2014. Ecological networks: their application for
biodiversity conservation in an urbanized environment. Proceedings of the 20th
annual international sustainable development research conference, Norwegian
University of Science and Technology, Gloshaugen, Norway.
Beier, P., D. R. Majka, and W. D. Spencer. 2008. Forks in the road: Choices in procedures
for designing wildland linkages. Conservation Biology 22: 836-851.
Bradshaw, C. J. A., N. S. Sodhi, and B. W. Brook. 2009. Tropical turmoil: a biodiversity
tragedy in progress. Frontiers in Ecology and the Environment 7: 79-87.
Brodie, J., and A. J. Giordano. 2012. Density of the Vulnerable Sunda clouded leopard
Neofelis diardi in a protected area in Sabah, Malaysian Borneo. Oryx 46: 427-430.
Brodie, J. F., and A. Giordano. 2013. Lack of trophic release with large mammal predators
and prey in Borneo. Biological Conservation 163: 58-67.
Brodie, J. F., A. J. Giordano, B. G. Dickson, M. Hebblewhite, H. Bernard, J. Mohd-Azlan, J.
Anderson, and L. Ambu. 2015a. Evaluating multispecies landscape connectivity in
a threatened tropical mammal community. Conservation Biology 29: 122-132.
Brodie, J. F., A. J. Giordano, E. F. Zipkin, H. Bernard, J. Mohd-Azlan, and L. Ambu. 2015b.
Correlation and persistence of hunting and logging impacts on tropical rainforest
mammals. Conservation Biology 29: 110-121.
Burgman, M. A., H. P. Possingham, A. J. J. Lynch, D. A. Keith, M. A. McCarthy, S. D. Hopper,
W. L. Drury, J. A. Passioura, and R. J. Devries. 2001. A method for setting the size of
plant conservation target areas. Conservation Biology 15: 603-616.
Chong, K. Y., A. T. K. Yee, and C. K. Yeo. 2010. Biodiversity: linking Singapore's fragmented
habitats. Nature 465: 289-289.
Dennis, R. L. H., L. Dapporto, J. W. Dover, and T. G. Shreeve. 2013. Corridors and barriers
in biodiversity conservation: a novel resource-based habitat perspective for
butterflies. Biodiversity and Conservation 22: 2709-2734.
FDTCP. 2010. Final Report Central Forest Spine I: Masterplan for ecological linkages.
Federal Department of Town and Country Planning, Kuala Lumpur, Malaysia.
Gaveau, D. L. A., S. Sloan, E. Molidena, H. Yaen, D. Sheil, N. K. Abram, M. Ancrenaz, R. Nasi,
M. Quinones, N. Wielaard, and E. Meijaard. 2014. Four Decades of Forest
Persistence, Clearance and Logging on Borneo. Plos One 9.
Gilarranz, L. J., and J. Bascompte. 2012. Spatial network structure and metapopulation
persistence. Journal of Theoretical Biology 297: 11-16.
Hanski, I. 1994. A practical model of metapopulation dynamics. Journal of Animal Ecology
63: 151-162.
Hanski, I., and O. Ovaskainen. 2000. The metapopulation capacity of a fragmented
landscape. Nature 404: 755-758.
Hubbell, S. P. 2001. The unified neutral theory of biodiversity and biogeography.
Princeton University Press, Princeton, USA.
Husson, S. J., S. A. Wich, A. Marshall, R. L. H. Dennis, M. Ancrenaz, R. Brassey, and e. al.
2008. Orang-utan distribution, density, abundance and impacts of disturbance.in S.
Wich, S. Utami Atmoko, T. Mitra Setia, and C. van Schaik, editors. Orang-utans:
geographic variation in behavioral ecology. Oxford University Press, Oxford, UK.
IUCN. 2015. Sunda clouded leopard; Red List Assessment. International Union for the
Conservation of Nature.
Kalyuzhny, M., R. Kadmon, and N. M. Shnerb. 2015. A neutral theory with environmental
stochasticity explains static and dynamic properties of ecological communities.
Ecology Letters 18: 572-580.
Laita, A., J. S. Kotiaho, and M. Monkkonen. 2011. Graph-theoretic connectivity measures:
what do they tell us about connectivity? Landscape Ecology 26: 951-967.
Logue, J. B., N. Mouquet, H. Peter, H. Hillebrand, and G. Metacommunity Working. 2011.
Empirical approaches to metacommunities: a review and comparison with theory.
Trends in Ecology & Evolution 26: 482-491.
McRae, B. H. 2012. Centrality Mapper connectivity analysis software. The Nature
Conservancy, Seattle, USA. http://www.circuitscape.org/linkagemapper; accessed
31 March 2015.
McRae, B. H., and P. Beier. 2007. Circuit theory predicts gene flow in plant and animal
populations. Proceedings of the National Academy of Sciences of the United States
of America 104: 19885-19890.
McRae, B. H., B. G. Dickson, T. H. Keitt, and V. B. Shah. 2008. Using circuit theory to model
connectivity in ecology, evolution, and conservation. Ecology 89: 2712-2724.
McRae, B. H., and D. M. Kavanagh. 2011. Linkage Mapper connectivity analysis software.
The Nature Conservancy, Seattle, USA.
McRae, B. H., and V. B. Shah. 2009. Circuitscape user's guide. ONLINE. The University of
California, Santa Barbara, http://www.circuitscape.org; accessed 31 March 2015.
Moilanen, A. 2011. On the limitations of graph-theoretic connectivity in spatial ecology
and conservation. Journal of Applied Ecology 48: 1543-1547.
Nicholson, E., M. I. Westphal, K. Frank, W. A. Rochester, R. L. Pressey, D. B. Lindenmayer,
and H. P. Possingham. 2006. A new method for conservation planning for the
persistence of multiple species. Ecology Letters 9: 1049-1060.
Rayfield, B., M. J. Fortin, and A. Fall. 2011. Connectivity for conservation: a framework to
classify network measures. Ecology 92: 847-858.
Rozenfeld, A. F., S. Arnaud-Haond, E. Hernandez-Garcia, V. M. Eguiluz, E. A. Serrao, and C.
M. Duarte. 2008. Network analysis identifies weak and strong links in a
metapopulation system. Proceedings of the National Academy of Sciences of the
United States of America 105: 18824-18829.
Santini, L., M. di Marco, P. Visconti, D. Baisero, L. Boitani, and C. Rondinini. 2013.
Ecological correlates of dispersal distance in terrestrial mammals. Hystrix-Italian
Journal of Mammalogy 24: 181-186.
Schnell, J. K., G. M. Harris, S. L. Pimm, and G. J. Russell. 2013. Estimating Extinction Risk
with Metapopulation Models of Large-Scale Fragmentation. Conservation Biology
27: 520-530.
Urban, D., and T. Keitt. 2001. Landscape connectivity: A graph-theoretic perspective.
Ecology 82: 1205-1218.
Urban, D. L., E. S. Minor, E. A. Treml, and R. S. Schick. 2009. Graph models of habitat
mosaics. Ecology Letters 12: 260-273.
Webb, J. A., and M. Padgham. 2013. How does network structure and complexity in river
systems affect population abundance and persistence? Limnologica 43: 399-403.
Williams, J. L., M. M. Ellis, M. C. Bricker, J. F. Brodie, and E. W. Parsons. 2011. Distance to
stable stage distribution in plant populations and implications for near-term
population projections. Journal of Ecology 99: 1171-1178.
Williams, P. H., and M. B. Araujo. 2000. Using probability of persistence to identify
important areas for biodiversity conservation. Proceedings of the Royal Society B-
Biological Sciences 267: 1959-1966.
Ziolkowska, E., K. Ostapowicz, V. C. Radeloff, and T. Kuemmerle. 2014. Effects of different
matrix representations and connectivity measures on habitat network
assessments. Landscape Ecology 29: 1551-1570. | {"url":"https://www.researchgate.net/publication/295918971_Sarawak_Wildlife_Corridors","timestamp":"2024-11-13T05:29:33Z","content_type":"text/html","content_length":"485306","record_id":"<urn:uuid:0daf2a51-461c-4196-bfb9-0b77ed57e21d>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00327.warc.gz"} |
Why is homework given to students?
Homework teaches students to work independently and develop self-discipline. Homework encourages students to take initiative and responsibility for completing a task. Homework allows parents to have
an active role in their child’s education and helps them to evaluate their child’s progress.
Who was 0?
“Zero and its operation are first defined by [Hindu astronomer and mathematician] Brahmagupta in 628,” said Gobets. He developed a symbol for zero: a dot underneath numbers. “But he, too, does not
claim to have invented zero, which presumably must have been around for some time,” Gobets added.
What was used before zero?
Some of the first known zero symbols appear in Babylonian clay tablets of between 400 and 300 B.C.; there the zero was used to clarify the symbols for large numbers. Brahmagupta was the first to
write down the rules for arithmetic with zeros.
What if zero was not invented?
Without zero, modern electronics wouldn’t exist. Without zero, there’s no calculus, which means no modern engineering or automation. Without zero, much of our modern world literally falls apart. | {"url":"https://www.joialife.com/students-tips/why-is-homework-given-to-students/","timestamp":"2024-11-14T04:22:30Z","content_type":"text/html","content_length":"38558","record_id":"<urn:uuid:4f33308d-9af1-4797-b6e6-f0b439610ede>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00786.warc.gz"} |
Directed Acyclic Graph
A directed acyclic graph (DAG!) is a directed graph that contains no cycles. A rooted tree is a special kind of DAG and a DAG is a special kind of directed graph. For example, a DAG may be used to
represent common subexpressions in an optimising compiler.
+ +
. . . .
. . . .
* apply *<---- apply
. . . . . . | . .
. . . . . . | . |
a b f * a b | f |
. . ^ v
. . | |
a b |--<----
Tree DAG
expression: a*b+f(a*b)
Example of Common Subexpression.
The common subexpression a*b need only be compiled once but its value can be used twice.
A DAG can be used to represent prerequisites in a university course, constraints on operations to be carried out in building construction, in fact an arbitrary partial-order '<'. An edge is drawn
from a to b whenever a<b. A partial order '<' satisfies:
(i) transitivity, a<b and b<c implies a<c
(ii) non-reflexive, not(a < a)
These condition prevent cycles because v[1]<v[2]<...<v[n]<v[1] would imply that v[1]<v[1]. The word 'partial' indicates that not every pair or values are ordered. Examples of partial orders are
numerical less-than (also a total order) and 'subset-of'; note that {1,2} is a subset of {1,2,3} but that {1,2} and {2,3} are incomparable, i.e. there is no order relationship between them.
Constraints for a small building example are given below.
foundation-->frame-->| ^ |-->paint
| | |-->windows->|
|-->brickwork-->| |
Simplified Construction Constraints.
Note that no order is imposed between 'roof' and 'brick-work', but the plaster cannot be applied until the walls are there for it to stick to and the roof exists to protect it.
Topological Sorting
A topological-sort of a DAG is a (total) linear ordering of the vertices such that v[i] appears before v[j] whenever there is an edge <v[i],v[j]> (or whenever v[i]<v[j]).
-------> ----------> -------->
foundation->frame->roof brick-work->windows plaster doors->paint
Example Topological Sort.
Topological sorting can obviously be useful in the management of construction and manufacturing tasks. It gives an allowable (total) order for carrying out the basic operations one at a time. There
may be several different topological sorts for a given DAG, but there must be at least one. Note that there may be reasons to prefer one ordering to another and even to do some tasks simultaneously.
Topological Sorting Demonstration
Generate a DAG using the HTML FORM below and see the topological sort that results. |V| is the number of vertices in the DAG. The probability, pr, determines how dense the DAG is, on average:
There are two obvious strategies for topological sorting. One is to find an initial vertex, print it and remove it and repeat for the reduced DAG. The other is to find a final vertex, remove and save
it, repeat and finally print the vertices saved in reverse order. These strategies are equivalent as can be seen by reversing every edge and interchanging 'initial' and 'final'. An initial vertex has
no edges arriving at it and a final vertex has no edges leaving from it.
A final vertex can be found by following a path from an initial vertex until it is not possible to extend the path. In fact, a final vertex can be found by following a path from any vertex. If the
final edge is <x,z>, z is a final vertex and can be saved. For every other edge <x,y>, the process must be repeated from all such y. Vertex x then precedes y & z and so on back up to the start
vertex. This is a familiar backtracking process effected by a depth-first traversal (see Tree traversal), but here performed on a graph:
// visited[] is an array of Boolean
procedure Depth_First(i :Vertex)// Note similarities
if not visited[i] then // with Tree traversals.
visited[i] := true;
for all edge <i,j> // j must follow i in top-sort
end for;
save(i) // record or process Vertex i
end if
end Depth_First;
for all i :Vertex // initialise visited[]; been nowhere!
visited[i] := false
end for;
for all i :Vertex // try all possible starting points
end for
Depth-First Traversal of a Graph from a given Vertex.
This algorithm will also traverse an arbitrary graph. It should be compared with the various tree traversal algorithms. The exact coding of the algorithm, in particular the selection of 'each edge',
depends on the method of implementing the graph.
Critical Path Analysis
Critical-path analysis is another management problem. The critical-path of a complex task is the most time-consuming sequence of basic operations that must be carried out sequentially even allowing
for all possible parallelism. It defines the minimum time that the total task must take even if no expense is spared with the maximum allowed amount of activity going on simultaneously.
| 8 ^ 3 |
foundations-->frame--| | |
10* 7* | | v
-->brickwork--| |
7* | -->paint
|--->windows-->| 5*
| 5* |
| |
Example of Critical Path Analysis.
The critical path can be found by a modification of the depth-first search. | {"url":"https://allisons.org/ll/AlgDS/Graph/DAG/","timestamp":"2024-11-12T06:14:08Z","content_type":"text/html","content_length":"13461","record_id":"<urn:uuid:2e31986f-3934-4932-9015-b8ed428352cc>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00265.warc.gz"} |
The Law of Statistical Regularity is derived from the mathematical theory of probability.
The Law of Statistical Regularity is derived from the mathematical theory of probability. According to W.I.King, “the Law of Statistical Regularity formulated in the mathematical theory of
probability lays down that a moderately large number of items chosen at random from a very large group are almost sure to have the characteristics of the large group.” For example, if we want to find
out the average income of 10,000 people, we take a sample of 100 people and find the average. Suppose another person takes another sample of 100 people from the same population and finds the average,
the average income found out by both the persons will have the least difference. On the other hand if the average income of the same 10,000 people is found out by the census method, the result will
be more or less the same.
1. The item selected will represent the universe and the result is generalized to universe as a whole.
2. Since sample size is large, it is representative of the universe. 3. There is a very remote chance of bias.
Tags : Research Methodology - Questionnaire & Sampling
Last 30 days 7667 views | {"url":"https://arts.brainkart.com/article/the-law-of-statistical-regularity---statistical-laws-578/","timestamp":"2024-11-09T02:38:11Z","content_type":"text/html","content_length":"34476","record_id":"<urn:uuid:1d3a8337-d6c6-4b19-9291-57f2ce13c3ea>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00412.warc.gz"} |
Examples of Newtonian limits of relativistic spacetimes
A frame theory encompassing general relativity and Newton - Cartan theory is reviewed. With its help, a definition is given for a one-parameter family of general relativistic spacetimes to have a
Newton - Cartan or a Newtonian limit. Several examples of such limits are presented.
Classical and Quantum Gravity
Pub Date:
January 1997 | {"url":"https://ui.adsabs.harvard.edu/abs/1997CQGra..14A.119E","timestamp":"2024-11-04T16:03:42Z","content_type":"text/html","content_length":"33141","record_id":"<urn:uuid:24361926-0907-417b-bf4a-e57aeec3e72d>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00826.warc.gz"} |
What is an examples of continuous reinforcement?
What is an examples of continuous reinforcement?
An example of continuous reinforcement is a reward given to an animal every time they display a desired behavior. An example of partial reinforcement would be a child who is rewarded if they are able
to keep their room clean for a period time and receives a reward.
What is a continuous schedule of reinforcement?
Continuous Schedule. The continuous schedule of reinforcement involves the delivery of a reinforcer every single time that a desired behavior is emitted. Behaviors are learned quickly with a
continuous schedule of reinforcement and the schedule is simple to use.
What is an example of a continuous schedule?
e.g. A continuous schedule also works well with very young children teaching them simple behaviors such as potty training. Toddlers are given candies whenever they use the potty. Their behavior is
reinforced every time they succeed and receive rewards.
What is continuous reinforcement?
A continuous reinforcement (CRF) schedule is the most straightforward response-based schedule of reinforcement to which a subject can be exposed during an experimental session. From: Techniques in
the Behavioral and Neural Sciences, 1993.
What is the difference between partial and continuous reinforcement?
Continuous Reinforcement: A type of learning which reinforces a desired behavior every time it occurs. Partial Reinforcement: A type of learning which occasionally reinforces a desired behavior after
it happens, such as praising a student every other time a question is answered correctly.
What are some real life examples of reinforcement schedules?
An example of using schedule of reinforcements in a parenting scenario is when potty training a child. You might start by giving the child a piece of candy every time they use the potty
What is FR VR Fi VI?
Fixed Ratio (FR) Schedule. Variable Ratio (VR) Schedule. Fixed Interval (FI) Schedule. Variable Interval (VI) Schedule.
What is the problem with continuous reinforcement?
Continuous reinforcement also has a high rate of extinction. Stopping this reinforcement schedule results in the response disappearing faster than when a partial reinforcement schedule is stopped.
This is because the subject gives up when they realize they no longer receive a reward.
What is the difference between continuous reinforcement?
Reinforcement: A consequence applied to encourage a certain behavior to increase the chance of it happening again. Continuous Reinforcement: A type of learning which reinforces a desired behavior
every time it occurs.
What is a disadvantage of continuous reinforcement?
The disadvantage to continuous reinforcement. is difficult to maintain over a long period of time due to the effort of having to reinforce a behavior each time it is performed. Also, this type of
reinforcement is quick to be extinguished.
What are the benefits of continuous reinforcement?
The advantage to continuous reinforcement is that the desired behavior is typically learned quickly. However, this type of reinforcement is difficult to maintain over a long period of time due to the
effort of having to reinforce a behavior each time it is performed.
What is the difference between continuous and partial reinforcement?
What is FR and VR?
What is continuous reinforcement when should it be used?
In continuous reinforcement, the desired behavior is reinforced every single time it occurs. 1 This schedule is best used during the initial stages of learning to create a strong association between
the behavior and response. Imagine, for example, that you are trying to teach a dog to shake your hand.
What is the advantage of continuous reinforcement?
When should continuous reinforcement be used?
Continuous reinforcement is best used when a person or an animal is learning a behavior for the first time. It can be difficult to practice this in the real world, though, because it might not be
possible to observe the behavior you want to reinforce every time it happens.
What does vr3 mean in ABA?
Variable-Interval Schedule (VI) Again the time interval can be any number but must be defined. Following a “VI3” schedule of reinforcement, a teacher could make reinforcement available after 2
minutes, then 5 minutes, then 3 minutes, then 4 minutes and finally 1 minute.
How do you explain VR?
Virtual Reality, or VR, is the use of computer technology to create a simulated environment which can be explored in 360 degrees. Unlike traditional interfaces, VR places the user inside the virtual
environment to give an immersive experience.
What is fr1 in ABA?
in conditioning, an arrangement in which reinforcement is given after a specified number of responses. “FR 1” means that reinforcement is given after each response; “FR 50” means that reinforcement
is given after every 50 responses; and so on.
What does R+ mean in ABA?
Reinforcement (R+, Sr+, Sr-) = a consequent event that occurs after a response and increases the likelihood of the behaviour increasing or happening again. In other words, behaviour was reinforced
and so person is likely to do it again.
What is an example of continuous reinforcement?
Continuous reinforcement or Continuous reinforcement schedule is regarded as one of the simpler forms of schedule of reinforcement; nevertheless, it is incredibly systematic. Examples of Continuous
Reinforcement Giving a child a chocolate every day after he finishes his math homework.
Can negative reinforcement be used in continuous reinforcement schedule?
The third example mentioned here suggests that negative reinforcement can also be used in continuous reinforcement schedule. The expectation of detention shapes the mindset of the child that he must
submit his homework regularly in order to avoid detention.
What is the expected value of a continuous random variable?
Expected value of a continuous random variable : Expected value or Mathematical Expectation or Expectation of a random variable may be defined as the sum of products of the different values taken by
the random variable and the corresponding probabilities. For example, if a continuous random variable takes all real values between 0 and 10,…
Continuous reinforcement requires that you do it each time, but partial reinforcement means you only reinforce part of the time. So, if you only give your dog a treat every third time he rolls over,
you’re partially reinforcing that behavior. | {"url":"https://www.tag-challenge.com/2022/12/04/what-is-an-examples-of-continuous-reinforcement/","timestamp":"2024-11-05T16:56:31Z","content_type":"text/html","content_length":"42842","record_id":"<urn:uuid:ec7c68b8-814d-47c1-b269-e0376299c308>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00659.warc.gz"} |
Obituary for Bent Fuglede
22 December 2023
Obituary for Bent Fuglede
With the death of Bent Fuglede, Denmark has lost one of its very important mathematicians and the Department of Mathematics at the University of Copenhagen has lost a highly valued teacher,
researcher and colleague.
Bent Fuglede (8.10.1925 - 7.12.2023) is survived by his son Einar and daughter-in-law Dorthea and two grandsons. Bent's wife Ólafia Einarsdóttir of more than 60 years died in 2017.
Bent received the high school diploma from Skt. Jørgens Gymnasium in 1943, and he graduated from University of Copenhagen as mag. scient. and cand. mag. in 1948. After employment as scientific
assistant at the Polytechnical Highschool, he was appointed associate professor at the Department of Mathematics at the University of Copenhagen. Having defended the dissertation “Extremal Length and
Closed Extensions of Partial Differential Operators”, he was appointed professor at Denmark's Technical Highschool in 1960.
He returned to the University of Copenhagen as professor in 1965. He started by preparing excellent lecture notes for Mathematics 6, the major course in functional analysis that completed the first
part of the cand. scient. study in mathematics. In the years that followed, he taught at all levels, always with a clear and well-worked presentation and with an easy-to-read blackboard script. His
accompanying lecture notes often contained new results and elegant proofs. In 1992 he allowed himself to retire, but continued as an active researcher and professor emeritus at the Department of
Mathematics and was active until shortly before his death.
Bent spent the years 1949-51 in the USA and stayed partly at Stanford University and partly at the Institute for Advanced Study in Princeton. It was during the visit at Stanford in November 1949 that
“Fuglede's Theorem”, was communicated by John von Neumann to the Proceedings of the National Academy of Sciences. This result was the first and probably most famous of Fuglede's many theorems. lt
states, that if a bounded operator on a Hilbert space commutes with a normal operator, then it also commutes with the adjoint operator. It is a surprisingly deep result, which today is included in
all advanced textbooks on operators in Hilbert spaces.
During Bent's stay at the Institute for Advanced Study, he started collaboration with the American mathematician Richard V. Kadison on, among other things, determinant theory in infinite dimension.
This led to Kadison coming to Copenhagen, and he gained a lifelong connection to Denmark, because he met his wife here. The strong position of operator algebra at our Institute is undoubtedly related
to the influence of Kadison. The Fuglede-Kadison determinant continues to be used, e.g. by Lück in his development of $L^2$-cohomology and in the construction of the Brown measure for non-normal
Subsequently, Bent studied partial differential operators, which led to the aforementioned doctoral dissertation, and also to research on potential theory. This theory deals with the solutions to
Laplace's equation, i.e. harmonic functions, Newton potentials, etc. In this subject, he quickly became a leading figure worldwide, and he often gave lectures at the potential theory seminar in Paris
“Séminaire Brelot-Choquet-Deny”, named after the three leading French mathematicians in the subject. In the 1960s, Brelot and his students had developed an abstract potential theory called harmonic
spaces. A harmonic space is a locally compact topological space, where a vector space of continuous real-valued functions is specified for each open subset, modelling the harmonic functions in open
sets of a Euclidean space. Brelot had established a few fundamental axioms that these abstract spaces had to satisfy in order that many of the key results of potential theory could be deduced. The
significance of the abstract theory was that one could obtain these key results for solution spaces to PDEs related to the Laplace operator by simply checking the axioms.
In harmonic spaces superharmonic functions can be introduced, but as in the classical theory, these are not always continuous, but only lower semi-continuous. In 1940, Henri Cartan got the idea to
replace the classical Euclidean topology by a topology having more open sets, so that superharmonic functions became continuous. Cartan called this new topology the fine topology. The fine topology
also appears in the theory of Brownian motion and similar stochastic processes, because a finely open set can be characterized as follows: If the process is in the finely open set at a specific time
$t_0$, then it remains there almost surely for a positive period of time. When Bent succeeded proving that the fine topology is connected and locally connected, he was inspired to construct what he
called ”finely harmonic functions in finely open sets”, and this was the start of a new era in potential theory and complex analysis. Bent published a comprehensive presentation of the theory in a
Springer Lecture Notes: “Finely Harmonic Functions”, 1972, but already at the International Congress of Mathematicians (ICM) in Nice, 1970, he gave an invited lecture on it.
The topic became ”hot”, and was further developed in the following years by Bent himself and with contributions from many others. Bent experienced the great honour that Heinz Bauer's plenary lecture
on potential theory at the ICM in Vancouver, 1974, was largely about Fuglede's fine harmonic theory.
Just as harmonic functions in the plane are closely related to holomorphic functions, Bent succeeded in developing a rich theory for finely holomorphic functions in finely open sets of the complex
plane. It also turned out that Fuglede's theory was a natural continuation of Borel's theory for ”monogenic functions”, which was published as a monograph as early as 1917.
Bent's work inspired a number of young mathematicians from abroad to new results, e.g. Terry Lyons (England) and Bernt Øksendal (Norway). Bent corresponded with a large number of mathematicians from
home and abroad, and he was always generous with advice and improvements to manuscript drafts. He had a rare ability to penetrate the heart of a mathematical problem, and through this, he was often
able to provide a surprising solution to the problem.
In 1974, Bent wrote a paper in the Journal of Functional Analysis: ”Commuting self-adjoint partial differential operators and a group theoretic problem”. It has today received 298 citations. The
work, which was inspired by a question from Irving Segal, associates a geometric property of an $n$-dimensional domain $G$ with a property of the Hilbert space $L^2(G)$. The result is later known as
Fuglede's Conjecture. The Danish mathematician Steen Pedersen from Ohio has published a number of papers on the subject and later Fields medal recipient Terence Tao published two papers: “Fuglede's
Conjecture holds for convex planar domains”, (2001) and ”Fuglede's Conjecture is false in 5 and higher dimensions”, (2003).
Bent decided to retire in the spring of 1992. He felt his health a little failing after a year's stay at the Institute for Advanced Study in Princeton, where he was accompanied by Ólafia.
Fortunately, his health turned out to be fine and Bent achieved an emeritus period of approx. 30 years, during which he was very active. Of the 114 of his works mentioned in Math. Sci. Net, approx.
40 is written after he became emeritus. During this period, he made significant contributions to many different areas, e.g. isoperimetric inequalities, Riemannian manifolds and moment problems. It
should also be mentioned that he published a research monograph together with James Eells: ”Harmonic maps between Riemannian Polyhedra”, Cambridge Tracts in Mathematics vol. 142, 2001. Together with
Natalia Zorii from Kyiv, he has since 2016 written 8 papers dealing with energy problems with respect to Riesz kernels, an impressive achievement for a person over 90 years of age.
Bent was a member of the Royal Danish Academy of Sciences and Letters, the Finnish Academy of Sciences and the Bavarian Academy of Sciences.
On his retirement, the department celebrated him with a symposium. I had the pleasure of telling about Bent's mathematical results, and there were lectures by guests from several countries. The
Danish Mathematical Society celebrated Bent's 70th birthday and named him an honorary member. For a number of years, he was a member of the editorial board of the journal Expositiones Mathematicae.
With his great and broad knowledge in many areas of mathematics, he was an obvious member of assessment committees at home and abroad. He has thereby had a great influence on how the department of
mathematics developed.
Fuglede was active in bringing foreign capacities to the Institute on a one-year basis, including Masanori Kishi (Nagoya) and Mikhail Sodin (Kharkiv), both specialists in potential theory. Their
appointments have had a great impact on my own research, and Sodin has subsequently been a regular guest at the Institute.
All theories have their time and then the interest in them decreases. This also happened to the abstract theory of harmonic spaces. Fuglede's theory for finely harmonic functions was formulated in
Brelot's abstract theory. It was therefore difficult for new generations to utilize and appreciate Fuglede's theory, and this made Sodin emphasize at regular intervals that Bent should write a new
presentation of the theory about finely harmonic functions, but only in connection with the classical potential theory.
Sodin succeeded in convincing Bent about the project, and he started to prepare a manuscript, which should include the results of his work on classical fine potential theory done in the period
between 1970 and 2010. Bent had for many years corresponded with the Moroccan mathematician Mohamed El Kadiri about potential theory and invited him to collaborate on the book project "Classical Fine
Potential Theory". They agreed on the collaboration and also to enlarge the scope to include results on finely holomorphic functions in one or several complex variables as well as a chapter on fine
pluripotential theory. They also decided to include their joint recent research on Martin boundary for finely open sets. At the end of October 2023, the book was sent to Springer Nature.
The mathematical community can look forward to the publication of this fine closing of Bent Fuglede's career.
Glory be to his memory.
Christian Berg, 27.12.2023. | {"url":"https://www.math.ku.dk/english/about/news/obituary-for-bent-fuglede/","timestamp":"2024-11-03T03:18:09Z","content_type":"text/html","content_length":"46974","record_id":"<urn:uuid:192a9eb0-0306-4ff8-bf0c-f50c0b72e96a>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00477.warc.gz"} |
Geofence Transition Bitwise Logic
You want to set multiple Geofence transitions either programmatically or through user input, but you can only call setTransitionTypes() once.
Use bitwise OR on each transition type to get an int that can represent every combination of transitions.
public void createGeoFence(float rad, double lat, double lng,
boolean enterChecked, boolean exitChecked, boolean dwellChecked) {
//Set transitions based on what the user selects.
int transitions = 0;
transitions = Geofence.GEOFENCE_TRANSITION_ENTER;
transitions = transitions | Geofence.GEOFENCE_TRANSITION_EXIT;
transitions = transitions | Geofence.GEOFENCE_TRANSITION_DWELL;
mGeofenceList.add(new Geofence.Builder()
.setCircularRegion(lat, lng, rad)
.setExpirationDuration(86400000) //24 hours
.setNotificationResponsiveness(5000) //5 seconds
The above sample method gets passed everything you need to construct a GeoFence. The booleans are the results of checking if the check boxes are checked, they will be true for each box that is
checked, and false otherwise. I included a screenshot of the checkboxes for a visual aid.
Note that ints are primitive and can't be null, so if they are not assigned a value they are 0. Further, a transitionType of 0 will throw an error. Make sure to handle this. In another part of my
code(not shown) I simply made it so that if the Dwell, Enter, and Exit controls were all unchecked the user would not be allowed to create the GeoFence and would be informed of their error.
Google has a pretty clever way for setting the transitions for a geofence. They use bitwise logic. So we should run with that. The pipe '|' operator is a bitwise OR.
Maybe you want to programmatically set your transitions on GeoFences, but you are having issues with the fact that you can only call setTransitionTypes() once for each GeoFence Builder. For me, I
wanted to take user input, and add a transition type for each transition type the user selects. Instinctually you may want to examine each user input (checkboxes in this case) and then call
setTransitionsTypes() for each one that the user enabled. But you can't. You can only call setTransitionTypes once, and you can only give it one argument.
So you must build your transition types before calling setTranstionTypes(). One simple and readable way to do this is a bitwise OR. According to the Android API: GEOFENCE_TRANSITION_ENTER has a value
of 1 = 001 binary GEOFENCE_TRANSITION_EXIT has a value of 2 = 010 binary GEOFENCE_TRANSITION_DWELL has a value of 4 = 100 binary
Why these values? Look at the binary logic. Adding any combination of values will produce a unique result.
If we do an OR on ENTER and EXIT we get
Which equals 3.
If we do EXIT and DWELL we get
Which equals 6.
So through bitwise OR you will always get a unique int between 1 and 7, inclusive. This explains why DWELL doesn't have a value of 3. Because that would mean there are multiple ways to get a value of
Let's pretend for a minute that DWELL = 011.
If you set that as the transition type, it's not clear if you want a DWELL, or if you want ENTER and EXIT transitions; these all evaluate to 3. And what if you wanted to set your transition types to
DWELL and ENTER? There would be a bitwise OR between DWELL and ENTER, which would result in a value of 3 yet again, and the GeoFence wouldn't know if you wanted transitions on dwell, or dwell and
Bitwise logic below, pretending DWELL is 011 and ENTER is 001 (which it really is).
Now lets go back to real life where DWELL = 100 binary. How many different combinations of GeoFence transitions can you have?
That would be (2^3) -1 = 7. The minus 1 is because we can't send a value of 0 to setTransitionTypes without it throwing an error.
So what if Google wanted to add another transition type to the GeoFence? What's the lowest positive value they could use to not interfere with their schema?
It would just be the next highest power of 2, which is 8, also known as 1000 binary.
The number of combinations this would allow is (2^4) -1 = 15.
You may have noticed that in the entire scenario discussed in the post you could just add the values together and you will come up with the same int. The reason I chose the bitwise OR was because I
liked the readability of it, and I liked keeping with the same logic Google is using. Adding up 3 ints and passing them to setTransitionTypes may appear to be slightly simpler code, but it makes very
little sense if you don't already understand how setTransitionTypes works. | {"url":"https://blog.brendanlacanette.com/2015/06/geofence-transition-bitwise-logic.html","timestamp":"2024-11-05T20:03:13Z","content_type":"application/xhtml+xml","content_length":"56404","record_id":"<urn:uuid:eb0a8def-ea32-4cf6-9b1b-2f645c2dda11>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00083.warc.gz"} |
Mole Fraction vs. Weight Percent: What's the Difference?
Mole fraction is the ratio of moles of a component to the total moles in a mixture, while weight percent is the ratio of a component's weight to the total mixture's weight, multiplied by 100.
Key Differences
Mole fraction, denoted by a dimensionless number, measures the proportion of a constituent relative to the total number of moles in a mixture. Weight percent, however, expressed as a percentage,
indicates the mass of a component as a proportion of the total mass of the mixture.
The mole fraction is independent of temperature and pressure because it is a ratio of counts of molecules. Weight percent can vary with changes in temperature or pressure, as these can affect the
density and therefore the weight of the components.
Calculating mole fraction involves counting moles, which are a measure of the number of particles in a substance. To calculate weight percent, one must weigh the components, considering the mass of
each substance in the mixture.
Mole fraction is particularly useful in gas mixtures and when dealing with reactions and stoichiometry. Weight percent is commonly used in solutions and is practical in industries where weight
measurements are more feasible.
Mole fraction gives a direct representation of the composition in terms of molecule counts, while weight percent provides a more intuitive understanding of composition in terms of mass.
Comparison Chart
Ratio of moles of a component to total moles
Ratio of component's weight to total weight
Dimensionless number
Percentage (%)
Independent of temperature and pressure
Can vary with temperature and pressure
Calculation Basis
Based on number of particles (moles)
Based on mass of substances
Common Usage
Used in gas mixtures and stoichiometry
Used in solutions and practical applications
Mole Fraction and Weight Percent Definitions
Mole Fraction
Mole fraction is used to express the concentration of a component in a mixture.
The mole fraction of the solute determined its role in the chemical reaction.
Weight Percent
Weight percent measures the mass of a component as a part of the total mass of a mixture.
The weight percent of salt in the solution was 10%.
Mole Fraction
It is a way of expressing composition in a multi-component system.
In the alloy, the mole fraction of each metal was crucial for its properties.
Weight Percent
Weight percent is used in formulations to specify the amount of each ingredient.
In the paint, the pigment had a weight percent of 2%.
Mole Fraction
Mole fraction, a unitless quantity, helps in calculating partial pressures in gases.
We used the mole fraction to find the partial pressure of each gas in the mixture.
Weight Percent
It expresses the concentration of a substance in terms of its weight.
The weight percent of the active ingredient in the cream was 5%.
Mole Fraction
It represents the fraction of a component in terms of its molecular count.
The mole fraction of oxygen in air is approximately 0.21.
Weight Percent
Weight percent gives an intuitive understanding of a mixture's composition by mass.
The weight percent of each metal in the alloy determined its hardness.
Mole Fraction
Mole fraction is the proportion of a substance's moles to the total moles in a mixture.
In the mixture, the mole fraction of nitrogen was calculated to be 0.7.
Weight Percent
It's a common method to describe the composition of mixtures in industries.
Manufacturers listed the components by weight percent on the product label.
What is mole fraction?
It's the ratio of the number of moles of a component to the total number of moles in a mixture.
How is mole fraction calculated?
By dividing the moles of a component by the total moles in the mixture.
What is weight percent?
The percentage ratio of a component’s weight to the total weight of a mixture.
How is weight percent determined?
By dividing the weight of a component by the total weight of the mixture and multiplying by 100.
Is mole fraction affected by temperature?
No, it remains constant regardless of temperature changes.
How does mole fraction relate to chemical reactions?
It helps in stoichiometric calculations and understanding reactant proportions.
In which applications is weight percent commonly used?
In solutions, manufacturing, and any situation where weight-based concentration is relevant.
Can mole fraction exceed 1?
No, it's always a value between 0 and 1.
Why is mole fraction used in gas mixtures?
It simplifies calculations like partial pressures and is independent of conditions.
Is mole fraction a unitless measure?
Yes, it’s expressed as a dimensionless ratio.
Does mole fraction have any limitations?
It may be less intuitive than mass-based measures for some practical applications.
How important is mole fraction in gas laws?
It’s crucial for calculations involving gas mixtures and partial pressures.
Does weight percent change with temperature?
Yes, it can change due to density variations caused by temperature fluctuations.
Can weight percent be more than 100%?
No, it’s always within 0 to 100%.
Why is weight percent useful in industry?
It provides a straightforward way to convey composition by mass, which is practical for formulation and production.
Can mole fraction be used in solids?
Yes, it's applicable to any phase of matter.
Is weight percent suitable for volatile components?
Care must be taken as volatiles can affect the accuracy of weight measurements.
Is weight percent used in pharmaceutical formulations?
Yes, to specify the concentration of each component in drugs.
Is weight percent easy to measure?
Yes, as it involves straightforward weighing, it’s easy to measure.
What’s a common use of weight percent in consumer products?
Listing ingredient concentrations in products like cosmetics and food.
About Author
Written by
Janet White
Janet White has been an esteemed writer and blogger for Difference Wiki. Holding a Master's degree in Science and Medical Journalism from the prestigious Boston University, she has consistently
demonstrated her expertise and passion for her field. When she's not immersed in her work, Janet relishes her time exercising, delving into a good book, and cherishing moments with friends and
Edited by
Aimie Carlson
Aimie Carlson, holding a master's degree in English literature, is a fervent English language enthusiast. She lends her writing talents to Difference Wiki, a prominent website that specializes in
comparisons, offering readers insightful analyses that both captivate and inform. | {"url":"https://www.difference.wiki/mole-fraction-vs-weight-percent/","timestamp":"2024-11-06T12:20:19Z","content_type":"text/html","content_length":"127228","record_id":"<urn:uuid:8fb7ca3b-d350-47a0-b581-d33722fd5ed1>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00389.warc.gz"} |
Mail Archives: djgpp/2000/03/20/15:49:57
delorie.com/archives/browse.cgi search
Mail Archives: djgpp/2000/03/20/15:49:57
From: Damian Yerrick <Bullcr_pd_yerrick AT hotmail DOT comRemoveBullcr_p>
Newsgroups: comp.os.msdos.programmer,comp.os.msdos.djgpp
Subject: Re: Is DOS dead?
Organization: Pin Eight Software http://pineight.8m.com/
Message-ID: <fm1ddsc2vpue4j23lroq83rhghn628a26t@4ax.com>
References: <Pine DOT SUN DOT 3 DOT 91 DOT 1000314094608 DOT 4527E-100000 AT is> <38CE19B2 DOT 69C7 AT gmx DOT net> <Pine DOT SUN DOT 3 DOT 91 DOT 1000315104128 DOT 17230G-100000 AT is> <
38CF7CED DOT 505A AT gmx DOT net> <Pine DOT SUN DOT 3 DOT 91 DOT 1000315185511 DOT 20407P-100000 AT is> <38D0B4D1 DOT 380F AT gmx DOT net> <Pine DOT SUN DOT 3 DOT 91 DOT
1000316172141 DOT 5735H-100000 AT is> <38D11897 DOT 2ED0 AT gmx DOT net> <Pine DOT SUN DOT 3 DOT 91 DOT 1000319103934 DOT 13691A-100000 AT is> <38D4BCD3 DOT 767ACF49 AT gmx DOT net
> <Pine DOT SUN DOT 3 DOT 91 DOT 1000319161058 DOT 15795B-100000 AT is> <38D508CC DOT 5F3EA970 AT gmx DOT net> <Pine DOT SUN DOT 3 DOT 91 DOT 1000320114850 DOT 24837F-100000 AT is>
X-Newsreader: Forte Agent 1.7/32.534
MIME-Version: 1.0
Lines: 21
X-Trace: /KiKeWWjlaqO7LgnGRzNVqMhWYEJLNJMjZrle+Z7cAvCCL50QBxnVKXEuU8JqWhFEkW7vJ/0NATD!bYx6YYFSI8VHERM0RC5nTSeHl3TSphp6/Kr64u1CGAiL31ZqFZNf7OqD1t0cPWyAkyoBPr5QazHA!tlMLrQk=
X-Complaints-To: abuse AT gte DOT net
X-Abuse-Info: Please be sure to forward a copy of ALL headers
X-Abuse-Info: Otherwise we will be unable to process your complaint properly
NNTP-Posting-Date: Mon, 20 Mar 2000 20:18:55 GMT
Distribution: world
Date: Mon, 20 Mar 2000 20:18:55 GMT
To: djgpp AT delorie DOT com
DJ-Gateway: from newsgroup comp.os.msdos.djgpp
Reply-To: djgpp AT delorie DOT com
On Mon, 20 Mar 2000 11:49:14 +0200, Eli Zaretskii <eliz AT is DOT elta DOT co DOT il>
>I don't think we are *that* desperate to support NT and W2K quirks to
>go to those lengths. I'm quite sure that eventually, someone will
>come up with a much easier and less painful solution to tell NT from
Isn't there a "Get version of Windows" interrupt call? Wouldn't it
return 4.x for NT 4.x and 5.x for 2000? Allegro seems to be able to
get the version number of the running GUI quite nicely. Or is it
implemented on the 3.1/9x fork only?
Damian Yerrick http://yerricde.tripod.com/
Comment on story ideas: http://home1.gte.net/frodo/quickjot.html
AOL is sucks! Find out why: http://anti-aol.org/faqs/aas/
View full sig: http://www.rose-hulman.edu/~yerricde/sig.html
This is McAfee VirusScan. Add these two lines to your .sig to
prevent the spread of .sig viruses. http://www.mcafee.com/
webmaster delorie software privacy
Copyright © 2019 by DJ Delorie Updated Jul 2019 | {"url":"https://delorie.com/archives/browse.cgi?p=djgpp/2000/03/20/15:49:57","timestamp":"2024-11-04T04:21:54Z","content_type":"text/html","content_length":"8198","record_id":"<urn:uuid:bf08bd8d-5c3c-4cc6-a57e-11c9b26856cb>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00426.warc.gz"} |
May 2023
NumberPhile did a video on potentially infinite Knight Tours that I found quite interesting. I wanted to test out literate programming using Quarto to solve it and I did (result) 🙂
The RStudio integration with Quarto is really good and for a problem like this it really helps to keep your thoughts and code in the same place. Plots like this one that shows the complete tour are
displayed inline and the experience is comparable with a Jupyter notebook with the advantage that it is not in the browser but in a polished IDE.
ChatGPT, Tikz and option strategies
In my previous post I looked at the Tikz capabilities of ChatGPT. The quality of the graphics produced were above my expectations but I wouldn’t call them useful. While doing research for this
interesting Quant.SE question I came up with a more interesting question. Can ChatGPT explain and plot option strategies? Turns out it can not but since I’m doing science on this blog, I share with
you the results below anyways.
Maybe it can be coaxed into giving better results with better prompts. I’d be happy to hear if someone can. In any case, these were mine:
Please explain me some popular option trading strategies together with Tikz graphics
For each of the graphics can you write Tikz code?
Can you give me the Tikz code for a straddle, a strangle and iron condor as well
For some reason it got distracted when generating the graphs. The results are summarized below. Note again: this is all ChatGPT output, I only combined the graphics and compiled the Tikz.
Sure, I can explain some popular option trading strategies, but as a text-based AI, I’m unable to directly create Tikz graphics. However, I can certainly describe them in a way that can help you
visualize and perhaps draw them yourself.
1. Long Call: This is the most basic options strategy. When you buy a call option, you’re buying the right to purchase a stock at a specified price (the strike price) before a specified date
(the expiration date). You would use this strategy if you believe the price of the underlying asset is going to rise. Visualization: Imagine a graph with profit/loss on the Y-axis and the
price of the underlying asset on the X-axis. The graph would be a diagonal line starting from the negative Y-axis (representing the premium paid for the call) and moving upward as the price
of the underlying asset increases.
2. Long Put: This strategy involves buying a put option, which gives you the right to sell a stock at a specified price before a specified date. You would use this strategy if you believe the
price of the underlying asset is going to fall. Visualization: The graph for a long put is a mirror image of the long call. It starts at the negative Y-axis (representing the premium paid)
and moves downward as the price of the underlying asset increases.
3. Covered Call: This strategy involves owning the underlying asset and selling a call option on that asset. This is typically done when you believe the price of the asset will stay relatively
stable, and you want to earn the premium from selling the call. Visualization: The graph starts at the Y-axis at the point where the premium for selling the call is added to the price of the
asset. It then moves horizontally to the strike price of the call option, at which point it starts moving downward as the price of the asset increases.
4. Protective Put: This involves owning the underlying asset and buying a put option on that asset. This is done when you want to protect against a possible drop in the price of the asset.
Visualization: The graph starts at the Y-axis at the point where the price of the asset is reduced by the premium for the put. It then moves horizontally to the strike price of the put
option, at which point it starts moving upward as the price of the asset decreases.
5. Straddle: This strategy involves buying a call and a put option on the same asset with the same strike price and expiration date. This is typically done when you expect a big move in the
price of the asset, but you’re not sure in which direction. Visualization: The graph for a straddle looks like a V, with the point of the V at the strike price of the options. The left side
of the V represents the profit/loss for the put option and the right side represents the profit/loss for the call option.
6. Strangle: Similar to a straddle, but the call and put options have different strike prices. This is typically done when you expect a big move in the price of the asset, but you’re not sure in
which direction, and you want to reduce the cost of the strategy by choosing strike prices that are out of the money. Visualization: The graph for a strangle looks similar to a straddle, but
the V is wider due to the different strike prices.
7. Iron Condor: This is a more advanced strategy that involves selling a call spread and a put spread on the same asset. This is typically done when you expect the price of the asset to stay
within a certain range. Visualization: The graph for an iron condor looks like a bird in flight, with the body of the bird representing the range within which you profit and
Just as the descriptions the graphics aren’t very good unfortunately. It seems to have to have difficulty with combining the different pay off structures of the parts of the strategy.
Drawing with ChatGPT and Tikz
Sometime ago through HackerNews I found this experiment on drawing a unicorn with ChatGPT on the blog of Adam K Dean. The results left something to be desired. I prefer using Tikz instead of SVG so I
decided to an experiment myself with reasonable results. First I wanted a simple sine wave. In 2014, during an interview, I was asked to sketch a sine wave on a whiteboard as a preliminary step for a
problem the interviewers wanted me to tackle. Regrettably, I ended up poorly drawing a cosine wave instead. For ChatGPT however, it’s not a problem:
A rather detailed plot of a sine wave generated with the prompt: “Can you give me a sine wave plot with tikz”
This is pretty good, I know I would need to some searching to get to this. Next I wanted a house:
A very simple house created with Tikz and the prompt: “Draw a house in Tikz”
This is going better than expected, let’s try to draw a unicorn:
A simple unicorn generated with prompt: “Draw a unicorn in Tikz”
Please add some details:
After prompt: “Please add some details”
It does try hard to give me an Markowitz efficent frontier together with the Capital Allocation Line and the tangency portfolio but didn’t quite succeed. This is the final result of the prompts: “Now
draw an Markowitz efficent frontier with some details”, “Can you add the CAL and the tangency portfolio?” and “Please don’t use path”. The latest command is necessary as my installation of Tikz
doesn’t know about it but it appears to work in another version. The result isn’t great, by default the legend is over the points and I need to comment out the location of the tangency portfolio.
That said, if one really wants to get this graph ChatGPT gives a great start and it writes around 40 lines of Tikz code in 2 minutes which I’m definitely not able to do.
A part of the efficient frontier with individual assets. The tangency portfolio has been commented out and the legend has been moved manually from ‘north west’ to south east. | {"url":"https://bobjansen.net/2023/05/","timestamp":"2024-11-12T23:43:53Z","content_type":"text/html","content_length":"47553","record_id":"<urn:uuid:7131be92-18d7-4e3e-a031-9158b670671e>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00152.warc.gz"} |
Expectation value (quantum mechanics) - Mono Mole
Expectation value (quantum mechanics)
The expectation value of a quantum-mechanical operator $\hat{O}$ is the weighted average value of its observable. It is defined as:
The above equation has roots in probability theory, where the expectation value or expected value of an observable $O$ is $\langle&space;O\rangle=\sum_{i=1}^{N}p_io_i$, with $p_i$ being the
probability of observing the outcome $o_i$.
From eq30, $\hat{O}=\sum_{i=1}^{N}o_i\vert\phi_i&space;\rangle\langle\phi_i\vert$, and so
We further postulate that eq34 is valid for an infinite dimensional Hilbert space.
Why is $p_i=\vert\langle\phi_i\vert\psi\rangle\vert^{2}$ ?
Consider an operator with a complete set of orthonormal basis eigenfunctions $\left&space;\{&space;\phi_i&space;\right&space;\}$. So, any eigenfunction $\psi$ can be written as a linear combination
of these basis eigenfunctions, i.e. $\psi=\sum_{i=1}^{N}c_i\phi_i$. According to the Born rule, the probability that a measurement will yield a given result is $\vert\psi\vert^{2}=\psi^{*}\psi$,
where $\int\psi^{*}\psi\:&space;d&space;\tau=1$. So,
We have used the orthonormal property of in the 2^nd and 3^rd equalities. $\vert&space;c_i\vert^2$ is interpreted as the probability that a measurement of a system will yield an eigenvalue
associated with the eigenfunction . Therefore,
Using the Schrodinger equation, show that the expectation value of the Hamiltonian is $E=\int&space;\psi_i^{*}\hat{H}\psi_i\:&space;d\tau$.
Multiplying both sides of eq40 on the left by $\psi_i^{*}$ and integrating over all space, we have
If the wavefunction is normalized, the above equation becomes $E=\int&space;\psi_i^{*}\hat{H}\psi_i\:&space;d\tau$. | {"url":"https://monomole.com/expectation-value/","timestamp":"2024-11-03T10:06:40Z","content_type":"text/html","content_length":"104096","record_id":"<urn:uuid:c778e2b6-766d-4f1e-978e-2c83783b8b48>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00689.warc.gz"} |
A229382 - OEIS
The left side of the Fermat "equation" 3987^12 + 4365^12 = 4472^12 is 0.000000002% larger than the right side. However, the ones digit on both sides is the same, namely, 6.
Homer Simpson gave this "counterexample" to FLT in an episode of The Simpsons written by David S. Cohen.
S. Singh, The Simpsons and Their Mathematical Secrets, Bloomsbury USA, 2013. | {"url":"https://oeis.org/A229382","timestamp":"2024-11-11T19:59:21Z","content_type":"text/html","content_length":"13956","record_id":"<urn:uuid:0577d1a4-dea9-4f12-9bc3-5fb52f87cc84>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00608.warc.gz"} |
Heavy Balls - Gonitzoggo
You are given a large collection of identical heavy balls and lightweight rods. When two balls are placed at the ends of one rod and interact through their mutual gravitational attraction (as is
shown on the left), the compressive force in the rod is $F = 1N$. Next, four balls and four rods are placed at the vertices and edges of a square (as is shown on the right). What is the compressive
force in each rod in the latter case? | {"url":"https://gonitzoggo.com/problem/467","timestamp":"2024-11-03T18:24:30Z","content_type":"text/html","content_length":"19496","record_id":"<urn:uuid:b949c8f9-1cff-47e4-8858-3b0b7c515158>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00226.warc.gz"} |
3101 -- Astronomy
Time Limit: 2000MS Memory Limit: 65536K
Total Submissions: 7212 Accepted: 1669
There are n planets in the planetary system of star X. They orbit star X in circular orbits located in the same plane. Their tangent velocities are constant. Directions of orbiting of all planets are
the same.
Sometimes the event happens in this planetary system which is called planet parade. It is the moment when all planets and star X are located on the same straight line.
Your task is to find the length of the time interval between two consecutive planet parades.
The first line of the input file contains n — the number of planets (2 ≤ n ≤ 1 000).
Second line contains n integer numbers t[i] — the orbiting periods of planets (1 ≤ t[i] ≤ 10 000). Not all of t[i] are the same.
Output the answer as a common irreducible fraction, separate numerator and denominator by a space.
Sample Input
Sample Output
[Submit] [Go Back] [Status] [Discuss]
All Rights Reserved 2003-2013 Ying Fuchen,Xu Pengcheng,Xie Di
Any problem, Please Contact Administrator | {"url":"http://poj.org/problem?id=3101","timestamp":"2024-11-13T22:54:06Z","content_type":"text/html","content_length":"6189","record_id":"<urn:uuid:e0cce511-f155-485b-a651-3a44a2d7a210>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00838.warc.gz"} |
Derivatives of Trig Functions
Show Mobile Notice Show All Notes Hide All Notes
Mobile Notice
You appear to be on a device with a "narrow" screen width (i.e. you are probably on a mobile phone). Due to the nature of the mathematics on this site it is best views in landscape mode. If your
device is not in landscape mode many of the equations will run off the side of your device (should be able to scroll to see them) and some of the menu items will be cut off due to the narrow screen
Section 3.5 : Derivatives of Trig Functions
With this section we’re going to start looking at the derivatives of functions other than polynomials or roots of polynomials. We’ll start this process off by taking a look at the derivatives of the
six trig functions. Two of the derivatives will be derived. The remaining four are left to you and will follow similar proofs for the two given here.
Before we actually get into the derivatives of the trig functions we need to give a couple of limits that will show up in the derivation of two of the derivatives.
\[\mathop {\lim }\limits_{\theta \to 0} \frac{{\sin \theta }}{\theta } = 1\hspace{0.75in}\mathop {\lim }\limits_{\theta \to 0} \frac{{\cos \theta - 1}}{\theta } = 0\]
See the Proof of Trig Limits section of the Extras chapter to see the proof of these two limits.
Before proceeding a quick note. Students often ask why we always use radians in a Calculus class. This is the reason why! The proof of the formula involving sine above requires the angles to be in
radians. If the angles are in degrees the limit involving sine is not 1 and so the formulas we will derive below would also change. The formulas below would pick up an extra constant that would just
get in the way of our work and so we use radians to avoid that. So, remember to always use radians in a Calculus class!
Before we start differentiating trig functions let’s work a quick set of limit problems that this fact now allows us to do.
Example 1
Evaluate each of the following limits.
1. \(\displaystyle \mathop {\lim }\limits_{\theta \to 0} \frac{{\sin \theta }}{{6\theta }}\)
2. \(\displaystyle \mathop {\lim }\limits_{x \to 0} \frac{{\sin \left( {6x} \right)}}{x}\)
3. \(\displaystyle \mathop {\lim }\limits_{x \to 0} \frac{x}{{\sin \left( {7x} \right)}}\)
4. \(\displaystyle \mathop {\lim }\limits_{t \to 0} \frac{{\sin \left( {3t} \right)}}{{\sin \left( {8t} \right)}}\)
5. \(\displaystyle \mathop {\lim }\limits_{x \to 4} \frac{{\sin \left( {x - 4} \right)}}{{x - 4}}\)
6. \(\displaystyle \mathop {\lim }\limits_{z \to 0} \frac{{\cos \left( {2z} \right) - 1}}{z}\)
Show All Solutions Hide All Solutions
\(\displaystyle \mathop {\lim }\limits_{\theta \to 0} \frac{{\sin \theta }}{{6\theta }}\)
Show Solution
There really isn’t a whole lot to this limit. In fact, it’s only here to contrast with the next example so you can see the difference in how these work. In this case since there is only a 6 in the
denominator we’ll just factor this out and then use the fact.
\[\mathop {\lim }\limits_{\theta \to 0} \frac{{\sin \theta }}{{6\theta }} = \frac{1}{6}\mathop {\lim }\limits_{\theta \to 0} \frac{{\sin \theta }}{\theta } = \frac{1}{6}\left( 1 \right) = \frac{1}{6}
\(\displaystyle \mathop {\lim }\limits_{x \to 0} \frac{{\sin \left( {6x} \right)}}{x}\)
Show Solution
Now, in this case we can’t factor the 6 out of the sine so we’re stuck with it there and we’ll need to figure out a way to deal with it. To do this problem we need to notice that in the fact the
argument of the sine is the same as the denominator (i.e. both \(\theta \)’s). So we need to get both of the argument of the sine and the denominator to be the same. We can do this by multiplying the
numerator and the denominator by 6 as follows.
\[\mathop {\lim }\limits_{x \to 0} \frac{{\sin \left( {6x} \right)}}{x} = \mathop {\lim }\limits_{x \to 0} \frac{{6\sin \left( {6x} \right)}}{{6x}} = 6\mathop {\lim }\limits_{x \to 0} \frac{{\sin \
left( {6x} \right)}}{{6x}}\]
Note that we factored the 6 in the numerator out of the limit. At this point, while it may not look like it, we can use the fact above to finish the limit.
To see that we can use the fact on this limit let’s do a change of variables. A change of variables is really just a renaming of portions of the problem to make something look more like something we
know how to deal with. They can’t always be done, but sometimes, such as this case, they can simplify the problem. The change of variables here is to let \(\theta = 6x\) and then notice that as \(x \
to 0\) we also have \(\theta \to 6\left( 0 \right) = 0\). When doing a change of variables in a limit we need to change all the \(x\)’s into \(\theta \)’s and that includes the one in the limit.
Doing the change of variables on this limit gives,
\[\begin{align*}\mathop {\lim }\limits_{x \to 0} \frac{{\sin \left( {6x} \right)}}{x} & = 6\mathop {\lim }\limits_{x \to 0} \frac{{\sin \left( {6x} \right)}}{{6x}}\hspace{0.5in}{\rm{let }}\theta = 6x
\\ & = 6\mathop {\lim }\limits_{\theta \to 0} \frac{{\sin \left( \theta \right)}}{\theta }\\ & = 6\left( 1 \right)\\ & = 6\hspace{0.25in}\end{align*}\]
And there we are. Note that we didn’t really need to do a change of variables here. All we really need to notice is that the argument of the sine is the same as the denominator and then we can use
the fact. A change of variables, in this case, is really only needed to make it clear that the fact does work.
\(\displaystyle \mathop {\lim }\limits_{x \to 0} \frac{x}{{\sin \left( {7x} \right)}}\)
Show Solution
In this case we appear to have a small problem in that the function we’re taking the limit of here is upside down compared to that in the fact. This is not the problem it appears to be once we notice
\[\frac{x}{{\sin \left( {7x} \right)}} = \frac{1}{{\,\,\,\frac{{\sin \left( {7x} \right)}}{x}\,\,\,}}\]
and then all we need to do is recall a nice property of limits that allows us to do ,
\[\begin{align*}\mathop {\lim }\limits_{x \to 0} \frac{x}{{\sin \left( {7x} \right)}} & = \mathop {\lim }\limits_{x \to 0} \frac{1}{{\,\,\,\frac{{\sin \left( {7x} \right)}}{x}\,\,\,}}\\ & = \frac{{\
mathop {\lim }\limits_{x \to 0} 1}}{{\,\,\,\mathop {\lim }\limits_{x \to 0} \frac{{\sin \left( {7x} \right)}}{x}\,\,\,}}\\ & = \frac{1}{{\,\,\,\mathop {\lim }\limits_{x \to 0} \frac{{\sin \left( {7x}
With a little rewriting we can see that we do in fact end up needing to do a limit like the one we did in the previous part. So, let’s do the limit here and this time we won’t bother with a change of
variable to help us out. All we need to do is multiply the numerator and denominator of the fraction in the denominator by 7 to get things set up to use the fact. Here is the work for this limit.
\[\begin{align*}\mathop {\lim }\limits_{x \to 0} \frac{x}{{\sin \left( {7x} \right)}} & = \frac{1}{{\,\,\,\mathop {\lim }\limits_{x \to 0} \frac{{7\sin \left( {7x} \right)}}{{7x}}\,\,\,}}\\ & = \frac
{1}{{\,\,\,7\mathop {\lim }\limits_{x \to 0} \frac{{\sin \left( {7x} \right)}}{{7x}}\,\,\,}}\\ & = \frac{1}{{\left( 7 \right)\left( 1 \right)}}\\ & = \frac{1}{7}\end{align*}\]
\(\displaystyle \mathop {\lim }\limits_{t \to 0} \frac{{\sin \left( {3t} \right)}}{{\sin \left( {8t} \right)}}\)
Show Solution
This limit looks nothing like the limit in the fact, however it can be thought of as a combination of the previous two parts by doing a little rewriting. First, we’ll split the fraction up as
\[\mathop {\lim }\limits_{t \to 0} \frac{{\sin \left( {3t} \right)}}{{\sin \left( {8t} \right)}} = \mathop {\lim }\limits_{t \to 0} \frac{{\sin \left( {3t} \right)}}{1}\frac{1}{{\sin \left( {8t} \
Now, the fact wants a \(t\) in the denominator of the first and in the numerator of the second. This is easy enough to do if we multiply the whole thing by \({\textstyle{t \over t}}\) (which is just
one after all and so won’t change the problem) and then do a little rearranging as follows,
\[\begin{align*}\mathop {\lim }\limits_{t \to 0} \frac{{\sin \left( {3t} \right)}}{{\sin \left( {8t} \right)}} & = \mathop {\lim }\limits_{t \to 0} \frac{{\sin \left( {3t} \right)}}{1}\frac{1}{{\sin
\left( {8t} \right)}}\frac{t}{t}\\ & = \mathop {\lim }\limits_{t \to 0} \frac{{\sin \left( {3t} \right)}}{t}\frac{t}{{\sin \left( {8t} \right)}}\\ & = \left( {\mathop {\lim }\limits_{t \to 0} \frac
{{\sin \left( {3t} \right)}}{t}} \right)\left( {\mathop {\lim }\limits_{t \to 0} \frac{t}{{\sin \left( {8t} \right)}}} \right)\end{align*}\]
At this point we can see that this really is two limits that we’ve seen before. Here is the work for each of these and notice on the second limit that we’re going to work it a little differently than
we did in the previous part. This time we’re going to notice that it doesn’t really matter whether the sine is in the numerator or the denominator as long as the argument of the sine is the same as
what’s in the numerator the limit is still one.
Here is the work for this limit.
\[\begin{align*}\mathop {\lim }\limits_{t \to 0} \frac{{\sin \left( {3t} \right)}}{{\sin \left( {8t} \right)}} & = \left( {\mathop {\lim }\limits_{t \to 0} \frac{{3\sin \left( {3t} \right)}}{{3t}}} \
right)\left( {\mathop {\lim }\limits_{t \to 0} \frac{{8t}}{{8\sin \left( {8t} \right)}}} \right)\\ & = \left( {3\mathop {\lim }\limits_{t \to 0} \frac{{\sin \left( {3t} \right)}}{{3t}}} \right)\left(
{\frac{1}{8}\mathop {\lim }\limits_{t \to 0} \frac{{8t}}{{\sin \left( {8t} \right)}}} \right)\\ & = \left( 3 \right)\left( {\frac{1}{8}} \right)\\ & = \frac{3}{8}\end{align*}\]
\(\displaystyle \mathop {\lim }\limits_{x \to 4} \frac{{\sin \left( {x - 4} \right)}}{{x - 4}}\)
Show Solution
This limit almost looks the same as that in the fact in the sense that the argument of the sine is the same as what is in the denominator. However, notice that, in the limit, \(x\) is going to 4 and
not 0 as the fact requires. However, with a change of variables we can see that this limit is in fact set to use the fact above regardless.
So, let \(\theta = x - 4\) and then notice that as \(x \to 4\) we have \(\theta \to 0\). Therefore, after doing the change of variable the limit becomes,
\[\mathop {\lim }\limits_{x \to 4} \frac{{\sin \left( {x - 4} \right)}}{{x - 4}} = \mathop {\lim }\limits_{\theta \to 0} \frac{{\sin \theta }}{\theta } = 1\]
\(\displaystyle \mathop {\lim }\limits_{z \to 0} \frac{{\cos \left( {2z} \right) - 1}}{z}\)
Show Solution
The previous parts of this example all used the sine portion of the fact. However, we could just have easily used the cosine portion so here is a quick example using the cosine portion to illustrate
this. We’ll not put in much explanation here as this really does work in the same manner as the sine portion.
\[\begin{align*}\mathop {\lim }\limits_{z \to 0} \frac{{\cos \left( {2z} \right) - 1}}{z} & = \mathop {\lim }\limits_{z \to 0} \frac{{2\left( {\cos \left( {2z} \right) - 1} \right)}}{{2z}}\\ & = 2\
mathop {\lim }\limits_{z \to 0} \frac{{\cos \left( {2z} \right) - 1}}{{2z}}\\ & = 2\left( 0 \right)\\ & 0\end{align*}\]
All that is required to use the fact is that the argument of the cosine is the same as the denominator.
Okay, now that we’ve gotten this set of limit examples out of the way let’s get back to the main point of this section, differentiating trig functions.
We’ll start with finding the derivative of the sine function. To do this we will need to use the definition of the derivative. It’s been a while since we’ve had to use this, but sometimes there just
isn’t anything we can do about it. Here is the definition of the derivative for the sine function.
\[\frac{d}{{dx}}\left( {\sin \left( x \right)} \right) = \mathop {\lim }\limits_{h \to 0} \frac{{\sin \left( {x + h} \right) - \sin \left( x \right)}}{h}\]
Since we can’t just plug in \(h = 0\) to evaluate the limit we will need to use the following trig formula on the first sine in the numerator.
\[\sin \left( {x + h} \right) = \sin \left( x \right)\cos \left( h \right) + \cos \left( x \right)\sin \left( h \right)\]
Doing this gives us,
\[\begin{align*}\frac{d}{{dx}}\left( {\sin \left( x \right)} \right) & = \mathop {\lim }\limits_{h \to 0} \frac{{\sin \left( x \right)\cos \left( h \right) + \cos \left( x \right)\sin \left( h \
right) - \sin \left( x \right)}}{h}\\ & = \mathop {\lim }\limits_{h \to 0} \frac{{\sin \left( x \right)\left( {\cos \left( h \right) - 1} \right) + \cos \left( x \right)\sin \left( h \right)}}{h}\\ &
= \mathop {\lim }\limits_{h \to 0} \sin \left( x \right)\frac{{\cos \left( h \right) - 1}}{h} + \mathop {\lim }\limits_{h \to 0} \cos \left( x \right)\frac{{\sin \left( h \right)}}{h}\end{align*}\]
As you can see upon using the trig formula we can combine the first and third term and then factor a sine out of that. We can then break up the fraction into two pieces, both of which can be dealt
with separately.
Now, both of the limits here are limits as \(h\) approaches zero. In the first limit we have a \(\sin \left( x \right)\) and in the second limit we have a \(\cos \left( x \right)\). Both of these are
only functions of \(x\) only and as \(h\) moves in towards zero this has no effect on the value of \(x\). Therefore, as far as the limits are concerned, these two functions are constants and can be
factored out of their respective limits. Doing this gives,
\[\frac{d}{{dx}}\left( {\sin \left( x \right)} \right) = \sin \left( x \right)\mathop {\lim }\limits_{h \to 0} \frac{{\cos \left( h \right) - 1}}{h} + \cos \left( x \right)\mathop {\lim }\limits_{h \
to 0} \frac{{\sin \left( h \right)}}{h}\]
At this point all we need to do is use the limits in the fact above to finish out this problem.
\[\frac{d}{{dx}}\left( {\sin \left( x \right)} \right) = \sin \left( x \right)\left( 0 \right) + \cos \left( x \right)\left( 1 \right) = \cos \left( x \right)\]
Differentiating cosine is done in a similar fashion. It will require a different trig formula, but other than that is an almost identical proof. The details will be left to you. When done with the
proof you should get,
\[\frac{d}{{dx}}\left( {\cos \left( x \right)} \right) = - \sin \left( x \right)\]
With these two out of the way the remaining four are fairly simple to get. All the remaining four trig functions can be defined in terms of sine and cosine and these definitions, along with
appropriate derivative rules, can be used to get their derivatives.
Let’s take a look at tangent. Tangent is defined as,
\[\tan \left( x \right) = \frac{{\sin \left( x \right)}}{{\cos \left( x \right)}}\]
Now that we have the derivatives of sine and cosine all that we need to do is use the quotient rule on this. Let’s do that.
\[\begin{align*}\frac{d}{{dx}}\left( {\tan \left( x \right)} \right) & = \frac{d}{{dx}}\left( {\frac{{\sin \left( x \right)}}{{\cos \left( x \right)}}} \right)\\ & = \frac{{\cos \left( x \right)\cos
\left( x \right) - \sin \left( x \right)\left( { - \sin \left( x \right)} \right)}}{{{{\left( {\cos \left( x \right)} \right)}^2}}}\\ & = \frac{{{{\cos }^2}\left( x \right) + {{\sin }^2}\left( x \
right)}}{{{{\cos }^2}\left( x \right)}}\end{align*}\]
Now, recall that \({\cos ^2}\left( x \right) + {\sin ^2}\left( x \right) = 1\) and if we also recall the definition of secant in terms of cosine we arrive at,
\[\begin{align*}\frac{d}{{dx}}\left( {\tan \left( x \right)} \right) & = \frac{{{{\cos }^2}\left( x \right) + {{\sin }^2}\left( x \right)}}{{{{\cos }^2}\left( x \right)}}\\ & = \frac{1}{{{{\cos }^2}\
left( x \right)}}\\ & = {\sec ^2}\left( x \right)\end{align*}\]
The remaining three trig functions are also quotients involving sine and/or cosine and so can be differentiated in a similar manner. We’ll leave the details to you. Here are the derivatives of all
six of the trig functions.
Derivatives of the six trig functions
\[\begin{array}{ll}\displaystyle \frac{d}{{dx}}\left( {\sin \left( x \right)} \right) = \cos \left( x \right) & \hspace{0.5in}\displaystyle \frac{d}{{dx}}\left( {\cos \left( x \right)} \right) = - \
sin \left( x \right)\\ \displaystyle \frac{d}{{dx}}\left( {\tan \left( x \right)} \right) = {\sec ^2}\left( x \right) & \hspace{0.5in}\displaystyle \frac{d}{{dx}}\left( {\cot \left( x \right)} \
right) = - {\csc ^2}\left( x \right)\\ \displaystyle \frac{d}{{dx}}\left( {\sec \left( x \right)} \right) = \sec \left( x \right)\tan \left( x \right) & \hspace{0.5in}\displaystyle \frac{d}{{dx}}\
left( {\csc \left( x \right)} \right) = - \csc \left( x \right)\cot \left( x \right)\end{array}\]
At this point we should work some examples.
Example 2
Differentiate each of the following functions.
1. \(g\left( x \right) = 3\sec \left( x \right) - 10\cot \left( x \right)\)
2. \(h\left( w \right) = 3{w^{ - 4}} - {w^2}\tan \left( w \right)\)
3. \(y = 5\sin \left( x \right)\cos \left( x \right) + 4\csc \left( x \right)\)
4. \(\displaystyle P\left( t \right) = \frac{{\sin \left( t \right)}}{{3 - 2\cos \left( t \right)}}\)
Show All Solutions Hide All Solutions
\(g\left( x \right) = 3\sec \left( x \right) - 10\cot \left( x \right)\)
Show Solution
There really isn’t a whole lot to this problem. We’ll just differentiate each term using the formulas from above.
\[\begin{align*}g'\left( x \right) & = 3\sec \left( x \right)\tan \left( x \right) - 10\left( { - {{\csc }^2}\left( x \right)} \right)\\ & = 3\sec \left( x \right)\tan \left( x \right) + 10{\csc ^2}\
left( x \right)\end{align*}\]
\(h\left( w \right) = 3{w^{ - 4}} - {w^2}\tan \left( w \right)\)
Show Solution
In this part we will need to use the product rule on the second term and note that we really will need the product rule here. There is no other way to do this derivative unlike what we saw when we
first looked at the product rule. When we first looked at the product rule the only functions we knew how to differentiate were polynomials and in those cases all we really needed to do was multiply
them out and we could take the derivative without the product rule. We are now getting into the point where we will be forced to do the product rule at times regardless of whether or not we want to.
We will also need to be careful with the minus sign in front of the second term and make sure that it gets dealt with properly. There are two ways to deal with this. One way it to make sure that you
use a set of parentheses as follows,
\[\begin{align*}h'\left( w \right) & = - 12{w^{ - 5}} - \left( {2w\tan \left( w \right) + {w^2}{{\sec }^2}\left( w \right)} \right)\\ & = - 12{w^{ - 5}} - 2w\tan \left( w \right) - {w^2}{\sec ^2}\
left( w \right)\end{align*}\]
Because the second term is being subtracted off of the first term then the whole derivative of the second term must also be subtracted off of the derivative of the first term. The parenthesis make
this idea clear.
A potentially easier way to do this is to think of the minus sign as part of the first function in the product. Or, in other words the two functions in the product, using this idea, are \( - {w^2}\)
and \(\tan \left( w \right)\). Doing this gives,
\[h'\left( w \right) = - 12{w^{ - 5}} - 2w\tan \left( w \right) - {w^2}{\sec ^2}\left( w \right)\]
So, regardless of how you approach this problem you will get the same derivative.
\(y = 5\sin \left( x \right)\cos \left( x \right) + 4\csc \left( x \right)\)
Show Solution
As with the previous part we’ll need to use the product rule on the first term. We will also think of the 5 as part of the first function in the product to make sure we deal with it correctly.
Alternatively, you could make use of a set of parentheses to make sure the 5 gets dealt with properly. Either way will work, but we’ll stick with thinking of the 5 as part of the first term in the
product. Here’s the derivative of this function.
\[\begin{align*}y' &= 5\cos \left( x \right)\cos \left( x \right) + 5\sin \left( x \right)\left( { - \sin \left( x \right)} \right) - 4\csc \left( x \right)\cot \left( x \right)\\ & = 5{\cos ^2}\left
( x \right) - 5{\sin ^2}\left( x \right) - 4\csc \left( x \right)\cot \left( x \right)\end{align*}\]
\(\displaystyle P\left( t \right) = \frac{{\sin \left( t \right)}}{{3 - 2\cos \left( t \right)}}\)
Show Solution
In this part we’ll need to use the quotient rule to take the derivative.
\[\begin{align*}P'\left( t \right) & = \frac{{\cos \left( t \right)\left( {3 - 2\cos \left( t \right)} \right) - \sin \left( t \right)\left( {2\sin \left( t \right)} \right)}}{{{{\left( {3 - 2\cos \
left( t \right)} \right)}^2}}}\\ & = \frac{{3\cos \left( t \right) - 2{{\cos }^2}\left( t \right) - 2{{\sin }^2}\left( t \right)}}{{{{\left( {3 - 2\cos \left( t \right)} \right)}^2}}}\end{align*}\]
Be careful with the signs when differentiating the denominator. The negative sign we get from differentiating the cosine will cancel against the negative sign that is already there.
This appears to be done, but there is actually a fair amount of simplification that can yet be done. To do this we need to factor out a “-2” from the last two terms in the numerator and the make use
of the fact that \({\cos ^2}\left( \theta \right) + {\sin ^2}\left( \theta \right) = 1\).
\[\begin{align*}P'\left( t \right) & = \frac{{3\cos \left( t \right) - 2\left( {{{\cos }^2}\left( t \right) + {{\sin }^2}\left( t \right)} \right)}}{{{{\left( {3 - 2\cos \left( t \right)} \right)}^
2}}}\\ & = \frac{{3\cos \left( t \right) - 2}}{{{{\left( {3 - 2\cos \left( t \right)} \right)}^2}}}\end{align*}\]
As a final problem here let’s not forget that we still have our standard interpretations to derivatives.
Example 3
Suppose that the amount of money in a bank account is given by \[P\left( t \right) = 500 + 100\cos \left( t \right) - 150\sin \left( t \right)\]
where \(t\) is in years. During the first 10 years in which the account is open when is the amount of money in the account increasing?
Show Solution
To determine when the amount of money is increasing we need to determine when the rate of change is positive. Since we know that the rate of change is given by the derivative that is the first thing
that we need to find.
\[P'\left( t \right) = - 100\sin \left( t \right) - 150\cos \left( t \right)\]
Now, we need to determine where in the first 10 years this will be positive. This is equivalent to asking where in the interval \(\left[ {0,10} \right]\) is the derivative positive. Recall that both
sine and cosine are continuous functions and so the derivative is also a continuous function. The Intermediate Value Theorem then tells us that the derivative can only change sign if it first goes
through zero.
So, we need to solve the following equation.
\[\begin{align*} - 100\sin \left( t \right) - 150\cos \left( t \right) & = 0\\ 100\sin \left( t \right) & = - 150\cos \left( t \right)\\ \frac{{\sin \left( t \right)}}{{\cos \left( t \right)}} & = -
1.5\\ \tan \left( t \right) & = - 1.5\end{align*}\]
The solution to this equation is,
\[\begin{array}{ll}t = 2.1588 + 2\pi n,&\hspace{0.25in}n = 0, \pm 1, \pm 2, \ldots \\ t = 5.3004 + 2\pi n, & \hspace{0.25in}n = 0, \pm 1, \pm 2, \ldots \end{array}\]
If you don’t recall how to solve trig equations go back and take a look at the sections on solving trig equations in the Review chapter.
We are only interested in those solutions that fall in the range \(\left[ {0,10} \right]\). Plugging in values of \(n\) into the solutions above we see that the values we need are,
\[\begin{align*} & t = 2.1588\hspace{0.5in}t = 2.1588 + 2\pi = 8.4420\\ & t = 5.3004\end{align*}\]
So, much like solving polynomial inequalities all that we need to do is sketch in a number line and add in these points. These points will divide the number line into regions in which the derivative
must always be the same sign. All that we need to do then is choose a test point from each region to determine the sign of the derivative in that region.
Here is the number line with all the information on it.
So, it looks like the amount of money in the bank account will be increasing during the following intervals.
\[2.1588 < t < 5.3004\hspace{0.5in}8.4420 < t < 10\]
Note that we can’t say anything about what is happening after \(t = 10\) since we haven’t done any work for \(t\)’s after that point.
In this section we saw how to differentiate trig functions. We also saw in the last example that our interpretations of the derivative are still valid so we can’t forget those.
Also, it is important that we be able to solve trig equations as this is something that will arise off and on in this course. It is also important that we can do the kinds of number lines that we
used in the last example to determine where a function is positive and where a function is negative. This is something that we will be doing on occasion in both this chapter and the next. | {"url":"https://tutorial.math.lamar.edu/Classes/CalcI/DiffTrigFcns.aspx","timestamp":"2024-11-14T11:07:12Z","content_type":"text/html","content_length":"102456","record_id":"<urn:uuid:c821b5b4-afed-4514-90a8-beac82aa2a27>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00273.warc.gz"} |
Programming a(b+c)
• Assume a, b and c are declared variables and that the result is saved in $v0
lw $t0,a # Get value of a
lw $t1,b # Get value of b
lw $t2,c # Get value of c
add $t1,$t1,$t2 # Add b and c
mult $v0,$t0,$t1 # Multiply result times a
This 3-operand multiply pseudoinstruction might be generated as ...
mult $t0,$t1 # Do multiply
mflo $v0 # Get result assuming < 2x109
How can one test to see if the number was small enough? | {"url":"https://courses.cs.washington.edu/courses/cse378/00sp/CSE378-00.Lec7/tsld002.htm","timestamp":"2024-11-08T08:23:13Z","content_type":"text/html","content_length":"1630","record_id":"<urn:uuid:1157ecdb-538c-458e-b162-9ef70217a55a>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00176.warc.gz"} |
Sequence logos
1 General =====
• Type: - Matrix Analysis
• Heading: - Misc. (Analysis)
• Source code: not public.
2 Brief description
Create and display sequence logos based on a column containing protein sequence windows centered around sites of interest.
3 Parameters
3.1 Sequences
Selected categorical column that contains the amino acid sequences for which a Sequence logo should be generated (default: first categorical column in the matrix).
Hint: The sequences need to have the same length.
3.2 Column
Selected categorical column that groups the rows according to their value in that column and generates one sequence logo for each value (default: <None>). If <None> is selected one Sequence logo is
generated for all sequences in the column defined in the parameter “Sequences”.
3.3 Compute position-specific p-values
Specifies the input to calculate the position-specific scoring matrix (PSSM) containing the p-values for each position (default: global occurrence). For each Sequence logo one PSSM is calculated
containing the p-value for each amino acid at each position in the sequence. The PSSM can be obtained by clicking on the “Export aa p-values” button in the “Sequence logos” tab of the matrix that was
used to generate the Sequence logo(s). | {"url":"https://cox-labs.github.io/coxdocs/sequencelogo.html","timestamp":"2024-11-12T22:41:10Z","content_type":"application/xhtml+xml","content_length":"26468","record_id":"<urn:uuid:1fa028cc-c865-41fd-ba0e-494df74b6251>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00288.warc.gz"} |
Proportion of Variance Explained
David M. Lane
Analysis of Variance, Partitioning Sums of Squares, Multiple Regression
Learning Objectives
1. State the difference in bias between η^2 and ω^2
2. Compute η^2 Compute ω^2
3. Distinguish between ω^2 and partial ω^2
4. State the bias in R^2 and what can be done to reduce it
Effect sizes are often measured in terms of the proportion of variance explained by a variable. In this section, we discuss this way to measure effect size in both ANOVA designs and in correlational
ANOVA Designs
Responses of subjects will vary in just about every experiment. Consider, for example, the "Smiles and Leniency" case study. A histogram of the dependent variable "leniency" is shown in Figure 1. It
is clear that the leniency scores vary considerably. There are many reasons why the scores differ. One, of course, is that subjects were assigned to four different smile conditions and the condition
they were in may have affected their leniency score. In addition, it is likely that some subjects are generally more lenient than others, thus contributing to the differences among scores. There are
many other possible sources of differences in leniency ratings including, perhaps, that some subjects were in better moods than other subjects and/or that some subjects reacted more negatively than
others to the looks or mannerisms of the stimulus person. You can imagine that there are innumerable other reasons why the scores of the subjects could differ.
Figure 1. Distribution of leniency scores.
One way to measure the effect of conditions is to determine the proportion of the variance among subjects' scores that is attributable to conditions. In this example, the variance of scores is 2.794.
The question is how this variance compares with what the variance would have been if every subject had been in the same treatment condition. We estimate this by computing the variance within each of
the treatment conditions and taking the mean of these variances. For this example, the mean of the variances is 2.649. Since the mean variance within the smile conditions is not that much less than
the variance ignoring conditions, it is clear that "Smile Condition" is not responsible for a high percentage of the variance of the scores. The most convenient way to compute the proportion
explained is in terms of the sum of squares "conditions" and the sum of squares total. The computations for these sums of squares are shown in the chapter on ANOVA. For the present data, the sum of
squares for "Smile Condition" is 27.535 and the sum of squares total is 377.189. Therefore, the proportion explained by "Smile Condition" is:
27.535/377.189 = 0.073.
Thus, 0.073 or 7.3% of the variance is explained by "Smile Condition."
An alternative way to look at the variance explained is as the proportion reduction in error. The sum of squares total (377.189) represents the variation when "Smile Condition" is ignored and the sum
of squares error (377.189 - 27.535 = 349.654) is the variation left over when "Smile Condition" is accounted for. The difference between 377.189 and 349.654 is 27.535. This reduction in error of
27.535 represents a proportional reduction of 27.535/377.189 = 0.073, the same value as computed in terms of proportion of variance explained.
This measure of effect size, whether computed in terms of variance explained or in terms of percent reduction in error, is called η^2 where η is the Greek letter eta. Unfortunately, η^2 tends to
overestimate the variance explained and is therefore a biased estimate of the proportion of variance explained. As such, it is not recommended (despite the fact that it is reported by a leading
statistics package).
An alternative measure, ω^2 (omega squared), is unbiased and can be computed from
where MSE is the mean square error and k is the number of conditions. For this example, k = 4 and ω^2 = 0.052.
It is important to be aware that both the variability of the population sampled and the specific levels of the independent variable are important determinants of the proportion of variance explained.
Consider two possible designs of an experiment investigating the effect of alcohol consumption on driving ability. As can be seen in Table 1, Design 1 has a smaller range of doses and a more diverse
population than Design 2. What are the implications for the proportion of variance explained by Dose? Variation due to Dose would be greater in Design 2 than Design 1 since alcohol is manipulated
more strongly than in Design 1. However, the variance in the population should be greater in Design 1 since it includes a more diverse set of drivers. Since with Design 1 the variance due to Dose
would be smaller and the total variance would be larger, the proportion of variance explained by Dose would be much less using Design 1 than using Design 2. Thus, the proportion of variance explained
is not a general characteristic of the independent variable. Instead, it is dependent on the specific levels of the independent variable used in the experiment and the variability of the population
Table 1. Design Parameters
Design Dose Population
1 0.30 All Drivers between 16 and 80 Years of Age
2 0.00 Experienced Drivers between 25 and 30 Years of Age
0.50 1.00
Factorial Designs
In one-factor designs, the sum of squares total is the sum of squares condition plus the sum of squares error. The proportion of variance explained is defined relative to sum of squares total. In an
A x B design, there are three sources of variation (A, B, A x B) in addition to error. The proportion of variance explained for a variable (A, for example) could be defined relative to the sum of
squares total (SSQ[A] + SSQ[B] + SSQ[AxB] + SSQ[error]) or relative to SSQ[A] + SSQ[error].
To illustrate with an example, consider a hypothetical experiment on the effects of age (6 and 12 years) and of methods for teaching reading (experimental and control conditions). The means are shown
in Table 2. The standard deviation of each of the four cells (Age x Treatment combinations) is 5. (Naturally, for real data, the standard deviations would not be exactly equal and the means would not
be whole numbers.) Finally, there were 10 subjects per cell resulting in a total of 40 subjects.
Table 2. Condition Means
Age Experimental Control
The sources of variation, degrees of freedom, and sums of squares from the analysis of variance summary table as well as four measures of effect size are shown in Table 3. Note that the sum of
squares for age is very large relative to the other two effects. This is what would be expected since the difference in reading ability between 6- and 12-year-olds is very large relative to the
effect of condition.
Table 3. ANOVA Summary Table
Source df SSQ η^2 partial ω^2 partial
η^2 ω^2
Age 1 1440 0.567 0.615 0.552 0.586
Condition 1 160 0.063 0.151 0.053 0.119
A x C 1 40 0.016 0.043 0.006 0.015
Error 36 900
Total 39 2540
First, we consider the two methods of computing^ η^2, labeled η^2 and partial η^2. The value of η^2 for an effect is simply the sum of squares for this effect divided by the sum of squares total. For
example, the η^2 for Age is 1440/2540 = 0.567. As in a one-factor design, η^2 is the proportion of the total variation explained by a variable. Partial η^2 for Age is SSQ[Age] divided by (SSQ[Age] +
SSQ[error]), which is 1440/2340 = 0.615.
As you can see, the partial η^2 is larger than η^2. This is because the denominator is smaller for the partial η^2. The difference between η^2 and partial η^2 is even larger for the effect of
condition. This is because SSQ[Age] is large and it makes a big difference whether or not it is included in the denominator.
As noted previously, it is better to use ω^2 than η^2 because η^2 has a positive bias. You can see that the values for ω^2 are smaller than for η^2. The calculations for ω^2 are shown below:
where N is the total number of observations.
The choice of whether to use ω^2 or the partial ω^2 is subjective; neither one is correct or incorrect. However, it is important to understand the difference and, if you are using computer software,
to know which version is being computed. (Beware, at least one software package labels the statistics incorrectly).
Correlational Studies
In the section "Partitioning the Sums of Squares" in the Regression chapter, we saw that the sum of squares for Y (the criterion variable) can be partitioned into the sum of squares explained and the
sum of squares error. The proportion of variance explained in multiple regression is therefore:
In simple regression, the proportion of variance explained is equal to r^2; in multiple regression, it is equal to R^2.
In general, R^2 is analogous to η^2 and is a biased estimate of the variance explained. The following formula for adjusted R^2 is analogous to ω^2 and is less biased (although not completely
where N is the total number of observations and p is the number of predictor variables.
Please answer the questions: | {"url":"https://onlinestatbook.com/2/effect_size/variance_explained.html","timestamp":"2024-11-11T14:33:49Z","content_type":"text/html","content_length":"19649","record_id":"<urn:uuid:e1b233b9-8981-4ce6-a73c-88c8053fd83c>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00436.warc.gz"} |
Katharina Ensinger
Sep 06, 2023
Abstract:Dynamics model learning deals with the task of inferring unknown dynamics from measurement data and predicting the future behavior of the system. A typical approach to address this problem
is to train recurrent models. However, predictions with these models are often not physically meaningful. Further, they suffer from deteriorated behavior over time due to accumulating errors. Often,
simulators building on first principles are available being physically meaningful by design. However, modeling simplifications typically cause inaccuracies in these models. Consequently, hybrid
modeling is an emerging trend that aims to combine the best of both worlds. In this paper, we propose a new approach to hybrid modeling, where we inform the latent states of a learned model via a
black-box simulator. This allows to control the predictions via the simulator preventing them from accumulating errors. This is especially challenging since, in contrast to previous approaches,
access to the simulator's latent states is not available. We tackle the task by leveraging observers, a well-known concept from control theory, inferring unknown latent states from observations and
dynamics over time. In our learning-based setting, we jointly learn the dynamics and an observer that infers the latent states via the simulator. Thus, the simulator constantly corrects the latent
states, compensating for modeling mismatch caused by learning. To maintain flexibility, we train an RNN-based residuum for the latent states that cannot be informed by the simulator. | {"url":"https://www.catalyzex.com/author/Katharina%20Ensinger","timestamp":"2024-11-13T22:53:19Z","content_type":"text/html","content_length":"100174","record_id":"<urn:uuid:25079097-693b-4022-8a4e-64220f993167>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00437.warc.gz"} |
Diffraction Geometry
XMM-Newton Users Handbook
The diffraction geometry of the reflection gratings is illustrated in Fig. 76. Light strikes the gratings at an angle of incidence with respect to the plane of the grating, and emerges at angle given
by the dispersion equation
where is the radiation wavelength, d the grating spacing, and m the spectral order. The RGS is designed for use with negative orders, .
European Space Agency - XMM-Newton Science Operations Centre | {"url":"https://heasarc.gsfc.nasa.gov/docs/xmm/uhb/rgsdiffgeom.html","timestamp":"2024-11-09T06:08:20Z","content_type":"text/html","content_length":"4932","record_id":"<urn:uuid:c5a3293e-f210-4ed5-8caa-0644b5c1b548>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00317.warc.gz"} |
The np-chart with 3-sigma limits and the ARL-unbiased np-chart revisited
Morais, M. C. ; Wittenberg, P.; Cruz, C. J.
Stochastics and Quality Control, 37 (2022), 107-116
In the statistical process control literature, counts of nonconforming items are frequently assumed to be independent and have a binomial distribution with parameters (n,p), where n and p represent
the fixed sample size and the fraction nonconforming. In this paper, the traditional np-chart with 3-sigma control limits is reexamined. We show that, even if its lower control limit is positive and
we are dealing with a small target value p0 of the fraction nonconforming (p), this chart average run length (ARL) function achieves a maximum to the left of p0. Moreover, the in-control ARL of this
popular chart is also shown to vary considerably with the fixed sample size n. We also look closely at the ARL function of the ARL-unbiased np-chart proposed by Morais (2016), which attains a
pre-specified maximum value in the in-control situation. This chart triggers a signal at sample t with probability one, if the observed number of nonconforming items, xt, is beyond the lower and
upper control limits (L and U); probability gL (resp. gU), if xt coincides with L (resp. U). A graphical display for the ARL-unbiased np-chart is proposed, taking advantage of the qcc package for the
R statistical software. Furthermore, as far as we have investigated, its control limits can be obtained using three different search algorithms; their computation times are thoroughly compared. | {"url":"https://cemat.tecnico.ulisboa.pt/document.php?section=statistics&member_id=86&project_id=5&s_member_type=1&status=1&doc_id=3545","timestamp":"2024-11-07T20:33:55Z","content_type":"text/html","content_length":"9573","record_id":"<urn:uuid:a9fc9841-7b01-4ec9-8f15-76fdbfc1ed45>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00367.warc.gz"} |
Form of Dilaton term in polarization tensor
The vertex operator associated with massless state is
$$V(k,\epsilon) = -\frac{2}{\alpha}\epsilon_{\mu\nu}(k)\bar{\partial}X^\mu(\bar{z})\partial X^\nu(z)e^{ik\cdot X(z,\bar{z})}$$
The polarization tensor can be decomposed into symmetric (Graviton), antisymmetric and Trace bit (Dilaton) Lust,theisen 16.9
$$\epsilon^{(h)}_{\mu\nu} = \epsilon^{(h)}_{\nu\mu},\qquad\epsilon^{(h)}_{\mu\nu} \eta^{\mu\nu} = k^{\mu}\epsilon^{(h)}_{\mu\nu} = 0, $$
$$\epsilon^{(B)}_{\mu\nu} = -\epsilon^{(B)}_{\nu\mu},\qquad k^{\mu}\epsilon^{(B)}_{\mu\nu} = 0, $$
$$\epsilon^{(D)}_{\mu\nu} = \frac{1}{\sqrt{d-2}}(\eta_{\mu\nu}-k_{\mu}\bar{k_{\nu}}-k_{\nu}\bar{k_{\mu}})$$
$\bar{k}$ is an arbitrary light like vector orthogonal to $k$. Can you please tell me why we took that particular form for the Dilaton?
This post imported from StackExchange Physics at 2014-09-07 18:45 (UCT), posted by SE-user sol0invictus
The polarisation tensor is not unique and subject to the equivalence $\epsilon_{\mu\nu}(k) \sim \epsilon_{\mu\nu}(k) + k_\mu\xi_\nu+k_\nu\xi_\mu$ which is nothing but diffeomorphisms in target space.
this gauge symmetry shows up in the form the existence of a null state which can be constructed from the state given in Eq (16.11). One fixes this gauge symmetry by choosing $k^\mu \epsilon_{\mu\nu}=
0$ and $k^\nu \epsilon_{\mu\nu}=0$ (this is the analog of $\partial_\mu A^\mu=0$ in $U(1)$ gauge theories). This can be viewed as a transversality condition on the polarisation tensor. On decomposing
into a symmetric traceless, antisymmetric and trace part, this condition remains to be implemented. For the dilation, this is done by introducing an reference vector $\bar{k}$. $\bar{k}$ is chosen
such that $\bar{k}^2=0$ and $\bar{k}\cdot k=1$ (not $0$ as the OP has stated). Then, $(\eta_{\mu\nu}-k_\mu\bar{k}_\nu-k_\nu\bar{k}_\mu)$ is the projector on to the space transverse to $k$.
The graviton represents only the traceless symmetric part of the polarization tensor. The dilaton part belongs to the symmetric part too (the trace part). The symmetric dilaton \(\epsilon^{(D)}_{\mu\
nu} (k)\)represents only \(1\)degree of freedom, so it is necessarily expressed as a function of \(k\), and it must verify the transversality condition : \(k^\mu\epsilon^{(D)}_{\mu\nu} (k)==0\)
The general expression is then \(\epsilon^{(D)}_{\mu\nu} (k) \sim \eta_{\mu\nu} -(k_\mu \bar k_\nu + k_\nu \bar k_\mu) \)
The transversality condition implies \(k_\nu - (k.\bar k) k_\nu =0\), and, so finally \((k.\bar k)=1\)
One may add a supplementary transversality condition : \(\bar k^\mu\epsilon^{(D)}_{\mu\nu} (k)=0\), this clearly implies \(\bar k^2=0\)
This supplementary condition is compatible with the precedent conditions, in particular the dilaton and graviton degrees of freedom are still orthogonal: \(\epsilon^{(D)}_{\mu\nu} (k) (\epsilon^
{(G)}) ^{\mu\nu} (k)=0\) | {"url":"https://www.physicsoverflow.org/23282/form-of-dilaton-term-in-polarization-tensor","timestamp":"2024-11-04T18:03:28Z","content_type":"text/html","content_length":"129494","record_id":"<urn:uuid:9d8a7397-3011-4f45-950b-99a91cee6c04>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00253.warc.gz"} |
Current transformers
Last Modification: October 6, 2013
Fig. 1: Miniature current transformer with a diameter of approx. 11 mm.
Current transformers are often used in applications where large currents must be measured or where a small measuring impedance is essential. A great benefit of the current transformer is the galvanic
isolation between the measured current and the registration instrument.
Usually the term "current transformer" is used to identify a passive current transformer as shown in figure 1, and is only suitable for alternating current applications. The frequency range is very
depending on the construction of the transformer and the way it is applied. A current transformer has a minimal influence on the measurement and the circuit wherein it is inserted by its very low
impedance and therefore the minimal primary voltage drop.
Beside the passive transformers there are also active current transformers who are able to measure direct current. These are discussed in the article Oscilloscope probes.
Construction & usage
Just like a normal transformer a current transformer consists of a primary and secondary winding mounted on a closed core. The core can be made from iron or ferrite, or in special applications just
The primary winding N[P] carries the current to measure. This winding is most of the times a single turn wire whose inserted through the core hole. The secondary winding N[S] has a large number of
turns and is fixed mounted onto the core. This winding carries the transformed primary current and is connected to the measuring instrument.
Transformation ratio
The transformation ratio is determined by the turn ratio N[P]/N[S]. If the primary winding only consists of a wire inserted though the core, than it counts as one single turn.
At a certain primary current and turn ratio the secondary current is calculated as:
Burden resistor
The secondary winding of the current transformer should ideally be shorted. Because in contradiction to power transformers the current is transformed while the voltage must be kept low as possible. A
shorted secondary winding guarantees a non-obstructed secondary current. A pure short circuit isn't feasible in practice, but a low as possible terminating resistance is the goal. This terminating
resistance for current transformers is often called the burden resistor.
Fig. 3: Current transformer terminated with a burden resistor. The reading is done with a voltmeter.
Fig. 2: Current transformer terminated with an amperemeter.
Figure 2 shows how the secondary winding is terminated with a low ohmic amperemeter. In a lot of cases the current will be measured indirectly with a voltmeter and a burden resistor (figure 3). This
burden resistor must be kept low as possible.
A low ohmic burden resistor will keep the voltage on the secondary side low:
And because the rules for current transformers are the same as for every other transformer, the primary voltage is the transformed voltage of the voltage on the secondary side:
When manufacturing amperemeters they aim at a low as possible burden voltage. When a current transformer is used, the primary voltage drop can be kept low by increasing the turn ratio and a low
terminating resistor.
Transformed impedance
The primary impedance is calculated by the primary voltage and current:
This primary impedance is determined directly from the secondary burden resistor and the turn ratio. If the equations 1 to 4 are combined:
Transformer model
The previous chapter described an ideal current transformer. A practical transformer has a limited bandwidth whose dependent on the burden resistor and drive amplitude. To explain the behavior of a
current transformer a simplified model is drawn in figure 4.
Fig. 4: Simplified model of a current transformer.
The transformer (N1, N2) is ideal. L[m] represents the magnetization induction of the core including hysteresis. The magnetic fields of the two windings are never fully coupled, and the leakage
fields caused by this are suggested by two series inductors L[leakP] en L[leakS]. And the ohmic copper resistances of each winding are Rcu[P] en Rcu[S].
Low frequencies
The limitation of the bandwidth on the lower end is due to a combination of the voltage across the secondary burden resistor and the long period times. A higher burden resistor will cause a higher
secondary voltage relative to a lower burden resistor at the same current value. A higher secondary voltage will result in a higher primary voltage and therefore a higher voltage across the
magnetization self induction L[m].
The current change in the magnetization coil as a result of the primary voltage:
How higher the primary voltage V[P] and longer it sustains dt, how higher the magnetization current will be. The phase of this magnetization current is shifted 90° in relation to the current in the
secondary burden resistor. And since the amplitude of the magnetization current will become larger at lower frequencies it will have two noticeable consequences:
• the secondary current will become smaller in relation to the primary current,
• the phase shift between the primary and secondary current will become larger as visible in figure 10.
Overdriven current transformer
Fig. 5: Distorted voltage and current shape due to overdriving a current transformer.
That the magnetization current is becoming larger at lower frequencies also has the effects that the magnetic field strength becomes bigger:
The core is driven more and more in the direction of saturation which leads to distortion of the secondary current as well as the primary and secondary voltage. The result of this is shown in figure
High frequencies
The windings of a transformer are never 100% magnetic coupled. The magnetic field lines of one winding who doesn't reach the other winding are called the leakage field. The primary and secondary
leakage fields are represented as the leakage inductance's L[leakP] and L[leakS]. These leakage inductances have influence on the behavior at higher frequencies. Because these connected in series
with the ideal transformer will they cause an increase of the impedance at higher frequencies. The effective primary impedance is therefore increased.
The properties of the current transformer described in this chapter do relate to a home made one as shown in figure 1. The secondary winding had 50 turns of Ø 0,22 mm wire on a Ferroxcube LT10/6/
4-3E25 ring core. The ohmic resistance of this winding is 320 mΩ. The primary winding is a wire inserted into the core hole and has an ohmic resistance of 3,6 mΩ.
Measurement results
The results shown in the graphs come from the measured RMS-values of the primary voltage and current secondary voltage at different frequencies. The phase difference is measured between the primary
voltage and current and as well as the phase relation between the primary current and secondary voltage.
Figure 6 shows the transformation error as function of the frequency with two different burden resistors. It makes it clear that a low burden resistor is advantageous to the bandwidth. If a
measurement error of ±2 % is set as limit than is the bandwidth with a terminating resistor 8,3 Ω 800 Hz ... 20 MHz, and with a 50 Ω resistor it becomes a lot less: 4 kHz ... 4 MHz.
The current depending error in the lower frequency range is displayed in figure 7. At higher currents the development towards a larger transformation error is sharper than at smaller currents. This
is due to the increase of the core magnetization. A larger current causes a relative abruptly transition towards a saturated core and causes a lot of distortion.
Fig. 6: Transformation fault of a current transformer.
Fig. 7: Current depending transformation error.
The primary impedance as function of the frequency is shown in figure 8. The center section of the impedance characteristic is flat. The primary impedance is equal to the transformed secondary burden
The impedance decreases at lower frequencies because the magnetization current now plays a relatively large role. And because the transformer core becomes overdriven and the resulted distortion of
the voltage and current makes it difficult to calculate the impedance and the error of it. Higher currents have an even bigger influence on this error as can be seen in figure 9.
In the higher frequency range the couple factor between the primary and secondary windings will play a growing role.
Fig. 8: Primary impedance of a current transformer.
Fig. 9: Current depending primary impedance.
Fig. 10: Phase error of a current transformer.
The relative increasement of the magnetization current at lower frequencies will cause a frequency depended phase error between the primary and secondary current. This proporty must taken into
account when a current transformer is used for phase and power measurments. | {"url":"http://meettechniek.info/instruments/current-transformer.html","timestamp":"2024-11-07T10:33:21Z","content_type":"text/html","content_length":"26290","record_id":"<urn:uuid:2f506ec4-cb8a-49e1-9eee-9c66c4d2193d>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00517.warc.gz"} |
The equation of stationary wave along a stretched string is given by
Doubtnut is No.1 Study App and Learning App with Instant Video Solutions for NCERT Class 6, Class 7, Class 8, Class 9, Class 10, Class 11 and Class 12, IIT JEE prep, NEET preparation and CBSE, UP
Board, Bihar Board, Rajasthan Board, MP Board, Telangana Board etc
NCERT solutions for CBSE and other state boards is a key requirement for students. Doubtnut helps with homework, doubts and solutions to all the questions. It has helped students get under AIR 100 in
NEET & IIT JEE. Get PDF and video solutions of IIT-JEE Mains & Advanced previous year papers, NEET previous year papers, NCERT books for classes 6 to 12, CBSE, Pathfinder Publications, RD Sharma, RS
Aggarwal, Manohar Ray, Cengage books for boards and competitive exams.
Doubtnut is the perfect NEET and IIT JEE preparation App. Get solutions for NEET and IIT JEE previous years papers, along with chapter wise NEET MCQ solutions. Get all the study material in Hindi
medium and English medium for IIT JEE and NEET preparation | {"url":"https://www.doubtnut.com/qna/649437237","timestamp":"2024-11-03T22:31:19Z","content_type":"text/html","content_length":"228485","record_id":"<urn:uuid:cdde6091-cb9e-49c2-b5f6-2ff70a743b19>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00686.warc.gz"} |
Conceptual models
Aquifer of infinite lateral extent
✓aquifer is unconfined, of infinite lateral extent, isotropic or anisotropic on the vertical plane;
✓pumping well may be fully penetrating or partially penetrating;
✓drawdown can be estimated in the observation well (fully penetrating or partially penetrating) or in the piezometer at any distance from the pumping well;
✓allowance for storage capacity of the pumping well, skin effect and storage capacity of the observation well (see Moench’s solution accounting for pumping well storage capacity) is made.
a b
c d
e f
Conceptual hydrogeological schemes (cross-sections) of unconfined anisotropic aquifer of infinite lateral extent.
a – partially penetrating pumping well and piezometer; b – partially penetrating pumping and observation wells; c – fully penetrating pumping well and piezometer; d – fully penetrating pumping and
observation wells; e – fully penetrating pumping and observation wells in the unconfined leaky aquifer; f – confined-unconfined aquifer.
Fundamental equations:
1) Neuman’s solution for average drawdown in observation well and drawdown in piezometer in anisotropic aquifer; pumping well as well as penetrating well may be fully penetrating or partially
2) Boulton’s solutions for average drawdown and water table drawdown in fully penetrating well in isotropic aquifer;
3) Moench’s solution for average drawdown in observation well and drawdown in piezometer in anisotropic aquifer; pumping and observation wells being fully penetrating or partially penetrating;
4) Moench’s solution for average drawdown in observation well and drawdown in piezometer in anisotropic aquifer; pumping and penetrating wells being fully penetrating or partially penetrating;
allowance is made for pumping well and piezometer storage capacity, skin effect in the pumping well and shape factor of the piezometer;
5) Moench’s solution for drawdown in fully penetrating or partially penetrating pumping well in anisotropic aquifer with account for pumping well storage capacity and skin effect;
6) Theis’s solution for water table drawdown for fully penetrating well in isotropic aquifer;
7) Hantush’s solution for water table drawdown for fully penetrating well in isotropic leaky aquifer;
8) Moench–Prickett solution for confined-unconfined aquifer.
Pumping regimes:
One well pumped at a constant rate
One or several wells pumped at a variable rate
If Moench’s solution is applied some additional parameters should be specified such as: casing radius of the pumping well, skin hydraulic conductivity, skin thickness and, if needed, shape factor of
the observation well or piezometer.
Barlow P.M., Moench A.F. WTAQ version 2 - a computer program for analysis of aquifer tests in confined and water-table aquifers with alternative representations of drainage from the unsaturated zone
// U.S. Geological Survey. 2011. Technique and Methods 3-B9.
Boulton N.S. Analysis of data from non-equilibrium pumping tests allowing for delayed yield from storage // Proceedings of the Institution of Civil Engineers. 1963. Vol. 26. P. 469–482.
Boulton N.S. The drawdown of the water-table under non-steady conditions near a pumped well in an unconfined formation // Proceedings of the Institution of Civil Engineers. 1954. Vol. 3, N 3.
P. 564–579.
Moench A.F. Computation of type curves for flow to partially penetrating wells in water-table aquifers // Ground Water. 1993. Vol. 31, N 6. P. 966–971.
Moench A.F. Flow to a well in a water-table aquifer: an improved Laplace transform solution // Ground Water. 1996. Vol. 34, N 4. P. 593–596.
Moench A.F. Flow to a well of finite diameter in a homogeneous, anisotropic water table aquifer // Water Resources Research. 1997. Vol. 33, N 6. P. 1397–1407.
Moench A.F., Prickett T.A. Radial flow in an infinite aquifer undergoing conversion from artesian to water table conditions // Water Resources Research. 1972. Vol. 8, N 2. P. 494–499.
Neuman S.P. Analysis of pumping test data from anisotropic unconfined aquifers // Water Resources Research. 1975. Vol. 11, N 2. P. 329–345.
Neuman S.P. Effect of partial penetration on flow in unconfined aquifers considering delayed gravity response // Water Resources Research. 1974. Vol. 10, N 2. P. 303–312.
Neuman S.P. Perspective on «Delayed yield» // Water Resources Research. 1979. Vol. 15, N 4. P. 899–908.
Neuman S.P. Supplementary comments on «Theory of flow in unconfined aquifers considering delayed gravity response» // Water Resources Research. 1973. Vol. 9, N 4. P. 1102–1103.
Neuman S.P. Theory of flow in unconfined aquifers considering delayed gravity response // Water Resources Research. 1972. Vol. 8, N 4. P. 1031–1045. | {"url":"https://ansdimat.com/help/source/unconf0.htm","timestamp":"2024-11-14T01:41:24Z","content_type":"text/html","content_length":"22928","record_id":"<urn:uuid:777d4a3d-1783-4072-a553-e51551d29803>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00041.warc.gz"} |
The sum of the roots of the equation e4t−10e3t+29e2t−20et+4=0 i... | Filo
Question asked by Filo student
The sum of the roots of the equation is [12 May 2023, Shifty
Not the question you're searching for?
+ Ask your question
Video solutions (1)
Learn from their 1-to-1 discussion with Filo tutors.
11 mins
Uploaded on: 4/17/2024
Was this solution helpful?
Found 2 tutors discussing this question
Discuss this question LIVE for FREE
15 mins ago
One destination to cover all your homework and assignment needs
Learn Practice Revision Succeed
Instant 1:1 help, 24x7
60, 000+ Expert tutors
Textbook solutions
Big idea maths, McGraw-Hill Education etc
Essay review
Get expert feedback on your essay
Schedule classes
High dosage tutoring from Dedicated 3 experts
Practice more questions on Coordinate Geometry
View more
Students who ask this question also asked
View more
Stuck on the question or explanation?
Connect with our Mathematics tutors online and get step by step solution of this question.
231 students are taking LIVE classes
Question Text The sum of the roots of the equation is [12 May 2023, Shifty
Updated On Apr 17, 2024
Topic Coordinate Geometry
Subject Mathematics
Class Class 12
Answer Type Video solution: 1
Upvotes 87
Avg. Video Duration 11 min | {"url":"https://askfilo.com/user-question-answers-mathematics/the-sum-of-the-roots-of-the-equation-is-12-may-2023-shifty-39373939363130","timestamp":"2024-11-09T17:11:43Z","content_type":"text/html","content_length":"220288","record_id":"<urn:uuid:e9387636-84a8-4caa-a66c-3919041d7d52>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00701.warc.gz"} |
talwani3d - Compute geopotential anomalies over 3-D bodies by the method of Talwani
talwani3d [ modeltable ] [ -A ] [ -Drho ] ] [ -Ff|n|v ] [ -Goutfile ] [ -Iincrement ] [ -M[h][v] ] [ -Ntrackfile ] [ -Rregion ] [ -Zlevel|obsgrid ] [ -V[level] ] [ -bibinary ] [ -dnodata ] [ -eregexp
] [ -fg ] [ -iflags ] [ -oflags ] [ -r ] [ -x[[-]n] ]
Note: No space is allowed between the option flag and the associated arguments.
talwani3d will read the multi-segment modeltable from file or standard input. This file contains contours of a 3-D body at different z-levels, with one contour per segment. The segment header must
contain the parameters zlevel rho, which states the z contour level and the density of this slice (individual slice densities may be overridden by a fixed density contrast given via -D). We can
compute anomalies on an equidistant grid (by specifying a new grid with -R and -I or provide an observation grid with elevations) or at arbitrary output points specified via -N. Chose from free-air
anomalies, vertical gravity gradient anomalies, or geoid anomalies. Options are available to control axes units and direction.
Required Arguments¶
The file describing the horizontal contours of the bodies. Contours will be automatically closed if not already closed, and repeated vertices will be eliminated.
x_inc [and optionally y_inc] is the grid spacing. Optionally, append a suffix modifier. Geographical (degrees) coordinates: Append m to indicate arc minutes or s to indicate arc seconds. If one
of the units e, f, k, M, n or u is appended instead, the increment is assumed to be given in meter, foot, km, Mile, nautical mile or US survey foot, respectively, and will be converted to the
equivalent degrees longitude at the middle latitude of the region (the conversion depends on PROJ_ELLIPSOID). If y_inc is given but set to 0 it will be reset equal to x_inc; otherwise it will be
converted to degrees latitude. All coordinates: If +e is appended then the corresponding max x (east) or y (north) may be slightly adjusted to fit exactly the given increment [by default the
increment may be adjusted slightly to fit the given domain]. Finally, instead of giving an increment you may specify the number of nodes desired by appending +n to the supplied integer argument;
the increment is then recalculated from the number of nodes and the domain. The resulting increment value depends on whether you have selected a gridline-registered or pixel-registered grid; see
GMT File Formats for details. Note: if -Rgrdfile is used then the grid spacing has already been initialized; use -I to override the values.
-Rxmin/xmax/ymin/ymax[+r][+uunit] (more …)
Specify the region of interest.
Optional Arguments¶
The z-axis should be positive upwards [Default is down].
Sets fixed density contrast that overrides any setting in model file, in kg/m^3.
Specify desired gravitational field component. Choose between f (free-air anomaly) [Default], n (geoid) or v (vertical gravity gradient).
Specify the name of the output data (for grids, see GRID FILE FORMATS below). Required when an equidistant grid is implied for output. If -N is used then output is written to stdout unless G
specifies an output file.
Sets units used. Append h to indicate horizontal distances are in km [m], and append z to indicate vertical distances are in km [m].
Specifies locations where we wish to compute the predicted value. When this option is used there are no grids and the output data records are written to stdout.
-V[level] (more …)
Select verbosity level [c].
Set observation level either as a constant or give the name of a grid with observation levels. If the latter is used the the grid determines the output grid region [0].
-bi[ncols][t] (more …)
Select native binary input. [Default is 2 input columns].
-d[i|o]nodata (more …)
Replace input columns that equal nodata with NaN and do the reverse on output.
-e[~]”pattern” | -e[~]/regexp/[i] (more …)
Only accept data records that match the given pattern.
Geographic grids (dimensions of longitude, latitude) will be converted to km via a “Flat Earth” approximation using the current ellipsoid parameters.
-h[i|o][n][+c][+d][+rremark][+rtitle] (more …)
Skip or produce header record(s). Not used with binary data.
-icols[+l][+sscale][+ooffset][,…] (more …)
Select input columns and transformations (0 is first column).
-ocols[,…] (more …)
Select output columns (0 is first column).
-r (more …)
Set pixel node registration [gridline].
-x[[-]n] (more …)
Limit number of cores used in multi-threaded algorithms (OpenMP required).
-:[i|o] (more …)
Swap 1st and 2nd column on input and/or output.
-^ or just -
Print a short message about the syntax of the command, then exits (NOTE: on Windows just use -).
-+ or just +
Print an extensive usage (help) message, including the explanation of any module-specific option (but not the GMT common options), then exits.
-? or no arguments
Print a complete usage (help) message, including the explanation of all options, then exits.
For map distance unit, append unit d for arc degree, m for arc minute, and s for arc second, or e for meter [Default], f for foot, k for km, M for statute mile, n for nautical mile, and u for US
survey foot. By default we compute such distances using a spherical approximation with great circles. Prepend - to a distance (or the unit is no distance is given) to perform “Flat Earth”
calculations (quicker but less accurate) or prepend + to perform exact geodesic calculations (slower but more accurate).
To compute the free-air anomalies on a grid over a 3-D body that has been contoured and saved to body.txt, using 1.7 g/cm^3 as the density contrast, try
gmt talwani3d -R-200/200/-200/200 -I2 -G3dgrav.nc body.txt -D1700 -Fg
To obtain the vertical gravity gradient anomaly along the track in crossing.txt for the same model, try
gmt talwani3d -Ncrossing.txt body.txt -D1700 -Fv > vgg_crossing.txt
Finally, the geoid anomaly along the same track in crossing.txt for the same model is written to n_crossing.txt by
gmt talwani3d -Ncrossing.txt body.txt -D1700 -Fn -Gn_crossing.txt
Kim, S.-S., and P. Wessel, 2016, New analytic solutions for modeling vertical gravity gradient anomalies, Geochem. Geophys. Geosyst., 17, http://dx.doi.org/10.1002/2016GC006263.
Talwani, M., and M. Ewing, 1960, Rapid computation of gravitational attraction of three-dimensional bodies of arbitrary shape, Geophysics, 25, 203-225. | {"url":"https://docs.generic-mapping-tools.org/5.4/supplements/potential/talwani3d.html","timestamp":"2024-11-03T03:26:34Z","content_type":"application/xhtml+xml","content_length":"23597","record_id":"<urn:uuid:33d9492e-4aec-49df-8d60-0a5f515a5dba>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00408.warc.gz"} |
Selection Statements Make a decision based on conditions Allows the computer to be intelligent. - ppt download
Presentation is loading. Please wait.
To make this website work, we log user data and share it with processors. To use this website, you must agree to our
Privacy Policy
, including cookie policy.
Ads by Google | {"url":"http://slideplayer.com/slide/766230/","timestamp":"2024-11-03T06:02:37Z","content_type":"text/html","content_length":"156503","record_id":"<urn:uuid:2fe844b5-c9fe-44ae-82fd-fd842cd3363b>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00786.warc.gz"} |
QuadraticEquations - The Brainbox Tutorials
Specially designed for ICSE Class 9 students, this online maths quiz aims at clearing the concept of the chapter- “Quadratic Equations“. ICSE CLass 9 Maths Quadratic Equations Test Contains sums
related to Quadratic equations, roots of quadratic equations, solving quadratic equations by factorisation method. ICSE CLass 9 Maths Quadratic Equations Test Students can get access to this online …
Read more
Class 10 Maths Quadratic Equations MCQ Mock Test
The Brainbox Tutorials has specially designed this online maths quiz for Class 10 students. This Class 10 Maths Quadratic Equations MCQ Mock Test covers all the important topics from the capter
Quadratic equation in one variable. This Class 10 Maths Quadratic Equations MCQ Mock Test Contains all the assorted important sums related to nature of roots, solving quadratic equations by
factorisation, quadratic formula or … Read more | {"url":"https://thebrainboxtutorials.com/category/quadraticequations","timestamp":"2024-11-02T17:30:58Z","content_type":"text/html","content_length":"71971","record_id":"<urn:uuid:2e0fb06f-17ff-47d1-8428-9d760ca24886>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00135.warc.gz"} |
Data for "On-grid compressive sampling for spherical field measurements in acoustics" in The Journal of the Acoustical Society of America.
This dataset contains CSV files for the figures in the paper titled "On-grid compressive sampling for spherical field measurements in acoustics" in The Journal of the Acoustical Society of America.
In this paper, we derive a compressive sampling method for spherical harmonic/spherical wavefunction or Wigner D-function series with sparse coefficients. Applications of these sparse expansions
include spherical field measurements in acoustics and spherical near-field antenna measurements, to name a couple. The figures that this dataset is for are examples demonstrating the following:
example acoustic field coefficients in the spherical harmonic/spherical wavefunction basis; relationships between spherical harmonic/spherical wavefunction/Wigner D-function coefficient sparsity and
spatial Fourier coefficient sparsity; example compressive sampling reconstruction using our proposed compressive sampling method with and without noise; and comparisons between classical Nyquist
sampling and our proposed compressive sampling method.
About this Dataset
Updated: 2024-02-22
Metadata Last Updated: 2022-11-09 00:00:00
Date Created: N/A
Data Provided by:
Dataset Owner: N/A
Table representation of structured data
Title Data for "On-grid compressive sampling for spherical field measurements in acoustics" in The Journal of the Acoustical Society of America.
This dataset contains CSV files for the figures in the paper titled "On-grid compressive sampling for spherical field measurements in acoustics" in The Journal of the Acoustical Society
of America. In this paper, we derive a compressive sampling method for spherical harmonic/spherical wavefunction or Wigner D-function series with sparse coefficients. Applications of
Description these sparse expansions include spherical field measurements in acoustics and spherical near-field antenna measurements, to name a couple. The figures that this dataset is for are
examples demonstrating the following: example acoustic field coefficients in the spherical harmonic/spherical wavefunction basis; relationships between spherical harmonic/spherical
wavefunction/Wigner D-function coefficient sparsity and spatial Fourier coefficient sparsity; example compressive sampling reconstruction using our proposed compressive sampling method
with and without noise; and comparisons between classical Nyquist sampling and our proposed compressive sampling method.
Modified 2022-11-09 00:00:00
Publisher National Institute of Standards and Technology
Contact mailto:[email protected]
Keywords compressive sensing , compressive sampling , sparse signal processing , far-field pattern , near-field pattern , antenna characterization , Wigner D-functions , spherical harmonics ,
acoustic fields;
"identifier": "ark:\/88434\/mds2-2842",
"accessLevel": "public",
"contactPoint": {
"hasEmail": "mailto:[email protected]",
"fn": "Marc Valdez"
"programCode": [
"landingPage": "https:\/\/data.nist.gov\/od\/id\/mds2-2842",
"title": "Data for \"On-grid compressive sampling for spherical field measurements in acoustics\" in The Journal of the Acoustical Society of America.",
"description": "This dataset contains CSV files for the figures in the paper titled \"On-grid compressive sampling for spherical field measurements in acoustics\" in The Journal of the Acoustical Society of America. In this paper, we derive a compressive sampling method for spherical harmonic\/spherical wavefunction or Wigner D-function series with sparse coefficients. Applications of these sparse expansions include spherical field measurements in acoustics and spherical near-field antenna measurements, to name a couple. The figures that this dataset is for are examples demonstrating the following: example acoustic field coefficients in the spherical harmonic\/spherical wavefunction basis; relationships between spherical harmonic\/spherical wavefunction\/Wigner D-function coefficient sparsity and spatial Fourier coefficient sparsity; example compressive sampling reconstruction using our proposed compressive sampling method with and without noise; and comparisons between classical Nyquist sampling and our proposed compressive sampling method.",
"language": [
"distribution": [
"downloadURL": "https:\/\/data.nist.gov\/od\/ds\/mds2-2842\/supp_figure_1_sorted_coefficients_case_2a.csv",
"description": "This file contains the data for supplemental figure 1. It shows the relative magnitude in dB of the sorted Fourier and Wigner D function coefficients for case 2a as described in the file header and paper. The data is organized as described in the header (first row) of the csv file.",
"mediaType": "text\/csv",
"title": "supp_figure_1_sorted_coefficients_case_2a.csv"
"downloadURL": "https:\/\/data.nist.gov\/od\/ds\/mds2-2842\/supp_figure_2_sorted_coefficients_case_2a.csv",
"description": "This file contains the data for supplemental figure 2. It shows the coefficient normalized error in dB of the sorted Fourier and Wigner D function coefficients for case 2a as described in the file header and paper. The data is organized as described in the header (first row) of the csv file.",
"mediaType": "text\/csv",
"title": "supp_figure_2_sorted_coefficients_case_2a.csv"
"downloadURL": "https:\/\/data.nist.gov\/od\/ds\/mds2-2842\/supp_figure_3_sorted_coefficients_case_2b.csv",
"description": "This file contains the data for supplemental figure 3. It shows the relative magnitude in dB of the sorted Fourier and Wigner D function coefficients for case 2b as described in the file header and paper. The data is organized as described in the header (first row) of the csv file.",
"mediaType": "text\/csv",
"title": "supp_figure_3_sorted_coefficients_case_2b.csv"
"downloadURL": "https:\/\/data.nist.gov\/od\/ds\/mds2-2842\/supp_figure_4_sorted_coefficients_case_2b.csv",
"description": "This file contains the data for supplemental figure 4. It shows the coefficient normalized error in dB of the sorted Fourier and Wigner D function coefficients for case 2b as described in the file header and paper. The data is organized as described in the header (first row) of the csv file.",
"mediaType": "text\/csv",
"title": "supp_figure_4_sorted_coefficients_case_2b.csv"
"downloadURL": "https:\/\/data.nist.gov\/od\/ds\/mds2-2842\/supp_figure_5_sorted_coefficients_case_2c.csv",
"description": "This file contains the data for supplemental figure 5. It shows the relative magnitude in dB of the sorted Fourier and Wigner D function coefficients for case 2c as described in the file header and paper. The data is organized as described in the header (first row) of the csv file.",
"mediaType": "text\/csv",
"title": "supp_figure_5_sorted_coefficients_case_2c.csv"
"downloadURL": "https:\/\/data.nist.gov\/od\/ds\/mds2-2842\/Figure_4a_SW_coefs_DirPat_CUBE_1098Hz.csv",
"description": "This file contains the data for figure 4a. It shows the magnitude of the spherical wavefunction coefficients for the DirPat CUBE driver 1 loudspeaker at 1098Hz. The data is organized as described in the header (first row) of the csv file.",
"mediaType": "text\/csv",
"title": "Figure_4a_SW_coefs_DirPat_CUBE_1098Hz.csv"
"downloadURL": "https:\/\/data.nist.gov\/od\/ds\/mds2-2842\/Figure_4b_SW_coefs_DirPat_CUBE_1400Hz.csv",
"description": "This file contains the data for figure 4b. It shows the magnitude of the spherical wavefunction coefficients for the DirPat CUBE driver 1 loudspeaker at 1400Hz. The data is organized as described in the header (first row) of the csv file.",
"mediaType": "text\/csv",
"title": "Figure_4b_SW_coefs_DirPat_CUBE_1400Hz.csv"
"downloadURL": "https:\/\/data.nist.gov\/od\/ds\/mds2-2842\/supp_figure_6_sorted_coefficients_case_2c.csv",
"description": "This file contains the data for supplemental figure 6. It shows the coefficient normalized error in dB of the sorted Fourier and Wigner D function coefficients for case 2c as described in the file header and paper. The data is organized as described in the header (first row) of the csv file.",
"mediaType": "text\/csv",
"title": "supp_figure_6_sorted_coefficients_case_2c.csv"
"downloadURL": "https:\/\/data.nist.gov\/od\/ds\/mds2-2842\/supp_figure_7_sorted_coefficients_case_3a.csv",
"description": "This file contains the data for supplemental figure 7. It shows the relative magnitude in dB of the sorted Fourier and Wigner D function coefficients for case 3a as described in the file header and paper. The data is organized as described in the header (first row) of the csv file.",
"mediaType": "text\/csv",
"title": "supp_figure_7_sorted_coefficients_case_3a.csv"
"downloadURL": "https:\/\/data.nist.gov\/od\/ds\/mds2-2842\/supp_figure_8_sorted_coefficients_case_3a.csv",
"description": "This file contains the data for supplemental figure 8. It shows the coefficient normalized error in dB of the sorted Fourier and Wigner D function coefficients for case 3a as described in the file header and paper. The data is organized as described in the header (first row) of the csv file.",
"mediaType": "text\/csv",
"title": "supp_figure_8_sorted_coefficients_case_3a.csv"
"downloadURL": "https:\/\/data.nist.gov\/od\/ds\/mds2-2842\/supp_figure_9_sorted_coefficients_case_3b.csv",
"description": "This file contains the data for supplemental figure 9. It shows the relative magnitude in dB of the sorted Fourier and Wigner D function coefficients for case 3b as described in the file header and paper. The data is organized as described in the header (first row) of the csv file.",
"mediaType": "text\/csv",
"title": "supp_figure_9_sorted_coefficients_case_3b.csv"
"downloadURL": "https:\/\/data.nist.gov\/od\/ds\/mds2-2842\/supp_figure_10_sorted_coefficients_case_3b.csv",
"description": "This file contains the data for supplemental figure 10. It shows the coefficient normalized error in dB of the sorted Fourier and Wigner D function coefficients for case 3b as described in the file header and paper. The data is organized as described in the header (first row) of the csv file.",
"mediaType": "text\/csv",
"title": "supp_figure_10_sorted_coefficients_case_3b.csv"
"downloadURL": "https:\/\/data.nist.gov\/od\/ds\/mds2-2842\/Figure_4c_SW_coefs_DirPat_CUBE_1895Hz.csv",
"description": "This file contains the data for figure 4c. It shows the magnitude of the spherical wavefunction coefficients for the DirPat CUBE driver 1 loudspeaker at 1895Hz. The data is organized as described in the header (first row) of the csv file.",
"mediaType": "text\/csv",
"title": "Figure_4c_SW_coefs_DirPat_CUBE_1895Hz.csv"
"downloadURL": "https:\/\/data.nist.gov\/od\/ds\/mds2-2842\/Figure_3_data_s_D_verus_average_s_F_100_trials.csv",
"description": "This file contains the data for Figure 3, which shows the sorted concentrations of the Fourier basis sparsity as a function of Wigner D-function sparsity for different sparsity levels where the Wigner D-function coefficients are set to 1 at random positions. The data is organized as described in the header (first row) of the csv file.",
"mediaType": "text\/csv",
"title": "Figure_3_data_s_D_verus_average_s_F_100_trials.csv"
"downloadURL": "https:\/\/data.nist.gov\/od\/ds\/mds2-2842\/Figure_5a_sorted_coefficients_case_1a.csv",
"description": "This file contains the data for figure 5a. It shows the relative magnitude in dB of the sorted Fourier and Wigner D function coefficients for case 1a as described in the file header and paper. The data is organized as described in the header (first row) of the csv file.",
"mediaType": "text\/csv",
"title": "Figure_5a_sorted_coefficients_case_1a.csv"
"downloadURL": "https:\/\/data.nist.gov\/od\/ds\/mds2-2842\/Figure_5b_sorted_coefficients_case_1a.csv",
"description": "This file contains the data for figure 5b. It shows the coefficient normalized error in dB of the sorted Fourier and Wigner D function coefficients for case 1a as described in the file header and paper. The data is organized as described in the header (first row) of the csv file.",
"mediaType": "text\/csv",
"title": "Figure_5b_sorted_coefficients_case_1a.csv"
"downloadURL": "https:\/\/data.nist.gov\/od\/ds\/mds2-2842\/Figure_6a_sorted_coefficients_case_1b.csv",
"description": "This file contains the data for figure 6a. It shows the relative magnitude in dB of the sorted Fourier and Wigner D function coefficients for case 1b as described in the file header and paper. The data is organized as described in the header (first row) of the csv file.",
"mediaType": "text\/csv",
"title": "Figure_6a_sorted_coefficients_case_1b.csv"
"downloadURL": "https:\/\/data.nist.gov\/od\/ds\/mds2-2842\/Figure_6b_sorted_coefficients_case_1b.csv",
"description": "This file contains the data for figure 6b. It shows the coefficient normalized error in dB of the sorted Fourier and Wigner D function coefficients for case 1b as described in the file header and paper. The data is organized as described in the header (first row) of the csv file.",
"mediaType": "text\/csv",
"title": "Figure_6b_sorted_coefficients_case_1b.csv"
"downloadURL": "https:\/\/data.nist.gov\/od\/ds\/mds2-2842\/Figure_11_classical_Nyquist_RelativeError_vs_n_meas_with_noise.csv",
"description": "This file contains the data for figure 11. It shows the relative error (dB) for classical Fourier sampling as a function of sample grid density for cases 1a, 2a, and 3a from the paper. The data is organized as described in the header (first row) of the csv file.",
"mediaType": "text\/csv",
"title": "Figure_11_classical_Nyquist_RelativeError_vs_n_meas_with_noise.csv"
"downloadURL": "https:\/\/data.nist.gov\/od\/ds\/mds2-2842\/Figure_12a_RelativeError_vs_n_phys_meas_vs_grid_dens.csv",
"description": "This file contains the data for figure 12a. It shows the relative error (dB) for Fourier based compressive sampling as a function of sample grid density and number of measurements for case 1a from the paper. The data is organized as described in the header (first row) of the csv file.",
"mediaType": "text\/csv",
"title": "Figure_12a_RelativeError_vs_n_phys_meas_vs_grid_dens.csv"
"downloadURL": "https:\/\/data.nist.gov\/od\/ds\/mds2-2842\/Figure_12b_RelativeError_vs_n_phys_meas_vs_grid_dens.csv",
"description": "This file contains the data for figure 12b. It shows the relative error (dB) for Fourier based compressive sampling as a function of sample grid density and number of measurements for case 2a from the paper. The data is organized as described in the header (first row) of the csv file.",
"mediaType": "text\/csv",
"title": "Figure_12b_RelativeError_vs_n_phys_meas_vs_grid_dens.csv"
"downloadURL": "https:\/\/data.nist.gov\/od\/ds\/mds2-2842\/Figure_12c_RelativeError_vs_n_phys_meas_vs_grid_dens.csv",
"description": "This file contains the data for figure 12c. It shows the relative error (dB) for Fourier based compressive sampling as a function of sample grid density and number of measurements for cases 3a from the paper. The data is organized as described in the header (first row) of the csv file.",
"mediaType": "text\/csv",
"title": "Figure_12c_RelativeError_vs_n_phys_meas_vs_grid_dens.csv"
"downloadURL": "https:\/\/data.nist.gov\/od\/ds\/mds2-2842\/Figure_13_coherence_vs_sample_dense_vs_grid_dens.csv",
"description": "This file contains the data for figure 13. It shows the average coherence of the 2DDFT CS measurment matrix as a function of grid density and average sample number. The average is over 25 trials. The data is organized as described in the header (first row) of the csv file.",
"mediaType": "text\/csv",
"title": "Figure_13_coherence_vs_sample_dense_vs_grid_dens.csv"
"downloadURL": "https:\/\/data.nist.gov\/od\/ds\/mds2-2842\/supp_figure_11_sorted_coefficients_case_3c.csv",
"description": "This file contains the data for supplemental figure 11. It shows the relative magnitude in dB of the sorted Fourier and Wigner D function coefficients for case 3c as described in the file header and paper. The data is organized as described in the header (first row) of the csv file.",
"mediaType": "text\/csv",
"title": "supp_figure_11_sorted_coefficients_case_3c.csv"
"downloadURL": "https:\/\/data.nist.gov\/od\/ds\/mds2-2842\/supp_figure_12_sorted_coefficients_case_3c.csv",
"description": "This file contains the data for supplemental figure 12. It shows the coefficient normalized error in dB of the sorted Fourier and Wigner D function coefficients for case 3c as described in the file header and paper. The data is organized as described in the header (first row) of the csv file.",
"mediaType": "text\/csv",
"title": "supp_figure_12_sorted_coefficients_case_3c.csv"
"downloadURL": "https:\/\/data.nist.gov\/od\/ds\/mds2-2842\/README.txt",
"description": "This is a \"read me\" file that contains and overview of the dataset.",
"mediaType": "text\/plain",
"title": "README.txt"
"downloadURL": "https:\/\/data.nist.gov\/od\/ds\/mds2-2842\/Figure_7a_sorted_coefficients_case_1c.csv",
"description": "This file contains the data for figure 7a. It shows the relative magnitude in dB of the sorted Fourier and Wigner D function coefficients for case 1c as described in the file header and paper. The data is organized as described in the header (first row) of the csv file.",
"mediaType": "text\/csv",
"title": "Figure_7a_sorted_coefficients_case_1c.csv"
"downloadURL": "https:\/\/data.nist.gov\/od\/ds\/mds2-2842\/Figure_7b_sorted_coefficients_case_1c.csv",
"description": "This file contains the data for figure 7b. It shows the coefficient normalized error in dB of the sorted Fourier and Wigner D function coefficients for case 1c as described in the file header and paper. The data is organized as described in the header (first row) of the csv file.",
"mediaType": "text\/csv",
"title": "Figure_7b_sorted_coefficients_case_1c.csv"
"downloadURL": "https:\/\/data.nist.gov\/od\/ds\/mds2-2842\/Figure_8a_actual_vs_CS_field_C1a.csv",
"description": "This file contains the data for figure 8a. It shows the near-field reconstruction (relative magnitude in dB) using Fourier based compressive sensing for an acoustic field (case 1a from the paper). The data is organized as described in the header (first row) of the csv file.",
"mediaType": "text\/csv",
"title": "Figure_8a_actual_vs_CS_field_C1a.csv"
"downloadURL": "https:\/\/data.nist.gov\/od\/ds\/mds2-2842\/Figure_8b_CS_field_RelativeError_C1a.csv",
"description": "This file contains the data for figure 8b. It shows the relative error (dB) of the near-field reconstruction using Fourier based compressive sensing for an acoustic field (case 1a from the paper). The data is organized as described in the header (first row) of the csv file.",
"mediaType": "text\/csv",
"title": "Figure_8b_CS_field_RelativeError_C1a.csv"
"downloadURL": "https:\/\/data.nist.gov\/od\/ds\/mds2-2842\/Figure_9_rel_err_vs_n_phys_meas_Fourier_vs_WingerD.csv",
"description": "This file contains the data for figure 9. It shows the coefficient relative error as a function of measurement number for Fourier and Wigner D function based compressive sensing for acoustic fields (case 1a, 2a, 3a from the paper). The data is organized as described in the header (first row) of the csv file.",
"mediaType": "text\/csv",
"title": "Figure_9_rel_err_vs_n_phys_meas_Fourier_vs_WingerD.csv"
"downloadURL": "https:\/\/data.nist.gov\/od\/ds\/mds2-2842\/Figure_10_RelativeError_vs_n_phys_meas_c1a_Fourier_vs_WingerD_grid_dens_1_to_4.csv",
"description": "This file contains the data for figure 10. It shows the coefficient relative error as a function of measurement number for Fourier and on-grid Wigner D function based compressive sensing for the acoustic fields in case 1a as the sampling grid density is increased. The data is organized as described in the header (first row) of the csv file.",
"mediaType": "text\/csv",
"title": "Figure_10_RelativeError_vs_n_phys_meas_c1a_Fourier_vs_WingerD_grid_dens_1_to_4"
"downloadURL": "https:\/\/data.nist.gov\/od\/ds\/mds2-2842\/Figure_14a_RelativeError_vs_sample_dense_vs_grid_dens.csv",
"description": "This file contains the data for figure 14a. It shows the relative error (dB) for Fourier based compressive sampling as a function of sample grid density and sample density for cases 1a from the paper. The data is organized as described in the header (first row) of the csv file.",
"mediaType": "text\/csv",
"title": "Figure_14a_RelativeError_vs_sample_dense_vs_grid_dens.csv"
"downloadURL": "https:\/\/data.nist.gov\/od\/ds\/mds2-2842\/Figure_14b_RelativeError_vs_sample_dense_vs_grid_dens.csv",
"description": "This file contains the data for figure 14b. It shows the relative error (dB) for Fourier based compressive sampling as a function of sample grid density and sample density for cases 2a from the paper. The data is organized as described in the header (first row) of the csv file.",
"mediaType": "text\/csv",
"title": "Figure_14b_RelativeError_vs_sample_dense_vs_grid_dens.csv"
"downloadURL": "https:\/\/data.nist.gov\/od\/ds\/mds2-2842\/Figure_14c_RelativeError_vs_sample_dense_vs_grid_dens.csv",
"description": "This file contains the data for figure 14c. It shows the relative error (dB) for Fourier based compressive sampling as a function of sample grid density and sample density for cases 3a from the paper. The data is organized as described in the header (first row) of the csv file.",
"mediaType": "text\/csv",
"title": "Figure_14c_RelativeError_vs_sample_dense_vs_grid_dens.csv"
"bureauCode": [
"modified": "2022-11-09 00:00:00",
"publisher": {
"@type": "org:Organization",
"name": "National Institute of Standards and Technology"
"theme": [
"Advanced Communications:Wireless (RF)",
"Mathematics and Statistics:Image and signal processing"
"keyword": [
"compressive sensing",
"compressive sampling",
"sparse signal processing",
"far-field pattern",
"near-field pattern",
"antenna characterization",
"Wigner D-functions",
"spherical harmonics",
"acoustic fields;" | {"url":"https://data.commerce.gov/data-grid-compressive-sampling-spherical-field-measurements-in-acoustics-in-journal-acoustical","timestamp":"2024-11-02T04:34:35Z","content_type":"text/html","content_length":"58123","record_id":"<urn:uuid:ff83786f-ebb9-474f-91c0-f890839fa280>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00543.warc.gz"} |
Intelligence System for Diagnosis Level of Coronary Heart Disease with K-Star Algorithm
I. Introduction
Based on data from the World Health Organization, cardiovascular disease is a major health problems in both developed and developing countries. Diagnosis to determine the level/type of coronary heart
disease is very important, so that appropriate action can be taken. Intelligence systems can be used to provide diagnosis support. Research results have shown that the use of clinical decision
support systems can help physicians deliver improved clinical practice [
] and also can reduce the occurrence of faulty diagnoses [
]. Intelligence diagnosis system development is in need of existing clinical data as training data, in this case, data specifically related to coronary heart disease. Unfortunately, the availability
of data was minimal for a certain level/type compared the amount of data for the other level/type. The conditions have resulted in imbalanced data, and imbalanced data has become an interesting issue
in bioinformatics [
] and machine learning.
Intelligence systems for clinical decision support have been developed, especially for the prediction and diagnosis of coronary heart disease, using a variety of computational intelligence
algorithms. A coronary heart disease prediction system may be used to predict the incidence of coronary heart disease in the future. Predictions are made based on risk factors, such as age,
cholesterol, blood pressure, sex, smoking habits, and history of diabetes. Prediction methods may use algorithms originally created for data mining or computational intelligence [
], or they may employ the standard Framingham risk score, SCORE, and PROCAM [
]. Diagnosis requires some the observation of symptoms as well as tests, such as electrocardiography, scintigraphy, and fluoroscopy. Much research has been done to develop diagnostic systems, and
these studies have used several algorithms from computational intelligence, including the ensemble algorithm [
], neural networks (NN) [
], fuzzy systems [
], support vector machine (SVM) [
], genetic algorithms [
], particle swarm optimization (PSO) [
] and other classification algorithms [
]. Unfortunately, these studies have been limited to proposing model diagnosis systems with a system output of healthy or sick. Such systems use a binary classification approach and sue the dataset
of coronary heart disease from the Univeristy of California Irvine (UCI) repository [
]. The dataset can be grouped into two levels, namely, healthy and sick, based on the observation of blood vessel constriction. If narrowing is more than 50% then the patient is classified as sick,
and if it is less than 50% then the patient is classified as healthy. The amounts of data related to sick and healthy patients are relatively balanced, so the system intelligence preformance in these
studies has been relatively good.
Referring to the dataset of the UCI repository, the data can also be grouped into several sub-types or levels, that is healthy, sick-low, sick-medium, sick-high and sick-serious [
]. Further research was needed regarding the diagnosis of the level or type of coronary heart disease. Such research was conducted by Nahar et al. [
]. Their study on coronary heart disease diagnosis was based on computational intelligence, and the algorithms used included support vector machine optimization (SMO), AdaBoostM1, J48, naive
Bayesian, IBK, and PART. In their approach, multiclass classification problems are converted to binary classification problems. The approach refers to the ability of the SMO classification algorithm,
which is suitable for binary classification [
]. Implementation of the system is done by training and testing at every level of coronary heart disease. When the system is trained to recognize a healthy level, the healthy level is labeled
positive, while the other levels (low, medium, high, and serious) are labeled negative. Thus, there are only two training labels, positive and negative. This applies equally in testing. For example,
when testing the system at the medium level, the level is made positive, while the other is negative, regardless of whether the data is at the level of healthy, sick-low, sick-high or sick-serious.
Subsequent research was conducted by Akrami et al. [
], who used a concept similar to that used by Nahar et al. [
], which converted multiclass classification into binary classification. The differences between these studies is the method of 10-fold cross-validation. The research by Akrami et al. [
] used 10-fold cross-validation each run 100 times, while in the study by Nahar et al. [
] they were run just once. The next differences is the significant test methods used for classification using the t-test. The research by Akrami et al. [
] was able to improve the performance of the true positives rate (TPR) and F-measure parameters. The
-test results showed that the best performance classification method was SMO. The same approach was also adopted by Prabowo et al. [
]. The research also investigated the performance of the algorithm computational intelligence. This research differentiated between the research of Nahar et al. [
] and Akrami et al. [
]. The differences are the process of randomization before the 10-fold cross-validation. The process was repeated 10 times, so the end result was the average of the 10 times. The concept, which is
similar to that of Nahar et al. [
] was also adopted by Setiawan et al. [
]. This research compared the performance of feature selection methods in classifying 5 levels of coronary heart disease using the naive Bayesian classification method and J48 (C4.5).
The studies that use the approach of converting multiclass classification to binary classification for the diagnosis of coronary heart disease achieved relatively good accuracy. Unfortunately, the
parameters of good accuracy do not match with the parameters of TPR and F-measure. Both parameters were low for the diagnosis of sick-low levels, sick-medium, high-sick and sick-serious, indicating
that the system only has the ability to have a high true negatives rate (TNR). Research conducted by Choi [
] demonstrated that the success of a classification method is determined by the data used for training and testing. Data that is not balanced between classes will result in imbalanced data.
Imbalanced classification data can result in a good prediction accuracy of data class training with large numbers of members; however, the number of class members having little accuracy is not good [
Referring to previous research, this paper proposes an intelligence system to diagnosis coronary heart disease level by considering the imbalanced data and multiclass classification approach. To
solve the imbalanced data problem of the distribution of data for each level of CAD, preprocessing is carried out with resampling, non-stratified random sampling, the synthetic minority over-sampling
technique (SMOTE), deletion of data that does not match the range of attributes, and the elimination of duplicate data. The next step is dividing the CAD data for training and testing the concept of
k-fold cross-validation. Classification is done by using the K-star algorithm, while the performance parameters are measured in terms of sensitivity, specificity, positive prediction value (PPV),
negative prediction value (NPV), area under the curve (AUC), and F-measure.
II. Methods
1. Data and Data Processing
Research on intelligence systems for the diagnosis of coronary heart disease levels has used patient data from the UCI repository [
]. The coronary heart disease data was obtained from Roberto Detrano, MD, PhD, which is a collection of data from the VA Medical Center. Coronary heart disease is distributed into five levels or
types, namely, healthy, sick-low, sick-medium, sick-high, and sick-serious [
]. The data comprised 303 intances of data and 14 parameters, with one parameter as an indication of the level of heart disease on a scale of 0–4. The parameters used are shown in detail in
Table 1
2. Synthetic Minority Over-sampling Technique
As seen in
Table 1
, coronary heart disease data distribution among levels is imbalanced, so in this study data imbalanced was observed. This problem was solved by using the over-sampling approach. This sampling
technique is based on research by Chawl et al. [
] and is known as SMOTE. This technique can increase the amount of data in the minority in the original data, by creating mock data to expand the area of decision-making to the minority class.
There are several step in the process of solving the data imbalance problem. In the first step, the existing data is resampled to estimate the accuracy of the statistical sample or provide a picture
at random with replacement using data from a subset of the available data. The second step is done by SMOTE to balance the data of each class minority. The SMOTE parameter used in this study was k =
5 (nearest neighbors), and the value of the over-sampling rate was adjusted by the amount of data for each level, using the healthy level as a reference. This means that the healthy level established
using SMOTE.
Result resampling and the SMOTE process resulted in some things against coronary heart disease data. First the resampling process produces a better data distribution, but it results in the
duplication of coronary heart disease data. Second, the SMOTE process is used to obtain duplication of data, so the data becomes balanced. The SMOTE process also produces attributes whose values are
not within the range of the attribute values. To overcome this, after the SMOTE and resmpling steps, the duplicate data is removed along with the data attributes whose values exceed the limit. After
this process, the data is ready to classify with the K-star algorithm.
3. Model Intelligence System Based K-Star Classifier
The model level intelligence system for the diagnosis of coronary heart disease is shown in
Figure 1
. Intelligence systems use classification algorithms, such as the multilclass K-star (K
). The K
algorithm can be defined as a clustering method that divides
data into
clusters, where each data entry in a particular cluster with an average viewing distance nearby. The K
algorithm is an instance-based learner algorithm that uses entropy to measure the distance [
]. The advantages of using entropy are that it provides a consistent approach to dealing with real-valued attributes, symbolic and missing values. The K
algorithm is similar to the k-NN algorithm, in that it uses entropy to measure the closeness of data.
4. Performance Evaluation of the K-Star Intelligence System Based on Classification
Performance evaluation of the system intelligence diagnosis of coronary heart disease was done by dividing the data into k subsets. Each subset contained the data of each class. Then, from the
k-subsets, one subset was taken for testing, and k–1 subset was taken for training. This was done alternately so that each subset was used for testing. The k value used in this study for testing was
k = 10, so the performance was the average result from 10 times training and testing. Performance was measured in terms of sensitivity, specificity, PPV, NPV, AUC, and F-measure. An explanation of
each performance parameter is given as follows:
1) Positive prediction value (PPV) is the number of positive samples correctly categorized as positive divided by the total testing data sample classified as positive.
2) Negative Prediction Value (NPV) is the number of negative samples correctly categorized as negative divided by the total testing data sample classified as negative.
3) Sensitivity is the number of positive samples correctly categorized as positive divided by the total testing sample data testing positive
4) F-measure (F1) is the harmonic mean of sensitivity and PPV
III. Results
The diagnosis of coronary heart disease levels is a problem that can be solved through the concept of multiclass classification. This approach is appropriate because there are more than two levels of
coronary heart disease. The proposed system uses a 5-level classification of coronary heart disease. Testing of the system was done in two stages before classification using the K
algorithm. In the first stage the R-SCOR-RD was not used, while in the second, the R-SCOR-RD was used. The test results produced without using the R-SCOR-RD generated a confused metrics table shown
Table 2
Table 2
was used to determine the values for sensitivity, specificity, PPV, NPV, AUC and F-measure. The calculation results are presented in
Figure 2
. The performance of the system without the R-SCOR-RD showed that data imbalance occurred. This is demonstrated by the significant performance difference between healthy level with sick-low levels,
sick-medium, sick-high, and sick-serious. The differences between amounts of data for each level are relatively high, with ratios between the healthy level and other levels of 1:3, 1:3, 1:5, 1:5 and
1:13. Comparison of the results shows that the best accuracy is achieved with vast amounts of data compared with a small amount of data.
Furthermore, the performance of the proposed system using the R-SCOR-RD before the data were classified was also evaluated. Th results obtained by confusion matrix are shown in
Table 3
. From the results shown in
Table 3
, the parameters of performance, sensitivity, specificity, PPV, NPV, AUC, and F-measure were calculated. The calculation results shown in
Figure 3
. The performance for each level, for the parameters of precision, recall, F-measure and MCC had the same relative value for each level. The R-SCOR-RD process resulted in data ratios of 1:1.4, 1:2.1,
1:1.5, and 1:2.2. Thus, it can be seen that ratio of the amounts data for the healthy level to each other level decreases, while the number of instances of data increased from 303 to 361.
The test results obtained with R-SCOR-RD and without R-SCOR-RD are shown in
Figure 4
. There were performance differences for each level; only the healthy level showed the same relative performance.
IV. Discussion
In this section we will first discuss the comparison of the diagnosis system using the K
algorithm with R-SCOR-RD (proposed system) treatment and without R-SCOR-RD treatment. Second, we will discuss the comparison of the proposed system with those of previous studies which adopted binary
and multiclass classification approaches. The first comparison is the system of diagnosis without conducting R-SCOR-RD before classification.
Figure 4
shows that the differences in the parameters of sensitivity, specificity, PPV, NPV, AUC, and F-measure were significant. Based on the test results of statistical significance of the difference by
using t-test produces
= 0.00757 (
< 0.05), meaning that there were significant differences before and after using the R-SCOR-RD. The significant difference is explained by the fact that despite the data imbalance problem, machine
learning will yield good prediction accuracy classification of the training data classes with large numbers of members, while the number of class members has poor accuracy [
The next point of discussion is comparison with previous studies, which used binary classification approach. First, let us consider the research done by Nahar et al. [
] to compare multiclass classification algorithms with the binary classification approach. The parameters used for comparison are sensitivity (TPR), and F-measure. These two performance parameters
were used in both studies and were adjusted to the variables used in both these studies. The comparison results are shown in
Tables 4
Based on
Table 4
, the TPR performance in the research by Nahar et al. [
] showed higher sensitivity values for the healthy level with all classification algorithms. That is, the amount of data for the healthy level was more than 3 times higher than the amounts of data
for all other levels. As for the other levels, all showed relatively low values for all types of classification algorithms. The proposed system provides relatively similar sensitivity values for all
levels, and better results were obtained than all the classification algorithms showed in the research by Nahar et al. [
]. This is shown in
Table 6
, which demonstrates the significance of the obtained test results. All algorithms had a
< 0.05.
Next is the F-measure parameter. If it refers to Equation (4), then the F-measure trend is not much different from the sensitivity because the F-measure influenced sensitivity (TPR) and PPV. The
F-measure generate differed significantly from that observed by Nahar et al. [
], as shown in
Table 6
, where the
-value was less than 0.05.
The accuracy results of Nahar et al. [
] for all levels were very high for SMO algorithms in contrast to the low sensitivity values. High accuracy with low sensitivity indicates that the accuracy is high due to the high value components
specificity (TNR). It is proportional to the difference between the amount of data that is labeled negative and the amount of data that is labeled positive. The condition can be said to indicate data
In a future study, we will compare our work with a study done by Akrami et al. [
]. The study used the same concept as Nahar et al. [
], the binary classification approach. The study used 10-fold cross-validation, run as many as 100 times. The addition of this amount can improve system performance in terms of sensitivity and
F-measure. When compared with the proposed system, Akrami et al. [
] achieved better results, but the resulting performance was still as good as that of category classification. Similar to the work by Prabowo et al. [
], improved results were obtained for sensitivity and F-measure when the process is done with randomize variable selection for every 10-fold and it is performed 10 times. When compared with the case
when no variable selection is carried out, the performance of the propose system is still better in terms of sensitivity and F-measure.
Subsequent research was conducted by Setiawan et al. [
]. Their paper discussed benchmarking several feature selection methods for the diagnosis of coronary heart disease using the naive Bayesian method and J48 (C4.5). The approach also uses binary
classification. Performance was measured in their study in terms of accuracy, so it cannot be seen whether specificity or sensitivity is stronger. The average accuracy, however, is similar to that of
the proposed system.
In the studies conducted by Nahar et al. [
], Akrami et al. [
], Prabowo et al. [
] and Setiawan et al. [
], multiclass classification was converted to binary classification. Such conversion makes the those systems ineffective. As an illustration, if the system performs a diagnosis with new data, where
the level of the data is not yet known, it will be difficult for the system to detect it. If the diagnosis is performed for each system level, it will be possible to have two or more positive or
negative outputs, so no conclusion can be made.
The next comparison is with previous research using the multiclass classification approach. Salari et al. [
] proposed a hybrid system using the genetic algorithm, k-nearest neighbor, and backpropagation. That model performs relatively poorly (accuracy is still below 65%). The performance achieved by
Salari et al. [
] was not very different from that achieved by Dominic et al. [
]. The study tested several methods of classification, namely naive Bayesian, decision tree, support vector machine, logistic regression, multilayer perceptron, and AdaBoostM1. Both of these studies
achieved performance that is relatively low compared to that of the proposed system.
The use of sample preprocessing, SMOTE, and the deletion of data for attribute values exceeding the limit, as well as the elimination of duplicate data (R-SCOR-RD) is able to provide improved
performance with the K^* algorithm. The resulting performance showed an average sensitivity of 80.1%, specificity of 95%, PPV of 80.1%, NPV of 95%, AUC of 87.5%, and F-measure of 80.1%. The
performance is better that that of other systems proposed in previous studies. Many previous studies have used the binary classification approach without consideration the data imbalance problem. | {"url":"https://e-hir.org/journal/view.php?number=859","timestamp":"2024-11-05T02:55:16Z","content_type":"application/xhtml+xml","content_length":"118353","record_id":"<urn:uuid:c82a25a1-659e-427b-b3cf-12cdcf1442d8>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00412.warc.gz"} |
Class RoundRectangle2D.Double
All Implemented Interfaces:
Shape, Serializable, Cloneable
Enclosing class:
The Double class defines a rectangle with rounded corners all specified in double coordinates.
See Also:
• Field Summary
Modifier and Type
The height of the arc that rounds off the corners.
The width of the arc that rounds off the corners.
The height of this RoundRectangle2D.
The width of this RoundRectangle2D.
The X coordinate of this RoundRectangle2D.
The Y coordinate of this RoundRectangle2D.
• Constructor Summary
Constructs a new RoundRectangle2D, initialized to location (0.0, 0.0), size (0.0, 0.0), and corner arcs of radius 0.0.
Double(double x, double y, double w, double h, double arcw, double arch)
Constructs and initializes a RoundRectangle2D from the specified double coordinates.
• Method Summary
Modifier and Type
Gets the height of the arc that rounds off the corners.
Gets the width of the arc that rounds off the corners.
Returns a high precision and more accurate bounding box of the Shape than the getBounds method.
Returns the height of the framing rectangle in double precision.
Returns the width of the framing rectangle in double precision.
Returns the X coordinate of the upper-left corner of the framing rectangle in double precision.
Returns the Y coordinate of the upper-left corner of the framing rectangle in double precision.
Determines whether the RectangularShape is empty.
setRoundRect(double x, double y, double w, double h, double arcw, double arch)
Sets the location, size, and corner radii of this RoundRectangle2D to the specified double values.
Sets this RoundRectangle2D to be the same as the specified RoundRectangle2D.
Methods declared in class java.awt.geom.RectangularShape
clone, contains, contains, getBounds, getCenterX, getCenterY, getFrame, getMaxX, getMaxY, getMinX, getMinY, getPathIterator, intersects, setFrame, setFrame, setFrameFromCenter, setFrameFromCenter
, setFrameFromDiagonal, setFrameFromDiagonal
• Field Details
□ x
public double x
The X coordinate of this RoundRectangle2D.
□ y
public double y
The Y coordinate of this RoundRectangle2D.
□ width
public double width
The width of this RoundRectangle2D.
□ height
public double height
The height of this RoundRectangle2D.
□ arcwidth
public double arcwidth
The width of the arc that rounds off the corners.
□ archeight
public double archeight
The height of the arc that rounds off the corners.
• Constructor Details
□ Double
public Double()
Constructs a new RoundRectangle2D, initialized to location (0.0, 0.0), size (0.0, 0.0), and corner arcs of radius 0.0.
□ Double
public Double(double x, double y, double w, double h, double arcw, double arch)
Constructs and initializes a RoundRectangle2D from the specified double coordinates.
x - the X coordinate of the newly constructed RoundRectangle2D
y - the Y coordinate of the newly constructed RoundRectangle2D
w - the width to which to set the newly constructed RoundRectangle2D
h - the height to which to set the newly constructed RoundRectangle2D
arcw - the width of the arc to use to round off the corners of the newly constructed RoundRectangle2D
arch - the height of the arc to use to round off the corners of the newly constructed RoundRectangle2D
• Method Details
□ getX
public double getX()
Returns the X coordinate of the upper-left corner of the framing rectangle in double precision.
Specified by:
getX in class RectangularShape
the X coordinate of the upper-left corner of the framing rectangle.
□ getY
public double getY()
Returns the Y coordinate of the upper-left corner of the framing rectangle in double precision.
Specified by:
getY in class RectangularShape
the Y coordinate of the upper-left corner of the framing rectangle.
□ getWidth
public double getWidth()
Returns the width of the framing rectangle in double precision.
Specified by:
getWidth in class RectangularShape
the width of the framing rectangle.
□ getHeight
public double getHeight()
Returns the height of the framing rectangle in double precision.
Specified by:
getHeight in class RectangularShape
the height of the framing rectangle.
□ getArcWidth
public double getArcWidth()
Gets the width of the arc that rounds off the corners.
Specified by:
getArcWidth in class RoundRectangle2D
the width of the arc that rounds off the corners of this RoundRectangle2D.
□ getArcHeight
public double getArcHeight()
Gets the height of the arc that rounds off the corners.
Specified by:
getArcHeight in class RoundRectangle2D
the height of the arc that rounds off the corners of this RoundRectangle2D.
□ isEmpty
public boolean isEmpty()
Determines whether the RectangularShape is empty. When the RectangularShape is empty, it encloses no area.
Specified by:
isEmpty in class RectangularShape
true if the RectangularShape is empty; false otherwise.
□ setRoundRect
public void setRoundRect(double x, double y, double w, double h, double arcw, double arch)
Sets the location, size, and corner radii of this RoundRectangle2D to the specified double values.
Specified by:
setRoundRect in class RoundRectangle2D
x - the X coordinate to which to set the location of this RoundRectangle2D
y - the Y coordinate to which to set the location of this RoundRectangle2D
w - the width to which to set this RoundRectangle2D
h - the height to which to set this RoundRectangle2D
arcw - the width to which to set the arc of this RoundRectangle2D
arch - the height to which to set the arc of this RoundRectangle2D
□ setRoundRect
Sets this RoundRectangle2D to be the same as the specified RoundRectangle2D.
setRoundRect in class RoundRectangle2D
rr - the specified RoundRectangle2D
□ getBounds2D
Returns a high precision and more accurate bounding box of the
than the
method. Note that there is no guarantee that the returned
is the smallest bounding box that encloses the
, only that the
lies entirely within the indicated
. The bounding box returned by this method is usually tighter than that returned by the
method and never fails due to overflow problems since the return value can be an instance of the
that uses double precision values to store the dimensions.
Note that the definition of insideness can lead to situations where points on the defining outline of the shape may not be considered contained in the returned bounds object, but only in
cases where those points are also not considered contained in the original shape.
If a point is inside the shape according to the contains(point) method, then it must be inside the returned Rectangle2D bounds object according to the contains(point) method of the bounds.
shape.contains(p) requires bounds.contains(p)
If a point is not inside the shape, then it might still be contained in the bounds object:
bounds.contains(p) does not imply shape.contains(p)
Specified by:
getBounds2D in interface Shape
an instance of Rectangle2D that is a high-precision bounding box of the Shape.
See Also: | {"url":"https://download.java.net/java/early_access/jdk24/docs/api/java.desktop/java/awt/geom/RoundRectangle2D.Double.html","timestamp":"2024-11-06T01:02:06Z","content_type":"text/html","content_length":"41177","record_id":"<urn:uuid:f6382544-6b40-41b4-880d-464672d3896d>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00682.warc.gz"} |
Talk:Cardinal number
From Encyclopedia of Mathematics
Comparability of cardinals
The article deduces from the Schroder–Berstein theorem ($\mathfrak{a} \le \mathfrak{b}$ and $\mathfrak{b} \le \mathfrak{a}$ implies $\mathfrak{a} = \mathfrak{b}$) that cardinals are totally ordered.
This seems wrong: all it proves is that $\le$ is indeed a partial order on cardinals. That any two cardinals are comparable is, I believe, a form of the axiom of choice. A similar assumption is made
a little later when it is asserted that "Any cardinal number $\mathfrak{a}$ can be identified with the smallest ordinal number of cardinality $\mathfrak{a}$". Again this requires that any set can be
well-ordered. Richard Pinch (talk) 18:50, 10 January 2015 (CET)
Maybe. But maybe all this article assumes all axioms of ZFC; this is the default, isn't it? I did not find in this article any discussion of what may happen without the choice axiom . Boris
Tsirelson (talk) 20:24, 10 January 2015 (CET)
I think that at the very least an article in the area of set theory should say whether or not AC is being assumed, especially if assertions depend on it. But for this specific topic the
article really needs to separate out what does and does not depend on AC. Richard Pinch (talk) 23:37, 10 January 2015 (CET)
No one objects if you want to separate it out. But probably it is quite a piece of work. The article is not short... Also, many statements are provable from weaker forms of AC (such as
Dependent Choice), but not provable with no AC at all. An example: the countable union of countable sets is countable. I guess, only experts in AC know exactly, how strange things look in
the absence of any form of AC. Are you ready to this burden? Boris Tsirelson (talk) 07:55, 11 January 2015 (CET)
How to Cite This Entry:
Cardinal number. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Cardinal_number&oldid=36236 | {"url":"https://encyclopediaofmath.org/wiki/Talk:Cardinal_number","timestamp":"2024-11-06T18:39:52Z","content_type":"text/html","content_length":"15099","record_id":"<urn:uuid:c3714ce1-b8e0-4627-aee8-a1b56f2aada9>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00223.warc.gz"} |
Using PyPARRM to filter out stimulation artefacts from data
Using PyPARRM to filter out stimulation artefacts from data#
This example demonstrates how the PARRM algorithm [1] can be used to identify and remove stimulation artefacts from electrophysiological data in the PyPARRM package.
# Author(s):
# Thomas S. Binns | github.com/tsbinns
import numpy as np
from matplotlib import pyplot as plt
from pyparrm import get_example_data_paths, PARRM
from pyparrm._utils._power import compute_psd
When delivering electrical stimulation to bioligical tissues for research or clinical purposes, it is often the case that electrophysiological recordings collected during this time are contaminated
by stimulation artefacts. This contamination makes it difficult to analyse the underlying physiological or pathophysiological electrical activity. To this end, the Period-based Artefact
Reconstruction and Removal Method (PARRM) was developed, enabling the removal of stimulation arefacts from electrophysiological recordings in a robust and computationally cheap manner [1]. N.B. PARRM
assumes that the artefacts are semi-regular, periodic, and linearly combined with the signal of interest.
To demonstrate how PARRM can be used to remove stimulation arfectacts from data, we will start by loading some example data. This is the same example data used in the MATLAB implementation of the
method (neuromotion/PARRM), consisting of a single channel with ~100 seconds of data at a sampling frequency of 200 Hz, and containing stimulation artefacts with a frequency of 150 Hz.
`data` has shape: (1 channel, 19130 timepoints)
`data` duration: 95.65 seconds
Finding the period of the stimulation artefacts#
Having loaded the example data, we can now find the period of the stimulation artefacts, which we require to remove them from the data. Having imported the PARRM object, we initialise it, providing
the data, its sampling frequency, and the stimulation frequency.
After this, we find the period of the artefact using the find_period() method. By default, all timepoints of the data will be used for this, and the initial guess of the period will be taken as the
sampling frequency divided by the artefact frequency. The settings for finding the period can be specified in the method call, however the default settings should suffice as a starting point for
period estimation.
The period is found using a grid search, with the goal of minimising the mean squared error between the data and the best fitting sinusoidal harmonics of the period found with linear regression. The
process is described in detail in Dastin et al. (2021) [1], and is also demonstrated in this video from the paper’s authors.
Estimated artefact period: 1.3311
Creating the filter and removing the artefacts#
Now that we have an estimate of the artefact period, we can design a filter to remove it from the data using the create_filter() method. When creating the filter, there are four key parameters:
1. The size of the filter window, specified with the filter_half_width parameter. This should be chosen based on the timescale on which the artefact shape varies. If no such timescale is known, the
power spectra of the filtered data can be inspected and the size of the filter window tuned until artefact-related peaks are sufficiently attenuated.
2. The number of samples to omit from the centre of the filter window, specified with the omit_n_samples parameter. This parameter serves to control for overfitting to features of interest in the
underlying data. For instance, if there is a physiological signal of interest known to occur on a particular timescale, an appropriate number of samples should be omitted according to this range
of time.
3. The direction considered when building the filter, specified with the filter_direction parameter. This can be used to control whether the filter window takes only previous samples, future
samples, or both previous and future samples into account, based on their position relative to the centre of the filter window.
4. The period window size, specified with the period_half_width parameter. The size of this window should be based on changes in the waveform of the artefact, which can be estimated by plotting the
data on the timescale of the period and identifying the timescale over which features remain fairly constant. This parameter controls which samples are combined on the timescale of the period.
If you are unsure of what parameters are best for your data, you can use the interactive parameter exploration tool offered in PyPARRM (see the explore_filter_params() method and the associated
example: Exploring the best filter parameters for your data).
Here, we specify that the filter should have a half-width of 2,000 samples, ignoring those 20 samples adjacent to the centre of the filter window, considering samples both before and after the centre
of the filter window, and finally using a half-width of 0.01 samples in the period space.
Once the filter has been created, it can be applied using the filter_data() method, which returns the artefact-free data. By default, filter_data() will filter the data stored in the PARRM object,
however other data can be provided should you wish to filter this instead. The filter itself, as well as a copy of the filtered data, can be accessed from the filter and filtered_data attributes,
filtered_data = parrm.filter_data() # other data to filter can be given here
Inspecting the results#
Having filtered the data, we can now compare the results to the original data, as well as the artefact-free form of this simulated data to see how well the method is able to remove the underlying
As you can see, the filtered timeseries data closely resembles the true artefact-free data. Furthermore, inspecting the power spectra shows just how well PARRM is able to attentuate the overwhelming
power at the 50 Hz subharmonic of the stimulation artefacts, again bringing the results closely in line with those of the true artefact-free data.
# comparison to true artefact-free data
artefact_free = np.load(get_example_data_paths("example_data_artefact_free"))
start = 598 # same start time as MATLAB example
end = 1011 # same end time as MATLAB example
times = np.arange(end - start) / sampling_freq
fig, axes = plt.subplots(1, 2, figsize=(12, 5))
inset_axis = axes[0].inset_axes((0.12, 0.6, 0.5, 0.35))
# main timeseries plot
axes[0].plot(times, data[0, start:end], color="black", alpha=0.3, label="Unfiltered")
axes[0].plot(times, artefact_free[0, start:end], linewidth=3, label="Artefact-free")
axes[0].plot(times, filtered_data[0, start:end], label="Filtered (PyPARRM)")
axes[0].set_xlabel("Time (s)")
axes[0].set_ylabel("Amplitude (mV)")
# timeseries inset plot
inset_axis.plot(times[:50], artefact_free[0, start : start + 50], linewidth=3)
inset_axis.plot(times[:50], filtered_data[0, start : start + 50])
axes[0].indicate_inset_zoom(inset_axis, edgecolor="black", alpha=0.4)
# power spectral density plot
n_points = int((sampling_freq / 2) * 2)
psd_freqs, psd_raw = compute_psd(data[0, start:end], sampling_freq, n_points)
_, psd_filtered = compute_psd(filtered_data[0, start:end], sampling_freq, n_points)
_, psd_artefact_free = compute_psd(artefact_free[0, start:end], sampling_freq, n_points)
axes[1].loglog(psd_freqs, psd_raw, color="black", alpha=0.3, label="Unfiltered")
axes[1].loglog(psd_freqs, psd_artefact_free, linewidth=3, label="Artefact-free")
axes[1].loglog(psd_freqs, psd_filtered, label="Filtered (PyPARRM)")
axes[1].set_xlabel("Log frequency (Hz)")
axes[1].set_ylabel("Log power (dB/Hz)")
Python vs. MATLAB implementation comparison#
Due to rounding errors, the final filtered data timeseries of PyPARRM and the MATLAB implementation are not perfectly identical (as shown by the failure of numpy.all()), however they are extremely
close (as shown by the success of numpy.allclose()). Visual inspection of the plots further demonstrates this similarity. Accordingly, both implementations are suitable for identifying and removing
stimulation artefacts from electrophysiological recordings.
# filtered data computed in MATLAB
matlab_filtered = np.load(get_example_data_paths("matlab_filtered"))
fig, axis = plt.subplots(1, 1)
inset_axis = axis.inset_axes((0.12, 0.6, 0.5, 0.35))
# main plot
times, matlab_filtered[0, start:end], linewidth=3, label="Filtered (MATLAB PARRM)"
axis.plot(times, filtered_data[0, start:end], label="Filtered (PyPARRM)")
axis.set_xlabel("Time (s)")
axis.set_ylabel("Amplitude (mV)")
axis.set_title("PyPARRM vs. MATLAB PARRM")
ylim = axis.get_ylim()
axis.set_ylim(ylim[0], ylim[1] * 3)
# inset plot
inset_axis.plot(times[:50], matlab_filtered[0, start : start + 50], linewidth=3)
inset_axis.plot(times[:50], filtered_data[0, start : start + 50])
axis.indicate_inset_zoom(inset_axis, edgecolor="black", alpha=0.4)
"Are the results of the implementations identical? "
f"{np.all(filtered_data == matlab_filtered)}\n\n"
"Are the results of the implementations extremely close? "
f"{np.allclose(filtered_data, matlab_filtered)}"
Are the results of the implementations identical? False
Are the results of the implementations extremely close? True
Total running time of the script: (0 minutes 22.259 seconds) | {"url":"https://pyparrm.readthedocs.io/en/main/auto_examples/plot_use_parrm.html","timestamp":"2024-11-04T08:37:44Z","content_type":"text/html","content_length":"79943","record_id":"<urn:uuid:4868900a-342f-4508-adb3-eba98e0483d6>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00223.warc.gz"} |
vyborny family, prof. Rudolf Vyborny
Family member Rudolf Vyborny is an influential mathematician in the field of analysis. Below is the preface of a book that he has recently authored with Lee Peng Yee. The work is entitled:
Integral: an easy approach after Kurzweil and Henstock.
Presenting the theory of the integral to non-specialists is an old and everlasting problem. At most universities the Riemann integral is taught in introductory courses, even to future
mathematicians. The reason for this is that the Riemann integral has an intuitive appeal and basic theorems are easy to prove. This, however, is all that can be said in its favour. This theory is
not powerful enough for applications and when it comes to deeper results they are not any easier to prove than the corresponding results in more modern theories. It is true that Riemann with his
approach to integration advanced mathematics significantly but that was almost a century and a half ago. We feel the time is now ripe to start teaching more comprehensive theories of integration
at all levels.
The theory of integration employed by professional mathematicians was created by Henri Lebesgue at the beginning of the twentieth century. It could hardly be criticized and the mathematical
community is happy with it. Unfortunately experience shows that, perhaps because of its abstract character, it is deemed to be difficult by beginners and non-mathematicians. It is not popular
with physicists and engineers. The Lebesgue theory does not cover non-absolutely convergent integrals and there is a need then to consider improper integrals. It is an additional and important
advantage of the theory expounded in this book that it includes all improper integrals. In 1957 Jaroslav Kurzweil gave a new definition of the integral, which in some respects is more general
than Lebesgue's. Ralph Henstock developped the theory further and started to advocate its use at the elementary level. The Kurzweil-Henstock theory preserves the intuitive appeal of the Riemann
definition but has the power of the Lebesgue theory. The aim of this book is to present the Kurzweil--Henstock theory.
We wish to give this powerful tool to non-mathematicians and undergraduates and we advocate the widest possible use of one integral at all levels. We believe that the desirability of teaching one
integral at all levels was also part of the motivation for R. Henstock to develop the theory. Both authors have taught the Kurzweil--Henstock integral at various levels and various universities,
first of all at our home institutions, the National Institute of Education in Singapore and University of Queensland and also at Universität Erlangen-Nürnberg, the University of Canterbury,
Northwest Normal University in Lanzhou and the University of the Philippines. We express our gratitude to the Mathematics Departments of these institutions for their understanding of our desire
to teach a `new' integral and support of our research. Our experience is positive at all levels and in the introductory courses, once the students grasped the concept of delta-fine partitions,
they found the theory as easy as, or perhaps one should say no more difficult than, the Riemann theory.
Several books have appeared since the inception of the Kurzweil-Henstock theory. Most of these aim at the advanced or graduate level. This is so with the books which the inventors themselves
wrote, three books by Henstock and one by Kurzweil (see Bibliography at the end of this book). Other books at the same level are Gordon's, Pfeffer's and Lee's. The book by DePree and Swartz does
contain an introduction to Kurzweil-Henstock theory, but we in contrast cover more material and concentrate solely on integration. J. Mawhin's Introduction à l'Analyse contains the
Kurzweil-Henstock integral; obviously it is in French. The book by McLeod (MAA Carus monograph) is closest to us in its spirit but we use very different and more systematic notation, which we
feel is important at the elementary level. We also consider some topics in greater detail, relate the KH-integral to other integrals and give a range of applications including Fourier series. We
hope that our book will be useful at various levels.
The first section of Chapter 1 and Chapter 2, with perhaps some omissions, can serve as a first (serious) course on integration. Later sections of Chapter 1 contain a fairly complete account of
the Riemann integral but require more mathematical maturity and are not intended for a beginner or a non-mathematician. To indicate that these sections are not meant for the first reading they
are typeset in a smaller font. We have expounded the Riemann theory to provide easily available comparison for someone who desires it. For instance, the example of a non-integrable derivative
gives an opportunity to appreciate the Fundamental Theorem but it is far more difficult than the proof of the Fundamental Theorem itself. Chapters 3 and 6 together with some topics from Chapter 7
can form the basis of a course which could be given instead of a first course in Lebesgue theory. Chapters 4 and 5 are not elementary; they give the most general convergence theorems for the
Kurzweil-Henstock integral. Exercises are provided at the end of each Chapter. Exercises containing additional information which is worth reading even if one does not intend to work them out in
detail are marked by encircled letter i; exercises which are not easy are marked by an encircled exlamation mark.
Finally we wish to acknowledge help when writing this book. We thank the editor of this series, John Loxton, for his friendly attitude and invaluable advice. We are grateful to David Tranah and
particularly to Roger Astley from CUP for the care and expertise with which they have published our work. In writing we had advice on computer typesetting and presentation from our friends and
colleagues. We specifically mention Anthony Miller from CSIRO in Adelaide, Ding Chuan Song from the Northwest Normal University in China, Chew Tuan Seng from the National University of Singapore,
and Peter Adams, Keith Matthews and Ken Smith from the University of Queensland. Peter Adams also produced all figures in this book.
Rudolf Vyborny and Lee Peng Yee
January 2000
Contact: vyborny.c@vyborny.com | {"url":"http://vyborny.com/transfer/rudvyb/rudvyb.htm","timestamp":"2024-11-11T15:14:00Z","content_type":"text/html","content_length":"9633","record_id":"<urn:uuid:6f777794-3546-4b41-a1e2-9c97b7d94906>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00634.warc.gz"} |
Different result with Excel formula – Q&A Hub – 365 Data Science
Resolved: Different result with Excel formula
I did calculation in Excel with T.INV.2T, function T score number is the same (2.31) but the final result is slightly different (81824-103242 instead of 81806-103261). I notice it might be because
the formula actually returned 2.306004135204 but the table is rounded to 2 digits 2.31.
My question is does it matter? But math supposed to be precise isn't it? Or I might use the wrong formula, what is the better way to do it because I don't really like looking-at-the-table-manually
method since it is hard to automate.
3 answers ( 1 marked as helpful)
Hi Nunno,
This is obviously a rounding error. It is up to you to decide how to round figures in practice. Usually rounding up to 4 decimal digits is considered a solid practice. Please don't worry too much
about this issue as the focus of the lessons is to learn the principles and logic of the tests you are performing.
Thank you for sharing the excel formula. This is very helpful, as excel is doing the hard work, and I will be foucusing on the interpertation of the results.
By the way you have an error in cell# F79. i.e, you used alpha of 0.05 which is incorrect
you should use alpha = 0.995 since the confidence interval is 99% (0.99 +-0.005 from each side)
so, the correct excel formula to use is:
=T.INV(0.995,8) = 3.35538733
Hope you find this helpful | {"url":"https://365datascience.com/question/different-result-with-excel-formula/","timestamp":"2024-11-03T16:04:13Z","content_type":"text/html","content_length":"116801","record_id":"<urn:uuid:990a1794-f757-4117-afad-098e75346b40>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00287.warc.gz"} |
Complex sine
< 1 2 3 4 5 >
The complex sine can be expressed with exponential functions as
We start with the power series for the complex sine
and multiply all terms with 2i and get
By conversion with the imaginary unit all terms obtain a plus sign
You can also write it as
We add even exponents and subtract these immediately
Rearranging gives
In brackets are two power series for exponential functions, and thus
so that
Example 1
You can see that sin (0) = 0, as
This formula for the sine was described by the Swiss mathematician Leonhard Euler (1707 - 1783).
Deutsch Español Français Nederlands Русский | {"url":"https://maeckes.nl/Sinus%20(z)%20GB.html","timestamp":"2024-11-01T22:57:49Z","content_type":"application/xhtml+xml","content_length":"6965","record_id":"<urn:uuid:333fe05d-e465-4591-97f2-cf2ca4fc7ab5>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00511.warc.gz"} |
Divergence of numerical solution of discontinuous nonllinear ordinary ODE
3964 Views
1 Reply
0 Total Likes
Divergence of numerical solution of discontinuous nonllinear ordinary ODE
Hi, I am facing troubles to solve the following 2nd order ordinary differential equation
{x'[t] == v[t], v'[t] == 0.4 Sign[v[t]] UnitBox[v[t], x[t]] - 0.1 v[t] - x[t]}
Apparently, this is a nonlinear ODE, moreover, it contains a discontinuity (step) in both variables. I solve it numerically using
sol = NDSolve[{x'[t] == v[t], v'[t] == 0.4 Sign[v[t]] UnitBox[v[t], x[t]] - 0.1 v[t] - x[t], x[0] == 1., v[0] == 1.},
{v[t], x[t]}, {t, 0, 30}]
Plotting the solution using
ParametricPlot[Evaluate[{x[t], v[t]} /. sol], {t, 0, 30}]
I get
This suggests that there is a limit cycle. Indeed, this is confirmed by plotting the phase portrait by
f2 = StreamPlot[{v, 0.4` Sign[v] UnitBox[v, x] - 0.1` v - x}, {x, -2, 2}, {v, -2, 2}, StreamStyle -> LightGray];
and plotting it together with the previous solution
Show[f1, f2]
But now, if I set the initial conditions to inside the orbit, the solution fails
sol = NDSolve[{x'[t] == v[t], v'[t] == 0.4 Sign[v[t]] UnitBox[v[t], x[t]] - 0.1 v[t] - x[t], x[0] == 0.1., v[0] == 0.1.},
{v[t], x[t]}, {t, 0, 30}]
f3 = ParametricPlot[Evaluate[{x[t], v[t]} /. sol], {t, 0, 4.1}, PlotStyle -> Red]
Show[f1, f2, f3]`
What can be done about it? I tried changing the solver and it did not help. I tried to insert the WhenEvent together with CrossDiscontinuity option. No way.
1 Reply
I think it helps to highlight the unitbox:
0.4` Sign[v] UnitBox[v, x] - 0.1` v - x}, {x, -0.6, 0.6}, {v, -0.6,
0.6}, StreamStyle -> LightGray,
Prolog -> {LightBlue, Rectangle[{-0.5, -0.5}, {0.5, 0.5}]}], f3]
As you can see, the blue box is the source of the trouble. The curves are smooth until they hit the box. The part where it goes wrong is a very difficult part of the box. If you look at the stream,
it is directing the curve back into the edge of the box.
It's getting pushed back into the discontinuity basically over and over. This part probably needs some very special treatment. | {"url":"https://community.wolfram.com/groups/-/m/t/528285","timestamp":"2024-11-04T09:23:01Z","content_type":"text/html","content_length":"96697","record_id":"<urn:uuid:530f8a0f-a6bd-4535-96f2-b9270bdbce89>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00541.warc.gz"} |
Transfer Test Worksheet: Number Sequences | TransferReady
Number Sequences
A number sequence is generated when a certain number is repeatedly added or subtracted from a starting number. For example, start with 3 and add 5 each time. You will need to spot the rule for these
sequences from a short part of the sequence, maybe three or four consecutive numbers. Practice this skill with this worksheet. | {"url":"https://transferready.co.uk/worksheet-number-sequences-1/","timestamp":"2024-11-14T04:32:44Z","content_type":"text/html","content_length":"63926","record_id":"<urn:uuid:584b15b2-e18d-4514-87b4-0903bfa40f8f>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00349.warc.gz"} |
Option theta
Option theta
Option theta is one of the sensitivity parameters used in option theory to measure responsiveness of an option to change in time. Option theta is often represented by Greek symbol Θ. Option theta
belongs to a group of option sensitivity parameters together called "Greeks".
Option theta explained
Option theta is a mathematical tool used to capture how the price or a value of an option reacts to time, specifically how it changes as the option approaches its expiration or maturity date. Option
theta is also often called time decay which means exactly what it says.
An option has some value right at the moment when it is written. It has some value just because it exists and because it theoretically ties up some of the writer's money (for which the writer has to
be compensated). This is called time value of an option. As the option approaches its expiration date, the option time value decreases, or we can say it "decays." However, the time value does not
diminish linearly. The rate at which an option looses its value changes as the option nears its end. And the option theta sensitivity measure is the tool that is used to measure this.
What values can option theta take?
We can explain option theta using the two pictures below.
Option theta and asset price...
Option theta can be quite confusing, but it helps to think about it in plain logic. Option theta is negative because the relationship between option's value and the time to expiration is inverse. As
the option ages, the time value decreases.
When thinking about the
of the option theta, it is however easier to think in terms of option theta
absolute value
. Option theta is the lowest (or the highest when looking at its
absolute value
) when an option is at-the-money. When an option is completely worthless or very valuable, then the absolute value of the option theta is minimal.
Let's use a call option as an example. A call option theta would be on the left hand side of the graph below. The current underlying asset price S would be much smaller than the exercise price X. In
such case, when you have a call option that is already completely worthless, then it is worthless and not even the fact that it is far away from its expiration date helps it much. When you have a
call option that is deeply in the money, then it already is in-the-money, and just mere ageing of the option does not hurt it as much as it would hurt an option that is on the verge of being in- or
This relationship is in a way similar to two other option sensitivity indicators. Compare option theta to option gamma and option vega.
Option theta and option expiration time...
Option theta gets affected not only by whether the option is in or out of the money, but also as the option itself ages. As the option gets closer to its expiration date, it tends to loose its time
value more quickly. Having 90 days to the expiration date, an option changes its value (assuming other factors are constant) from the 90th day to the 89th day only minimally. However, the same option
will loose a lot of its time value between the 3rd and 2nd day before its expiration.
Option theta is negative and its absolute value increases as the option nears its expiration.
What are the important option theta characteristics?
The list below summarizes some important facts related to option theta.
• Option theta is a function of time and value of the underlying asset.
• The time value of either a call or put option decreases as the option approaches its expiration.
• Theta is the same for call and put options.
• Option theta is minimal (or maximal when looking at its absolute value) when an option is at-the-money.
• Option theta - its absolute value - increases as the option approaches its expiration.
• Option theta is usually negative. European put option in-the-money is an exception to the rule. Such an option can have a positive theta.
Now let's take a look how option theta is calculated.
How can I calculate option theta?
Option theta is mathematically first derivation of option's value function by time. We can express this schematically using the following equation:
where dV is the value of an option and dT is the time to expiration. Using the math of the Black-Scholes model, we can calculate the theta of an option as follows:
The d1 and d2 parameters are calculated using the following formula:
where φ (phi) represents the cumulative distribution function of standard Normal distribution, r is risk-free interest rate (for example U. S. Government Treasury 30-day rate), q is the dividend
yield, σ is volatility, T is time to expiration, S is the price of the underlying asset, and K is the strike price.
Are there any other option sensitivity measures I should know about?
Options are sensitive not only to the flow of time but to a number of other factors as well. Option delta and option gamma measure option sensitivity to changes in the price of the underlying asset.
Option vega measures option sensitivity to the volatility of the price of the underlying asset.
Discuss this article or this topic in our discussion
(The table bellow shows a list of 8 most recent topics posted in our
discussion forum
. Visit our discussion forum to see more. It is possible the links below are not related to this page, but you can be certain you will find related posts in the discussion forum. You can post one
yourself too.)
Email this article to a friend:
How can I link to this web page?
It is easy, just include the code provided below into your HTML code. | {"url":"http://maxi-pedia.com/option+theta+definition+what+is","timestamp":"2024-11-04T01:11:18Z","content_type":"application/xhtml+xml","content_length":"33792","record_id":"<urn:uuid:9dc490c3-4b6d-4c0c-8026-c5c96bee0bee>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00264.warc.gz"} |
Understanding the Basics of Trigonometry: A Beginner's Guide
[February 23, 2024 by JoyAnswer.org, Category : Mathematics]
What is basic trigonometry? Get acquainted with the fundamental concepts of trigonometry with this beginner's guide. Explore trigonometric functions, identities, and applications in various fields of
mathematics and science.
What is basic trigonometry?
Basic trigonometry is a branch of mathematics that deals with the study of relationships between angles and sides of triangles. It primarily focuses on the trigonometric functions, which are ratios
of the lengths of sides in right triangles.
The three primary trigonometric functions are:
1. Sine (sin): The sine of an angle in a right triangle is defined as the ratio of the length of the side opposite the angle to the length of the hypotenuse.$\sin(\theta) = \frac{\text{opposite}}{\
2. Cosine (cos): The cosine of an angle in a right triangle is defined as the ratio of the length of the side adjacent to the angle to the length of the hypotenuse.$\cos(\theta) = \frac{\text
3. Tangent (tan): The tangent of an angle in a right triangle is defined as the ratio of the length of the side opposite the angle to the length of the side adjacent to the angle.$\tan(\theta) = \
These functions are fundamental to trigonometry and are used to solve various problems involving triangles, such as finding unknown side lengths or angles.
In addition to these basic trigonometric functions, trigonometry also involves concepts such as angles (measured in degrees or radians), the Pythagorean theorem, trigonometric identities, and the
unit circle.
Trigonometry has numerous applications across various fields, including mathematics, physics, engineering, architecture, astronomy, and more. It provides a powerful tool for analyzing and solving
problems related to angles and distances in both theoretical and practical contexts.
Trigonometry: Unveiling the Secrets of Triangles
Trigonometry, a branch of mathematics, focuses on the relationships between angles and sides of triangles. Here's an exploration of its fundamental concepts, applications, and problem-solving
1. Fundamental Concepts:
• Right Triangles: The foundation of trigonometry lies in right triangles, where one angle is 90 degrees.
• Trigonometric Ratios: These ratios relate the lengths of the sides of a right triangle to its angles. The three primary ratios are:
□ Sine (sin): Opposite side divided by the Hypotenuse.
□ Cosine (cos): Adjacent side divided by the Hypotenuse.
□ Tangent (tan): Opposite side divided by the Adjacent side.
• Unit Circle: This circle with a radius of 1 unit visually represents the trigonometric ratios for all angles, not just right triangles.
2. Applying Trigonometry to Solve Geometric Problems:
Once you understand the trigonometric ratios, you can solve various geometric problems involving triangles, including:
• Finding missing side lengths: Use the appropriate trigonometric ratio based on the given information (angles and other sides) and solve for the unknown side.
• Finding missing angles: Utilize inverse trigonometric functions (sin^-1, cos^-1, tan^-1) to determine the angle measure based on a known ratio and side length.
• Solving real-world problems: Apply trigonometric concepts to calculate distances, heights, or angles in various scenarios, like finding the height of a building, the distance across a river, or
the angle of a launched projectile.
3. Real-World Applications of Basic Trigonometry:
Trigonometry principles have diverse applications beyond traditional geometry, including:
• Navigation: Used in determining positions at sea, calculating flight paths, and even navigating satellites.
• Engineering: Crucial for designing and analyzing structures like bridges, buildings, and trusses, where understanding load distribution and forces is essential.
• Surveying and Mapping: Used in measuring distances and angles to create accurate maps and determine land boundaries.
• Computer Graphics: Plays a vital role in 3D animation, video games, and other visual effects, where calculations involving angles and positions are used to create realistic visuals.
• Physics: Fundamental for understanding wave behavior (sound, light), analyzing projectile motion, and solving problems involving forces and motion.
By mastering these fundamental concepts and applying them creatively, trigonometry becomes a powerful tool for solving real-world problems across various disciplines. | {"url":"https://joyanswer.org/understanding-the-basics-of-trigonometry-a-beginner-s-guide","timestamp":"2024-11-14T16:59:22Z","content_type":"text/html","content_length":"38242","record_id":"<urn:uuid:4d981e41-dd1c-4d29-9d32-ccd205eaad28>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00491.warc.gz"} |
A Tryst With Infinity
A Tryst With Infinity
In Chapter 7 on State and Modularity, we learned how to use assignment as a tool for modeling the real world. In the preceding chapter we came face-to-face with some of the complications that
assignment can raise if used along with concurrency. The root cause of the problem is that a variation in the object being modeled will lead to a state variation in the program as well. Along with
the problems of concurrent assignments, objects with varying state has another drawback – they are ignorant of their own history. In other words, once a state change has occurred, it is not possible
to figure out how the object looked at a point in time in the past.
In this chapter we will explore another way to model change without assignments. The idea, at an abstract level, is to represent the varying object as a function f(x, t), which return the state of
the object x at time t. This function can be implemented using an association list or a hash table which maps a time to the state of x at that time. This will work as long as we don't want the state
at an arbitrary time in the future or in the past. But for this abstraction to be useful, we should be able to represent very large or even infinite sequences of states. Slogan realizes this idea in
the form of lazy sequences.
Before looking at how lazy sequences help us model systems with state without ever using assignment, we should get familiar with the programming technique that make these sequences possible. This
technique is known as lazy or delayed evaluation.
9.1 Delayed Evaluation
If we are allowed to package an expression in such a way that it can be evaluated later on demand, we have delayed evaluation in our language. In fact you already know how to do delayed evaluations!
When we package an expression as a function, we are effectively delaying the evaluation of that expression until the function is called. For example, if we want an expensive computation to be
evaluated only when its value is actually needed by the program, we package the expression into a function. We call that function at the point in the program where the value needs to be computed and
let expensive_computation = ^() { showln("performing long computation...")
12345 }
//> performing long computation...
// 12345
//> performing long computation...
// 12345
The problem with the expensive_computation function is that each time it is called, the whole computation is repeated again. A better version of the function will cache the result once computed and
return that value when the function is called again.
let expensive_computation = let (cached_result = false)
^() if (cached_result) cached_result
else { showln("performing long computation...")
cached_result = 12345
cached_result }
//> performing long computation...
// 12345
// 12345
This optimization is known as memoization. Slogan provides the delay operator (~) for expressing memoized functions easily. Any expression can be delayed by prefixing it with the delay operator. The
delay operator's return value is an object called the promise which can be forced to evaluate and return the expression's value. This value is cached by the promise object and returned when it is
forced again.
We can rewrite the expensive_computation function using delay as shown below:
let expensive_computation = ~{ showln("performing long computation...")
12345 }
// <promise>
//> performing long computation...
// 12345
// 12345
9.2 Lazy Sequences
With that introduction to delayed expressions, we are in a position to understand sequences that can conceptually extend to infinity. The fundamental data structure that can be used to represent a
sequence of values is the list. As we saw earlier in this book, a list is just a pair of pairs that is terminated by an empty list. To make this clear, look at the following definitions of two lists
with similar layouts:
let xs = [1, 2, 3, 4, 5]
let ys = 1:2:3:4:5:[]
// [1, 2, 3, 4, 5]
xs == ys
// true
The problem with lists created from pairs is that they have to eventually terminate. What if we need a list from which we can fetch values to our soul's content? This can be achieved by making a pair
whose head is the first value in the sequence and whose tail is a promise to generate the next value when required. This idea is expressed in the function below, which represents an infinite sequence
of integers:
function infinite_integers(i)
Infinite_integers calls itself recursively to generate the next value in the sequence, but that call is delayed until the next value is actually needed. So calling the function will not go into an
infinite loop.
let xs = infinite_integers(1)
// [1, ...]
// 1
// 2
To have to explicitly force each element in the sequence can be a bit awkward. The functions first, rest and nth know how to access elements in a sequence, invoking force at the right places.
// 1
// 2
nth(0, xs)
// 1
nth(10, xs)
// 11
9.2.1 Generic Operations on Sequences
In Slogan, list is the default data structure for representing and manipulating a sequence of values. When coupled with higher-order functions like map, fold and filter, programs that manipulate
lists can lead to elegant solutions to many data transformation problems. To illustrate this, consider a program for computing the sum of all prime numbers in an interval. As a first attempt, let us
define the program in an iterative style:
function sum_primes(a, b)
letfn loop (count = a, accum = 0)
if (count > b) accum
else if (is_prime(count)) loop(inc(count), count + accum)
else loop(inc(count), accum)
// usage example:
sum_primes(1, 20)
// 77
sum_primes(10000, 1000000)
// 37544665627
We can define this function in a more declarative style by using the higher-order functions we saw in Chapter 6.
function sum_primes(a, b)
fold(`+`, 0, filter(is_prime, range(a, b)))
Though the second sum_primes is shorter and sweeter than the first, it is grossly inefficient when it comes to memory consumption. Filter cannot do any work until range returns a whole list. Then a
new list is generated by filter which is reduced to a sum by fold.
The inefficiency in using lists becomes painfully apparent if we use the same paradigm to compute the second prime in a large interval:
function nth_prime(n, a, b)
nth(n, filter(is_prime, range(a, b)))
nth_prime(1, 1, 20)
// 3
nth_prime(1, 10000, 1000000)
// 10009
The function does find the n^th prime, but the computational overhead is outrageous for calls that search over a large range of numbers. For instance, the call nth_prime(10000, 1000000) will
construct a list of almost a million integers, filter this list by testing each element for primality, and then ignore almost all of the result. If we were programming in the iterative style, we
would interleave the enumeration and the filtering, and stop when we reached the second prime.
With lazy sequences we can achieve the best of both worlds: We can formulate programs elegantly as sequence manipulations, while attaining the efficiency of incremental computation. There are
higher-order functions that can work with lists and lazy sequences alike. Filter and map are two examples. When working with lazy sequences, they will construct only that part of the sequence that is
actually required for the current computation. As the computation makes progress, more of the sequence will be realized maintaining an illusion of an infinite stream of values.
Both the sum_primes and nth_prime functions can be rewritten to take advantage of generic functions that takes lazy sequences as arguments. Specifically, we have to replace the call to range with
enumerate and the call to fold with accumulate . While range returns a fully realized list of values, enumerate will give us a lazy sequence of the same values. Accumulate knows how to reduce a lazy
sequence to a single value, incrementally.
First we will write the sum_primes function using the new sequence functions:
function sum_primes(a, b)
accumulate(`+`, 0, filter(is_prime, enumerate(a, b)))
When we call sum_primes with a large range, it will return immediately with a lazy sequence of sums, deferring the actual computation of the sum only when it is explicitly requested by a sequence
accessor such as nth:
let s = sum_primes(10000, 1000000)
nth(10000, s)
// 628132235
The performance enhancement we get for the nth_prime function is even more profound.
function nth_prime(n, a, b)
nth(n, filter(is_prime, enumerate(a, b)))
nth_prime(1, 10000, 1000000) // this call will return immediately
// 10009
9.2.2 Modularity of Lazy Sequences
As we saw in Chapter 7, one of the major benefits of introducing assignment is that we can increase the modularity of our systems by hiding or encapsulating its state in local variables. Let us
revisit this idea with the example of a simple Pseudo-random Number Generator. First let us implement a purely functional version of the PRNG that do not make use of local state or assignments^1.
function rand(seed)
function rand_update(x)
let (a = 27, b = 26, m = 127)
modulo(a * x + b, m)
The functional purity of the rand function comes with a cost, users must take care of passing the accurate current state of the PRNG to each successive call to the function. This is because its
implementation is not modular enough to hide and keep track of its internal state.
// 42
// 17
// 104
By bringing assignment to the scene, we will be able to create PRNG objects that can keep track of their own internal state and are more modular.
function make_rand(seed)
let (x = seed)
^() { x = rand_update(x)
x }
let r = make_rand(10)
// 42
// 17
// 104
If we realize the PRNG using lazy sequences, we will be able to have the benefits of modularity by not incurring the costs associated with assignment. With lazy sequences, we will have a stream of
random numbers produced by successive calls to rand_update.
function rand(seed)
let (x = rand_update(seed))
let rs = rand(10)
nth(0, rs)
// 42
nth(2, rs)
// 104
nth(99, rs)
// 20
We get considerable modularity with this approach, yet we do not have to maintain a changing local state.
9.2 Generators
Generators are functions that can behave as if they return an infinite sequence of values. A generator function can be made to return a value before its body is fully evaluated. This is accomplished
by calling the yield expression, which returns a pair of the return value and an object known as an iterator. The iterator object knows how to restart the function where yield was called. If the
function is written in such a way that it will return another value:iterator pair, the caller effectively gets a sequence of infinite values.
In the next example, we will write a function that will give us an infinite sequence of random numbers, generated with the help of yield.
function rand(seed)
letfn loop (x = rand_update(seed))
{ yield x
loop(rand_update(x)) }
let rs = rand(10)
// 42
// 17
// 104
Note that once an iterator is asked to move forward, there is no way to ask it to get a value from the past. Because of this, they may not be a stand-in replacement for lazy sequences created by the
delay operator. A function that generates an iterator should also not call itself recursively. Instead use a named letfn as shown in the example.
By default, yield returns void. The next function can be called to pass a user-defined value to yield, which will become its return value. Please see the reference on next for more information.
The traversal on an iterator can be wrapped into a convenience function. This is achieve by calling the iterator constructor. This call will return a function that can be called to traverse the
iterator. Each call to this function will result in returning the next value from the iterator. When all values run out, this function will return false.
let iter = iterator(rand(100))
// 59
// 95
9.3 Custom Sequences
As we saw in the previous sections, first and rest are the generic means for accessing many types of sequences. Their genericity is not limited to built-in sequences like lists, lazy sequences and
generators. You can define your own custom sequence objects that can be destructured with these functions. This can is accomplished by defining the object as a closure that responds to the 'first and
'rest messages.
In this section we will define a custom sequence that respond to these messages and see how it automatically gets fitted into the larger context of the language. The data structure we are going to
define is simply called a bag. You create a bag with a collection of objects. These objects can be taken out one by one, but there is no guarantee to the order in which they will come out, similar to
picking out items from a real bag without looking into it.
The following program defines the bag data structure. The constructor make_bag takes a finite sequence as argument. This is sequence is shuffled and stored in a local list. The returned object
redirects the first and rest messages to this list.
function make_bag(objects)
letfn (objs = shuffle(objects))
| 'first -> first(objs)
| 'rest -> rest(objs)
// utility functions for shuffling a sequence.
function shuffle(objects)
letfn loop (objs = objects, result = [])
if (not(objs)) result
else let (len = count(objs))
if (len == 0) result
else let (i = random_integer(len))
loop(remove_nth(i, objs), nth(i, objs):result)
function remove_nth(n, objects)
letfn loop (objs = objects, i = 0, result = [])
if (not(objs)) reverse(result)
else if (i == n) loop(rest(objs), i + 1, result)
else loop (rest(objs), i + 1, first(objs):result)
This is how you would use the new bag sequence:
let b = make_bag([1,2,3,4,5])
// the results you get may be different,
// because the list is randomly shuffled by the bag.
// 4
// 3
// destructuring a bag:
let [a,b,c,d,e] = b;
// [4, 3, 1, 2, 5]
You can make the bag data structure more versatile by overriding the ref and ref_set messages. See the references for more information.
^1The algorithm we use to generate random numbers may not be suitable for real-world use. For production code, use the built-in functions random_integer or random_real.
Next | Previous | Contents | {"url":"http://schemer.in/slogan/docs/book/lazy.html","timestamp":"2024-11-06T00:55:41Z","content_type":"text/html","content_length":"24202","record_id":"<urn:uuid:8466c0e7-2dc0-467b-8c5b-aa707ab811df>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00394.warc.gz"} |
Operator in C/C++ ⋆ csestudy247
Operator in C/C++
• Operator is a symbolic name which is used to manipulate data.
• It is the symbol use to perform calculation.
• It perform to operand.
• EX:- +,- ,* ,% ,/ ,&
Types of operator:-
There are two types of operator
1.According to operand:-
• Operand is the value on which the operator perform the calculation.
• EX:- See the upper image
2.According to operand there are three types of operator.
A. Unary operator:-
• Operator that takes only one value is called as unary operator.
• EX:-Negation operator(-),increment(++),Decrement(–)
B. Binary operator:-
• Operator that takes two value is called as Binary operator.
• EX:- + ,- , * ,< ,>,
C. Ternary Operator:-
• Operator that take three value is called as Ternary Operator.
• EX:-Conditional operator(?)
2.According to the calculation:-
• On the basic of calculation there are following types of operator.
[i] Arithmetic operator
[ii] Bitwise operator
[iii] Conditional operator
[iv] Decrement & Increment operator
[v] Logical operator
[vi] Assignment operator
[vii] Relational operator
[viii] Special operator
a. Comma(,) operator
b. Size of() operator
[I] Arithmetic operator:-
• This operator is used to perform arithmetic operator.
operator Meaning Example Result
+ Addition 20+2 22
– Subtraction 20-2 18
* Multiplication 20*2 40
/ Division 21/2 10
% Remainder 21%2 1
[ii] Bitwise operator:-
• This operator is used to perform operation at binary digit level.
operator Meaning
<< Shift the bit to Left
>> Shift the bit to Right
~ One’s compliment
| Logical Bitwise OR
& Logical Bitwise AND
⩒ Logical Bitwise EX-OR
[iii] Conditional operator:-
• This operator is used to check a condition and select a value depending on the value of condition.
Variable=(condition)? value1: value2,during the operation if condition be true. It gives value1 as output otherwise it gives value2 output.
INT a=7 b=10 , C
1st (a<b)? a:b,
2nd c=(a>b)?a:b,
[iv] Decrement & Increment operator:-
A. Decrement operator:-
• This operator is used to reduced the value of integer variable by 1(one)
• It is denoted by(–)
• There are two types of decrement operator.
a. Predecrement operator:- This operator is used to reduce the value of variable before calculation.
INT a=7 b=8 C
(i) C=(–a)+b
=6+8 =14
(ii) c= (–a)+(–b)
= 6+7
b. Post-Decrement:- This operator is used to reduce the value of variable after calculation by 1(one).
INT a=6 b=3 C,
B. Increment operator:-
• This operator is used to increase the value of integer variable by one
• It is denoted by(++)
• There are two types of increment operator.
a. Pre-increment operator:- This operator is used to increase the value of of variable before calculation by 1(one).
EX:- INT a=5 , b=3 C,
b. Post-increment:- This operator is used to increase the value of variable after calculation by1(one).
a=7 ,b=4 C,
[v] Logical operator
• This operator is used to perform logical operation.
operator Meaning Example Result
&& Logical AND (14>2)&&(14<2) F
|| Logical OR (14>2)||(14<2) T
! Logical NOT !(14<2) T
[VI]Assignment operator
• Assignment operator are used to perform arithmetic operation while assigning a value to a variable.
operator Example Condition(M=20) Result
+= M+=10 M=M+10 30
-= M-=10 M=M-10 10
*= M*=10 M=M*10 200
/= M/=10 M=M/10 2
%= M%=10 M=M%10 0
[Vii] Relational operator:-
• This operator is used to compare the value of the operands.
operator Meaning Example Result
< less than 5<2 F
> Greeter than 7>2 T
<= less than or equal 5<=2 F
>= Greeter than or equal 5>=2 T
== Equal 4==2 F
!= Not equal 4!=2 T
[Viii] Special operator
a. Comma(,) operator:-
• This operator is used to separate two variable, expression
• EX:- INT a,b,c
b. Size of () operator:-
• This operator is used to find the size of data type in this memory.
• EX:-INT a,b
• Float p,q
• Char x,y,z
a=size of(x)=1 Byte
b=Size of (p)=4 Byte
Relate link | {"url":"https://csestudy247.com/operator-in-c-c/","timestamp":"2024-11-05T14:04:19Z","content_type":"text/html","content_length":"87654","record_id":"<urn:uuid:9ac2e0f3-3272-41b5-8ef6-01f360da9287>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00336.warc.gz"} |
Who is the Father of Mathematics?
The father of mathematics is none other than Archimedes, one of the smartest brains ever to exist. He was born and died in 287 BC and 212 BC, respectively. His significant concepts and discoveries
still define the existing body of mathematics to this day. | {"url":"https://www.19jaem.org/who-is-the-father-of-mathematics","timestamp":"2024-11-07T03:57:45Z","content_type":"text/html","content_length":"5852","record_id":"<urn:uuid:8d4084dd-58e5-467d-84a6-55f6718d4f2f>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00516.warc.gz"} |
12 Steps Of Al-Anon Printable
12 Steps Of Al-Anon Printable - Don’t worry if it doesn’t work perfectly the first time. Web raise your hand if you get stuck. Web the twelve steps are reprinted and adapted from the original twelve
steps of alcoholics anonymous and are used with the permission of alcoholics. The principles they embody are universal,.
AlAnon. Alcohol Anonymous. AA. 12 Steps. Printable Art. Etsy
Web raise your hand if you get stuck. The principles they embody are universal,. Don’t worry if it doesn’t work perfectly the first time. Web the twelve steps are reprinted and adapted from the
original twelve steps of alcoholics anonymous and are used with the permission of alcoholics.
️Al Anon 12 Steps Worksheets Free Download Gmbar.co
Web raise your hand if you get stuck. The principles they embody are universal,. Don’t worry if it doesn’t work perfectly the first time. Web the twelve steps are reprinted and adapted from the
original twelve steps of alcoholics anonymous and are used with the permission of alcoholics.
Explaining The Twelve Steps of Alcoholics Anonymous
Don’t worry if it doesn’t work perfectly the first time. Web raise your hand if you get stuck. The principles they embody are universal,. Web the twelve steps are reprinted and adapted from the
original twelve steps of alcoholics anonymous and are used with the permission of alcoholics.
12 Steps Of Al Anon Printable Printable Word Searches
Web the twelve steps are reprinted and adapted from the original twelve steps of alcoholics anonymous and are used with the permission of alcoholics. The principles they embody are universal,. Web
raise your hand if you get stuck. Don’t worry if it doesn’t work perfectly the first time.
AlAnon. Alcohol Anonymous. AA. 12 Steps. Printable Art. Etsy
Web raise your hand if you get stuck. Web the twelve steps are reprinted and adapted from the original twelve steps of alcoholics anonymous and are used with the permission of alcoholics. The
principles they embody are universal,. Don’t worry if it doesn’t work perfectly the first time.
Al Anon 12 Steps Worksheets popularquotesimg
Web raise your hand if you get stuck. The principles they embody are universal,. Don’t worry if it doesn’t work perfectly the first time. Web the twelve steps are reprinted and adapted from the
original twelve steps of alcoholics anonymous and are used with the permission of alcoholics.
the12stepsofalcoholicsanonymousinprintversion Top Rated Drug Rehab & Alcohol Treatment
The principles they embody are universal,. Don’t worry if it doesn’t work perfectly the first time. Web the twelve steps are reprinted and adapted from the original twelve steps of alcoholics
anonymous and are used with the permission of alcoholics. Web raise your hand if you get stuck.
️Al Anon 12 Steps Worksheets Free Download Goodimg.co
Web raise your hand if you get stuck. The principles they embody are universal,. Don’t worry if it doesn’t work perfectly the first time. Web the twelve steps are reprinted and adapted from the
original twelve steps of alcoholics anonymous and are used with the permission of alcoholics.
aa 12 steps pdf download Daniella Calvert
The principles they embody are universal,. Web raise your hand if you get stuck. Web the twelve steps are reprinted and adapted from the original twelve steps of alcoholics anonymous and are used
with the permission of alcoholics. Don’t worry if it doesn’t work perfectly the first time.
AlAnon’s Twelve Steps & Twelve Traditions (B8) MA AlAnon & Alateen
Web the twelve steps are reprinted and adapted from the original twelve steps of alcoholics anonymous and are used with the permission of alcoholics. The principles they embody are universal,. Don’t
worry if it doesn’t work perfectly the first time. Web raise your hand if you get stuck.
Don’t worry if it doesn’t work perfectly the first time. Web the twelve steps are reprinted and adapted from the original twelve steps of alcoholics anonymous and are used with the permission of
alcoholics. The principles they embody are universal,. Web raise your hand if you get stuck.
Web Raise Your Hand If You Get Stuck.
The principles they embody are universal,. Web the twelve steps are reprinted and adapted from the original twelve steps of alcoholics anonymous and are used with the permission of alcoholics. Don’t
worry if it doesn’t work perfectly the first time.
Related Post: | {"url":"https://macro-manie.de/printable/12-steps-of-al-anon-printable.html","timestamp":"2024-11-09T04:42:04Z","content_type":"text/html","content_length":"22756","record_id":"<urn:uuid:c9a87ac4-5f50-44d7-83ff-e11dc7539d84>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00142.warc.gz"} |
Read Excerpt | Bill Sampson
top of page
Excerpt from Wheat Fields
Miller Hall
Twin, four-columned, buff-brick scholarship halls sit on the east side of the sidewalk between Danforth Chapel and the Chancellor’s residence. The white-painted hilltop home of KU’s CEO at the
southeast corner of the campus commands the approach to Lawrence from the Wakarusa River to the south. Miller, the twin closer to the Chancellor, has steps down from the sidewalk to its front door.
Healthy evergreens that need a haircut in every season flank the staircase and block the view of the first floor from the street. But the Ionic capitals are visible atop their columns, just beneath a
strict pediment.
Miller holds down 2:00 o’clock on the traffic circle that gives access to the four buildings at the end of Lilac Lane: Miller, the Chancellor’s residence (at 5:00 o’clock), Blake Hall (political
science, at 6:00 o’clock), and Twente Hall (formerly the university hospital and now the home of KU’s school of social welfare, at 8:00 o’clock).
On the sun side of the stained-glass window at Danforth Chapel and just across Lilac Lane on the north side of “new” Fraser are the four sycamore trees that shaded Farieh during the early weeks of
her freshman year as she turned from Jayhawk Boulevard toward her new home at Miller Hall. They gave her more than a break from the high heat of August, for they reminded her of the sycamores
recently planted on Vali Asr Avenue in Tehran, along whose ample sidewalks she and her sisters had all learned to ride bicycles. A central part of that memory was her father, running alongside her
yelling “Farieh, you can do this!” Memories like this were small pieces of home for a young woman so far from hers; but the sycamores were there every day, and they helped.
Math Major
As her educational goal required a graduate degree and as she intended to achieve that elsewhere—either Oxford, where the older of her two older sisters was already in graduate school, or The London
School of Economics—Farieh looked to the United States for college. Kansas did not have the nation’s best school for the study of petroleum engineering, although its faculty was excellent. Most of
the Top 10 were in Texas. But for Farieh, The Longhorn State was as long on oil-and-gas education as it was short on diversity. Louisiana and Oklahoma, whose schools boasted their own highly-rated
oil-and-gas departments, were attitudinally similar. Kansas, which was well regarded academically and more moderate politically, and which had a former colleague of her father on its Business School
faculty, was the winner.
She had “tested out” of her liberal arts math requirement easily enough—KU had a standardized test for that and she took it her first week on campus. Testing out of her math degree was more of a
challenge. She had done all the work before leaving Iran, and then some. Unlike their social peers in Tehran whose new homes boasted flat-screen TVs, Farieh’s parents had a chalk board in their
library. The same library featured theater seating for their daughters, but not for movies. Only mildly comfortable—to encourage the girls’ attention — the seats came equipped with retractable
lapboards, not cup holders. Farieh had grown up with mathematics. She and her sisters had each mastered “the easy stuff”—algebra and geometry—by the time they were ten. They had all breezed through
trigonometry and had been introduced to calculus before they were teenagers. By the time they had matriculated from what would be high school in the States they had worked their way through
additional years of calculus, linear algebra, and statistics, and their tutor had provided them a hard look at computer science. They got all of it. What the three sisters did not get were the
advanced placement certificates provided by American high schools. If she wanted to complete her entire math major early to make way for the cornucopia of other classes she had found on her arrival
at KU—which she did—she would have to find a way to make it happen.
After stewing over it for most of September, she came up with a plan. She declared mathematics as a major, far earlier than she needed to. When the Department of Mathematics assigned Professor Don
Rooney to be her advisor, almost her first question during their first meeting was whether he would give her the final he gave to his seniors in linear algebra. She said she would take it in writing
or, if he wished, she would take it face-to-face as an oral exam using a chalkboard instead of a blue book. As naïve as she was enthusiastic, Farieh assumed Professor Rooney would approve her idea
and then, after she had passed his test, persuade his colleagues in the math department to let her take their finals. Upon passing all of them, she would have satisfied her major requirements!
Rooney was so taken aback at the question from the fired-up young woman in the head scarf that he just stared at her. While he sensed Farieh was better grounded in math than most of his first-year
mentees, he did not like her idea or bravado . . . or her head scarf. He told Farieh he would think about it and let her know.
Rooney wanted to check in with his colleagues—for two reasons. He had never done anything like this before and that made him uncomfortable. Don Rooney did not like feeling uncomfortable. And he
didn’t like the demeanor of the young woman in the head scarf; she was far too aggressive. He wanted to tell her “No,” and he wanted backup. So, he texted the two colleagues who had the most
influence over the department—Ann Garvin, the department chair, who taught differential equations, and Alan Kai, who taught the senior seminar in advanced calculus. He said he had something important
and asked them to meet with him, and they met the next day.
There has been a Snow Hall at KU for over one hundred years. The original—the Snow Hall of Natural History—was named for Francis Snow, one of KU’s original three professors, and housed KU’s vast
collection of plants, animals, and insects. Snow’s assistant, Lewis Lindsay Dyche, who became a world-famous taxidermist, would later rate his own building.
KU students played basketball in the basement of the original Snow Hall. Mathematics is located in the current version, whose six stories top out on the battlements above the prominent stair tower
and give it a military countenance more fitting to The Citadel than to Kansas. Snow’s stone walls guard the west approach to the university’s Administration Building. Chancellor Frank Strong had
commissioned Snow’s neighbor to be “one of the largest and most beautiful [buildings] in the state” and “the center of the University’s architecture.” After a shaky financial start and the scrapping
of the much grander original design and long delays occasioned by quarrels over construction costs and World War I, the building finally accomplished what the Chancellor had hoped for. In l938, four
years after his death, the Administration Building was renamed Strong Hall in his honor. The stone of Strong Hall is as warm as Snow’s is cold; and even attired in its final, cheaper design, Strong
is as balanced and as beautiful as Snow is not.
Snow offers few modern spaces and almost no large ones. They chose Anne’s office because it had a window, and a couch.
“So that’s the proposal,” Rooney had said when he had finished outlining what Farieh had said to him.
“What do you think about it, Don?” asked Anne. She had moved away from her desk when the two men came into the office and was sitting with Alan Kai on the couch. Rooney sat in Anne’s favorite
upholstered chair, which she had brought from home and had wedged between her desk and the wall.
“I don’t know,” said Rooney. “The student tested out of math with a perfect score. But no one in the department has ever had her in class, and her entire math education to this point has taken place
in Iran.”
Impressed by the audacity of Farieh’s proposal, Anne was disappointed by Rooney’s tepid reaction. Something else disappointed her far more. Rooney, now nearly sixty, was one of the department’s
good-ol’-boys. At this stage of his career he was far more likely to be at Johnny’s West on a weekday evening than attending a seminar, to say nothing of presenting his own paper. His comment about
Iran unveiled a prejudice she had sensed in him for some time but had not actually witnessed.
“What did her studying in Iran have to do with her freshman placement exam, Don? Those are graded anonymously, are they not? You said she made a perfect score. Who cares whether she took her classes
in Tehran or in Topeka?”
Anne turned her head to Professor Kai, who was right next to her. “Alan, what about you?”
Alan Kai was thirty. He grew up in Singapore and had moved to the United States when he was a high school senior. After doing his undergraduate degree at Pitt, he took both graduate degrees at
Chicago. He and his partner, a ceramics engineer, had interviewed at Kansas at the same time, both were offered jobs in Lawrence, and they had arrived two years ago. Alan was a star, and Anne really
liked him.
“I think it’s brilliant,” said Alan. “If she passes, good for her! If she doesn’t, Don and our department get major credit for their flexibility, which means everything to young people—especially the
young people we are trying to attract to KU mathematics.” He turned to Don.
“Don, did your student tell you why she wanted to test out of your senior-level course?”
Still smarting from his boss’ upbraiding, Rooney made a reply that registered on the terse side of neutral. “Yes, she did. She wants to test out of her entire undergraduate degree in mathematics; she
sees me as her first step.”
“Even bolder,” interjected Anne. “To Alan’s point, what if we let her do that? What if all three of us gave her a problem to solve—problems demanding a solid grounding in advanced calculus, linear
algebra, and differential equations? We could give them to her at the same time, with each of us present, and see how she does. There is little likelihood she will pass all three—fatigue alone almost
guarantees that. But if she does pass, we will truly have something to brag about. The College marketing department will be all over it!”
Siding enthusiastically with Anne, Alan weighed in strongly for an oral exam with the proofs of her answers written on a chalk board. Professor Rooney merely said, “OK,” hoping what Anne said about
the fatigue of the student’s contending with three problems at the same time would prove accurate. They chose Classroom 203, an average-sized space with a huge chalk board. And they decided to ask
Farieh to consent to the attendance of a photographer from Marketing. If she passed, the photographs would be golden.
Professor Rooney texted Farieh and they scheduled the examination for next week Friday afternoon. Alan could not wait to alert Marketing and get the College on board; Anne said she would get the
As only Professor Rooney had actually met Farieh, her arrival at the back of 203 a week later was dramatic. Any head scarf would have been attention-getting, but the one she was wearing was bright
yellow. Yellow was Farieh’s favorite color, and she had many scarves in that color. She was also far taller than anyone had expected. Farieh had dressed up for the occasion; with her Manolo Blahnik
heels she was the tallest person in the room at nearly six feet. Once her eyes had adjusted to the sunlight that challenged Snow’s air handlers in the summer and fall, she walked steadily from behind
the desks at the back of the classroom to the front. Four persons waited for her—Anne Garvin, Alan Kai, Don Rooney, and Louise, the photographer. Even Rooney could not take his eyes off her; no one
said a word.
“I'm Farieh,” she said, breaking the brief but awkward silence. “Hello, Professor Rooney. Thank you very much for making this afternoon possible.”
As Rooney was not interested in saying much of anything and was incapable of reciprocating her courtesy, Anne Garvin cut in. “You are welcome. It's 'Farieh,' is that correct?”
“Yes,” replied the student.
“Well, your imaginative proposal has created quite the stir here in Snow Hall. We are all delighted to meet you, and we are excited to get started.”
“You may leave your jacket in the front row if you’d like.” Professor Garvin gestured to one of the desks and Farieh began moving toward it. “When you tell us you're ready, we'll begin.”
Responding with the same easy warmth she had felt from Professor Garvin . . . but so rapidly she seemed to be completing Professor Garvin's sentence, not beginning her own, Farieh answered: “I am
ready now.”
Each of the math professors produced a single page of paper. Rooney's was entirely filled with text; Garvin and Kai had needed perhaps half the page. Garvin collected them and held them face down.
“These are questions from actual examinations, Farieh. We did not try to write the hardest problem any of us had ever seen. But I suspect each of these is the hardest problem from its respective
examination. When I hand them to you, Professor Kai will note the time. You will have ninety minutes to solve the problems and to write your proofs on the chalkboard. You may use as much or as little
of the chalkboard as you wish.”
“We will provide the same warnings our students get in an actual examination—at ten minutes, at five, and at one. When Professor Kai says, 'Time,' you will have to put the chalk in the tray and stand
back from the board.”
“Do you have any questions?”
“Yes,” said Farieh. “I did not bring any water. Is there any? May I have some?”
“Of course,” said Professor Garvin. “I have a bottle of water in my backpack. Give me just a moment.” She reached for the backpack, which was sitting on the desk next to her, pulled out the unopened
bottle, and handed it to Farieh. “Here you go,” she said. Farieh, who had appreciated Professor Garvin from the beginning, now liked her very much. “Thank you,” she said. She drew a deep breath—the
only tell of nervousness any of them would see that afternoon. “I'm ready for the problems.”
Farieh took the pages from the department head and stepped forward to the desk. She scanned the three problems, then placed two of them on the desktop next to Professor Garvin’s backpack. She walked
to the board, picked up a new piece of chalk and stood still before the slate as she read the third problem slowly. Then she began to write.
Her height let her use the very top of the chalk board, and she began in the corner farthest to the left. In ten minutes, she had filled nearly two panels with the proofs of her answer to Professor
Rooney’s problem in linear algebra. She paused, put the chalk in the tray, took several steps back, and again stared at the page presenting the problem. She looked up and moved back to the board,
picking up the eraser and removing an “equal” sign. Her chalk having filled the open space with the symbol for “equal/more than,” she again stood back and read through the entire proof. Then she
returned the chalk to its tray and walked back to the desk and the water.
The examinations actually used by the professors had suggested thirty minutes for each of the problems. Farieh had taken just fifteen for the first one. A person looking at Don Rooney would not know
whether Farieh had solved his problem or not; he did nothing, said nothing, and sat expressionless. But Anne Garvin and Alan Kai knew. Farieh had killed it.
Farieh took a long sip from the water bottle before picking up the remaining two problems. Her head scarf did not permit a look at her entire face, but Professor Rooney noticed a slight frown that
had settled just above her eyes. Are they green?! he thought to himself. Why had I not noticed that before?! But the frown delighted him.
The second problem, in advanced calculus, was Professor Kai's. Alan Kai would occasionally nod as Farieh worked her way through the proofs. When she had finished and had walked back to the desk, Alan
exhaled a quiet “Bravo!”
Anne Garvin was less demonstrative than Professor Kai and soon realized she was watching Farieh almost as stonily as Professor Rooney had watched her. When she asked herself “Why?,” and after she had
demanded an answer, she realized she was anxious! She wanted this compelling young woman to win this bet she had made with the three of them, and she was on pins and needles as she watched Farieh
move across the front of the chalk board.
Having taken twenty-five minutes to solve Alan Kai's problem, which she had done with hardly a wasted symbol, Farieh had forty-five minutes left for differential equations, Anne’s problem. She was
repeating the performance she had delivered for Professor Rooney's problem, her fingers flying across the chalk board, when she suddenly stopped. Professor Garvin sensed this was no planned break
during which Farieh might calmly check her work; there must be something wrong.
Farieh stepped farther back, then looked down at the problem she was holding in her left hand. She looked up at the board; she looked down at the problem. As Farieh once again walked back to the
desk, Anne knew she had not yet finished; the calm that had accompanied the student’s earlier breaks was absent this time. Farieh seemed to stand at the desk a long time before she surrendered to an
unspoken recognition and returned to the board. This time she picked up the eraser. Looking hard at the middle panel of the three she had filled, Farieh removed its entire bottom half! Anne's
uneasiness had been building since Farieh first stopped her work. Now, as Farieh deleted first one line of the proof and then the next, Anne almost gasped. There was nothing wrong with the equations
that were now disappearing; Farieh’s proof would fail if she did not do something to reverse this. Beyond that, Farieh's time was expiring. Alan had just given the ten-minute warning, and it would
take Farieh several more minutes even to replace what she had just removed.
In an interesting role reversal for her parents, it was her mother who had taught Farieh the power of mathematics. Her father introduced her to the art—how making the proofs elegant made them more
accessible, and more persuasive. The equations on the central panel, like wayward paragraphs in the draft of a novel, had been in the wrong place. Farieh copied the upper portion of the panel into
the space she had just created at the bottom. Then she erased the top, making space for the proofs she had erased to begin with. Anne noticed Farieh was revising them slightly as she drew them onto
the clear space at the top of the middle panel. Done!
As Alan Kai sounded the five-minute warning, Farieh stepped back, cocked her head to the side, and looked severely at the board. Satisfied, she relaxed her expression, returned her head to upright,
nodded her approval, and placed the chalk in its tray. Lifting the water bottle from the desk as she walked by, she continued to the front row and sat down.
Watching all of this like she had once watched her daughter’s first audition for The Nutcracker, Anne had not immediately understood what Farieh was doing, and she was frightened for her. But as the
young woman with the yellow headscarf fit her elegant proofs into just the right places, it was clear Farieh had not merely solved the problem. Her proofs were clean; her organization was flawless;
her solution was . . . beautiful. Unbidden, Anne’s hands came up from her lap and applauded.
The envelope found the mailbox at Miller Hall three days later. Signed by Regents Professor Anne W. Garvin, Chair, Department of Mathematics and Computer Science, the letter inside informed Farieh
she had completed the requirements for a major in mathematics in the College of Liberal Arts and Sciences. The short second paragraph continued: “When you have completed the other requirements for an
undergraduate degree at the University of Kansas, you will receive a Bachelor of Science in Mathematics.” The third paragraph was still shorter:
“Your degree will be awarded with highest departmental honors—congratulations!”
bottom of page | {"url":"https://www.billsampson-author.com/excerpt","timestamp":"2024-11-14T15:39:46Z","content_type":"text/html","content_length":"447564","record_id":"<urn:uuid:323875b8-69c7-4230-a625-3e356762bfbf>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00410.warc.gz"} |
iCalculator™ IM FlexiTax Calculator | Periodic Payroll Isle of Man
iCalculator™ IM Isle of Man FlexiTax Calculator
The Isle of Man FlexiTax Calculator, part of our suite of Isle of Man Payroll, Salary and Tax calculators, is updated for the 2024 tax year using the latest tax tables published by the tax
administration in Isle of Man. Let's look at some of the key features of this tax calculator:
The FlexiTax calculator for Isle of Man allows you to select the number of payroll payments you receive in a year, this could be 12 (1 a month), 13 (with bonus), 14 (with additional payments) or
more, you can choose the number of payroll payments in the year to produce an annual income tax calculation. You can find detailed instruction on how to calculate your income tax in Isle of Man using
the Isle of Man FlexiTax Calculator for 2024 further down this page, we also provide a few payroll examples which can be used to understand how the FlexiTax calculator can help you calculate income
tax due in Isle of Man based on regular or irregular payroll payments.
Isle of Man FlexiTax Calculator - Key Features
The Isle of Man FlexiTax Calculator, part of our suite of Isle of Man Payroll, Salary and Tax calculators, is updated for the 2024 tax year using the latest tax tables published by the Government in
Isle of Man. Let's look at some of the key features of this tax calculator:
1. Choose the number of payment you receive in a given period, this could be 2 payments per month, 14 payments a year or more. The income tax calculator is very diverse and can handle all payroll
payment scenarios, see the payroll examples below for further details.
2. Choose your residency status for tax purposes in Isle of Man
3. Use the advanced features to edit the hours and days you work, your income and expenses and family situation
4. Automatically factor in relevant tax credits and personal income tax allowances in Isle of Man
5. Each income tax calculation can be printed and/or emailed for your use at a later date
6. Did we mention this tax calculator is free to use? Like all of our payroll, salary, and tax calculators, the Isle of Man FlexiTax Calculator for 2024 is completely free to use. All we ask is
that, if you have time, you leave a rating for the calculator and/or share on your favourite social media platform.
Isle of Man FlexiTax Calculator Instructions
This tax calculator is designed to be intuitive to use and, if you are familiar with the tax calculators on iCalculator™ Tax you will find it very easy to use.
1. Number of Payroll Payments Per Year. This is the number of payments you will receive in a specific period (by default, we set the calculator period to annual, you can change this to daily,
weekly, monthly etc using the advanced features).
2. Annual Salary. This is the amount of salary you are paid. The Isle of Man income tax calculator assumes this is your annual salary before tax. If you wish to enter you monthly salary, weekly or
hourly wage, then select the 'Advanced' option on the Isle of Man tax calculator and change the 'Employment Income and Employment Expenses period'.
3. Resident Status. The Resident status in Isle of Man here refers to your tax status, not necessarily your visa type or domicile. From a tax calculation perspective, your Tax Resident Status
affects non-refundable tax credits and income tax allowances. As with most countries with an international workforce, Isle of Man has numerous tax/fiscal Memorandums of Understanding (MOU) and
Tax Treaties in place.
That's it! Just two simple steps to calculate your income tax in Isle of Man with supporting calculations for transparency of payroll deductions in Isle of Man. Need more from the Isle of Man Tax
Calculator? Select 'advanced' to access more features of the income tax calculator.
Advanced Features of the Isle of Man FlexiTax Calculator
1. Tax Assessment Year The tax assessment year is defaulted to 2024, you can change the tax year as required to calculate your salary after tax for a specific year.
2. Your Age You age is used to calculate specific age related tax credits and allowances in Isle of Man.
3. Number of Children and Number of Children who qualify for Dependents Allowance The number of children you have and their status is used to calculate specific dependents related tax credits and
allowances in Isle of Man.
4. Number of Working Days in the year The number of working days in the year is used to calculate average daily / weekly etc. earning for the salary calculation and after tax illustrations
5. Number of hours worked per week The number of hours worked in the year is used to calculate average daily / weekly / hourly earnings etc. for the salary calculation and after tax illustrations
6. Number of weeks worked in the year The number of working weeks in the year is used to calculate average daily / weekly etc. earning for the salary calculation and after tax illustrations
7. Employment Income and Employment Expenses period The default Employment Income and Expenses period is 'Annual' meaning the amount you entered as your Employment Income in Isle of Man. is your
annual salary before tax. You can change this to monthly, weekly, four weekly, hourly salary etc. as required to calculate your net pay in Isle of Man..
Isle of Man Payroll Examples for FlexiTax Calculation
The following payroll examples for Isle of Man are provided to illustrate the different payroll scenarios that the Isle of Man FlexiTax Calculator can support.
Payroll Scenario 1: Annual Salary Calculation
A manager in the HR department at a company looking to recruit a new staff member in Isle of Man needs to calculate the cost of the employee from a salary perspective. The new employee will be paid $
100,000.00 per annum and the calculation needs to include all relevant business tax, pension commitments and allowances for the HR manager to appraise the company directors before proceeding with
recruitment in Isle of Man. To complete this calculation:
1. In the field Number of Payroll Payments Per Year, enter 1
2. In the income box labelled 1, enter the annual salary of $ 100,000.00
That's it! The tax calculator will automatically calculate the salary and payroll costs and provide an overivew of the calculations including the cost of the employee in Isle of Man in 2024.
Payroll Scenario 2: Contractor in Isle of Man with Irregular Payments
A contractor working in Isle of Man receievs 9 different payments in 2024, they want to calculate their annual tax commitments for their income in Isle of Man. To complete this calculation:
1. In the field Number of Payroll Payments Per Year, enter 10
2. For each of the income inputs, enter the income amounts
3. The tax calculator will automatically calculate the total income as you update the fields. Once you have added all 10 income amounts, you will have a full annual salary for the contract in Isle
of Man in 2024.
That's it! The contractor can now print or email the tax calculation for later reference
Payroll Scenario 3: An Employee in Isle of Man with Bonus Payments
An employee received 14 income payments in a year. This include monthly payment, an annual bonus for performance and an additional bonus paid at the end of the business tax year. To complete this
1. In the field Number of Payroll Payments Per Year, enter 14
2. In the income inputs 1 to 12, enter each of the monthly income amounts
3. In the income inputs 13 and 14, enter each of the bonus amounts.
That's it! You can now see the total income amount and income tax due for the period based on all 14 payment amounts.
Payroll Scenario 4: Tax Calculation for an Employee in Isle of Man with Two Jobs
An employee has 2 jobs and gets paid the same amount from each job every week. The employee wants to calculate income tax based of the total income of the two jobs. To complete this calculation:
1. In the field Number of Payroll Payments Per Year, enter 2
2. Enter the two payroll payments, one from each job, in the two income inputs
3. Click on the button "Advanced Tax Calculator" to access the advanced features of the Isle of Man FlexiTax Calculator
4. Change the "Employment Income and Employment Expenses period?" to "1 Weeks". This tells the tax calculator that each of the income amounts entered at step 2 are paid weekly
That's it! The tax calculator will automatically calculate the salary and payroll costs based on the two separate weekly payroll payments.
If you are interested in payroll calculations for employees and staff in Isle of Man, you may also wish to try our "True Cost of an Employee in Isle of Man in 2024 Calculator" which is a great tool
for identifying the full costs of recruiting in Isle of Man and maintaining employment of staff in Isle of Man. | {"url":"https://im.icalculator.com/flexitax.html","timestamp":"2024-11-03T00:09:02Z","content_type":"text/html","content_length":"19573","record_id":"<urn:uuid:23780081-21f9-4f4c-95cd-6c806a8f7f75>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00659.warc.gz"} |
Multiply & Divide Integers Lesson {FREE} | Math Geek Mama
Multiply & Divide Integers Lesson {FREE}
Looking for a way to help students understand the rules for multiplying & dividing integers? Show students the ‘why’ with visual models in this free multiply & divide integers lesson.
If you have taught middle school math, you likely know how tricky it is to teach integer operations, particularly addition & subtraction of integers. But a lot of times, multiplication and division
of integers gets rushed or skimmed through because it’s “easier” than addition and subtraction. The problem then is students don’t have a solid understanding of the WHY behind the integer “rules” and
then something that’s “easy” is mixed up or forgotten not long after.
Rather than simply telling students the rules and giving them a worksheet to practice, I want to share a simple, visual math lesson to help students think about the WHY behind these operations and
what happens with the sign. Will students still need lots of exposure, practice and repetition? YES. But hopefully this visual lesson is a good starting point and a good reference to refer back to
when kids get stuck later.
*Note: This post contains affiliate links which help support the work of this site. Read our full disclosure here.
Goal of the Multiply & Divide Integers Lesson:
The goal of this lesson is to provide a concrete visual model so that students understand WHY they get a positive or negative results when multiplying and dividing integers.
Setting Up the Math Lesson:
Ideally, you will be able to project the google slides for the whole class to see as they go through the lesson.
Then you can simply print the student handout to go along with the slides and discuss as you go through.
If you don’t have access to a projector, you can also use a hands-on tool such as two-colored counters or algebra tiles.
Then students will ‘build’ each problem and manually flip the counters or tiles over as they multiply.
Multiply & Divide Integers Lesson:
Once students have the handout, you’re ready to start the lesson.
To begin, you will want to start with a simple reminder of what multiplication means. You can look at arrays, equal groups, repeated addition, etc. using whole numbers.
The lesson starts with 3 x 5, but if you feel your students need even more reinforcement before moving onto negative values, discuss a few more problems and draw a visual model.
Students MUST have a solid understanding of what multiplication represents before they can think about how that applies to negative values.
Once they understand, ask them to draw or model with tiles the expression 3 x (-5).
They should hopefully see that this means 3 groups of -5 for a total of -15, as shown on the google slide.
Again, if there are any misunderstandings about where the picture came from, or where the final answer came from, talk through and model a few more expressions before moving on.
Why Does a Negative times a Negative Equal a Positive?
Then the lesson turns to negative times negative expressions.
Hopefully by shifting the language of the negative sign to mean “the opposite,” students will be able to see and make sense of this without it being another nonsensical rule to memorize.
As the google slides show, if you think of an expression such as (-3) x (-5) as “the opposite of 3 x (-5)” then what we’re saying is that (-3) x (-5) is the same as the opposite of (-15), which would
be +15.
So if we know that a positive times a negative is negative and a negative times a negative is the opposite of that, then the solution must be positive.
That may be a bit confusing to read here, but I think the google slides do a better job of showing visually what I mean.
If you enjoy this lesson, become a Math Geek Mama+ member and gain access to the entire library of engaging math lessons like this one, hundreds of math games and low-prep practice worksheets for
grades 5-8!
Learn more about Math Geek Mama+ right HERE.
Divide Integers:
Lastly, the lesson shifts to division of integers, which students should see offers the same results, as division is the inverse of multiplication.
After a couple more examples of visualizing division, students have a simple “quick check” to put their new understanding of integer operations to use.
And if you need more review and practice with integer operations, check out the list at the end of this article! I have even more resources to help you help your students.
Ready to try out this lesson with your 7th or 8th graders? Use the link below to grab it FREE in my shop!
More Integer Operations Resources: | {"url":"https://mathgeekmama.com/multiply-divide-integers-lesson/","timestamp":"2024-11-14T14:14:40Z","content_type":"text/html","content_length":"183774","record_id":"<urn:uuid:5c2d30ac-8d42-4f4a-9e84-9b4733765394>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00834.warc.gz"} |
Maximum and Minimum Values for Integers in Python
Integers are one of the fundamental data types that you'll encounter. They're used in just about every application and understanding their limits can be crucial for avoiding errors or even optimizing
your code. In this Byte, we'll peak into the world of integers, exploring how to find their maximum and minimum values and why you might need to know these values.
Integers in Python
Python is a dynamically typed language, which means that the Python interpreter infers the type of an object at runtime. This is different from statically-typed languages where you have to explicitly
declare the type of all variables. For integers, Python provides the int type. Here's a simple example:
x = 10
print(type(x)) # <class 'int'>
This is a basic usage of an integer in Python. But what if we try to assign a really, really large value to an integer?
x = 10**100
print(x) # 10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
print(type(x)) # <class 'int'>
Even with such a large number, Python still treats it as an integer! This is because Python's int type can handle large integers, limited only by the amount of memory available.
Why Would You Need to Know Max/Min Integer Values?
So you might be wondering why you'd ever need to know the maximum or minimum values of an integer in Python. After all, Python's int type can handle pretty large numbers, right? Well, while it's true
that Python's int type can handle large numbers, there are still cases where knowing the maximum or minimum values can be useful.
For instance, when interfacing with C libraries or when dealing with file formats or network protocols that have specific integer size requirements, it's important to know the limits of your
integers. Also, knowing the limits of your integers can be useful for debugging and optimization.
Another common use for min/max values is in certain algorithms. Let's say you're trying to find the minimum number in a set. For the sake of the initial comparison, you'd likely want to set your min
value to the highest number possible so that the first value you compare it to will be lower. In a language like JavaScript, we'd use:
let min = Infinity;
But unfortunately, Python doesn't have a built-in way to do that.
How to Find Maximum/Minimum Integer Values
In Python, the sys module provides a constant, sys.maxsize, that represents the maximum integer that can be used for things like indexing Python's built-in data structures. Here's how you can access
import sys
print(sys.maxsize) # 9223372036854775807
Note: The value of sys.maxsize can vary between platforms and Python versions, but it's generally 2**31 - 1 on a 32-bit platform and 2**63 - 1 on a 64-bit platform.
But what about the minimum value? Python doesn't have a built-in way to find the minimum value of an integer. However, since Python's integers can be negative, the minimum value is simply
-sys.maxsize - 1.
import sys
print(-sys.maxsize - 1) # -9223372036854775808
Finding the Min/Max Values for Floats, Including Infinity
Floating-point numbers in Python have their limits, just like integers. However, these limits are fairly large and suffice for most applications. Knowing these limits becomes essential when you're
dealing with expansive datasets or high-precision calculations.
You can find the maximum and minimum float values using the sys.float_info object, which is part of Python's sys module. This object provides details about the floating-point type, including its
maximum and minimum representable positive finite values.
import sys
print("Max finite float value:", sys.float_info.max)
print("Min positive finite float value:", sys.float_info.min)
When you execute this code, you'll likely see output similar to the following:
Max finite float value: 1.7976931348623157e+308
Min positive finite float value: 2.2250738585072014e-308
Note: Again, the exact values may differ based on your system's architecture and the version of Python you are using.
Interestingly, Python also provides a way to represent positive and negative infinity for float types, which effectively serve as bounds beyond the finite limits. You can define these infinities
using float('inf') for positive infinity and float('-inf') for negative infinity.
Here's a quick example:
positive_infinity = float('inf')
negative_infinity = float('-inf')
print("Positive Infinity:", positive_infinity)
print("Negative Infinity:", negative_infinity)
Running this code snippet will display:
Positive Infinity: inf
Negative Infinity: -inf
These special float values can come in handy for initializing variables in algorithms, where you need a value guaranteed to be higher or lower than any other number.
Python 2 vs Python 3
When it comes to integer and float limits, there's a significant difference between Python 2 and Python 3.
In Python 2, there were two types of integers: int and long. The int type does have a limit, but the long type could handle arbitrarily large numbers. In Python 3, however, these two types were
merged into a single int type, which can handle arbitrarily large numbers just like the long type in Python 2.
As for floats, there's no difference between Python 2 and Python 3. Both versions use the IEEE 754 standard for floating-point arithmetic, which defines the max and min values we discussed in the
previous section.
While Python's dynamic typing system makes it easy to work with numbers, it's still important to know these limits, especially when dealing with very large numbers or high-precision calculations. I
hope this Byte has shed some light on a topic that often goes unnoticed but is still important in Python programming.
Last Updated: September 11th, 2023 | {"url":"https://stackabuse.com/bytes/maximum-and-minimum-values-for-integers-in-python/","timestamp":"2024-11-06T08:57:07Z","content_type":"text/html","content_length":"74301","record_id":"<urn:uuid:a1c6e852-68d3-4f70-8fd0-d21cc156d496>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00440.warc.gz"} |
slagts - Linux Manuals (3)
slagts (3) - Linux Manuals
slagts.f -
subroutine slagts (JOB, N, A, B, C, D, IN, Y, TOL, INFO)
SLAGTS solves the system of equations (T-λI)x = y or (T-λI)Tx = y,where T is a general tridiagonal matrix and λ a scalar, using the LU factorization computed by slagtf.
Function/Subroutine Documentation
subroutine slagts (integerJOB, integerN, real, dimension( * )A, real, dimension( * )B, real, dimension( * )C, real, dimension( * )D, integer, dimension( * )IN, real, dimension( * )Y, realTOL,
SLAGTS solves the system of equations (T-λI)x = y or (T-λI)Tx = y,where T is a general tridiagonal matrix and λ a scalar, using the LU factorization computed by slagtf.
SLAGTS may be used to solve one of the systems of equations
(T - lambda*I)*x = y or (T - lambda*I)**T*x = y,
where T is an n by n tridiagonal matrix, for x, following the
factorization of (T - lambda*I) as
(T - lambda*I) = P*L*U ,
by routine SLAGTF. The choice of equation to be solved is
controlled by the argument JOB, and in each case there is an option
to perturb zero or very small diagonal elements of U, this option
being intended for use in applications such as inverse iteration.
JOB is INTEGER
Specifies the job to be performed by SLAGTS as follows:
= 1: The equations (T - lambda*I)x = y are to be solved,
but diagonal elements of U are not to be perturbed.
= -1: The equations (T - lambda*I)x = y are to be solved
and, if overflow would otherwise occur, the diagonal
elements of U are to be perturbed. See argument TOL
= 2: The equations (T - lambda*I)**Tx = y are to be solved,
but diagonal elements of U are not to be perturbed.
= -2: The equations (T - lambda*I)**Tx = y are to be solved
and, if overflow would otherwise occur, the diagonal
elements of U are to be perturbed. See argument TOL
N is INTEGER
The order of the matrix T.
A is REAL array, dimension (N)
On entry, A must contain the diagonal elements of U as
returned from SLAGTF.
B is REAL array, dimension (N-1)
On entry, B must contain the first super-diagonal elements of
U as returned from SLAGTF.
C is REAL array, dimension (N-1)
On entry, C must contain the sub-diagonal elements of L as
returned from SLAGTF.
D is REAL array, dimension (N-2)
On entry, D must contain the second super-diagonal elements
of U as returned from SLAGTF.
IN is INTEGER array, dimension (N)
On entry, IN must contain details of the matrix P as returned
from SLAGTF.
Y is REAL array, dimension (N)
On entry, the right hand side vector y.
On exit, Y is overwritten by the solution vector x.
TOL is REAL
On entry, with JOB .lt. 0, TOL should be the minimum
perturbation to be made to very small diagonal elements of U.
TOL should normally be chosen as about eps*norm(U), where eps
is the relative machine precision, but if TOL is supplied as
non-positive, then it is reset to eps*max( abs( u(i,j) ) ).
If JOB .gt. 0 then TOL is not referenced.
On exit, TOL is changed as described above, only if TOL is
non-positive on entry. Otherwise TOL is unchanged.
INFO is INTEGER
= 0 : successful exit
.lt. 0: if INFO = -i, the i-th argument had an illegal value
.gt. 0: overflow would occur when computing the INFO(th)
element of the solution vector x. This can only occur
when JOB is supplied as positive and either means
that a diagonal element of U is very small, or that
the elements of the right-hand side vector y are very
Univ. of Tennessee
Univ. of California Berkeley
Univ. of Colorado Denver
NAG Ltd.
September 2012
Definition at line 162 of file slagts.f.
Generated automatically by Doxygen for LAPACK from the source code. | {"url":"https://www.systutorials.com/docs/linux/man/3-slagts/","timestamp":"2024-11-05T12:39:55Z","content_type":"text/html","content_length":"11374","record_id":"<urn:uuid:872115a9-5a3e-4b62-bd29-0d5a88d9c2eb>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00080.warc.gz"} |
Density split statistics: Cosmological constraints from counts and lensing in cells in des Y1 and SDSS data
We derive cosmological constraints from the probability distribution function (PDF) of evolved large-scale matter density fluctuations. We do this by splitting lines of sight by density based on
their count of tracer galaxies, and by measuring both gravitational shear around and counts-in-cells in overdense and underdense lines of sight, in Dark Energy Survey (DES) First Year and Sloan
Digital Sky Survey (SDSS) data. Our analysis uses a perturbation theory model [O. Friedrich, Phys. Rev. D 98, 023508 (2018)10.1103/PhysRevD.98.023508] and is validated using N-body simulation
realizations and log-normal mocks. It allows us to constrain cosmology, bias and stochasticity of galaxies with respect to matter density and, in addition, the skewness of the matter density field.
From a Bayesian model comparison, we find that the data weakly prefer a connection of galaxies and matter that is stochastic beyond Poisson fluctuations on ≤20 arcmin angular smoothing scale. The two
stochasticity models we fit yield DES constraints on the matter density Ωm=0.26-0.03+0.04 and Ωm=0.28-0.04+0.05 that are consistent with each other. These values also agree with the DES analysis of
galaxy and shear two-point functions (3x2pt, DES Collaboration et al.) that only uses second moments of the PDF. Constraints on σ8 are model dependent (σ8=0.97-0.06+0.07 and 0.80-0.07+0.06 for the
two stochasticity models), but consistent with each other and with the 3 x 2pt results if stochasticity is at the low end of the posterior range. As an additional test of gravity, counts and lensing
in cells allow to compare the skewness S3 of the matter density PDF to its ΛCDM prediction. We find no evidence of excess skewness in any model or data set, with better than 25 per cent relative
precision in the skewness estimate from DES alone.
ASJC Scopus subject areas
• Nuclear and High Energy Physics
Dive into the research topics of 'Density split statistics: Cosmological constraints from counts and lensing in cells in des Y1 and SDSS data'. Together they form a unique fingerprint. | {"url":"https://experts.arizona.edu/en/publications/density-split-statistics-cosmological-constraints-from-counts-and","timestamp":"2024-11-03T22:43:17Z","content_type":"text/html","content_length":"58899","record_id":"<urn:uuid:a6dd9c58-db70-404f-9e07-52342e59bd64>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00209.warc.gz"} |
Evolution of macromolecular structure, mechanism, dynamics, and function
Most biochemical reactions take from hundreds to billions of years to occur spontaneously. However, life depends on highly organized networks of catalyzed chemical reactions that proceed not only
rapidly, but specifically and with high fidelity. Biological catalysts are enzymes, complicated molecular nanomachines that massively accelerate reactions by positioning specific substrate molecules
with such precision that they are compelled to react. The molecular mechanism by which an enzyme executes this remarkable feat involves an exquisitely orchestrated sequence of steps. The structures,
mechanisms, and functions of enzymes are all products of millions of years of evolution. Yet despite their fundamental biological importance, we have only a rudimentary understanding of the atomistic
basis of the evolutionary changes that create novel enzymes.
Crystals and 2.8 Å diffraction of a resurrected ancestral enzyme, malate dehydrogenase from green sulfur bacteria (Chlorobia)
Hence a precise molecular understanding of macromolecular assemblies ultimately must be informed by evolutionary mechanisms. For knowledge of the macromolecular structure-function relationship, we
consider it essential to explicitly incorporate modern developments in population genetics, phylogenetics, and probability theory. Conversely, biochemical and biophysical principles also inform
evolutionary inferences.
Our lab is interested in many diverse, basic, and unresolved problems in molecular evolution:
• What are the mechanisms by which new functions evolve?
• What are the physical constraints on protein evolution?
• How can we reliably distinguish distant homology from convergence in structurally similar protein domains?
• How do substitutions distal from the active site affect catalytic activity?
• Are ancestral enzymes functionally promiscuous? Does specificity increase during evolution?
• To what extent is evolution adaptive or due to chance events?
• How many substitutions are required for evolution of a new function?
• Do novel functions evolve by small or large increments?
• How "reversible" are evolutionary trajectories? Does the order of substitutions matter?
• What is the importance of correlations among mutations (epistasis)?
The answers to these questions have broad implications for understanding the protein structure-function relationship, including rational efforts to design (and redesign) proteins for particular
Bayesian methods in structural bioinformatics
From a Bayesian viewpoint, probability is a measure of a degree of belief, and thus probability theory is formally an extension of classic Aristotelian logic in the presence of uncertainty. In recent
years Bayesian methods have experienced a great resurgence, due to theoretical advances, massive increases in computing power, and successful applications to complex and difficult scientific
Bayes theorem, the universal acid relating empirical observations to theory (data D to a hypothesis H)
Accurate analysis of structural differences and commonalities is of fundamental importance for understanding the structure, function, and evolution of biological macromolecules. For the past 40
years, structural analysis methods have relied on the biophysically unrealistic and restrictive least-squares criterion to find optimal superpositions. We are developing probabilistic models of
structural change that can take advantage of powerful maximum likelihood (ML) and Bayesian techniques, which will greatly expand our abilities to accurately superposition, align, and analyze
structural conformations. While we concentrate specifically on the conformations of macromolecules, the methods we are developing have broad mathematical generality and will impact not only molecular
structural biology but also an unusually wide range of scientific fields, including any that compare the shapes and conformations of objects.
We are also interested in developing likelihood and Bayesian methods for single-molecule structural analysis and single-particle cryo-electron microscopy image reconstruction.
The hippogriff (part eagle, part lion, part horse) in the image above symbolizes the empirical testability of evolutionary theory: Given what we know of the evolution and phylogeny of modern animals,
we conclude that such a creature will never be found, neither living nor fossilized. | {"url":"https://theobald.brandeis.edu/","timestamp":"2024-11-03T08:40:20Z","content_type":"application/xhtml+xml","content_length":"14481","record_id":"<urn:uuid:2c874c0c-250e-4c08-afba-231caec61a7e>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00726.warc.gz"} |
force used in ball mill
WEBFeb 28, 2023 · A ball nose end mill has a rounded tip or "nose" that is ideal for creating curved surfaces and 3D shapes. In contrast, a flatend mill has a flatcutting head that is suitable
for milling flat or shallow surfaces. Ball nose end mills are commonly used for finishing work, where a smooth surface finish is required, and for machining complex ...
WhatsApp: +86 18838072829
WEBAug 17, 2021 · Figure shows a typical mortar–pestle (agate) and a ball mill used for grinding/milling. These ball mills employ close jars or vessels with ball bearings that act as grinding
media. In these ball mills, the reactants can be ground together for longer durations of time and without minimum interference from surrounding environment . The ...
WhatsApp: +86 18838072829
WEBNov 1, 1998 · Mechanics and dynamics of cutting with helical ball end mills are presented. The helical ball end mill attached to the spindle is modelled by orthogonal structural modes in the
feed and normal directions at the tool tip. For a given cutter geometry, the cutting coefficients are transformed from an orthogonal cutting data base using an oblique .
WhatsApp: +86 18838072829
WEBFeb 1, 1997 · The inverse technique used in this study predicts the impact forces from acceleration measurements of the vibratory ball mill. The previous studies on the identifiion of impact
forces have been extended so that the force prediction is now based upon actual experimental measurements for this specific appliion.
WhatsApp: +86 18838072829
WEBJun 23, 2023 · Ball nose end mills are commonly used in aerospace, automotive, and medical industries. They are ideal for producing complex geometries and shapes with tight tolerances, such as
turbine blades, medical implants, and automotive parts. Ball nose end mills are also great for 3D contouring and finishing operations.
WhatsApp: +86 18838072829
WEBJun 1, 2021 · The relief angle affects the sharpness of the cutting edge and influences the mutual friction between the clearance face and the work surface [].In a ball end mill, the flute
angle on the ball part affects the shear force applied by the tool when entering the workpiece, and the value is dependent on the helix angle [].The tool geometry also .
WhatsApp: +86 18838072829
WEBSep 1, 1996 · Process dependent cutting force coefficients are obtained from the orthogonal cutting data base. The linear flank edge force components, which constitute a significant part of
the total forces in ballend milling close to the ball tip region, are modelled and separated from shearing forces. The model is verified by conducting over .
WhatsApp: +86 18838072829
WEBJan 1, 1991 · In this study, the ballend milling process has been analysed, and its cutting force model has been developed to predict the instantaneous cutting force on given machining
conditions. The development of the model is based on the analysis of cutting geometry of the ballend mill with plane rake faces. A cutting edge of the ballend mill .
WhatsApp: +86 18838072829
WEBColeParmer CG900 CryoBlade™ Cryogenic Grinder; 115V, 20 A EW. 15, USD / Each. Add To Cart. +ADD TO LIST. VIEW ALL CATEGORIES +. Master the art of lab milling and grinding. Elevate your
research with our cryogenic, ring and puck, and highenergy ball mills. Explore now!
WhatsApp: +86 18838072829
WEBJul 1, 1997 · F orce Distribution and Comminution in Ball Mills. V olkhard Buchholtz and Thorsten P¨ oschel. Institut f¨ ur Physik, HumboldtUniversit¨ at zu Berlin, Invalidenstr. 110, 10115
Berlin. email ...
WhatsApp: +86 18838072829
WEBRetsch offers mills with jar capacities from ml up to 150 l and balls are available from mm to 40 mm, see Figure 2. A third and very important characteristic of a ball mill, which also has a
great influence on the result of a milling process, is the power of a mill. Depending on the appliion, jars should be moved either slowly for ...
WhatsApp: +86 18838072829
WEBPlanetary Ball Mills are used wherever the highest degree of fineness and meet all the technical requirements for colloidal grinding and have the energy input necessary for mechanical alloying
Processes. Get Quote. Request Demo. Call Now. Call On: +91 . Sales Support:
WhatsApp: +86 18838072829
WEBThey extend to almost the full length of the shell and occupy about 35% of the shell volume. In such mills, attrition forces predominate but impacts also play a part in size reduction. They
are classed as intermediate grinders and are more useful than ball mills for milling sticky materials. Selection Criteria for Sizereduction Equipment
WhatsApp: +86 18838072829
WEBA ball mill is an engineering device used to grind metal, rock, and other materials into fine powder. It consists of a horizontal axle, a rotating shaft, and a vertical sifter screen. The
horizontal axle is connected to a power source and holds the body of the mill. ... The force of gravity, mg where m is the mass of the ball and . 2. The ...
WhatsApp: +86 18838072829
WEBFeb 1, 2023 · 1. Introduction. Sculptured surface machining is extensively used in manufacturing products with complex geometries such as diemolds, aircraft engines, turbine blades and others
as discussed in [1].Since mechanical parts with complex surfaces are typically machined by ballend milling, machining processes and material removal .
WhatsApp: +86 18838072829
WEBAug 1, 2016 · Contact modeling. The most critical ingredient in the model of a ball mill is the contact law. Contact models belong either to the (i) discrete or (ii) continuous approach
[20].While (i) is based on momentum balances, in (ii) a force–displacement law is added to the equation of motion, as a combination of a spring in parallel with a damper, .
WhatsApp: +86 18838072829
WEBFeb 1, 2001 · Biochar is widely used in many appliions and its properties can be modified with various methods, including ball milling. In this work, six different biochar materials (made from
pine, hemp ...
WhatsApp: +86 18838072829
WEBBall mill drums offer an efficient and inexpensive way to grind materials down to a more usable size, often as fine as 2575 microns. Both continuous and batch style ball mills use centrifugal
force, gravity and hard media to break up industrial materials into fine pieces or powder. Ball milling systems grind dry material and wet slurry alike.
WhatsApp: +86 18838072829
WEBJan 15, 2019 · Abstract. This work is related to the kinetic modelling of mechanical milling. Many attempts have been made to simulate the kinetics of the milling process regarding ball
velocity, friction force, and kinetic energy transferred during milling. The goal of this project is to fabrie a planetary ball mill jar and to defined the optimum ...
WhatsApp: +86 18838072829
WEBSep 20, 2023 · Size reduction aims to reduce the particle size of a substance to enhance its solubility, bioavailability, and overall effectiveness. Ball milling is a mechanical process that
utilizes the impact, shear, and attrition forces to break down particles into smaller sizes. The key component of a ball mill is a rotating cylindrical shell filled with ...
WhatsApp: +86 18838072829
WEBSep 24, 2023 · Attrition mills effectively mill metals in inert atmospheres, such as in solidstate or mechanical alloying processes, but are not cost effective. Product output is relatively
low with attrition mills, compared to large tumbler and vibratory ball mills. Consequently, tumbler ball mills usually are used for production runs of over 135 to .
WhatsApp: +86 18838072829
WEBThe Planetary Ball Mill PM 200 is a powerful benchtop model with 2 grinding stations for grinding jars with a nominal volume of 12 ml to 125 ml. The extremely high centrifugal forces of
Planetary Ball Mills result in very high pulverization energy and therefore short grinding times. The PM 200 can be found in virtually all industries where the ...
WhatsApp: +86 18838072829 | {"url":"https://www.lacle-deschants.fr/07/04-304.html","timestamp":"2024-11-11T04:34:04Z","content_type":"application/xhtml+xml","content_length":"21574","record_id":"<urn:uuid:61869b7b-2eda-4ae5-90d2-51fd7eead55a>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00086.warc.gz"} |
The study of cyclic adsorption air separation and oxygen concentration processes
DOI: 10.17277/amt.2019.01.pp.055-072
The Study of Cyclic Adsorption Air Separation and Oxygen Concentration Processes
V.G. Matveykin1, E.I. Akulinin2*, N.V. Posternak1, S.A. Skvortsov2, S.I. Dvoretsky2
1 OJSC "Corporation "Roskhimzashchita", 19, Morshanskoye shosse, Tambov, 392000, Russia 2 Tambov State Technical University, 106, Sovetskaya St., Tambov, 392000, Russia
* Corresponding author: Tel .: +7 (909) 231 40 61. E-mail: akulinin-2006@yandex.ru
The paper presents the dynamics of the developed mathematical model of pressure swing adsorption (PSA) process, which is carried out in a dual-adsorber unit with a 13X zeolite adsorbent used for air
separation with the aim of oxygen concentration. The authors formulate and solve the regularized problem of identifying the kinetic parameters for the mathematical model - the mass and oxygen
transfer coefficients for oxygen and nitrogen. Numerical studies of the effect of raw materials load (air composition, environment temperature and pressure) and control variables ("adsorption
-desorption" cycle time, pressure at the compressor outlet, laws of changing the opening degree of the inlet and discharge valves of the PSA unit) on the dynamics and performance indicators of cyclic
adsorption process of air oxygen enrichment have been carried out. The mathematical and algorithmic support for the creation of automated processes and PSA units for gas mixtures separation and
purification has been developed.
Pressure swing adsorption, zeolite adsorbent, adsorption isotherm, kinetics, mathematical model, parametric identification, calculation experiment.
© V.G. Matveykin, E.I. Akulinin, N.V. Posternak, S.A. Skvortsov, S.I. Dvoretsky, 2019
In recent decades, the use of cyclic adsorption processes for separating gas mixtures and concentrating target products has become increasingly common. Short-cycle processes for adsorptive separation
of gas mixtures are widely used in industry for air oxygen enrichment, drying gases without heating, separating hydrocarbons, concentrating carbon dioxide, extracting hydrogen, methane, etc. One of
the urgent tasks in the field of adsorption separation is air oxygen enrichment. Typical substances that accompany oxygen are nitrogen, argon, carbon dioxide. A feature of the adsorptive oxygen
concentration is the fact that in gas-air mixtures the components associated with oxygen have higher values of adsorption selectivity [1].
The analysis of numerous works by foreign and Russian scientists in the field of adsorption separation of multicomponent gas mixtures and concentration of the target product (hydrogen, oxygen, carbon
etc.) made it possible to determine the place of this article among other works, its relevance and perspectivity [2-11].
Thus, the works [2-8] present the results of numerical studies of the effect of mode variables (pressure, temperature, flow rate of the initial mixture) on the dynamics and efficiency of the
adsorption separation of two (H2-CO2), three (H2-CO2-CO), four (H2-CO2-CO-CH4), five (H2-CO2-CO-CH4-N2) and six (H2-CO2-CO-H2O-Ar-N2) component mixtures and hydrogen concentration using active carbon
and metal-organic compounds as adsorbents. In the works [2, 3], when calculating the equilibrium conditions of a multicomponent mixture, sorption isotherms of individual substances are used. The
calculation experiment allowed to study the features of the ten-adsorber unit with vacuum regeneration (in the English literature - VPSA) and the four-adsorber PSA unit with a metal-organic adsorbent
of a new type. The possibility of obtaining hydrogen with the purity
of 99.981 vol. % at the extraction degree of 81.6 % [2] and 99.9 vol. % at the extraction degree of 48.05 % [4], respectively, was established. The results of numerical studies of the effect of the
number of pressure equalization stages, their sequence and the use of combinations of different adsorbent layers on the purity and degree of hydrogen extraction from a two-component mixture (H2-CH4)
in the PSA unit using the Langmuir - Freundlis equilibrium isotherm are presented in [6, 7]. It has been established that the use of a six-sorbent unit with two pressure equalization operations
provides the best combination of the hydrogen purity (~ 99 vol. %) while achieving the extraction degree of ~ 83 %.
In [9], the calculation experiment investigated the dependences of the purity of extracted carbon dioxide from a nine-component gas mixture using active carbon and found that using the seven-adsorber
PSA unit (instead of three or four adsorbers) allows increasing the purity of the produced carbon dioxide from 95.1 vol. % up to 98.9 vol. % while reducing the extraction degree from 90.2 % to 86.1
%. In [10, 11], the mathematical models of the dynamics of the cyclic adsorption process for producing CO2 from a two-component gas mixture (CO2-N2) on zeolite 13X using the Langmuir isotherm were
studied, and the problem of the optimal design of PSA units (vacuum-pressure VPSA and fractional vacuum-pressure FVPSA types) by the complex criterion - the ratio of energy consumption of the PSA
unit to the purity of the produced carbon dioxide, was formulated and investigated. It has been established that the units implemented according to the FVPSA and VPSA schemes provide the production
of carbon dioxide with the purity of ~ 90 vol. % and ~ 72 vol. %, respectively, and the specific power of the unit according to the FVPSA scheme is on average 2.5 times higher.
Over the last decade, the number and range of consumers of air separation products have significantly increased, and the annual increase in oxygen demand is on average ~ 4-5 % due to the increased
demand in the steel and chemical industry, aluminum production, aviation and other industries and social spheres.
A significant proportion of oxygen consumers uses in their activities not so much pure oxygen as air enriched with oxygen from 30 to 90 vol. %. For these reasons, in recent years, the adsorption
method of separating air is becoming more common as the most profitable method for consumers who use oxygen and nitrogen unevenly in time.
The units separating the air mixture by adsorption using the PSA method differ in the way of creating the
driving force (the difference in equilibrium concentrations at adsorption and desorption stages) and use synthetic zeolites and activated carbons as adsorbents. Pressure-type units operate from an
overpressure source, and production gas can be directly discharged to the consumer. The given costs of electricity for oxygen production with the concentration of 90 vol. % in PSA units range from
1.5 to 1.8 kWh/m , which is several times higher than the costs of obtaining oxygen by the method of low-temperature rectification. Therefore, pressure-type units are distinguished by low
productivity and are used in industries where the problem of oxygen delivery and storage is acute. The main advantages of PSA units are their autonomy, mobility, reliability, and quick access to the
stationary periodic mode. Energy costs in units, where oxygen is obtained at almost atmospheric pressure, and vacuuming is used for nitrogen de sorption, are significantly lower and amount to ~
0.5-0.7 kWh/m .
The highest values of the oxygen extraction degree and productivity are achieved in Vacuum PSA units, in which the adsorption stage is carried out at an overpressure and the desorption stage - under
vacuum. Increasing the level of the unit automation for separating components of the air mixture and concentrating oxygen is associated both with the difficulties of mathematical modeling and
optimization of mass and heat transfer processes within the adsorber, and with the complexity of considering the mutual connections of all included devices. As a rule, the flow chart of the PSA
process includes two - four apparatus-adsorbers filled with granular adsorbent, flow boosters (air compressor, vacuum pump, etc.), receivers, and valves designed to increase and decrease the pressure
in adsorbers (desorbers) and air flow control [12-19].
Impurities of water and carbon dioxide contained in the separated air are traced in the frontal layers of the adsorbent and have practically no impact on the efficiency of nitrogen adsorption. The
limiting purity of oxygen produced in adsorption units is 95.7 % (4.3 % is accounted for by argon, which is adsorbed on zeolites as well as oxygen). In industry, an oxygen-argon mixture is produced
in adsorption units with the purity of 90-95 % [20].
The aim of this work is to study the effectiveness of cyclic adsorption processes for air separation and oxygen concentration, mathematical and algorithmic support for creating automated PSA units
for air oxygen enrichment.
The current state analysis of the PSA technology and equipment
The analysis of the current state of the PSA technology for purifying and separating gas mixtures allowed to identify a generalized flow chart of the PSA process [21-31] (Fig. 1).
The PSA process of a gas mixture is implemented in the environment with the following parameters: the air composition (vector yenv of oxygen, nitrogen, argon and other impurities concentrations),
temperature Tenv and barometric pressure Benv of the environment [32]. The pressure in the system is created by the flow rate boosters FB (compressor, blower, vacuum pump, etc.). The initial gas
mixture with concentration, flow rate,
temperature and pressure yin, Gin, Tin,Pin, respectively enters the unit inlet. Through the inlet valves K1,i (i = 1, n), the gas mixture or atmospheric air enters the adsorbers A1,i (i = 1, n),
where the process of selective adsorption of one or several gas mixture components is carried out. At the unit outlet, using check valves K3,i, a stream of concentrated production gas mixture is
formed with concentration, flow rate, temperature and pressure y out, Gout, Tout, P out,
respectively. Part of the production flow through the respective heat exchanger Tk and the throttle Thk is sent to the adsorbers (valves K2,i are open) to carry out the process of the adsorbate
desorption. The desorbed gas mixture is discharged by the flow booster FB out,1
with the composition yout,\ flow rate Go
temperature Tout,t and pressure Pout,\ respectively, into the atmosphere.
When implementing the adsorption schemes for air separation and purification, the following process organization schemes can be used: pressure (PSA - the adsorption pressure is excessive relative to
atmospheric, while the desorption pressure is atmospheric), vacuum - pressure (VPSA - adsorption pressure is excessive relative to atmospheric and the desorption pressure is below atmospheric),
vacuum pressure (VSA - the adsorption pressure is atmospheric, while the desorption pressure is below atmospheric) [33-35].
The main advantage of PSA units is the simplicity of their organization, and the disadvantage is the low extraction degree of the target product compared to other classes of units [1]. The main
advantage of VPSA units is high efficiency in extracting target components, and the disadvantage is the complexity of instrumentation. VSA units reach a compromise between the efficiency and
complexity of instrumentation, which led to their wide distribution in portable gas concentrators [36].
The adsorbers used in adsorption units can have different constructive designs that affect the structure of the flows in the adsorption layer (Fig. 2).
At the axial direction (Fig. 2a), the gas flow moves along the axis of the adsorber. The main advantage of this type of adsorbers is the simplicity of the design, and the disadvantages are the high
aerodynamic resistance of the layer. At the radial direction (Fig. 2b), the flow is directed to the central cavity and moves through the adsorption layer to the periphery. This provides low
aerodynamic resistance, the ability to provide high flow rates through the adsorber, the disadvantages are: the complexity of the design, the possibility of the stream leakage due to the relatively
small size of the adsorbent layer.
Fig.1. Generalized flow chart of the PSA process
Fig. 2. The direction of the gas flow in the adsorber:
a - axial; b -radial; c - variable
The advantage of the adsorber with a conical insert (Fig. 2c) is the ability to obtain a variable cross-section, providing a uniform flow rate over the entire height of the adsorber.
To increase the efficiency of the gas mixture adsorption separation process, a multilayer structure of adsorbents in the adsorber can be applied, where each layer is focused on the selective
absorption of certain components of the gas mixture. An example can be the use in the frontal layer of adsorbents with high activity on water vapor, which protects the subsequent layers of the
adsorbent from loss of sorption activity on the target components of the gas mixture.
The technological scheme of the PSA process (Fig. 1) can have from one to several adsorbers. The increase in the number of adsorbers allows the increase in the extraction degree of the target
component, but at the same time capital costs get higher, the complexity of the control system increases, and the reliability of the unit decreases [37]. By performance, PSA units are distinguished
by low productivity - up to 2 Nm /h; average productivity -2-20 Nm /h and high productivity - more than 20 Nm3/h.
Activated carbons, zeolites, silica gels, and active alumina are widely used as adsorbents in cyclic adsorption processes [38, 39].
In the adsorption technique, zeolites of types A, X, M are used with a low value of the silica module, which determines the structure of the crystal lattice of the zeolite and its adsorption
properties. Silica gels are mainly used for drying gases, purifying mineral oils and as a carrier of catalysts. Activated (active) carbon has a very large specific surface per unit mass, which
accounts for its high adsorption properties with respect to the sorption of high-molecular compounds.
The current state analysis of mathematical modeling of cyclic adsorption processes
The current state analysis of mathematical modeling of cyclic adsorption processes has shown that, to date, mathematical models constructed by an experimental-analytical method [40-55] are the most
widely used.
The analysis of works in the field of mathematical modeling of cyclic adsorption separation of gas mixtures made it possible to establish that, in general, the mathematical model includes a system of
equations of general material balance; component-wise material balance in the gas (taking into account diffusion, convection in the gas phase, as well as the internal source/drain of the substance as
a result of adsorption
or desorption) and solid phases (taking into account diffusion, as well as the internal source/drain of the substance as a result of adsorption or desorption); thermal balance in the gas phase
(taking into account thermal conductivity, convection, as well as thermal effect as a result of adsorption or desorption) and the adsorbent (taking into account thermal conductivity, as well as
thermal effect as a result of adsorption or desorption); conservation of momentum (a variation of the Navier - Stokes equation); adsorption kinetics -desorption (taking into account the rate of mass
transfer from the gas to the solid phase and back during adsorption - desorption); equilibrium in the gas-adsorbent system (adsorption isotherms of the components) [56, 57]; other relationships
between model variables, initial and boundary conditions.
The equations of component-wise material balance are written in the form of a system of partial differential equations of a parabolic type [58]:
d (Vg Ck) d ck 6 -+—- +
(1 -e \da
d x
d t
= D
d 2ck
x,k dx2
where vg is gas flow rate (m/s); ck is molar
concentration of the k component of the gas mixture (mol/m ); e is porosity of the adsorbent layer, taking
into account the porosity of the particles (m /m ); ak is amount of sorption (adsorbate concentration in
the adsorbent) (mol/m ); Dx,k is effective coefficient
of longitudinal mixing of the k component of the gas mixture (m /s); x is spatial coordinate of the adsorbent layer (m); t is time (s).
In equation (1), the first term describes the convective transfer of the substance in the adsorbent layer; the second term is the accumulation rate of the component in the mixture in the gas phase;
the third and fourth terms are the sorption rate and the longitudinal mixing of the k component in the adsorbent layer, respectively.
The effective coefficient of longitudinal mixing Dx in early works on the adsorption separation of gas mixtures was identified with the molecular diffusion coefficient. At present, two main
components are distinguished in longitudinal mixing (diffusion): molecular diffusion and turbulent mixing, which arises as a result of recombination of flows around the particles of the adsorbent. In
practical calculations, the formula [1] is most often used to estimate the coefficient of longitudinal mixing:
Dx = 0.7Dm + 0.5dgrvg ,
where Dm is molecular diffusion coefficient; dgr is particle diameter of the adsorbent; vg is gas velocity.
Equation (1) for an unambiguous solution should be supplemented with initial and boundary conditions:
- initial conditions
ck(x,0) = c0(x); k = 1,nk ; 0 < x < L ;
- boundary conditions at the stage of adsorption
ck (0, t ) = cf(t ); ^^ = 0, k = Vn k ; dx
- boundary conditions at the stage of desorption
ck ( L, t ) = Ckout(t ); ^^ = 0, k = Vn k.
To describe the sorption kinetics in the external diffusion region of the process, the equation [57] is used:
a dt
= Pk (ck - ck ),
where ck is the equilibrium molar concentration of the
k component of the gas mixture (mol/m ); pk is the mass transfer coefficient related to the concentration of the adsorptive in the gas phase (1/s).
For the internal diffusion adsorption process, the driving force is written as the difference between the values of the equilibrium sorption and the current sorption in the adsorbent (the Glukaf
formula) [57]:
dk = P 2(ak- ak), dt
where ak is the equilibrium value of sorption of the k
component of the gas mixture (mol/m ); pk is the internal diffusion kinetic coefficient in the adsorbent granules (1/s).
To describe the kinetics of adsorption in the mixed-diffusion region, the mass transfer equation for the adsorptive from the gas phase to the solid phase of the adsorbent (through the phase boundary)
is applied in the following form [59, 60]:
dak — F -F-(tgh(e(Vg -v*))+1)+Fk1,
k = 1,2,3.
where Fk - the right part of the kinetics equation for nonstationary convective (external) mass transfer,
11 * 1 Fk =Pk (ck - ck); Pk is the mass transfer coefficient
related to the concentration of the adsorptive in the gas
phase; c* is the concentration of the adsorptive at the
interface or the equilibrium current value of adsorption
ak ; Fk is the right part of the kinetics equation the
internal diffusion adsorption process,
Fk =P2h(a*k - ak); Pit is the kinetic coefficient; a* is the amount of adsorption equilibrium to the current concentration of the adsorptive c k in the gas mixture flow on the outer surface of the
granules; e is the formal coefficient setting the dimensions of the mixed-diffusion region; vg* is the velocity of the gas mixture
which determines the transition from the diffusion region to the kinetic region of the adsorptive transfer; with initial conditions
ak (x,0) - a°(x), 0 < x < L , k — 1, nk .
Equation (2) is a description of the adsorption kinetics for the mixed-diffusion transfer region of the adsorptive across the phase boundary: when the velocity of the gas mixture is below the
velocity vg* , the adsorption process is limited by the
external mass transfer process with the coefficient Pk, otherwise - by the internal diffusion process in the granules of the adsorbent with the kinetic coefficient
Pk. The hyperbolic tangent and the formal coefficient
e along with v* describe a continuous transition from
the external mass transfer region to the internal diffusion adsorption process with zeolite adsorbents CaA, 13X.
The isotherms described by the equations of Dubinin-Radushkevich and Langmuir-Freindlich [56, 61] are most often used as equations of sorption isotherms in multicomponent gas mixtures.
To describe the processes of heat propagation in the gas mixture flow and the adsorbent along its length, partial equations of parabolic type are most often used [60, 62]:
g dTg ( x, t) g dTg( x, t )
cp p+ p pg vg-
--^ud[Ta(x,t) -Tg(x,t)] -^^[v -Tg(x,t)] 8 8 d a
= x
d% g ar2
0 < x < L ,
c pp
pv a
^^ + a ^d[[(x,t) -Tg(x,t)]-
2 ads
dak(x,t) d Ta(x,t)
-Z hk
where cg, pg is the specific heat and molar density of the gas mixture, J/(mol-K), mol/m3, respectively; Tg is the temperature of the gas mixture, K; A,g is the
coefficient of thermal conductivity of the gas mixture, W/(m-K); a is the heat transfer coefficient from the surface of the adsorbent granules to the gas mixture
flow, W/(K-m2); Sud = (1 -e)—is the specific surface
r gr
coefficient of the adsorbent granules, m2/m3; Kenv is heat transfer coefficient from the gas mixture flow to the environment, W/(K-m2); dA is the adsorber diameter, m; Tenv is the environment
temperature, K, where c P is specific heat capacity of the adsorbent,
,3. ,„ads
J/(kg K); pa is the adsorbent density, kg/m ; hk the adsorption heat of the k component of the gas mixture, J/mol; Xa is the coefficient of the adsorbent
thermal conductivity, W/(m-K); with initial and boundary conditions at the adsorption and desorption stages similar to the conditions written above for equation (1).
In equation (3), the first term describes the accumulation of heat in the gas phase; the second term - the convective component of heat transfer; the third term - the heat transfer from the gas phase
to the solid phase (the adsorbent); the fourth term - the heat transfer from the gas phase to the environment through the wall of the adsorber; the fifth term - the longitudinal thermal conductivity
of the gas phase along the height of the adsorbent layer. In equation (4), the first term describes the enthalpy of the solid phase (the adsorbent); the second term - the heat transfer from the solid
phase (the adsorbent) to the gas phase; the third term - the release of the heat of the gas mixture components sorption; the fourth term - the thermal conductivity in the adsorbent along the vertical
axis of the adsorber.
The dynamics of changes in pressure and velocity of the gas mixture in the adsorbent layer is most often described by the Ergun equation [63]:
dP dx
150 (1 -so )2 (2rgrV)s0
^gvg + 1.75M gpg
(i -
2rgr Vs 0
where s o is the porosity of the adsorbent layer without taking into account the porosity of the particles, m3/m3; y is the sphericity coefficient of the adsorbent
granules; is the dynamic viscosity of the gas mixture, Pa s; Mg is the molar mass of the gas mixture, kg/mol; pg is the gas mixture density (mol/m3); r is the adsorbent granule radius, m.
The ideal gas state equation has the following
P(x, t) = R Tg (x, t)Z Ck (x, t),
R is the universal gas constant, J/(mol-K).
For the numerical solution of the system of nonlinear partial differential equations (1) - (5) with the corresponding initial and boundary conditions, the method of straight lines was used according
to which the derivatives of the spatial variable x were approximated by finite difference formulas. In this case, the time derivative remains in continuous form. It results in the system of ordinary
differential equations along a given family of straight lines with initial and boundary conditions, which can be solved by some numerical method, for example, the fourth-order Runge-Kutta with
automatic step selection. The method of straight lines has quite acceptable accuracy and speed of convergence for practice.
The experimental study of air oxygen enrichment process
The flow chart of the experimental dual-adsorber PSA unit for oxygen concentration is shown in Fig. 3: A1, A2 is adsorbers with granulated zeolite adsorbent 13X; K2, K3, K4, K8 is control valves; K5,
K7 is check valves; K6 is pass-through valve; R is receiver of oxygen enriched production air. Further, the concentration of oxygen and nitrogen will be denoted
by y = ^2,) vol. %.
The PSA unit while enriching air with oxygen operates as follows. The flow of atmospheric air is formed by compressor C and inlet valves K1, K3 with
flow rate Gin, initial composition yk", where
k = {1 - 02, 2 - N2}, pressure P^ and temperature
Tgin . At the initial moment of time, the valves K1, K4,
K6, K8 are open. The air flow through the valve K1 enters the adsorber A1, in which the pressure rises to
the value Pj^. for a certain length of time [0, tads ] and
Fig. 3. Dual-adsorber PSA unit
the adsorption process of predominantly nitrogen and to a lesser extent oxygen and argon takes place over
the time tads (impurities are not adsorbed). Oxygen
enriched air in the adsorber A1 enters the receiver R through the valve K5 and then it is removed to the consumer through the valve K8 with the flow rate
Gout, composition yout
/^»out,1 out,1
G , composition y
^ out ,1
temperature Tgout. At the same time, part of oxygen
enriched air flow in the adsorber A1, through the valve K6, enters the adsorber A2, where desorption of nitrogen, oxygen and argon takes place under pressure
Pdœ . The air flow saturated with nitrogen (waste) enters the outlet of the adsorption concentrator from the desorber A2 through the valve K4 with the flow rate
pressure Pou1,1 and
temperature T o
At the moment of time t = tc /2 = tads, where tc is the duration of the "adsorption-desorption" cycle, the valves are switched: valves K1, K4, K5 close, valves K2, K3 open and valves K6, K8 still
remain open. The atmospheric air through the valve K3 is fed by the compressor C to the adsorber A2, in which pressure is raised (pressure up-pu) for a certain length of time
[0, tads] to the value Pad and the adsorption process of mainly nitrogen and to a lesser extent oxygen and argon (impurities are not adsorbed) during the segment of time tc / 2 < t < tc . Oxygen
enriched air in the
adsorber A2 enters the receiver through the valve K7 and then it is removed to the consumer through the
valve K8 with the flow rate G
composition y At the same
pressure Pout and temperature Tg
time, part of oxygen enriched air flow in the adsorber A1 enters the adsorber A1 through valve K6, where pressure is first released (pressure down-pd), and then nitrogen, oxygen and argon are
desorbed under
pressure Pdiens . The air flow saturated with nitrogen (waste) enters the outlet of the adsorption concentrator from the desorber A1 through the valve K2 with flow
rate GouU, composition youU, pressure PouU and temperature 7gou1,1.
Upon expiration of time tc, one complete cycle of the adsorption concentration process is completed, after which the cycles are repeated during the entire operation period [0, tf] of the PSA unit.
The implementation of cyclic operation modes of the PSA unit is carried out by an automated control system using a software setpoint control device and control valves K1, K2, K3, K4, K8 in accordance
with the periodic switching cyclogram.
A schematic diagram of an automated experimental unit for the oxygen adsorption concentration, which implements the described PSA scheme (Fig. 3) and the cyclogram (Fig. 4) is shown in Fig. 5.
Ki K; Ki K, K< Ki K: Ki
; /6 \ / / N / \ r S "s ^J \ ,'A, V t \ ' J \ t
pu ds ¿pd ^des
tjl tc/2
Fig. 4. Dynamics of pressure changes in adsorbers and cyclogram of control valves switching
Fig. 5. Schematic diagram of automated experimental unit of oxygen adsorption concentration
The atmospheric air under pressure of (2.0 - 6.0)-105 Pa is fed to the inlet of the oxygen concentrator (point "a") through the filtering unit 2, which traps water and oils. The pressure regulator 3
maintains the pressure (0.5 - 0.8)-105 Pa on the pneumorel 9, 12, 13, 14, 15, 16, 17, which are set at points "c" of the scheme. Under the pressure, the membrane blocks of elements 9, 12, 13, 14, 15,
16, 17 move upwards. For the sensor 4, the pressure gauge 5
sets the adsorption pressure Pallid (15 - 2.4)-105 Pa in the receiver 6. The oxygen concentrator is started by switching the pneumotumbler 8. The generator of rectangular pneumatic impulses (includes
pneumorel 9, variable pneumatic resistance 10 and pneumatic
capacitance 11) periodically sets pressure P^ or Pdns at the point "d". At the initial time of the generator operation, the membrane blocks of the elements 12, 13, 14, 15, 16, 17 move downwards.
Through the upper chambers of the elements 15, 16, 17, the atmospheric air begins to flow into the adsorber 18. In the adsorber 18, the pressure increases to a certain
value Paidns and the adsorption process of predominantly nitrogen and to a lesser extent oxygen and argon is carried out (impurities are not adsorbed). At the outlet of the adsorber 18, oxygen
enriched air is formed. Part of the air flow enters the receiver 23
through the check valve 21, and the other part is fed into the adsorber 20 through variable pneumatic
resistance under pressure Pd^. The process of nitrogen desorption is carried out in the adsorber 20, and at its exit a gas mixture is formed with a high concentration of nitrogen, which through the
upper chambers of the elements 12, 13, 14 enters the atmosphere. After a
half-cycle time tc / 2 , the generator sets pressure Pifs, and the membrane blocks of the elements 12, 13, 14, 15, 16, 17 move upwards. While the adsorber 20 goes into adsorption mode, and the
adsorber 18 in desorption mode. The continuity of the process is achieved by cyclically switching the adsorbers at regular intervals. The flow rate of the production gas mixture is controlled by a
variable resistance 25 on the rotameter 26. The main tuning parameters of the concentrator are the half-cycle time determined by the resistance 10 and the return flow value determined by the
resistance of the element 19. The oxygen concentration is measured by the gas analyzer 25.
The results of experimental studies of the oxygen
concentration y1out in the production gas-air mixture (at the outlet of the PSA unit in the steady state) depending on the half cycle time tc / 2 for different
values of pressure Pads presented in Fig. 6.
at the adsorption stage are
y. ,vol.%
\ , vol.% 60
82 tJ2, S
A A
□ □ A □ A
[ A o ' D.......... A
Fig. 6. Results of experimental studies, Pa:
Pdns = 2-2 -105; b - A - Pds
82 LI2, s
a - A - -Cr 5-2T05, □ - Pd = 2.2 -105; b - A - Pd = 3.7 -105, □ - Pd = 2.7 -105
Modeling and algorithmization of the dynamics of the PSA unit operation
The PSA unit as a system, in which the process of air separation and oxygen concentration is carried out, can be represented as a set of interacting subsystems: environment, flow booster (compressor,
blower, etc.), "adsorber-desorber", receiver, subsystem valves, and control system.
When developing a mathematical model of the technological process, we will adhere to the principle of an autonomous mathematical description of the processes carried out in each subsystem and the
matching of the subsystem models among themselves into a single mathematical system model.
Here we also give the equations of the mathematical model of the central system-forming element of the PSA unit - the "adsorber - desorber" subsystem.
During the adsorption of O2, N2, granulated zeolitic adsorbent 13X in the adsorbers Ai, A2 of the PSA unit, the following mass and heat exchange processes take place:
a) the mass transfer of O2, N2, and heat exchange between the air mass and the adsorbent;
b) the distribution of air components in the gas phase due to the convection;
c) the distribution of heat in the air flow and the adsorbent due to the convection and thermal conductivity;
d) the adsorption of O2, N2 on the surface and in the micropores of the zeolite adsorbent granules with the heat release, leaching of O2 from the adsorbent at the adsorption stage and desorption of
N2 from micropores and from the surface of the granules with the heat absorption.
The mathematical description of the processes in the adsorber include the following assumptions:
i) the atmospheric air is predominantly a two-component air mixture and is considered as an ideal
gas, which is quite acceptable at pressures in the adsorber up to 200-105 Pa;
2) the granular zeolite NaX of spherical shape with a diameter of 2 mm is used as an adsorbent;
3) longitudinal mixing of O2, N2 components in the air flow in the axial direction and thermal losses to the environment are absent.
The mathematical description of the "adsorber-desorber" subsystem in the PSA unit includes the following equations:
- total material balance in the adsorber
d (vg Pg)
d x
+ Pa
d a
1 + d a2 ^
d t d t
+ -
ÔPg d t
= 0,
where vg is linear velocity of the gas mixture, m/s; pg
is molar density of the gas mixture, mol/m ; pa is bulk
density of the adsorbent, kg/m ; a1, a2 is adsorbate concentrations (oxygen and nitrogen), respectively, mol/kg;
- component-based material balance
d (vg ck ) d ck d ak n
• + 8-- + pa-- = 0,
d t d t
d x
k = {1 - O2,2 -N2}, thermal balance for the gas phase
d Tg d Tg
vgc„gp ——
g pg g dx
d x
■ 8cpgpg■
d t
+ P^ + Kt SUd(?g -Ta) = 0:
where cpg is specific heat of the gas mixture, J/(mol-K); s is adsorption layer porosity; Sud is coefficient of the specific sUrface of the adsorbent particles, m2/m3; Ta is temperature of the
K; KT is heat transfer coefficient, W/(m2-K);
- heat balance for the adsorbent
2 g T 2 Г
Pa(cpa +YjCpkai^^ + PaZ| Ш1
- KT Sud(Tg -Ta)
dak dt
= 0,
where cpa,cpk is specific heat capacities of the
adsorbent and adsorbate, respectively, k = 1, 2, J/(mol-K); AHk is thermal effect of the k air component sorption тепловой, J/mol; A,a is thermal conductivity coefficient of the adsorbent, W/(m-K); -
adsorption kinetics
dak dt
_ Pk (a* - ak ), k = {1 - О 2,2 - N2}, (10)
- equilibrium conditions calculated by the Langmuir-Freindlich adsorption isotherm formal for zeolites [64]:
* b1,kck exp(b2,k /Ta)
a* =—2-,-+
1 +Êb3,}c] exp(b4,k /Ta) j=1
b5,kck exp(b6,k / Ta)
+ -
, k = 1,2 , (11)
1 + Z 63,jCj exP(64,k / Ta) j=1
- momentum conservation
*(Pg Tg) _ , (1 -e)2
-_- Л
^(dgr9)2 e3 RgVg
- B(C1 + C2)M,
1 - e
g ^dgr9e3 g
where A, B are known constants.
Let us formulate the problem of identifying kinetic parameters Pk , k — {1 -02,2-N2} by the
output experimental signals y^ (Pi,p2),
j — 1, m, i — 1, d, where m - the number of output measured coordinates of the control object; d - the number of experimental points for a separate output coordinate of the control object depending
on the half-cycle of adsorption tads — tc / 2 .
Then a non-negative function is constructed
m d r -i2
F (P1, P2) — Hk j - yj (Pidns, tads, i, P1, P2)] ,
j—1 i—1
where yj (P^, tads,,-,P1,P2) is solution of the mathematical model equations (6) - (12) (with the
corresponding initial and boundary conditions) of the process of air oxygen enrichment in the PSA unit for
fixed values Pans, tads,i and kinetic parameters P1, P2 .
In a finite-dimensional numerical Euclidean space
(9) Emd , the value of F is equal to the square of the
distance between the vectors
y(Cs,tads,P1,P2) •
Let us rewrite this function in another form:
F (P1, P 2) _ y e - y ( Pads, tads, P1, P 2)
The function F is defined on the set V c E , where l < md; l is the given natural number.
The task of determining parameters P1, P2 is to
find P* eV c El such that
F (p*)— min F (P)
in case of constraints in the form of the mathematical model equations (6) - (12) of the air oxygen enrichment process in the PSA unit.
Despite the presence of constraint equations (6) -(12), we obtained the problem of the unconditional minimum of the function F(P), since P1, P2 are included in F (P) through the solution y (P1, P2),
which takes into account all the mathematical properties of constraint equations (6) - (12).
Based on the physical meaning of the problem, it
would be necessary to find P so that the solution y(Pails, tads, P*, P2) was as close as possible to the true value of the vector y of the control object state variables measured at the output of the
PSA unit in magnitude
F (P1, P2) — | |y - y (P2, tads, P1, P*2)|| Ei-
However, the vector y is unknown, therefore we have to work with the "perturbed" function F (p).
The regularization of a problem is the process of transforming it into a correctly set one. Regularization of the extremal problem formulated above consists in transforming a convex function F (p)
into a strictly or uniformly convex one, which ensures the uniqueness of the solution P . Let us construct a continuous nonnegative parametric function
0(p ) — F (p ) + aQ(P),
where the parameter a > 0; Q is non-negative continuous function such that ®(p) is an uniformly convex function. An uniformly convex function Q(Pi, P2 ) = Pi + P2 can be taken as Q.
If F (P) is convex, and Q is a uniformly convex
function, then for any a > 0 function 0(P) will be
uniformly convex and the problem of finding Pa e V such that
o(pa)=mpn 0(Pi, P2) (13)
is set correctly and its solution pa is unique for each
fixed a. For determining pa , high-speed quasiNewton methods can be used [65].
As a result of solving the regularized problem
* *
(13), the values of kinetic parameters P1 <i,P2 <a of the
adsorption oxygen concentration process were
determined: P*a = 5.776 s-1, P2a = 1.925 s-1.
The adequacy of the mathematical model was tested on a set of experimental data obtained under conditions different from those under which they were obtained, according to which parametric
was carried out. The function y°ut 0^, tadv, P1, P2)
(solving the mathematical model equations (6) - (12)
for given values P^ = 2.7-105; 3.7-105 Pa, fads (from
7 to 82 s), kinetic parameters P1 = P1P2 = P2 a and
values of ordinates y^f'6, I = 1, 2, ..., 10 are shown in Fig. 7.
The mismatch of calculated by the model (6) -(12) and experimental data (Fig. 7) was estimated by the following formula:
•V, ,vo\.% 60
n_8 ' /---—-__ □
IT 2 I
i! 1 1 ■ ' 1 ■ 1 1 1 -'— .111 . ..
82 L/2s
Fig. 7. Verification of mathematical model adequacy of the process of oxygen adsorption concentration at the inlet pressure: 1 - 2.7-105 Pa; 2 - 3.7-105 Pa; A, □ - experiment, — calculation by model
„ out/nin , n o \ out,e
max y (.Pads, Íad8>í,ß! a, ß2 a)- yu-
i=1,d' ' 1
The mathematical model (6) - (12) with found
* -1 * -1 values P1ai = 5.776 s , P2a = 1.925 s was
considered adequate to the technological process of the adsorption oxygen concentration, if
Smax = max
yOUt(PaÍdns,tads,i,ßl,a,ß2,a) - y^
where 5 is the measurement error of oxygen concentration y1out at the production outlet of the PSA unit, which is 15 %. The verification of the adequacy of the mathematical model of adsorption oxygen
concentration in the PSA unit showed that the maximum relative error of the mathematical model of the adsorption oxygen concentration process 5max
was 13.2 %, which allows using this model with found
values P1 =P1ai, P2 =P2a for the purposes of
analyzing the oxygen adsorption concentration process, optimizing and controlling this process.
The numerical analysis of the dynamics of the dual-adsorber PSA unit operation
In order to examine the systemic links, patterns, and increase the efficiency of the PSA unit, calculation experiments were conducted to study the dynamics and "statics" during the adsorption oxygen
concentration in the air-gas mixture for a dual-adsorber technological scheme with 13X granular zeolite adsorbent (see Fig. 3). The main parameters of the pilot PSA dual-adsorber unit are presented
in Table 1.
In Table 1: dA is the inner diameter of the adsorber shell (the adsorbent bulk layer); L is the height of the adsorbent bulk layer; dK1 = dK2 is the
bore section of cut-off valves; VR is the receiver volume.
Table 1
The characteristics of the pilot PSA dual-absorber unit
Parameter Value Parameter Value
dA, m 0.050 d K1 = d K 2, m 0.0014
L, m 0.500 Vr, m3 0.002
dgr, m 0.002 ßi; ß2, s-1 5.776; 1.925
Table 2
Values of the process parameters at the nominal point and ranges of their variation
Parameter The value in the working (nominal) point Range of variation
?ads = ?c/2, S 40 10 - 90
Pm x105, Pa 3 2.0 - 5.2
DOUt 5 P x105, Pa 1 0.9 - 1.1
P °uu X105, Pa 0.75 0.25 - 1.0
y|n % vol. 20.8 20.3 - 21.3
y3n% vol. 1.0 0.5 - 1.5
Tm,T ,K g T oc 298 273 - 323
dK6, mm 0.5 0.31 - 0.80
Variables and ranges of their changes are presented in Table 2.
A series of calculation experiments was conducted to study the effect of the half-cycle time tc / 2 (the duration of the adsorption stage tads), the
pressure Pin and temperature of the gas-air mixture Tgin at the compressor outlet, the diameter of the K6
throttle dK6 on the concentration >'1out and the degree
of oxygen extraction n.
Fig. 8 shows graphs of the oxygen concentration
dependencies >'1out in the production flow from the half-cycle duration tads = tc / 2 at various pressures
P in and, therefore, pressure Paidns at the adsorption stage.
The increase in P in leads to the increase in the oxygen concentration >'1out in the production flow and, accordingly, its sensitivity to changes in the half-cycle time tads = tc / 2 . All graphs
have a pronounced
extreme character, so it is possible to choose the
optimal half-cycle time value tads, which provides the maximum concentration >'1out at various input pressure
values Pin . In this case, the range of values tads ,
including the optimal value (in case of a given optimality criterion), it is advisable to limit the interval [27 - 67] s. The analysis of the graphs in Fig. 9 shows that the time of the transient
process (the transition of the unit to a periodic stationary mode) corresponds on average to 20-40 "adsorption - desorption" cycles,
i.e. tst ~ (20-40)-tc.
The analysis of the graphs in Fig. 10 shows that at
Tgin= 273 and 298 K the curves monotonously increase
over the entire range of pressure Pin variation. At the same time, up to the value of Pin « 3.7-105 Pa the sensitivity of the concentration >'1)ut to Pin is noticeably higher than in the next
section. At Tgin = 323 K, the graph acquires an extreme character, and the maximum is reached at P in= 3.9-105 Pa and amounts to ~ 41 % vol. The further increase in Tgin leads to the
decrease in the oxygen content y1out in the production air flow. This is because the increase in Tgin leads to
the heating of the adsorption layer and the decrease in the equilibrium adsorptive concentration in the adsorbent. The greatest deviation of the curves (corresponding to 273 and 323 K) is observed at
end segment of the variation in Pm and is ~ 1.2 % vol.
»» ^
y, , vol.%
Fig. 8. Dependencies y1 on /ads = tc / 2 at P , Pa:
1 - 2.2-105; 2 - 2.7-105; 3 - 3.7-105; 4 - 5.2-105
Fig. 9. Dependencies y on the time
of the unit operation t for iads, s:
1 - 10; 2 - 40; 3 - 65
>; ,voi.% 45 -
r|. %
i 1 ! i .................1.................
| --- ............i..................
> ■—-- 2
/j/ | I
/ / / : T 1 : i ; ;
^ V-'.......
¿A ................
3.2 3,7 4.2
4.7 /"n,X105,Pa
Fig. 10. Dependencies y1out on Pln at temperature of initial mixture Tgin, K: 1 - 273; 2 - 298; 5 - 323
This phenomenon is explained by the increase in the adsorbent temperature, which leads, on the one hand, to the decrease in the equilibrium adsorptive concentration in the adsorbent, and, on the
other hand, increases the desorption rate at the stage of the adsorbent regeneration.
The analysis of the graphs in Fig. 11 demonstrates that the oxygen extraction degree n is affected both
Pin and Tgin. At Tgin = 273 K the dependence has an
extreme nature and the extremum is reached at
Pin = 4-105 Pa, and at Tgin = 298 K and Tgin = 323 K
the graphs monotonously increase proportionally Pin . At Tgin = 323 K a sharp decrease in the sensitivity of
the extraction degree n to the change in Pin is observed starting from Pin = 3.9-105 Pa, and at Tgin = 298 K the dependence n on P in becomes almost linear.
The dependencies of oxygen concentration y1out on the half-cycle time tads for various values Tgin are presented in Fig. 12. The graphs are of extreme nature at temperatures Tgin = 298 and 323 K. The
yout is reached at tads «40 s. At Tgin = 273 K the
graph takes the form of a saturation curve, the "plateau" of the graph corresponds to the value
yout= 42,7 % vol.
It should be noted that the more tads , the greater deviations are observed between the curves. The maximum mismatch of the curves corresponding
to Tgin = 273 and 323 K is reached at tads = 60 s and
Fig. 11. Dependencies of oxygen extraction degree n on Pln at 7gin, K: 1 - 273; 2 - 298; 3 -323
amounts ~ 5 % vol. These changes in the graphs can be explained as follows. At small values of tads < 30 s, the heat exchange between the adsorbent and the gas phase is not sufficiently intensive
(due to the inertia of the heating or cooling process of the adsorbent),
therefore the effect of Tgin is insignificant, and at tads - 30 the effect of the inertia of the thermal processes is reduced.
The graphs analysis presented in Fig. 13 shows that the extraction degree is proportional tads at all
Tgin. At tads > 40 s, the maximum n is reached at the
lowest Tgn (273 K). The type of the considered curves as a whole correlates with the graphs of oxygen concentrations y1out in the production flow from the half cycle time yf* at Tgin = 273; 298; 323
Fig. 12. Dependencies y™ on tads at Tgin , K: 1 - 273; 2 - 298; 3 - 323
— AM&T
34 ■
Fig. 13. Dependencies n on tads at Tgn, K:
1 - 273; 2 - 298; 3 - 323
.......1......\......i............-......i......i...... ::::::: i i i i i i i iii ......1......!......1.....-b" : Jr i Jr\ : i i S iii: i i i 1 iii!
i i i i i i i ¡¡¡iii: jf i i : : : i i Jf
i --i—-¿¿L \ \ \ i iii! i i i i ill!
15 20 25 30 35 40 45 50 55 60 6 5 70 75 taAsS
Fig. 15. Dependencies of n from tads at dKe , mm:
1 - 0.31; 2 - 0.62; 3 - 0.8
Dependencies of oxygen concentration >iout in the production air flow from tads for different values of the nominal diameter dK(6 of the purge valve K6 are
shown in Fig. 14. The graphs analysis indicates that with the increase of dK6 in the duration of the
adsorption stage tads ensuring the achievement of the maximum oxygen concentration at the unit outlet should decrease from 62 s (at dK6 = 0.345 mm) to 27 s
(at dK6 = 0.715 mm). Fig. 15 presents dependency
graphs of n from tads for various values of dK6 .
At dK6 = 0.31, the dependency is directly proportional
and close to linear, since the flow directed to the desorption is too small and it takes a long time to regenerate the adsorbent; at dK6 = 0.8, the dependency
is also close to linear, but inversely proportional,
>T! voi.%
...........|........... ...........
> \i
V 5
: : ___________I___________!___________ 4
Fig. 14. Dependencies of _y1out from tads at dK6 , mm:
1 - 0,345; 2 - 0.5; 3 - 0.62; 4 - 0.715
which is explained by the excess flow directed to the nitrogen desorption, as a result of which nitrogen is desorbed before the desorption stage ends; at dK6 = 0.62 mm, the dependency acquires an
character, since the intermediate state is reached.
The graphs analysis presented in Fig. 16 shows
that the maximum value Pin corresponds to the maximum peak of the air velocity vg = 0.18 m/s, since
the velocity rate is proportional to the pressure difference P in at the compressor outlet and the pressure P^ at the inlet to the adsorber. When these pressures are equalized, the air velocity
gradually decreases to some steady-state value, approximately
equal to 0.03 m/s for all considered values Pin . Similar dependences were obtained at the desorption stage (Fig. 16, t > 1270 s).
v„ ,m/s
0.18 -
0.08 -
!/\2Yv II r\, |
; 1
A-icls, S
Fig. 16. Dependencies of vg (x = 0) in adsorber from t at P*n, Pa:
1 - 2-105; 2 - 3-105; 3 - 4-105
Fig. 17. Dependencies of Vg in adsorber from t at dK6 , mm:
1 - 0.345; 2 - 0.5; 3 - 0.715
The graphs analysis in Fig. 17 specifies that varying the diameter of the throttle dK6 in the
investigated range does not affect the appearance of the curves obtained. Reducing the diameter of the purge throttle leads to the slight increase in the gas velocity at the beginning of the
adsorption stage and the decrease at its end. These changes in the graphs are explained by the fact that the increase in the throttle diameter leads to the increase in the flow rate of the purge
mixture for the regeneration of the second adsorber, which is reflected in the value of the difference in the air pressure directly proportional to its speed.
The velocity value of the flow entering the adsorber plays an important role, since it determines the adsorbent abrasion under alternating loads (in the cycles of lifting and pressure relief in the
adsorbers). The interaction of the moving gas stream with the adsorbent layer leads to the effect of limited "fluidization" of the layer when the granules of the adsorbent begin to shift relative to
each other, which leads to abrasion of the adsorbent and the appearance of a significant amount of dust in the product stream.
Even at the velocity of the filtered air flow at a much lower rate of the beginning of fluidization, the abrasion of the adsorbent granules can be quite strong [26]. This is due to the impact on the
granules of changing "side" forces, called the forces of Karman, causing oscillating displacement of the granules relative to each other. Both destructive effects are likely to occur when the stages
change, when large pressure gradients occur. Therefore, it is extremely important to control the gas flow rates in the frontal layer of the adsorbent during the transition periods of adsorption
oxygen concentration process.
Fig. 18. Dynamics of opening degree of inlet 1 and discharge 2 valves of the PSA unit
Fig. 19. Dependencies of Vg (x = 0)
in the adsorber from the time at Pin , Pa:
1 - 3105; 2 - 4-105; 3 - 5105
The analysis of operating experience of PSA units shows that the gas flow rate in the adsorber (depending on the size of the adsorber, the diameter of the adsorbent particles in it, the values of
adsorption and desorption pressures) should not exceed 0.05-0.3 m/s.
Fig. 18 displays the graphs of the step change in the opening degree of the inlet and discharge valves from the time with the control frequency of 4 s. As a result, the air velocity in the front
layer of the adsorbent is not higher than 0.08 m/s (Fig. 19). Thus, controlling the opening degree of the valves is an effective means of ensuring the absence of abrasion of the expensive adsorbent.
The calculation experiments conducted with the developed mathematical model established that it is
advisable to use pressure P in at the compressor outlet, a temporary program ) for opening control valves
Ki (K2), the duration of the adsorption stage tads (as half cycle time) and the diameter of the purge throttle dK6 as control actions allowing to effectively control
the modes implemented in the PSA unit. The range of values tads, including the optimal value (maximum
concentration value y°ut ), is advisable to limit the interval to 27-67 s, and the time for the unit to reach a periodic stationary mode on average corresponds to 20-40 adsorption-desorption cycles.
It has been established that by finding the law of opening the inlet and discharge valves of the PSA unit, it is possible to ensure the air flow rate that does not lead to abrasion of the adsorbent
during the implementation of cyclic adsorption-desorption processes. At the same time, the influence of the gas flow rate limitation on the purity of the production oxygen, the extraction degree and
the capacity of the PSA unit requires further research.
The results of numerical analysis, mathematical and algorithmic support for the operation of the PSA dual-adsorption unit, presented in this paper, can be used to design new automated processes and
adsorption process units with cyclically varying pressure to separate and purify multicomponent gas mixtures.
The research was financially supported by the Russian Ministry of Education and Science within the framework of project No. 10.3533.2017.
1. Ruthven D.M., Farooq S., Knaebel K.S. Pressure Swing Adsorption. New York, 1993.
2. Lopes Filipe V.S., Grande Carlos A., Rodrigues Alirio E. Activated Carbon for Hydrogen Purification by Pressure Swing Adsorption: Multicomponent Breakthrough Curves and PSA Performance. Chemical
Engineering Science, 2011, Vol. 66, p. 303.
3. Jinsheng Xiao, Ruipu Li, Pierre Benard, Richard Chahine. Heat and Mass Transfer Model of Multicomponent Adsorption System for Hydrogen Purification. International Journal of Hydrogen Energy, 2015,
Vol. 30, pp. 1.
4. Silva Bruna, Solomon loan, Ribeiro Ana M., Lee U-Hwang, Hwang Young Kyu, Chang Jong-San, Loureiro José M., Rodrigues Alirio E. H2 Purification by Pressure Swing Adsorption Using CuBTC. Separation
and Purification Technology, 2013, Vol. 118, p. 744.
5. Milad Yavary, Habib Ale Ebrahim, Cavus Falamaki. The Effect of Number of Pressure Equalization Steps on the Performance of Pressure Swing Adsorption
Process. Chemical Engineering and Processing, 2015, Vol. 87, p. 35.
6. Paradias Dionissios, Lee Sheldon, Ahmed Shabbir. Facilitating Analysis of Trace Impurities in Hydrogen: Enrichment Based on the Principles of Pressure Swing Adsorption. Hydrogen Energy, 2012, Vol.
37, p. 14413.
7. Kim Young Jun, Nam Young Suk, Kang Yong Tae. Study on a Numerical Model and PSA (Pressure Swing Adsorption) Process Experiment for CH4/CO2 Separation from Biogas. Energy, 2015, Vol. 91, p. 732.
8. Boon Jurrian, Cobden P., Van Dijk H.A.J., Van Sint Annaland M. High-temperature Pressure Swing Adsorption Cycle Design for Sorption-enhanced Watergas Shift. Chemical Engineering Science, 2015,
Vol. 122, p. 219.
9. Riboldi Luca, Bolland Olav. Evaluating Pressure Swing Adsorption as a CO2 Separation Technique in Coal-fired Power Plants. International Journal of Greenhouse Gas Control, 2015, Vol. 39, p. 1.
10. Ko Daeho; Siriwardane Ranjani; Biegler Lorenz. Optimization of a Pressure-swing Adsorption Process Using Zeolite 13X for CO2 Sequestration. Industrial & Engineering Chemistry Research, 2003, Vol.
42, Issue 2, p. 339.
11. Ko Daeho; Siriwardane Ranjani; Biegler Lorenz. Optimization of Pressure Swing Adsorption and Fractionated Vacuum Pressure Swing Adsorption Processes for CO2 Capture. Industrial & Engineering
Chemistry Research, 2005, Vol. 44, Issue 21, p. 8084.
12. Chai S.W., Kothare M.V., Sircar S. Rapid Pressure Swing Adsorption for Reduction of Bed Size Factor of a Medical Oxygen Concentrator. Industrial & Engineering Chemistry Research, 2011, Vol. 50,
p. 8703-8710.
13. Effendy S., Xu C., Farooq S. Optimization of a Pressure Swing Adsorption Process for Nitrogen Rejection from Natural Gas. Industrial & Engineering Chemistry Research, 2017, Vol. 56, Issue 18, pp.
14. Fu Q., Yan H.Y., Shen Y.H., Qin, Y.J., Zhang D.H., Zhou Z. Optimal Design and Control of Pressure Swing Adsorption Process for N-2/CH4 Separation Journal of Cleaner Production, 2018, Vol. 170,
pp. 704-714.
15. Shokroo E., Farsani D., Meymandi H. and Yadoliahi N. Comparative Study of Zeolite 5A and Zeolite 13X in Air Separation by Pressure Swing Adsorption. Korean Journal of Chemical Engineering, 2016,
Vol. 33, Issue 4, pp. 1391-1401.
16. Wu C., Vermula R., Kothare M., Sircar S. Experimental Study of a Novel Rapid Pressure-Swing Adsorption Based Medical Oxygen Concentrator: Effect of the Adsorbent Selectivity of N2 over O2.
Industrial & Engineering Chemistry Research, 2016, Vol. 55, Issue 16, pp. 4676-4681.
17. Xu M., Wu H.C.; Lin Y.S., Deng S.G. Simulation and Optimization of Pressure Swing Adsorption Process for High-temperature Air Separation by Perovskite Sorbents. Chemical Engineering Journal,
2018, Vol. 354, pp. 62-74.
18. Moran A., Talu O. Limitations of Portable Pressure Swing Adsorption Processes for Air Separation. Industrial & Engineering Chemistry Research, 2018, Vol. 57, Issue 35, pp. 11981-11987.
19. Hu T.M., Zhou H.Y., Peng H., Jiang H.Q. Nitrogen Production by Efficiently Removing Oxygen from Air Using a Perovskite Hollow-fiber Membrane with Porous Catalytic Layer. Frontiers in Chemistry,
2018, Vol. 6, p. 329.
20. Shumyatsky Yu.I. Promyshlennye Adsorbtsi-onnye Protsessy [Industrial adsorption processes]. Moscow: KolosS, 2009, 183 p. (Rus.)
21. Appel W.S., Winter D.P., Sward B.K., Sugano M., Salter E., Bixby J.A. Portable oxygen concentration system and method of using the same. Patent USA N 6691702, MKI3 B01D N 128/202.26, N 134868,
Bjul. N 12 dated 17.02.04, 24 p.
22. Jee J.G., Lee J.S., Lee C.H. Air Separation by a Small-scale Two-Bed Medical O2 PSA. Industrial & Engineering Chemistry Research, 2001, Vol. 40, Issue 16, pp. 3647-3658.
23. Li J. The Experimental Study of a New Pressure Equalization Step in the Pressure Swing Adsorption Cycle of a Portable Oxygen Concentrator. Bio-Medical Materials and Engineering, 2014, Vol. 24,
pp. 1771-1779.
24. Bowie G. High Frequency Pressure Swing Adsorption. Patent USA N 6176897, MKI2 B01D 95/98, N 000844, Bjul. N 2 dated 23.01.01, 27 p.
25. Suzuki M., Suzuki T., Sakoda A., Izumi J. Piston-Driven Ultra Rapid Pressure Swing Adsorption. Adsorption, 1996, Vol. 2, pp. 111-119.
26. Norman R., Robert E., Michael A. Portable Oxygen Concentrator. Patent USA N 6949133, MKI3 B01D 96/111, N 762671, Bjul. N 4 dated 27.09.05, 17 p.
27. Edward J.R. Engineered adsorbent structures for kinetic separation. Patent USA N 7645324, MKI3 B01D 53/02, N 60/642, 366, Bjul. N 1 dated 12.01.10, 18 p.
28. Jagger T.W., Nicholas P.V., Kivisto J.A., Lonnes P.B. Low power Ambulatory Oxygen Concentrator. Patent USA N 7431032, MKI3 A62B 7/00, Bjul. N 8 dated 7.10.08, 35 p.
29. Rauch J.J., Sarigiannis C.B., Warta A.M., Dowd S.J. Air Separation Apparatus. Patent USA N 10113792, MKI3 A25J 3/04824, Bjul. N 8 dated 30.10.18, 16 p.
30. Norio M., Hiroshi I., Akinori T., Masaya O., Kiyofumi M., Toshinari A. Oxygen Adsorbent, Oxygen Manufacturing Equipment Using the Oxygen Adsorbent and Oxygen Manufacturing Method. Patent USA
N 10105678, MKI3 B01D 53/047, Bjul. N 10 dated 23.10.18, 12 p.
31. Bliss L.P., Atlas J.C., Halperin S.C. Portable oxygen concentrator. Patent USA N 7402193, MKI3 B01D 53/053, N 11/099,783, Bjul. N 8 dated 22.07.08, 29 p.
32. Lukin V.D., Novosel'skij A.V. Tsiklicheskie Adsorbtsionnye Protsessy [Cyclic adsorption processes]. Leningrad: Khimiya, 1989, 254 p. (Rus.)
33. Jee J.G., Lee J.S., Lee C.H. Air Separation by a Small-scale Two-Bed Medical O2 PSA. Industrial & Engineering Chemistry Research, 2001, Vol. 40, Issue 16, pp. 3647-3658.
34. Appel W.S., Winter D.P., Sward B.K., Sugano M., Salter E., Bixby J.A. Portable Oxygen Concentration System and Method of Using the Same. Patent USA N 6691702, MKI3 B01D128/202.26, N 134868, Bjul.
N 12 dated 17.02.04, 24 p.
35. Park Y., Lee S., Moon J., Choi D., Lee C. Adsorption Equilibria of O-2, N-2, and Ar on Carbon Molecular Sieve and Zeolites 10X, 13X, and LiX. Journal of Chemical and Engineering Data, 2006, Vol.
51, Issue 3, pp. 1001-1008.
36. Yang R.T. Adsorbents: Fundamentals and Applications. New Jersey, 2003, 410 p.
37. Akulinin E.I., Dvoretsky D.S., Simanenkov S.I., Ermakov A.A. Sovremennye Tendentsii po Umen'she-niyu Ehnergozatrat Kisloroddobyvayushchikh Ustanovok Korotkotsiklovoj Beznagrevnoj Adsorbtsii
[Current trends to reduce the energy consumption of oxygen-prodution units of pressure swing absorption]. Vestnik Tambovskogo gosudarstvennogo tekhnicheskogo universiteta, 2008, Vol. 14, Issue 3. pp.
597-601. (Rus.)
38. Akulinin E.I., Gladyshev N.F., Dvoretsky D.S., Dvoretsky S.I. Sposoby Polucheniya Blochnyh Tseolitovykh Adsorbentov dlya Osushchestvleniya Processov Korotkotsiklovoj Adsorbcii [Methods of
obtaining block zeolite adsorbents for the implementation of pressure swing adsorption processes]. Vestnik Kazanskogo tekhnologicheskogo universiteta, 2015, Vol. 18, Issue 15, pp. 122-125. (Rus.)
39. Akulov A.K. Modelirovanie Razdeleniya Binarnykh Gazovykh Smesej Metodom Adsorbtsii s Koleblyushchimsya Davleniem. Diss. dokt. tekh. nauk [Simulation of separating binary gas mixtures by
adsorption method with oscillating pressure]. St. Petersberg, 1996. 304 p. (Rus.)
40. Shokroo E., Farsani D., Meymandi H., Yado-liahi N. Comparative Study of Zeolite 5A and Zeolite 13X in Air Separation by Pressure Swing Adsorption. Korean Journal of Chemical Engineering, 2016,
Vol. 33 (4), pp. 1391-1401.
41. Bhatt T., Storti G., Rota R. Detailed Simulation of Dual-reflux Pressure Swing Adsorption Process. Chemical Engineering Science, 2015, Vol. 122, pp. 34-52.
42.Nikolaidis G., Kikkinides E., Georgiadis M. Modelling and Simulation of Pressure Swing Adsorption (PSA) Processes for Post - combustion Carbon Dioxide (CO2) Capture from Flue Gas. Computer Aided
Chemical Engineering, 2015, Vol.37, pp. 287-292.
43. Khajuria H., Pistikopolous N. Integrated Design and Control of Pressure Swing Adsorption Systems. 21 st European Symposium on Computer Aided Process Engineering - ESAPE 21, 2011, Vol. 29.
44. Swernath S., Searcy K., Rezaei F., Labreche Y., Lively R., Reallf M., Kawajiri Y. Optimization and Technoeconomic Analysis of Rapid Temperature Swing Adsorption Process for Carbon Capture from
Coal-Fired Power Plant. Computer Aided Chemical Engineering, 2015, Vol. 36, pp. 253-278.
45. Silva B., Solomon I., Ribeiro A., Lee U., Hwang Y., Chang J., Loureiro J., Rodrigues A. H2 Purification by Pressure Swing Adsorption Using CuBTC. Separation and Purification Technology, 2013,
Vol. 118, pp. 744-756.
46. Wurzbacher J., Gebald C., Brunner S., Steinfeld. Heat and Mass Transfer of Temperature-vacuum Swing Desorption for CO2 Capture from Air. Chemical Engineering Journal, 2016, Vol. 283, pp.
47. Dantas T., Luna F., Silva I., Torres A., Aze-vedo D., Rodrigues A., Moreira R. Carbon Dioxide-nitrogen Separation through Pressure Swing Adsorption. Chemical Engineering Journal, 2011, Vol. 172,
pp. 698-704.
48. Songolzadeh M., Soleimani M., Ravanchi M. Using Modified Avrami Kinetic and Two Component Isotherm Equation for Modeling of CO2/N2 Adsorption over a 13X Zeolite Bed. Journal of Natural Gas
Science and Engineering, 2015, Vol. 27.
49. Jain S., Moharir A., Li P., Wozny G. Heuristic Design of Pressure Swing Adsorption: a Preliminary Study. Separation and Purification Technology, 2003, Vol. 33(1), pp. 25-43.
50. Khajuria H, Pistikopoulos E. Dynamic Modeling and Explicit/Multi - parametric MPC Control of Pressure Swing Adsorption Systems. Journal of Process Control, 2011, Vol. 21, pp. 151-163.
51. Santos J.C., Portugal A.F. Magalhaes F.D., Mendes A. Simulation and Optimization of Small Oxygen Pressure Swing Adsorption Units. Industrial & Engineering Chemistry Research, 2004, Vol. 43, pp.
52. Rao V.R., Farooq S., Krantz W.B. Design of a Two-step Pulsed Pressure-swing Adsorption-based Oxygen Concentrator. AIChE Journal, 2010, Vol. 56, Issue 2, pp. 354-370.
53. Beeyani1a A.K., Singh K., Vyasa R.K., Kumar S., Kumar S. Parametric Studies and Simulation of PSA Process for Oxygen Production from Air. Polish Journal
of Chemical Technology, 2010, Vol. 12, Issue 2, pp. 18-28.
54. Santos J.C., Cruz P., Regala T., Magalhaes F.D., Mendes A. High-purity Oxygen Production by Pressure Swing Adsorption. Industrial & Engineering Chemistry Research, 2007, Vol. 46, pp. 591-599.
55. Wu C., Vemula R., Kothare M., Sircar S. Experimental Study of a Novel Rapid Pressure-swing Adsorption Based Medical Oxygen Concentrator: Effect of the Adsorbent Selectivity of N2 over O2.
Industrial & Engineering Chemistry Research, 2016, Vol. 55, Issue 16, pp. 4676-4681. doi: 10.1021/acs.iecr.5b04570
56. Dubinin M.M. Adsorbtsiya i Poristost' [Adsorption and porosity]. Uchebnoe posobie. Moscow: Izd-vo VAKHZ, 1972, 124 p. (Rus.)
57. Kel'cev N.V. Osnovy Adsorbtsionnoj Tekhniki [Basics of adsorption technology]. Moscow: Khimiya, 1984, 592 p. (Rus.)
58. Ruthven D.M. Principles of Adsorption and Adsorption processes. New York: John Wiley and Sons, 1984.
59. Akulinin E.I., Ishin A.A., Skvortsov S.A., Dvoretsky D.S., Dvoretsky S.I. Mathematical Modeling of Hydrogen Production Process by Pressure Swing Adsorption Method. Advanced Materials &
Technologies, 2017, Issue 2, pp. 38-49. (Rus.)
60. Ishin A.A. Matematicheskoe Modelirovanie i Upravlenie Pprotsessom Polucheniya Vodoroda Metodom Adsorbtsionnogo Razdeleniya Gazovoj Smesi [Mathematical modeling and control of hydrogen-obtaining
process by the method of adsorption separation of the gas mixture]. Diss. kand. tekh. nauk. Tambov, 2017. 152 p. (Rus.)
61. Jeong-Geun Jee, Min-Bae Kim, Chang-Ha Lee. Adsorption Characteristics of Hydrogen Mixtures in a Layered Bed: Binary, Ternary, and Five-component Mixtures. Industrial & Engineering Chemistry
Research, 2001, Vol. 40, pp. 868-878.
62. Suzuki M. Adsorption Engineering. Tokyo: Kodansha, 1990.
63. Beloglazov I.N., Golubev V.O. Osnovy Rascheta Fil'tratsionnykh Protsessov [Basics of calculating filtration processes]. Saint Petersburg, 2002. (Rus.)
64. Kumar R.A., Fox V.G., Hartzog D.G., Larson R.E., Chen Y.C., Houghton P.A., Naheiri T. Versatile Process Simulator for Adsorptive Separations. Chemical Engineering Science, 1994, Vol. 49 (18), pp.
65. Gladkih B.A. Metody Optimizatsii i Issledovanie Operatsij dlya Bakalavrov Informatiki: Nelinejnoe i Dinamicheskoe Programmirovanie [Optimization methods and operation research for bachelors of
computer science: Nonlinear and dynamic programming]. Tomsk, 2009, 263 p. (Rus.) | {"url":"https://cyberleninka.ru/article/n/the-study-of-cyclic-adsorption-air-separation-and-oxygen-concentration-processes","timestamp":"2024-11-07T15:15:23Z","content_type":"application/xhtml+xml","content_length":"147970","record_id":"<urn:uuid:fd61c6ba-b3e1-446c-b1bd-38d4b22059cc>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00315.warc.gz"} |
Valetta6789 - Speedrun.com
0m 58s
0m 50s
0m 53s
0m 43s
0m 57s
0m 45s
4m 16s
1m 21s
0m 43s
1m 00s
0m 55s
0m 57s
3m 41s
0m 50s
0m 59s
0m 54s
0m 58s
1m 10s
1m 06s
1m 14s
1m 13s
0m 59s
0m 35s
0m 57s
0m 49s
0m 53s
1m 27s
0m 47s
0m 47s
0m 36s
0m 42s
0m 27s
0m 40s
0m 49s
0m 59s
0m 46s
1m 37s
0m 52s | {"url":"https://www.speedrun.com/users/Valetta6789","timestamp":"2024-11-04T08:30:38Z","content_type":"text/html","content_length":"374142","record_id":"<urn:uuid:f95262e3-294f-41d7-8e4d-fb1425df8c69>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00671.warc.gz"} |