content stringlengths 86 994k | meta stringlengths 288 619 |
|---|---|
l a
Section: Scientific Foundations
Flow control and shape optimization
Participants : Charles-Henri Bruneau, Angelo Iollo, Iraj Mortazavi, Michel Bergmann.
Flow simulations, optimal design and flow control have been developed these last years in order to solve real industrial problems : vortex trapping cavities with CIRA (Centro Italiano Ricerche
Aerospaziali), reduction of vortex induced vibrations on deep sea riser pipes with IFP (Institut Français du Pétrole), drag reduction of a ground vehicle with Renault or in-flight icing with
Bombardier and Pratt-Wittney are some examples of possible applications of these researches. Presently the recent creation of the competitiveness cluster on aeronautics, space and embedded systems
(AESE) based also in Aquitaine provides the ideal environment to extend our applied researches to the local industrial context. There are two main streams: the first need is to produce direct
numerical simulations, the second one is to establish reliable optimization procedures.
In the next subsections we will detail the tools we will base our work on, they can be divided into three points: to find the appropriate devices or actions to control the flow; to determine an
effective system identification technique based on the trace of the solution on the boundary; to apply shape optimization and system identification tools to the solution of inverse problems found in
object imaging and turbomachinery.
Control of flows
There are mainly two approaches: passive (using passive devices on some specific parts that modify the shear forces) or active (adding locally some energy to change the flow) control.
The passive control consists mainly in adding geometrical devices to modify the flow. One idea is to put a porous material between some parts of an obstacle and the flow in order to modify the shear
forces in the boundary layer. This approach may pose remarkable difficulties in terms of numerical simulation since it would be necessary, a priori, to solve two models: one for the fluid, one for
the porous medium. However, by using the penalization method it becomes a feasible task [48] . This approach has been now used in several contexts and in particular in the frame of a collaboration
with IFP to reduce vortex induced vibrations [49] . Another technique we are interested in is to inject minimal amounts of polymers into hydrodynamic flows in order to stabilize the mechanisms which
enhance hydrodynamic drag.
The active approach is addressed to conceive, implement and test automatic flow control and optimization aiming mainly at two applications : the control of unsteadiness and the control and
optimization of coupled systems. Implementation of such ideas relies on several tools. The common challenges are infinite dimensional systems, Dirichlet boundary control, nonlinear tracking control,
nonlinear partial state observation.
The bottom-line to obtain industrially relevant control devices is the energy budget. The energy required by the actuators should be less than the energy savings resulting from the control
application. In this sense our research team has gained a certain experience in testing several control strategies with a doctoral thesis (E. Creusé) devoted to increasing the lift on a dihedral
plane. Indeed the extension of these techniques to real world problems may reveal itself very delicate and special care will be devoted to implement numerical methods which permit on-line computing
of actual practical applications. For instance the method can be successful to reduce the drag forces around a ground vehicle and a coupling with passive control is under consideration to improve the
efficiency of each control strategy.
System identification
We remark that the problem of deriving an accurate estimation of the velocity field in an unsteady complex flow, starting from a limited number of measurements, is of great importance in many
engineering applications. For instance, in the design of a feedback control, a knowledge of the velocity field is a fundamental element in deciding the appropriate actuator reaction to different flow
conditions. In other applications it may be necessary or advisable to monitor the flow conditions in regions of space which are difficult to access or where probes cannot be fitted without causing
interference problems.
The idea is to exploit ideas similar to those at the basis of the Kalman filter. The starting point is again a Galerkin representation of the velocity field in terms of empirical eigenfunctions. For
a given flow, the POD modes can be computed once and for all based on Direct Numerical Simulation (DNS) or on highly resolved experimental velocity fields, such as those obtained by particle image
velocimetry. An instantaneous velocity field can thus be reconstructed by estimating the coefficients ${a}_{i}\left(t\right)$ of its Galerkin representation. One simple approach to estimate the POD
coefficients is to approximate the flow measurements in a least square sense, as in [64] .
A similar procedure is also used in the estimation based on gappy POD, see [79] and [83] . However, these approaches encounter difficulties in giving accurate estimations when three-dimensional
flows with complicated unsteady patterns are considered, or when a very limited number of sensors is available. Under these conditions, for instance, the least squares approach cited above (LSQ)
rapidly becomes ill-conditioned. This simply reflects the fact that more and more different flow configurations correspond to the same set of measurements.
Our challenge is to propose an approach that combines a linear estimation of the coefficients ${a}_{i}\left(t\right)$ with an appropriate non-linear low-dimensional flow model, that can be readily
implemented for real time applications.
Shape optimization and system identification tools applied to inverse problems found in object imaging and turbomachinery
We will consider two different objectives. The first is strictly linked to the level set methods that are developed for microfluidics. The main idea is to combine different technologies that are
developed with our team: penalization methods, level sets, an optimization method that regardless of the model equation will be able to solve inverse or optimization problems in 2D or 3D. For this we
have started a project that is detailed in the research program. See also [55] for a preliminary application.
As for shape optimization in aeronautics, the aeroacoustic optimization problem of propeller blades is addressed by means of an inverse problem and its adjoint equations. This problem is divided into
three subtasks:
i) formulation of an inverse problem for the design of propeller blades and determination of the design parameters ii) derivation of an aeroacoustic model able to predict noise levels once the blade
geometry and the flow field are given iii) development of an optimization procedure in order to minimize the noise emission by controlling the design parameters.
The main challenge in this field is to move from simplified models [69] to actual 3D model. The spirit is to complete the design performed with a simplified tool with a fully three dimensional
inverse problem where the load distribution as well as the geometry of the leading edge are those provided by the meridional plane analysis [78] . A 3D code will be based on the compressible Euler
equations and an immersed boundary technique over a cartesian mesh. The code will be implicit and parallel, in the same spirit as what was done for the meridional plane. Further development include
the extension of the 3D immersed boundary approach to time-dependent phenomena. This step will allow the designer to take into account noise sources that are typical of internal flows. The task will
consist in including time dependent forcing on the inlet and/or outlet boundary under the form of Fourier modes and in computing the linearized response of the system. The optimization will then be
based on a direct approach, i.e., an approach where the control is the geometry of the boundary. The computation of the gradient is performed by an adjoint method, which will be a simple "byproduct"
of the implicit solver. The load distribution as well as the leading edge geometry obtained by the meridional plane approach will be considered as constraints of the optimization, by projection of
the gradient on the constraint tangent plane. These challenges will be undertaken in collaboration with Politecnico di Torino and EC Lyon. | {"url":"https://radar.inria.fr/report/2011/mc2/uid15.html","timestamp":"2024-11-14T04:18:50Z","content_type":"text/html","content_length":"47122","record_id":"<urn:uuid:1ebed69d-ba7a-4510-a366-05f902511d05>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00812.warc.gz"} |
The Grand Perspective
This week's contestants, Destro and Cobra Commander, consider whether there can be any critical experiments in science. That is, until their boss catches them.
There can be no critical experiments in science, if you believe in structural realism.
There can't be any critical experiment, no matter what your theory, if you ask me.
Well, that depends on what you mean by critical experiment.
So what do you mean then?
I mean an experiment that, if it comes out a certain way, proves your theory wrong. An example would be the 1919 test of Einstein's theory of relativity by observing how far the distant starlight
was deflected by the curvature of spacetime around the Sun. If there was no deflection, Einstein would have been proven wrong. That's a critical experiment.
So it sounds like critical experiments in general have to be defined counterfactually.
Hmm, I never thought of it that way, but it would be an interesting thing to explore. What do you mean?
Well, if a theory is successful, it has passed all the experiments, or at least all the trusted experiments, that have been done to test it. If they were critical experiments, then they can only be
described as such counterfactually, . I.e. had they come out differently, the theory would have had to be abandoned.
Okay, but that doesn't preclude critical tests of the theory in the future, does it?
How do you mean?
Well, the tests it has passed in the past may have to be counterfactually shown to be critical tests, but there is still the future to look forward to. The theory will be put to more tests. And if
it makes predictions that differ from reality then it ruled out. In that case, it was a critical experiment, not counterfactually.
Okay, yes. But still any passed test can only be defined as a critical test artifactually, right? Can we agree on that?
Well, it seems so right now, so I'll go with it. But it may happen in the future that something comes to light that would rule it out. Moving on.
Point noted. So, what was that you were saying about structural realism? I'm afraaid I don't know quite what that is.
Oh, structural realism? It's a nifty way of looking at progress in science. You see, there is a strange tension in the progression of science. Our numerical predictions are getting more and more
precisely verified by experiment. Therefore, some reasonable people say that science is getting us closer and closer to the truth. Or, rather, that science gives us the truth, in ever more fine
grained detail.
Yeah, that seems right to me.
Ah, but then you have the problem of scientific revolutions. As Einstein put it, "no amount of experiments can prove me right, but a single one can prove me wrong". Now, what do you think he meant
by that?
Well, it sounds to me like he's talking about critical experiments.
That's right, I think he was talking about crucial experiments in the last clause. But the first clause is curious, isn't it? "no amount of experiments can prove me right."
He was a very humble man.
Well, I don't know about that. He also was trying to figure out if God had any choice in creating the universe…not so humble if you ask me.
Alright, alright. But he's just saying in that quote that his theory will never gain 100% precision. There will always be some uncertainty, so if certainty is your criterion for his theory being
"right", then you are out of luck. But that is a very stringent criterion for a theory to be considered "right". Clearly the theory is right, to the extent that your GPS unit has to use his theory
to account for your correct coordinates.
Okay, I see you point. That is one way of interpreting his statement, and now I see why you think he is being humble by saying so. His theory is obviously right to some extent, but he is taking the
high road but not claiming to be settled completely. Well, there's another way of interpreting his statement that you may want to hear.
Let's hear it.
Okay, so he's saying that no amount of experiments can prove him right, and that one can prove him wrong because he knows that someday his theory will be overturned. He has said as much elsewhere.
Right. But think about it: his theory was overturning Newton's in a sense, a system of laws that had been taken as gospel for hundreds of years. And yet a single experiment, the eclipse observation
of 1919, was sufficient to overturn it. This has happened time and time again in the history of science. A theory is thought to be "right" but then at some point an experiment proves it wrong. So,
if it can always be proven wrong at some point, then what does that say about it's chances of being right? Those chances are apparently zero.
Unless you have the correct theory of everything.
And that only makes sense if you are a unificationist, I think.
Nevermind. So, do you see the tension now? On the one hand it seems that are theories are getting us the truth, more and more precisely. Even when a theory is overthrown, its in a regime where the
old theory wasn't designed to tread anyway, and the mathematics are continuous by design. Quantum mechanics mathematically reduces to classical physics in the limit of h bar being very, very small.
I see that.
But on the other hand, there are radical revolutions of theories, in which the old theory is not continuous with the new. Newton's law of gravitation explained the motions of the planets in terms
of gravitational force, which emanated from the center of massive bodies and acted over a cosmic distance. In Einstein's theory of gravity, there is no gravitational force. There is only the local
curvature of spacetime that massive bodies follow along in the straightest line that they can. Entities such as gravitational forces that were the main players in one theory are completely absent
in another.
But their roles are still present. Something is still present that makes the planets move as they do.
In a sense, yes, I think I agree with that. Although a little nagging doubt thinks about Wheeler's book "Spacetime Physics" which says that the natural state of motion in GR is free-float, and it's
deviations from the geodesic path that need to be explained. But I'll leave that issue aside for now and just say that I agree with you. For this is what I think structural realism is all about.
Science gets at the truth in that we are discovering the true roles, the true structure of the universe. But we may, in individual scientific theories, be getting the players of those roles wrong.
So scientists are writing a play, and their first performance may have had a bad cast, but the screenplay is still great if we can only find the right actors to play the parts.
Something like that.
So back to your original point--how does this preclude critical experiments?
Well, to follow the play analogy, how can we ever really know if it's the actors that are bad, and not the script? If it's really hard to find an actor that does a role justice, is it a problem
with the actors or with the role?
What do you think?
Well, I don't know. There may not even be any fact of the matter. But for now, I will assume that there is a fact of the matter and keep running with the metaphor. The director can decide to tweak
the script to find a balance with the actor that he's got. Or, more radically, he may take that role out entirely.
But then you have to worry about whether its even the same play anymore.
That's true, depending on the role. If the play were Hamlet, and you took out the role of Hamlet, it seems clear that it's no longer the same play. But if the roles were Rosencrantz and
Guildenstern? Then it's not so clear. Perhaps it's the centrality of the role removed that determines to what extent the play has changed. But you agree that it's not an on/off thing?
Well, I don't know. If you change the main character, for whom the play is named, that seems pretty on/off. What if you took King Lear out of King Lear? Godot out of Waiting for Godot? The silence
out of "2:00 of silence"?
But on the other hand, if you change the other roles there seems to be varying amounts of grey area. What if you took Brutus out of Julius Ceasar? Or just the chorus?
Okay, so I'm having trouble seeing how this gets back to crucial experiments in structural realism. I got a little lost in the metaphor.
Fair enough. I can see how that might be the case. I am, after all addicted to metaphor, but that's a discussion for another day. If structural realism says that we have the roles correct, as
expressed in the explanatory and mathematical structure, but we get the details wrong about ontological entity filling those roles, then I suppose a critical experiment would be one that puts to
the test one of the roles in question. And if all we are ever changing in practice is the actors that fill the roles (gravitational force vs. curvature of spacetime) rather than the roles
themselves, then we are never doing critical experiments.
Wow, that seems pretty weird.
So if we never put roles on the chopping block, then maybe that means we've had all the roles all along. Which makes me worry that we are just filling out theories constrained by the very
structures of our brains.
Well, you are a worrier...I think that's a big leap.
How so?
Well, I'm not sure yet, but I'm trying to work that out. For one, we can't have had all the roles all along. There are new roles introduced by theories all the time.
How so?
Well, quarks, for instance. Higgs particles. Fields. We didn't even know atoms had a nucleus until the last 150 years, so how could the role of keeping the nucleus together been around before that?
That's a good point, but I'm not sure we're talking about the same level as being roles. We might be though. But maybe what I mean by roles is an explanans--quarks play the role of keeping together
something that otherwise shouldn't be together. If we didn't think like charges repel then we might never have needed the role that quarks play.
What about astronomical observations? What role is played by black holes, quasars, things we can observationally detect that we wouldn't have dreamed of before?
Well, I'm not sure. Perhaps you make a good point. So maybe my worry was misplaced after all.
I'll give you something to worry about! What is this insubordinance?! Get back to work, you slimes…this, I command!
Serpentor! I should have smelled you coming.
Destro, the tricky details of your thesis have to be done by showing when roles were introduced and by showing to what extent they were really put on the line. But that is a task for another day,
my friend.
Long live Cobra Commander! | {"url":"https://www.thegrandperspective.com/2009/04/","timestamp":"2024-11-08T08:15:10Z","content_type":"text/html","content_length":"149678","record_id":"<urn:uuid:6cb5817a-0174-465f-954b-ae1f5dd68afc>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00757.warc.gz"} |
Finding the Volume of a Cube Using Similarity
Question Video: Finding the Volume of a Cube Using Similarity Mathematics
Given that the volume of the smaller cube is 8 cubic feet, determine the volume of the larger cube.
Video Transcript
Given that the volume of the smaller cube is eight cubic feet, determine the volume of the larger cube.
First let’s recall how to find the volume of a cube. The volume is equal to 𝑒 cubed, where 𝑒 represents the edge length of the cube. We can see that the edge lengths of these cubes are 𝑥 feet for the
smaller cube and three 𝑥 feet for the larger cube.
There are two ways that we could approach this question. The first way is to use the fact that we know the volume of the smaller cube in order to calculate the value of 𝑥. We can then calculate the
side length of the larger cube and hence its volume.
As the volume of the smaller cube is eight cubic feet and the edge length is 𝑥 feet, we have the equation 𝑥 cubed is equal to eight. Finding the cube root of each side of this equation gives 𝑥 is
equal to the cubed root of eight, which is two.
Now we know the value of 𝑥, we can calculate the edge length of the second cube as it’s equal to three 𝑥. Three 𝑥 is equal to three multiplied by two, which is six. So the edge length of the second
cube is six feet.
The volume of the larger cube is therefore equal to its edge length cubed, six cubed, which is 216. Units for this volume are cubic feet. So that’s the first approach that we could take to this
question: calculating the value of 𝑥, the edge length of the smaller cube, directly.
The second approach doesn’t actually require us to find the value of 𝑥, but instead uses the relationship between the volumes of the two cubes. We know that the lengths of the two cubes are in the
ratio one to three as they are 𝑥 and three 𝑥 feet, respectively. Does this mean that the volumes are also in the ratio one to three?
Volume is calculated by multiplying three dimensions together, each of which are three times larger in the bigger cube compared to the smaller cube. Therefore, the overall volume isn’t three times
bigger. It’s three times three times three, or three cubed, times bigger. The volumes of the two cubes are therefore in the ratio one to 27.
We could find the volume of the larger cube by taking the volume of the smaller cube, eight cubic feet, and multiplying it by 27. Eight multiplied by 27 is 216, giving the same answer as we found
using the first method. If you’re going to use the second method, then just be very careful.
A really common mistake could be to assume that the volumes are in the same ratio as the lengths and, therefore, that the volume of the larger cube is just three times bigger than the volume of the
smaller cube.
This would give a volume of 24 cubic feet, which as we’ve seen is incorrect. The correct volume for the larger cube is 216 cubic feet. | {"url":"https://www.nagwa.com/en/videos/940152920750/","timestamp":"2024-11-12T10:39:09Z","content_type":"text/html","content_length":"244956","record_id":"<urn:uuid:826828f5-e342-4c65-b1ea-130dde975fa8>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00697.warc.gz"} |
FOAM Seminar
The FOAM Seminar, organised by computer scientists at the ILLC, features research on questions of a fundamental nature in computer science and AI, in research areas such as algorithms, optimisation,
data management, planning, knowledge representation, and multiagent systems. Talks are intended to be broadly accessible and pitched at the level you might find at a plenary talk of a relevant
conference (such as IJCAI, AAAI, KR, ICAPS, AAMAS, EC, PODS, LICS, STOC, FOCS, and SODA).
FOAM usually takes place on a Friday at 15:00. Talks are roughly 45 minutes long, followed by a brief discussion. Afterwards, you are invited to stay for a chat and a drink. Everyone is welcome to
If you want to receive automatic updates on our talks, you can either (1) use our RSS feed, (2) its cal version, or (3) subscribe to the our mailing list.
In theory, logic-based AI is explainable by design, since all inferences can be explained using logical proofs or counterexamples. In practice, which inferences performed by an automated reasoner | {"url":"https://events.illc.uva.nl/FOAM/","timestamp":"2024-11-13T09:52:50Z","content_type":"text/html","content_length":"38796","record_id":"<urn:uuid:22574337-4b1b-4834-a627-2374d9ea5146>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00130.warc.gz"} |
econstruction of
Home Index of Lectures << Prev Next >> PDF Version of this Page
Surface Tracing in 3D
Let me know
Copyright © by V. Kovalevsky, last update: October 19, 2011 what you think
Depth First Search
Euler Circuit
Spiral Tracing
Recognition of the Genus of the Surface
Economical Hoop Code
Filling the Interiors of Surfaces in 3D
The author has developed three algorithms for economical and exact encoding of surfaces in 3D spaces. He has compared them with the algorithm known as "Depth-First Search" of a graph.
Depth-First Search
It is well-known that the set of the facets of a surface can be considered as a graph: facets are the vertices and any two adjacent facets are connected by an edge of the graph. The aim of the
algorithm is to put all facets of the surface into a list. If a facet is adjacent to that previously put into the list, then only the differences of coordinates are saved in the list. Otherwise
coordinates are saved. The algorithm starts with an arbitrary facet, puts it into the stack ("last in first out") and starts a while-loop which is running while the stack is not empty. In the loop a
facet F is being popped form the stack. If it is not labeled as being already in the list then it is put into the list and labeled.. Then all facets adjacent to F are put into the stack. This
algorithm is rather simple and fast, but it is not economical: It needs on an average 2.5 bytes per facet even if sequences of adjacent facets are coded by differences of coordinates. The sequences
are mostly too short.
Euler Circuit
The author has developed an algorithm which finds the Euler circuit of the adjacency graph of the facets. The algorithm uses a directed graph (digraph) with the aim to make the number of edges as
small as possible. The Euler circuit is a closed sequence of facets and the 0- or 1-cells in between containing each 0- or 1-cell only once (a facet can be contained twice). The Euler circuit can be
encoded very economically by the differences of the coordinates of two adjacent facets. Experiments have showed that this code needs on an average about 0.75 bytes per facet.
Digraph of the adjacencies in the surface of a small cube Euler circuit of the surface of a small cube
Spiral Tracing
This algorithm takes an arbitrary facet of the surface S as a starting one and labels its closure. This is the "germ-cell" of the set L of the labeled facets. The algorithm traces the opening
boundary (see lecture "Introduction") of L and labels all "simple" facets. A facet is simple if the intersection of its boundary with the boundary of L is not empty and connected while the
intersection of its boundary with the complement of L is also not empty. The trace looks like a spiral. This method is less efficient than that of Euler circuit: it needs between 1.0 and 2.0 bytes
per facet depending on the number of tunnels in the surface. It is, however, interesting since it defines the genus (number of tunnels) of the surface.
The idea of the tracing:
The set L of labeled simple facets remains always topologically equivalent to a disk (a 2-ball).
Recognition of the Genus of the Surface
The set of simple facets (grey) composes The set of remaining non-simple cells (white)
a topological disk (2-ball) carries information abou the genus.
Economical Hoop Code
This is the most economical method. It finds and traces facets lying in the boundary of a two-dimensional slice of the body (solid line in the figure below). This boundary is called a "hoop". Another
"auxiliary hoop" (dashed line) is used for finding the starting facets of the main hoops.
It is sufficient to save the coordinates of a single starting facet of a hoop. For all other facets only the differences of their coordinates are encoded. The efficiency of this method is between
0.21 and 0.5 byte per facet for bodies whose surface has no singularities. However, for bodies with singular surfaces the efficiency can be rather bad. This is the main drawback of this method.
Exact Reconstruction of Images by Filling the Interiors of Surfaces in 3D
The algorithm is similar to that presented in Lecture 2 for filling interiors of curves in 2D. It is necessary to label the facets (2-cells) of the given surface whose normals are parallel (or
anti-parallel) to one of the coordinate axes. Then the algorithms scans all rows of the grid, that are parallel to the chosen axis, and counts the labeled facets. The filling begins at each odd count
and ends at each even count. This algorithm can exacltly reconstruct the 3D image from the codes of the surfaces.
The surface The pseudocode
Choose a coordinate axis of the Cartesian space, e.g. the X-axis.
Label all (n−1)-cells of M whose normal is parallel to X.
for ( each row R parallel to X )
{ fill=FALSE;
for ( each n-cell C in the row R )
{ if ( the 1st (n-1)-side of C is labeled )
fill = NOT fill;
if ( fill==TRUE ) C = foreground;
else C = background;
[1] V. Kovalevsky, A Topological Method of Surface Representation, In: Bertrand, G. et all (eds.), Lecture Notes in Computer Science, vol. 1568, Springer 1999, pp. 118-135.
[2] V. Kovalevsky: Geometry of Locally Finite Spaces, Monograph, Berlin 2008.
If you are interested in details see the reference [1] and take the book: 'Geometry of Locally Finite Spaces'.
Download: Print version
Last update: October 24, 2011 | {"url":"http://kovalevsky.de/Topology/SurfaceTrace3D_e.htm","timestamp":"2024-11-02T23:47:39Z","content_type":"text/html","content_length":"11817","record_id":"<urn:uuid:af6d2847-2cba-464a-8f49-288b35861055>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00598.warc.gz"} |
elven priestess
(White Aura) A blue-robed priestess of the divine order blesses you with a raise
d hand.
A solitary elf stands here in quiet reflection.
<824/824hp 423/423mana 265/265mv 307412tnl | W> l pr
A startlingly beautiful elven maid dressed in a flowing robe the colour
of the summer sky stands here near the altar. She is deep in thought but
when she notices you a smile touches her lips. Raising her hand in a
blessing, she greets you warmly. The elf is a priestess of the divine
order, a faith that worships goodness in its entirety without individual
Gods being followed. It is an ancient religion that is not favoured by
other elvish nations who consider it almost a blasphemy to ignore the
different Gods and their shrines.
An elven priestess is in perfect condition.
An elven priestess is using:
<worn on body> (Glowing) some white cotton robes
<wielded> (Humming) a sceptre topped with sphere
says blue robed. and robes with she color of the sky. yet wears white robes | {"url":"https://solace.i-read-you.ru/forum/index.php?topic=5445.msg32039","timestamp":"2024-11-12T12:44:50Z","content_type":"application/xhtml+xml","content_length":"46693","record_id":"<urn:uuid:80e4682b-ef50-4538-99ca-dcd5d8712d96>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00788.warc.gz"} |
Subsystem hypergraph product (SHP) code
Subsystem hypergraph product (SHP) code[1,2]
Also known as Subsystem generalized Shor code, Bacon-Casaccino subsystem code.
A CSS subsystem version of the generalized Shor code that has the same parameters as the subspace version, but requires fewer stabilizer measurements, resulting in a simpler error recovery routine.
The code can also be thought of as a subsystem version of an HGP code because two such codes reduce to an HGP code upon gauge fixing [2; Sec. III]. The code can be obtained from a generalized Shor
code by removing certain stabilizers that do no affect the code distance.
The \(X\)- and \(Z\)-type gauge generators of this CSS \([[n_1n_2,k_1k_2,\min(d_1,d_2)]]\) code correspond to rows of the following two respective matrices, \begin{split} G_{X}&=H_{1}\otimes I_{n_
{2}}\\ G_{Z}&=I_{n_{1}}\otimes H_{2}~, \end{split} \tag*{(1)} where \(H_{1,2}\) are the parity-check matrices of two binary linear codes, \(C_1 = [n_1, k_1, d_1]\) and \(C_2 = [n_2, k_2, d_2]\) [2].
Page edit log
Cite as:
“Subsystem hypergraph product (SHP) code”, The Error Correction Zoo (V. V. Albert & P. Faist, eds.), 2023. https://errorcorrectionzoo.org/c/subsystem_quantum_parity
Github: https://github.com/errorcorrectionzoo/eczoo_data/edit/main/codes/quantum/qubits/subsystem/qldpc/homological/subsystem_quantum_parity.yml. | {"url":"https://errorcorrectionzoo.org/c/subsystem_quantum_parity","timestamp":"2024-11-04T12:09:02Z","content_type":"text/html","content_length":"22861","record_id":"<urn:uuid:7918ed55-8cd9-4142-ba60-d8203da32f3b>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00638.warc.gz"} |
Graph embedding through random walk for shortest paths problems
We present a new probabilistic technique of embedding graphs in Z ^d, the d-dimensional integer lattice, in order to find the shortest paths and shortest distances between pairs of nodes. In our
method the nodes of a breath first search (BFS) tree, starting at a particular node, are labeled as the sites found by a branching random walk on Z^d. After describing a greedy algorithm for routing
(distance estimation) which uses the ℓ[1] distance (ℓ[2] distance) between the labels of nodes, we approach the following question: Assume that the shortest distance between nodes s and t in the
graph is the same as the shortest distance between them in the BFS tree corresponding to the embedding, what is the probability that our algorithm finds the shortest path (distance) between them
correctly? Our key result comprises the following two complementary facts: i) by choosing d = d(n) (where n is the number of nodes) large enough our algorithm is successful with high probability, and
ii) d does not have to be very large - in particular it suffices to have d = O(polylog(n)). We also suggest an adaptation of our technique to finding an efficient solution for the all-sources
all-targets (ASAT) shortest paths problem, using the fact that a single embedding finds not only the shortest paths (distances) from its origin to all other nodes, but also between several other
pairs of nodes. We demonstrate its behavior on a specific non-sparse random graph model and on real data, the PGP network, and obtain promising results. The method presented here is less likely to
prove useful as an attempt to find more efficient solutions for ASAT problems, but rather as the basis for a new approach for algorithms and protocols for routing and communication. In this approach,
noise and the resulting corruption of data delivered in various channels might actually be useful when trying to infer the optimal way to communicate with distant peers.
Original language English
Title of host publication Stochastic Algorithms
Subtitle of host publication Foundations and Applications - 5th International Symposium, SAGA 2009, Proceedings
Pages 127-140
Number of pages 14
State Published - 1 Dec 2009
Externally published Yes
Event 5th Symposium on Stochastic Algorithms, Foundations and Applications, SAGA 2009 - Sapporo, Japan
Duration: 26 Oct 2009 → 28 Oct 2009
Publication series
Name Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume 5792 LNCS
ISSN (Print) 0302-9743
ISSN (Electronic) 1611-3349
Conference 5th Symposium on Stochastic Algorithms, Foundations and Applications, SAGA 2009
Country/Territory Japan
City Sapporo
Period 26/10/09 → 28/10/09
• Graph embedding
• Shortest paths problem
ASJC Scopus subject areas
• Theoretical Computer Science
• General Computer Science
Dive into the research topics of 'Graph embedding through random walk for shortest paths problems'. Together they form a unique fingerprint. | {"url":"https://cris.bgu.ac.il/en/publications/graph-embedding-through-random-walk-for-shortest-paths-problems","timestamp":"2024-11-08T06:11:50Z","content_type":"text/html","content_length":"62819","record_id":"<urn:uuid:38a6f601-a1f6-43ec-9d80-0f8084c5dfae>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00330.warc.gz"} |
3, Mathematics 3.G.2
View Transcript
3.G.A.2 Transcript
This is Common Core State Standards support video in mathematics. The standard is 3.G.A.2; this standard states partition shapes into parts with equal areas. Express the area of each part as a unit
fraction of the whole. For example, partition a shape into four parts with equal area, and describe the area of each part as one-fourth of the area the shape. There is a predecessor to this standard
back in the second grade—Standard 2.G.A.3—that talks about partitioning circles and rectangles into two, three, or four equal shares. It also pushes on the idea of recognizing that equal shares of
identical wholes need not have the same shape. What's interesting is that although these standards are in the Geometry Domain, they both lay the early foundation for fractions. In that Number and
Operations for Fractions Domain, there's a footnote that states that grade three expectations are limited to fractions with denominators two, three, four, six, and eight. So we'll take that into
consideration and apply just those numbers of equal parts to this standard. So we will just deal with partitioning these shapes into two, three, four, six, and eight equal parts. This standard talks
about partitioning shapes into equal parts with equal areas, but what shapes should we use? Well back in first grade, we have Standard 1.G.A.2 that talks about composing two-dimensional shapes, and
they specifically mention rectangles, squares, trapezoids, triangles, half-circles, and quarter-circles.
And in second grade, we have Standard 2.G.A.1 that talks about recognizing and drawing shapes having specified attributes, and they specifically mentioned triangles, quadrilaterals, pentagons,
hexagons, and cubes. The Standard 3.G.A.1, which is a predecessor to this standard, talks about understanding that shapes in different categories may share attributes and specifically mentioned are
rhombuses, rectangles, and squares. So we look at the big picture and look at all three of these together to make our decision as far as well what shapes should be used per standard 3.G.A.2. Now keep
in mind that we're talking about parts with equal areas, so that should automatically eliminate cubes because that's three dimensional not two-dimensional, then trapezoids and pentagons—those will be
very difficult to divide up into equal parts. So we should go ahead and eliminate those, and even though they're the simplest of the polygons, triangles really won't work well either. Let's look at
why. If you take say a scalene triangle, it'd be very difficult for students to even split this up into two equal parts.
Now again, how are students going to know at this level that these two are equal areas? Let's take a right triangle. The same situation; let's say I wanted to split this up into three equal areas.
Again it would take a lot more knowledge and skills to be able to prove or determine that all three of these are equal parts. What about an equilateral triangle? It'll work to some extent; we could
do something like this, and these would be equal areas, but again that would be it. We'd be limited to being able to partition this just into two equal parts and not any further. Based on our
investigation, it appears that rectangles, squares, circles, and hexagons would be the best shapes to use at this level for this standard.
At the same grade level, there's Standard 3.MD.B4, and it talks about measuring lengths using rulers marked with halves and fourths of an inch, and it's very unlikely that you'll find
commercially-made rulers that are marked off in just halves and fourths, so you might have to create your own but, the way that this standard ties into this one is that you can have students actually
create, for example, rectangles and measure the lengths that you need. So for example here, they could create a rectangle that's say 4 inches long; then it wouldn't be that much trouble to mark off
and figure out exactly how to divide this up into four equal parts. Now keep in mind that we're talking about areas; we're talking about all of the inside here of these rectangles. So again it's not
just the rectangle, it’s the area in this rectangle that we need to be concerned about. And again we measured, and we know that each of these smaller rectangular areas would be one-fourth the area
the original. Now what we can do is use the rulers and mark it off to where we would cut each one of those smaller rectangles in half, and so now we have eight equal parts.
So we would know that each of these smaller rectangular areas is one-eighth the area the original. Students could create a rectangle that is say 3 inches long and again be very easy to measure off
and split this up into three equal areas. And again a reminder, it's not just the rectangle, it's the rectangular area. So each of these small rectangular areas again is one-third the area of the
original. And just like we did before, we can measure off and cut each one of those smaller ones in half, so now we would know that each of those small rectangular areas is one-sixth the area of the
original. Now there's other ways to do this. We don't have to do vertical lines; we can also do horizontals so this should be a slightly different approach to cutting this up to where each of the
smaller rectangle areas is one-fourth the area the original. Our challenge here would be how to partition this to be shapes that are all one-eighth of the original area. You should have some students
that would come up with this solution pretty quickly. Then, hey just, find the midpoint and just draw a vertical bar, and bingo, those four equal parts have become eight equal parts.
At this point, we've been creating the rectangles, but if the focus of the standard is moreon the act of partitioning, students can do this by folding rectangular strips to attain the desired
fractional points for the regional area. So, for example, you could take a piece of paper, a rectangular strip, and fold it in half to get your two equal parts. Then of course you could fold it in
half again to get four equal parts, and then in half again to get eight equal parts. They can also do a trifold to get three equal parts, and then of course fold that in half either horizontally or
vertical to get six equal areas. Students can also fold circles to attain the desired fractional parts for the original area—halves, fourths, and eighths would be the best to work with here. So they
could take this circle, fold it in half, and then when they unfold it, they'll have a crease where you have two equal parts. Then you can take the folded semicircle, and fold it again, and when you
unfold it it'll have creases where you'll have four equal areas. And then you can take that and fold it in half again, and you'll have a circle where you have creases that will have divided this up
into eight equal areas.
Now so far, we've been measuring and doing vertical and horizontal lines to do thepartitioning, but in the case of rectangles, you can also use a diagonal. So if I draw a diagonal like so, I've
created two smaller triangular areas that are each one half the area of the original rectangle. Now I can take this a step further and draw a second diagonal, and I have divided this up to where each
of the smaller triangular areas is one-fourth the area of the original, but this is third grade, and here's something that might be a problem. Even though it can be shown algebraically that each of
these four triangles is equal in area, the fact that the two pairs have different shapes would be confusing to third-grade students. They might have a hard time seeing that this triangle here and
this triangle here both have the same area.
Partitioning rectangles with diagonals beyond cutting it up into two equal areas should be done with squares so that each of the resulting triangles is the same shape and size. Third-grade students
will then be able to see that they are equal in area. Then they could take it a step further and do something like this. You created eight smaller right triangles that are all going to be equal
areas. It makes sense that hexagons would be the best shape to partition in illustrating parts that are one-sixth the area of the original. Of course, all students need to do is to draw diagonals to
connect the nonconsecutive vertices. And so here, we have our six equal parts—six equal areas. Students can then use this partitioning to design a more accurate partition for parts that are all
one-third the area of the original. All they have to do is knock out part of those diagonals, and they'll create something that looks like this. So now you have your hexagon pretty actually
partitioned to three parts that are all one-third the area the original. Although when you look at this your mind plays tricks; your eyesight kind of makes this look like it's a cube.
Let's go back and revisit what we've done. We did this particular example where each of the smaller rectangular areas was one-third the area of the original, but how would a third-grader take this up
a notch and prove that each of those three smaller shapes have equal areas? Well logic dictates that they have the same area if the figures are exactly the same shape and the same size, so all the
students would need to do to prove this informally at this level would be to simply take one of these smaller rectangles, and slide it over and you see that it fits exactly over another one; so I
know, hey, these have to have the exact same area because they're pretty much carbon copies of each other. Now let's say we have a rectangle and we've used a diagonal to cut it up into two equal
areas, so now the question is we're saying that each of these two smaller triangle areas is one-half the area the original. But as before, how would a third-grader prove that these two triangular
shapes do in fact have the same area?
Well let's go ahead and change the color of one of them so we can tell one from the other. Then we need to kinda slide one on the rectangles over, and so now whatwe need to do is we're going to have
to rotate this around, and then slide it over, and so there we have it. One fits exactly over the other. So again it's pretty obvious that these two have to have the same area, because they're
exactly the same shape and exactly the same size. Now although we're using an area context, we're really laying a foundation for the concept of congruency. What's interesting is that if you do a
search of the Common Core Standards in math, and you're looking for the word congruent, it doesn't appear until eighth grade.
So the Common Core Standards don't even address congruency until that point, eighth grade, and that seems a little bit late. And hopefully you'll see from these types of activities that you can lay
the foundation for congruency at a much earlier level. Again, even though this standard deals with area, we can use this context to lay the foundation for the concept of congruency. The main
adjustment that we need to make is to switch our focus to the polygons themselves instead of area, so in this case, we transition from the rectangular areas to the rectangles themselves. These two
rectangles are congruent if and only if they're the exact same shape and size. Now at this grade level, it would be sufficient to prove by just sliding one rectangle over the other to see if it fits
exactly. So we can do that. So now we see that, okay, one rectangle fits exactly over the other, so they are in fact congruent. If we concentrate on that simple idea of congruency as figures being
the exact shape and the exact same size, you can see how we can teach this concept by expanding on what we do for the standard. And using translations and rotations for the most part would be
sufficient to be able to do this proof at this level. As you can see the activities and the context, so we use for this standard really lend themselves and connect to the idea of congruency. So we
can't and we shouldn't wait until the eighth grade to teach this fundamental idea. Again, even though we're talking about area here, we're still laying the foundation for congruency, because all you
have to do is instead of thinking of the areas here, just think in this case of just the rectangles themselves. Then again, these two triangles are congruent if they're exactly the same shape and
size, and again that's very easy to prove by just being able to slide one and put it exactly over the other. And that's all pretty much what the idea congruency would involve at this level. So again
not impossible to do—you can lay this foundation and don't wait until eighth grade.
Here's another interesting observation, it's important to note that the Common Core State Standards in math do not specifically address fractions as part of a set rather than as part of the whole. It
doesn't appear. So the question is, well, when do we do this? When do we address fractions as part of a set? We can actually do this as part of this standard part of these activities. So let's say,
okay, go back to this example where we had our rectangle, and we partitioned this into four equal parts so each in the smaller rectangular areas is one-fourth the area of the original, and the
original we can switch over and call the whole, which in this case would be our original rectangular area.
Again, the activities that we're doing for this standard can easily be adapted to establish a foundation for again interpreting fraction as part of a set. So we took, let's say, these four
rectangles; each one of these rectangles would be one-fourth of this set or group of four rectangles. So pretty easy transition. And in fact, what you can do is transition to where instead of just
these polygons, these rectangular figures, whatever, and use other objects, other manipulatives. For example, maybe you have some miniature cars. So now if I transition this over instead of having
four rectangles, I have four cars. Well each of these cars is one-fourth of this set or group of four cars. Or let's say you have some little model trucks, again each of these trucks is one-third of
this set of three trucks.
So again, the transitioning over to showing fractions as part of a set really would fit perfectly with the activities involved for this standard. If you look at the Standards for Mathematical
Practice, here's the first four, and I think we could reasonably conclude that we would be addressing the second one. Students would be reasoning abstractly and quantitatively. They would be
constructing viable arguments and critiquing the reasoning of others, and they would be modeling the mathematics. If you look at the last four of the Standards for Mathematical Practice, we could
pretty much assert that students addressing this standard would be using appropriate tools strategically, and they would be attending to precision. | {"url":"https://sedl.org/secc/common_core_videos/grade.php?action=view&id=602","timestamp":"2024-11-10T09:30:58Z","content_type":"text/html","content_length":"35648","record_id":"<urn:uuid:0ceb3800-e95a-4ee9-8d5c-ecc795a8422c>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00007.warc.gz"} |
Related to the edition of rotation via 2d gradients, i stumble upon a nice little logical issue: how to pick the shortest path form current value to target one when space is cyclic and not linear.
Indeed, rotations are cyclic, PI and PI*3 are not the same angles, ok, but you end up at the … | {"url":"https://polymorph.cool/tag/javascript/","timestamp":"2024-11-09T20:39:42Z","content_type":"text/html","content_length":"19783","record_id":"<urn:uuid:3d24fb4a-0add-4bb0-a91d-dad038c9b686>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00405.warc.gz"} |
Fractions Workbook for Grades 4-5 | K5 Bookstore
Summary: Math workbook with instruction and exercises covering addition & subtraction of fractions.
Format: PDF – download & print
Level: Grades 4 – 5
Pages: 75
Math Workbook: Fractions 1 (Grades 4-5)
Fractions 1 principally covers the addition and subtraction of fractions.
Some of the fractions topics covered are:
• Fraction terminology
• Mixed numbers
• Equivalent fractions
• Adding & subtracting like and unlike fractions
• Adding & subtracting mixed numbers
• Comparing fractions
This fractions workbook is divided into 16 sections. Each section begins with a bite-sized introduction to a topic with an example, followed by practice exercises including word problems. Answers are
in the back. The format is ideal for independent or parent-guided study. The Math Mammoth series of workbooks is highly recommended by K5 Learning! | {"url":"https://store.k5learning.com/fractions-1-workbook","timestamp":"2024-11-09T04:21:25Z","content_type":"text/html","content_length":"378860","record_id":"<urn:uuid:480035bb-f8a1-4af5-b76c-16270bc8269d>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00777.warc.gz"} |
Evaluate the following integrals:∫(x+1)2x2+3dx... | Filo
Evaluate the following integrals:
Not the question you're searching for?
+ Ask your question
Was this solution helpful?
Found 4 tutors discussing this question
Discuss this question LIVE for FREE
8 mins ago
One destination to cover all your homework and assignment needs
Learn Practice Revision Succeed
Instant 1:1 help, 24x7
60, 000+ Expert tutors
Textbook solutions
Big idea maths, McGraw-Hill Education etc
Essay review
Get expert feedback on your essay
Schedule classes
High dosage tutoring from Dedicated 3 experts
Practice questions from similar books
Practice more questions from Integrals
View more
Practice questions on similar concepts asked by Filo students
View more
Stuck on the question or explanation?
Connect with our Mathematics tutors online and get step by step solution of this question.
231 students are taking LIVE classes
Question Text Evaluate the following integrals:
Updated On Nov 28, 2022
Topic Integrals
Subject Mathematics
Class Class 12
Answer Type Text solution:1 Video solution: 1
Upvotes 132
Avg. Video Duration 13 min | {"url":"https://askfilo.com/math-question-answers/evaluate-the-following-integrals-displaystyle-int-x-1-sqrt","timestamp":"2024-11-04T15:50:40Z","content_type":"text/html","content_length":"454891","record_id":"<urn:uuid:e89fb76d-9b89-4d8c-8c79-6f9002406cc9>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00732.warc.gz"} |
What units are mils?
What units are mils?
A “mil” is a unit of thickness equal to one thousandth of an inch (. 001 inch). To convert mil to inches, take mil and divide by 1000.
How do you convert to Mils?
Inch to Mil Conversion Table
Inch [in] Mil [mil, Thou]
1 in 1000 mil, thou
2 in 2000 mil, thou
3 in 3000 mil, thou
5 in 5000 mil, thou
What is an mil?
Definition of mil (Entry 1 of 3) 1 : thousand found a salinity of 38.4 per mil. 2 : a monetary unit formerly used in Cyprus equal to ¹/₁₀₀₀ pound. 3 : a unit of length equal to ¹/₁₀₀₀ inch used
especially in measuring thickness (as of plastic films)
What is mil number?
Mil means the same as million. Zhamnov, 22, signed for $1.25 mil over three years.
Is 12 mil the same as 12 mm?
A mil is a measurement that equals one-thousandth of an inch, or 0.001 inch. One mil also equals 0.0254 mm (millimeter). Thus a mil is not the same thickness as a millimeter.
How many mils is a 360?
Degree to Mil Conversion Table
Degrees Mils (NATO)
35° 622.22 mil
36° 640 mil
37° 657.78 mil
38° 675.56 mil
Which is bigger mm or mil?
A mil is a measurement that equals one-thousandth of an inch, or 0.001 inch. One mil also equals 0.0254 mm (millimeter). Thus a mil is not the same thickness as a millimeter. The term “mil” is not an
abbreviation but a unit of measure.
What is mil full?
The definition of MIL is short for mother-in-law. An example of a MIL is your husband’s mother. abbreviation. 3. 1.
How thick is 45 mils?
0.045 inches
(A mil is a unit of measure where 1-mil is equal to 0.001 inches. So, a 45-mil roofing membrane is actually 0.045 inches thick; whereas a 60-mil membrane is 0.06 inches thick, and so on.)
What is thicker 6 mil or 10 mil?
A “mil” A mil is a measurement that equals one-thousandth of an inch, or 0.001 inch. Most human hair is one-thousandth of inch, or 0.001 inch. 10 mil plastic sheeting is therefore thicker than 6 mil.
What is the equivalent of 1 mil?
Mil. The mil is a unit of measure typically used in manufacturing and engineering for describing distance tolerances with high precision or for specifying the thickness of materials. One mil is equal
to one thousandth of an inch, or 10 -3 inches. The mil is also referred to as thou.
How thick is 30 mils?
›› Quick conversion chart of mils to mm. 1 mils to mm = 0.0254 mm. 10 mils to mm = 0.254 mm. 20 mils to mm = 0.508 mm. 30 mils to mm = 0.762 mm. 40 mils to mm = 1.016 mm. 50 mils to mm = 1.27 mm. 100
mils to mm = 2.54 mm. 200 mils to mm = 5.08 mm ››
How many inches is .1 mil at 100 yards?
Thus, 1 MIL at 100 yards is equal to 3.6” inches and 1 MIL at 100 meters is equal to 10 centimeters. At 100 meters, 1⁄10 of a mil equals .9999 centimeter. Practically speaking, 1⁄10 of a mil equals 1
centimeter at 100 meters. A mil is so large that it’s usually broken into tenths in order to make precise adjustments on your scope turret.
How to convert metric units of measurement?
All you need to make a click on the tab for which you want to perform measurements conversion
Very next,you ought to select the unit from the left drop down box for which you want to convert from and enter the value of this selected unit into
Then,you ought to select the unit from the right drop-down box for which you want to get conversions | {"url":"https://www.quadronmusic.com/what-units-are-mils/","timestamp":"2024-11-08T11:39:05Z","content_type":"text/html","content_length":"51180","record_id":"<urn:uuid:9c94dcd7-f23b-47b7-953f-7bed38419d0b>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00870.warc.gz"} |
Wavenumber-explicit regularity estimates on the acoustic single- and double-layer operators
We prove new, sharp, wavenumber-explicit bounds on the norms of the Helmholtz single- and double-layer boundary-integral operators as mappings from L ^2(∂Ω) → H ^1(∂Ω) (where ∂Ω is the boundary of
the obstacle). The new bounds are obtained using estimates on the restriction to the boundary of quasimodes of the Laplacian, building on recent work by the first author and collaborators. Our main
motivation for considering these operators is that they appear in the standard second-kind boundary-integral formulations, posed in L ^2(∂Ω) , of the exterior Dirichlet problem for the Helmholtz
equation. Our new wavenumber-explicit L ^2(∂Ω) → H ^1(∂Ω) bounds can then be used in a wavenumber-explicit version of the classic compact-perturbation analysis of Galerkin discretisations of these
second-kind equations; this is done in the companion paper (Galkowski, Müller, and Spence in Wavenumber-explicit analysis for the Helmholtz h-BEM: error estimates and iteration counts for the
Dirichlet problem, 2017. arXiv:1608.01035).
• math.AP
• 31B10, 31B25, 35J05, 35J25, 65R20
• Boundary integral equation
• Helmholtz equation
• Semiclassical
• Layer-potential operators
• High frequency
ASJC Scopus subject areas
• Analysis
• Algebra and Number Theory
Dive into the research topics of 'Wavenumber-explicit regularity estimates on the acoustic single- and double-layer operators'. Together they form a unique fingerprint. | {"url":"https://researchportal.bath.ac.uk/en/publications/wavenumber-explicit-regularity-estimates-on-the-acoustic-single-a","timestamp":"2024-11-07T19:20:32Z","content_type":"text/html","content_length":"66185","record_id":"<urn:uuid:4401c595-5c65-44bf-a8e1-9fddaeb85878>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00148.warc.gz"} |
MA283 Week 7
Moving between different bases
Welcome to Week 7.
This week we will start on Chapter 3 of the lecture notes, on linear transformations, eigenvectors and similarity. We will be continuing on the theme of bases, and a theme of this chapter will be to
explore how some bases are much better than others for describing a particular linear transformation – either to get some insight into its geometric behaviour or just to do calculations. As usual,
almost everything can be translated into the algebra of matrices.
In Lecture 13, we will discuss some more consequences of the Steinitz Exchange Lemma, and talk about how to use matrices to recognize a basis of R^n (or F^n for any field F).
In Lecture 14, we will discuss the row rank and column rank of a matrix, and show that they are equal. This will conclude our work on Chapter 2, and we will look ahead to Chapter 3, which considers
how to describe a linear transformation with respect to different bases.
Slides for Week 7.
Relevant sections of the lecture notes this week are Section 3.1 and Section 3.
Here is (an old version of) Lecture 13.
Here is (an old version of) Lecture 14.
Relevant sections of the lecture notes this week are Section 3.1 and Section 3.2. | {"url":"http://rkq.ie/teaching/ma283-linear-algebra/ma283-week-7/","timestamp":"2024-11-13T19:33:16Z","content_type":"text/html","content_length":"32582","record_id":"<urn:uuid:a65b9a55-d6f6-4a22-a551-f6c8d250a484>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00616.warc.gz"} |
Computing Numerical Expressions by Converting Radical into Rational Exponents
Question Video: Computing Numerical Expressions by Converting Radical into Rational Exponents Mathematics
What is one-eighth of (the seventh root of 8)¹⁴?
Video Transcript
What is one-eighth of the seventh root of eight all to the power of 14? Well we can rewrite this expression slowly. One-eighth of something means one-eighth times it. So we’ve got an eighth of the
seventh root of eight all to the power of 14.
Now I’m gonna express the seventh root of eight in a slightly different format. And that’s eight to the power of a seventh. Eight to the power of a seventh is the same as the seventh root of eight.
So that’s a general rule; we can express the 𝑏th root of something as 𝑥 to the power of one over 𝑏. But there’s another general rule; that 𝑥 to the power of 𝑎 all to the power of 𝑏 can be expressed
as 𝑥 to the 𝑎 times 𝑏.
Now this means that I can rewrite eight to the power of a seventh all to the power of 14 as eight to the power of a seventh times 14. And I can re-express 14 as 14 over one to make it clear that a
seventh times 14 or one over seven times 14 over one is the same as 14 over seven, which is two.
So we’ve ended up with an eighth times eight squared. Now remember eight squared means eight times eight.
And again I can express those as eight over one, so I’ve got one times eight times eight over eight times one times one. Now if I divide the top by eight, I get one; if I divide the bottom by eight,
I get one. So I’ve got one times one times eight over one times one times one, which is just eight.
So my answer is eight. | {"url":"https://www.nagwa.com/en/videos/748123470469/","timestamp":"2024-11-06T12:00:31Z","content_type":"text/html","content_length":"242378","record_id":"<urn:uuid:534b8496-097d-452b-8436-e203f82ba94f>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00220.warc.gz"} |
The Taub Faculty of Computer Science
The Taub Faculty of Computer Science Events and Talks
Dana Moshkovitz, Weizmann
Sunday, 20.07.2008, 11:00
We show that the NP-Complete language 3Sat has a PCP verifier that makes two queries to a proof of almost-linear size and achieves sub-constant probability of error o(1). The verifier performs only
projection tests, meaning that the answer to the first query determines at most one accepting answer to the second query.
Previously, by the parallel repetition theorem, there were PCP Theorems with two-query projection tests, but only (arbitrarily small) constant error and polynomial size. There were also PCP Theorems
with sub-constant error and almost-linear size, but a constant number of queries that is larger than 2.
As a corollary, we obtain a host of new results. In particular, our theorem improves many of the hardness of approximation results that are proved using the parallel repetition theorem. A partial
list includes the following:
1) 3Sat cannot be efficiently approximated to within a factor of 7/8+o(1), unless P=NP. This holds even under almost-linear reductions. Previously, the best known NP-hardness factor was 7/8+epsilon
for any constant epsilon>0, under polynomial reductions (Hastad).
2) 3Lin cannot be efficiently approximated to within a factor of 1/2+o(1), unless P=NP. This holds even under almost-linear reductions. Previously, the best known NP-hardness factor was 1/2+epsilon
for any constant epsilon>0, under polynomial reductions (Hastad).
3) A PCP Theorem with amortized query complexity 1+o(1) and amortized free bit complexity o(1). Previously, the best known amortized query complexity and free bit complexity were 1+epsilon and
epsilon, respectively, for any constant epsilon>0 (Samorodnitsky and Trevisan).
4) Clique cannot be efficiently approximated to within a factor of n^{1-o(1)}, unless ZPP=NP. Previously, a hardness factor of n^{1-epsilon} for any constant epsilon>0 was known, under the assumption
that P=NP does not hold (Hastad and Zuckerman).
One of the new ideas that we use is a new technique for doing the composition step in the (classical) proof of the PCP Theorem, without increasing the number of queries to the proof. We formalize
this as a composition of new objects that we call Locally Decode/Reject Codes (LDRC). The notion of LDRC was implicit in several previous works, and we make it explicit in this work. We believe that
the formulation of LDRCs and their construction are of independent interest.
This is joint work with Ran Raz | {"url":"https://cs.technion.ac.il/events/view-event.php?evid=400","timestamp":"2024-11-12T16:55:45Z","content_type":"text/html","content_length":"16644","record_id":"<urn:uuid:b99ab975-a4e0-43eb-9a88-ca57e92b024f>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00760.warc.gz"} |
Trajectory Optimization
Trajectory Optimization
Minimize subject to .
This example demonstrates how a variational problem can be discretized to a finite optimization problem efficiently solved by convex methods, such as QuadraticOptimization.
The variational problem will be approximated by discretizing the boundary value problem and using the trapezoidal rule to integrate on a uniformly spaced grid on the interval [0,1], with .
Let the variable u[i] represent and x[i] represent for .
The differential equation constraint is easily represented using centered second-order difference approximations for from 1 to .
At the boundary, the zero derivative conditions allow for the use of fictitious points and . When and , the second-order difference formula for the first derivative is zero for and . Thus, at the
boundary, use the following.
The trapezoidal rule for is given by the following.
Since the expression from the trapezoidal rule is quadratic and all of the constraints are linear equality constraints, the minimum of the discretized integral can be found using
QuadraticOptimization directly.
Approximate functions are constructed with Interpolation.
An exact analytic solution, , is known for this problem, so it is possible to plot the error in the discretization.
The asymptotic error is roughly , so by doubling to 200 and recomputing, the error will be about 1/4 of what is shown here.
The analytic solution can be found by considering a family of curves where is a parameter. This parametric curve satisfies the prescribed boundary conditions . Since , one can find an optimal
parameter that minimizes .
The optimal value of is at 2, which is the analytic result . | {"url":"https://www.wolfram.com/language/12/convex-optimization/trajectory-optimization.html.en?footer=lang","timestamp":"2024-11-10T05:46:37Z","content_type":"text/html","content_length":"43804","record_id":"<urn:uuid:fd4a50fa-ca8b-466b-98dc-0fa790f637bf>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00100.warc.gz"} |
Million to Crore Conversion Calculator
Use this tool to convert Million to Crore.
Enter the number of millions in the tool below and it will calculate the equivalent number of crores.
Conversion Formula
1 Crore = 10,000,000
1 Million = 1,000,000
Crore = Million / 10
• If you have 25 million dollars and want to express this in crores
• Divide 25 by 10. The answer is 2.5
• So, 25 million is equivalent to 2.5 crores.
What is a Crore?
A “crore” is a unit in the South Asian numbering system, commonly used in countries like India, Bangladesh, Nepal, and Pakistan. It is equivalent to 10 million (10,000,000) in the Western numbering
system. The term is widely used in finance, commerce, and daily life in these regions.
What is a Million?
In the Western numbering system, a million is equal to 1,000,000 (10^6). Itโ s commonly used in various fields such as finance (e.g., a company earning $5 million annually), population statistics
(e.g., a city with 2 million residents), and more.
Relationship with Other Units
• Lakh: Another commonly used unit in South Asia is the “lakh,” which is equivalent to 100,000.
The concept of crores simplifies the understanding and communication of large numbers in South Asian contexts, making it easier to grasp significant figures without dealing with long strings of
Conversion Table
For quick reference, hereโ s a conversion table between millions and crores:
Millions Crores
1 0.1
5 0.5
Currency Calculators
Related Calculators
• Lakh to Million
• Million to Lakh | {"url":"https://www.orbit6.com/million-to-crore-conversion-calculator/","timestamp":"2024-11-03T13:35:13Z","content_type":"text/html","content_length":"72852","record_id":"<urn:uuid:3972fc0d-4020-47fb-ab44-c33a589390fb>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00044.warc.gz"} |
Faster Algorithms for Growing Collision-Free Convex Polytopes in Robot Configuration Space
Peter Werner, Thomas Cohn*, Rebecca H Jiang*, Tim Seyde, Max Simchowitz, Russ Tedrake, and Daniela Rus
TLDR Decomposing robot configuration spaces into collision-free polytopes enables the application of stronger motion-planning frameworks such as trajectory optimization with Graphs of Convex Sets and
is currently a major roadblock in the adoption of these approaches. In this paper we aim to take a step towards alleviating this roadblock by introducing substantial improvements to IRIS-NP
(Iterative Regional Inflation by Semidefinite & Nonlinear Programming). Our key insight is that finding near-by configuration-space obstacles using sampling is inexpensive and greatly accelerates
region generation. We propose two algorithms using such samples to either employ nonlinear programming more efficiently (IRIS-NP2) or circumvent it altogether using a massively-parallel zero-order
optimization strategy (IRIS-ZO). We also propose a termination condition that controls the probability of exceeding a user-specified permissible fraction-in-collision, eliminating a significant
source of tuning difficulty in IRIS-NP. We compare performance across eight robot environments, showing that IRIS-ZO achieves an order-of-magnitude speed advantage over IRIS-NP. IRIS-NP2, also
significantly faster than IRIS-NP, builds larger polytopes using fewer hyperplanes, enabling faster downstream computation.
Collision-free Polytopes in Robot Configuration Space
In our paper we compute probabilistically collision-free polytopes in robot configuration space. A visualization of such a polytope for a simple two degree of freedom (dof) system is shown above.
Left: Two dof robot arm with a disk shaped obstacle. Two selected collision geometries of the system are highlighted in green and blue. Center: Configuration space of the system. The black regions
correspond to collisions. Two of the configuration-space obstacles are highlighted in blue and green, corresponding to the configurations where the blue and green collision geometries of the robot
intersect the disk obstacle. Right: Resulting polytope (red outline), when seeding IRIS-NP2 at the red dot. The configuration in left frame is shown by the blue dot.
Random Walks Inside of Generated Regions
Below a selection of 4 regions generated with IRIS-NP2 are animated. This is done by starting at the seed configuration and repeatedly picking a random direction and walking to the bounary of the
region. In the animation we return to the seed configuration every 4 steps.
Click on the imaes below to open a full screen animation. Hit 'Open Controls' then 'Animation->Play' to play back the animation. Click on the 'drake' tab to visualize the collision geometries.
The IRIS-ZO algorithm uses a simple parallelized zero-order optimization strategy to directly solve the SeparatingPlanes in Alg 1. in the paper (the polytope generation step). IRIS-ZO constructs a
probabilistically collision-free polytope by using sampling and collision checking. Above is an illustration of how ZeroOrderSeparatingPlanes computes the polytope. The algorithm is fast and simple
to implement, because it only requires a collision-checker and no gradient information, but produces slightly lower quality regions (smaller volume, more faces) as opposed to IRIS-NP2.
The IRIS-NP2 algorithm improves IRIS-NP both in terms of computation time and region quality by increasing the volume and reducing the number of faces per region). The key update to the algorithm is
that instead of looping over all collision pairs and solving for near-by configurations that cause this pair to collide, we use sampling strategies to decide which collision pairs to consider. In our
paper we investigate two sampling strategies: "greedy" and "ray." Above is an illustration of using the ray sampling strategy to decide which collision pairs to consider. In the ray sampling strategy
random configurations (blue dots) are sampled and a linesearch is performed until a collision is found (red dots). The red dots along with the corresponding collision pair are used to warmstart the
nonlinear optimization that finds near-by collisions. Below is an illustration of how this nonlinear program is formulated for a given collision pair (A,B). On the left we see a sketch of the
current polytope in configuration space. The remaning two frames show the task space of the robot along with the highlighted collision pair.
Note that this program remains unchanged from the original IRIS-NP.
We benchmark our alogrithms on 8 robotic systems shown on the left. They range from 3 degrees of freedom (dof) to 15 dof. In each of these environments we hand-select 10 interesting seed
configurations and compare the resulting regions obtained from the different algorithms. We use the original IRIS-NP algorithm as a baseline.
Statistics averaged over all 10 seed configurations. We consider two settings: A "fast" setting where the termination condition is more relaxed, allowing for a larger fraction of the polytope to be
in collision and a more stringent "precise" setting. For the "fast" setting IRIS-ZO was on average around 15.5 times faster while requiring 1.4 times fewer hyperplanes than IRIS-NP. Iris-NP2 with
the greedy strategy was 6.6 times faster while requiring 2.1 times fewer hyperplanes. The ray strategy was 4.2 times faster and required 2.3 times fewer hyperplanes. For the "precise" settings IRIS
-ZO was 14 times faster than IRIS-NP while producing around the same number of hyperplanes, while IRIS-NP2 with the greedy strategy was 12.3 times faster with 2.4 fewer hyperplanes. With the ray
strategy it was 8 times faster with 2.7 times fewer hyperplanes.
We gratefully acknowledge the support of several individuals and organizations in the completion of this work. Our thanks go to The Charles Stark Draper Laboratory, Inc. and Ravi Gondhalekar for
their valuable feedback and for supporting Rebecca Jiang as a Draper Scholar. We also extend our appreciation to Amazon.com (PO No. 2D-06310236), the Toyota Research Institute, and the MIT Quest for
Intelligence for their financial support. Finally, we thank Alexandre Amice for providing insightful feedback on the project. | {"url":"https://sites.google.com/view/fastiris","timestamp":"2024-11-14T15:03:48Z","content_type":"text/html","content_length":"88653","record_id":"<urn:uuid:9accc13e-f3de-468f-9ef4-6bc6dce3e062>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00312.warc.gz"} |
Discrete spectra for critical Dirac-Coulomb Hamiltonians
Title Discrete spectra for critical Dirac-Coulomb Hamiltonians
Publication Preprint
Authors Gallone, M, Michelangeli, A
Document SISSA;44/2017/MATE
The one-particle Dirac Hamiltonian with Coulomb interaction is known to be realised, in a regime of large (critical) couplings, by an infinite multiplicity of distinct self-adjoint
operators, including a distinguished physically most natural one. For the latter, Sommerfeld’s celebrated fine structure formula provides the well-known expression for the eigenvalues in
the gap of the continuum spectrum. Exploiting our recent general
classification of all other self-adjoint realisations, we generalise Sommerfeld’s formula so as to determine the discrete spectrum of all other self-adjoint versions of the Dirac-Coulomb
Hamiltonian. Such discrete spectra display naturally a fibred structure, whose bundle covers the whole gap of the continuum spectrum. | {"url":"https://math.sissa.it/publication/discrete-spectra-critical-dirac-coulomb-hamiltonians","timestamp":"2024-11-02T23:24:38Z","content_type":"application/xhtml+xml","content_length":"27903","record_id":"<urn:uuid:811c07f9-0909-4844-9244-236d8802158c>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00546.warc.gz"} |
Updating search results...
5 Results
Conditional Remix & Share Permitted
CC BY-NC
Material Type:
Date Added:
Conditional Remix & Share Permitted
CC BY-NC
Samples and ProbabilityType of Unit: ConceptualPrior KnowledgeStudents should be able to:Understand the concept of a ratio.Write ratios as percents.Describe data using measures of center.Display and
interpret data in dot plots, histograms, and box plots.Lesson FlowStudents begin to think about probability by considering the relative likelihood of familiar events on the continuum between
impossible and certain. Students begin to formalize this understanding of probability. They are introduced to the concept of probability as a measure of likelihood, and how to calculate probability
of equally likely events using a ratio. The terms (impossible, certain, etc.) are given numerical values. Next, students compare expected results to actual results by calculating the probability of
an event and conducting an experiment. Students explore the probability of outcomes that are not equally likely. They collect data to estimate the experimental probabilities. They use ratio and
proportion to predict results for a large number of trials. Students learn about compound events. They use tree diagrams, tables, and systematic lists as tools to find the sample space. They
determine the theoretical probability of first independent, and then dependent events. In Lesson 10 students identify a question to investigate for a unit project and submit a proposal. They then
complete a Self Check. In Lesson 11, students review the results of the Self Check, solve a related problem, and take a Quiz.Students are introduced to the concept of sampling as a method of
determining characteristics of a population. They consider how a sample can be random or biased, and think about methods for randomly sampling a population to ensure that it is representative. In
Lesson 13, students collect and analyze data for their unit project. Students begin to apply their knowledge of statistics learned in sixth grade. They determine the typical class score from a sample
of the population, and reason about the representativeness of the sample. Then, students begin to develop intuition about appropriate sample size by conducting an experiment. They compare different
sample sizes, and decide whether increasing the sample size improves the results. In Lesson 16 and Lesson 17, students compare two data sets using any tools they wish. Students will be reminded of
Mean Average Deviation (MAD), which will be a useful tool in this situation. Students complete another Self Check, review the results of their Self Check, and solve additional problems. The unit ends
with three days for students to work on Gallery problems, possibly using one of the days to complete their project or get help on their project if needed, two days for students to present their unit
projects to the class, and one day for the End of Unit Assessment.
Conditional Remix & Share Permitted
CC BY-NC
Lesson OverviewGroups will begin presentations for their unit project. Students will provide constructive feedback on others' presentations.Key ConceptsStudents should demonstrate their understanding
of the unit concepts.Goals and Learning ObjectivesPresent projects and demonstrate an understanding of the unit concepts.Provide feedback for others' presentations.Clarify any misconceptions or areas
of difficulty.Review the concepts from the unit.
Material Type:
Chris Adcock
Date Added:
Conditional Remix & Share Permitted
CC BY-NC
Remaining groups present their unit projects and students discuss teacher and peer feedback.Key ConceptsStudents should demonstrate their understanding of the unit concepts.Goals and Learning
ObjectivesPresent projects and demonstrate an understanding of the unit concepts.Provide feedback for others' presentations.Review the concepts from the unit.Review presentation feedback and reflect.
Material Type:
Chris Adcock
Date Added:
Each small group of students researches one aspect of the same big topic, such as the Gold Rush, and teaches what they have learned to the rest of the class.
Material Type:
Provider Set:
Date Added: | {"url":"https://openspace.infohio.org/browse?f.keyword=group-projects","timestamp":"2024-11-02T20:52:44Z","content_type":"text/html","content_length":"91193","record_id":"<urn:uuid:122d1d12-e1ed-4956-88aa-fd2be2213118>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00390.warc.gz"} |
Toeplitz Matrices
and Operators
Toeplitz matrices provide one of the most compelling applications of pseudospectra. A Toeplitz matrix has constant entries on each diagonal, and the corresponding infinite-dimensional Toeplitz
operator is a singly-infinite matrix. The constants on the diagonals are the Laurent coefficients of the symbol, a complex-valued function whose domain is the unit circle, T.
The spectrum of the Toeplitz operator is determined by the symbol, a. If a is continuous, then the spectrum is the a(T) together with all points this curve encloses with non-zero winding number. The
eigenvalues of finite Toeplitz matrices are very different, typically falling on curves in the complex plane for arbitrarily large (but finite) dimensions. It is well known that these eigenvalues are
typically difficult to compute accurately.
It turns out that while the eigenvalues of finite Toeplitz matrices do not generally converge to the spectrum of the corresponding infinite-dimensional operator, the pseudospectra of the finite
matrices do converge to the operator pseudospectra for a broad class of symbols. In the pages linked to below, we illustrate different aspects of this convergence.
Random Matrices
Random matrices are of importance in a wide variety of applications, and in many instances these matrices are non-normal. When the degree of non-normality is large, pseudospectra can help explain
interesting phenomena. The links below show a number of illustrations.
Many random matrices can be classified as "stochastic Toeplitz" matrices; that is, the entries on any one diagonal are all independent samples from the same probability distribution. (Some diagonals
may be constant, corresponding to a distribution that is a delta function.)
In this section, we illustrate spectra and pseudospectra for several specific stochastic Toeplitz matrices. | {"url":"http://www.cs.ox.ac.uk/projects/pseudospectra/examples.html","timestamp":"2024-11-13T14:40:40Z","content_type":"text/html","content_length":"4428","record_id":"<urn:uuid:421db3bd-addd-4912-ac3f-b9cfb33cc8b6>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00562.warc.gz"} |
"Islands" of Regularity Discovered in the Famously Chaotic Three-Body Problem
11 October 2024
"Islands" of Regularity Discovered in the Famously Chaotic Three-Body Problem
When three massive objects meet in space, they influence each other through gravity in ways that evolve unpredictably. In a word: Chaos. That is the conventional understanding. Now, a researcher from
the University of Copenhagen has discovered that such encounters often avoid chaos and instead follow regular patterns, with one of the objects quickly being expelled from the system. This new
insight may prove vital for our understanding of gravitational waves and many other aspects of the universe.
Millions of simulations form a rough map of all conceivable outcomes when three objects meet, like a vast tapestry woven from the threads of initial configurations. This is where the isles of
regularity appear. Image by Alessandro Alberto Trani.
The most popular show on Netflix at the moment is the science fiction series 3-Body Problem. Based on a Chinese novel series by Liu Cixin, the series involves a menagerie of characters, time periods
and even extraterrestrial visitors. But the central premise is concerned with a star system in which three stars gravitate around one another.
Such a system, with three objects influencing each other's gravity, has fascinated scientists ever since the “father of gravity”, Isaac Newton, first described it. While the interaction between two
objects meeting in space is predictable, the introduction of a third massive object makes the triadic encounter not just complex, but chaotic.
"The Three-Body Problem is one of the most famous unsolvable problems in mathematics and theoretical physics. The theory states that when three objects meet, their interaction evolves chaotically,
without regularity and completely detached from the starting point. But our millions of simulations demonstrate that there are gaps in this chaos – ‘isles of regularity’ – which directly depend on
how the three objects are positioned relative to each other when they meet, as well as their speed and angle of approach," explains Alessandro Alberto Trani of the University of Copenhagen’s Niels
Bohr Institute.
Trani hopes that the discovery will pave the way for improved astrophysics models, as the Three-Body Problem is not just a theoretical challenge. The encounter of three objects in the universe is a
common occurrence and its understanding is crucial.
"If we are to understand gravitational waves, which are emitted from black holes and other massive objects in motion, the interactions of black holes as they meet and merge are essential. Immense
forces are at play, particularly when three of them meet. Therefore, our understanding of such encounters could be a key to comprehending phenomena such as gravitational waves, gravity itself and
many other fundamental mysteries of the universe," says the researcher.
Fun facts: a 4-Body Problem
During the pandemic, Alessandro Alberto Trani started a side project to investigate fractal universes within the Three-Body Problem. It was then that he came up with the idea of mapping the outcomes
in search of regularities.
He knew the famous problem from his studies, but hadn’t delved into the works of fiction – the recent Netflix show or the novel behind it: “The Three-Body Problem” by Liu Cixin. Nevertheless, out of
curiosity, he familiarized himself with the plot enough to conclude that it actually deals with a "4-Body Problem."
"As I understand it, it involves a star system with three stars and one planet, which is regularly thrown into chaotic developments. Such a system is actually best defined as a Four-Body Problem.
However you define it though, according to my simulations, the most likely outcome is that the planet would quickly be destroyed by one of the three stars. So it would soon become a
Three-Body-Problem," the researcher grins.
A Tsunami of Simulations
To investigate the phenomenon, Trani coded his own software program, Tsunami, which can calculate the movements of astronomical objects based on the knowledge we have about the laws of nature, such
as Newton’s gravity and Einstein’s general relativity. Trani set it to run millions of simulations of three-body encounters within certain defined parameters.
The initial parameters for the simulations were the positions of two of the objects in their mutual orbit – i.e., their phase along a 360-degree axis. Then, the angle of approach of the third object
– varying by 90 degrees.
The millions of simulations were spread across the various possible combinations within this framework. As a whole, the results form a rough map of all conceivable outcomes like a vast tapestry woven
from the threads of initial configurations. This is where the isles of regularity appear.
Image: Alessandro Alberto Trani
The colours represent the object that is eventually ejected from the system after the encounter. In most cases, this is the object with the lowest mass.
“If the three-body problem were purely chaotic, we would see only a chaotic mix of indistinguishable dots, with all three outcomes blending together without any discernible order. Instead, regular
“isles” emerge from this chaotic sea, where the system behaves predictably, leading to uniform outcomes—and therefore, uniform colours,” Trani explains.
Two Steps Forward, One Step Back
This discovery holds great promises for a deeper understanding of an otherwise impossible phenomenon. In the short term, however, it represents a challenge for researchers. Pure chaos is something
they already know how to calculate using statistical methods, but when chaos is interrupted by regularities, the calculations become more complex.
"When some regions in this map of possible outcomes suddenly become regular, it throws off statistical probability calculations, leading to inaccurate predictions. Our challenge now is to learn how
to blend statistical methods with the so-called numerical calculations, which offer high precision when the system behaves regularly," says Alessandro Alberto Trani.
"In that sense, my results have set us back to square one, but at the same time, they offer hope for an entirely new level of understanding in the long run," he says.
Behind the research
The following researchers contributed to the project:
Alessandro Alberto Trani
Niels Bohr International Academy at the Niels Bohr Institute, University of Copenhagen
Research Center for the Early Universe, University of Tokyo
Okinawa Institute of Science and Technology
(See also: https://alessandrotrani.space/)
Nathan W. C. Leigh
Departamento de Astronomía, Universidad de Concepción, Chile
Department of Astrophysics, American Museum of Natural History
Tjarda C. N. Boekholt
NASA Ames Research Center
Simon Portegies Zwart
Leiden Observatory, Leiden University | {"url":"https://news.ku.dk/all_news/2024/10/islands-of-regularity-discovered-in-the-famously-chaotic-three-body-problem/","timestamp":"2024-11-02T17:52:11Z","content_type":"text/html","content_length":"48217","record_id":"<urn:uuid:8510c51b-c766-47a7-9f85-dc6f5cf8fee7>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00408.warc.gz"} |
Auto accident reconstruction: The basics you must know
Understanding what the engineer is talking about and how conclusions are reached
Kurt D. Weiss
2007 November
What is Accident Reconstruction? Accident reconstruction is the process using scientific methodology to determine the circumstances, mechanics, and contributing factors associated with a collision.
It requires a working knowledge of many disciplines including physics, vehicle dynamics, mathematics, photogrammetry, and computer applications (i.e. spreadsheets, AutoCAD, simulation or modeling
tools, graphics and photo-management software). Questions such as, “How fast was the vehicle going at impact?” or “How much did the vehicle slow during the locked-wheel braking?” or “At what angles
did the two vehicles collide?” can be answered by the reconstructionist after thorough evaluation of available information.
This article presents several basic concepts typically found in the area of investigation and reconstruction of vehicle collisions. The following material is not intended to be comprehensive, but
should be considered an overview of fundamental principals. These concepts are presented as they commonly apply to collisions involving passenger cars. Other areas of analysis not included are
collisions involving heavy trucks and other articulating vehicles, or impacts involving pedestrians, motorcycles and bicycles.
Information sources
Details regarding the circumstances of a collision are often obtained through several means. Two basic sources of information are the Traffic Collision Report, and photographs of the vehicles and
collision scene. Additional sources to be considered are witness statements and deposition transcripts. Oftentimes the eye witnesses may fill the gaps between what can be observed in photographs and
what the traffic officers included in their reports. Emergency personnel run-sheets, medical records, and autopsy reports can provide useful and thorough descriptions of occupant injuries. Injury
location may be used to help support opinions regarding vehicle dynamics. Repair estimates, crash test reports, and vehicle specifications provide data necessary for calculations when vehicle weight,
dimensions, and property damage are used. Published research and literature can help assist the reconstructionist when a specific engineering principal or phenomenon is being analyzed.
Vehicle inspection
Inspection of the collision vehicle is most helpful when performing a reconstruction. While the study of photographs of vehicle damage is important, many details about the degree of vehicle
deformation can be vague or not fully documented in such photographs. Therefore, a vehicle inspection is often preferable. If a two-vehicle collision is being analyzed, inspection of both vehicles
should be requested. This often requires additional leg work, because the location and condition of the other vehicle are seldom known.
Vast amounts of information can be gleaned by inspecting the collision vehicle, such as the quantification of the vehicle crush profile. In addition, simply standing next to, or even sitting in the
damaged vehicle and considering the extent and direction of the structural deformation lends crucial insight into collision type and severity.
What can be learned at an inspection that cannot be learned from reviewing vehicle photographs? One example is the confirmation of ground contact to the vehicle undercarriage. Colliding vehicles will
often pitch downward during the collision phase to the extent that suspension members or other undercarriage components strike the road surface and create a gouge. Along with pre-impact skid marks or
post-impact tire marks, gouge marks are often among the list of physical evidence documented by the investigating traffic officer. Determination of which vehicle component interacted with the roadway
and their location relative to some vehicle-fixed reference point may support opinions regarding vehicle position and heading at the time of impact. Photographs of the accident vehicles rarely depict
undercarriage damage.
Another example supporting the utility of vehicle inspections can be the existence of grass or debris trapped in door openings. The existence of this material may help to confirm the door came open
during the collision or rollover event. Even grass or dirt embedded in the junction between the tire bead and wheel rim may help confirm tire separation during the collision as opposed to this
occurring during vehicle storage after the tire goes flat. Oftentimes, photographs do not yield this level of detail, and inspection of the vehicle is the only way to confirm these potential
Because evidence inside or outside the vehicle can only degrade with time, a secure and indoor vehicle storage is strongly recommended. Proper vehicle storage should be considered sooner rather than
Fundamental terms and common units
The results of the collision reconstruction may include pre-impact speed, vehicle heading, post-impact speed, and change of velocity (or delta-v). The principal direction of force, collision
duration, and peak or average vehicle acceleration may also be evaluated. However, before one can fully comprehend the significance of the terms commonly used by reconstruction specialists, a review
of fundamental terms and units may be helpful.
Four fundamental physical parameters are length, time, force and mass. Typical units for length (or distance) are inches, feet, or meters. For time, seconds are commonly used. The units of force are
pounds or Newtons; for mass, slugs or kilograms. It should be noted that mass and weight are often used interchangeably, but the true relationship between mass and weight is w = mg, where w is
weight, m is mass, and g is the gravitational constant equaling about 32.2 ft/s^2.
Derived terms and common units
Derived terms are algebraic combinations of the four fundamental parameters. Derived terms commonly used by the reconstructionist are velocity, acceleration, energy (work), and momentum. Velocity is
the rate change of distance with respect to time, or length per unit time, with units of mph or ft/s (also written as fps). Acceleration is the rate change of velocity with respect to time, or length
per unit time squared. Units for acceleration are ft/s^2, or g’s.
Energy (work) is length times force (or force times distance), with units of ft•lb, or in•lb, and momentum is mass times velocity, with units of lb•sec, or slug•ft/s.
Speed and velocity are different entities, although in colloquial speech these two words are often, although inaccurately, used interchangeably. By definition, speed is a scalar quantity having only
magnitude. Recall that speed is the rate change of distance. However, velocity is also the rate change of distance, but velocity is a vector quantity with magnitude and direction.
Additional terms and definitions
Several other important terms include delta-v (or Dv), principal direction of force (PDOF), center of gravity, yaw, coefficient of restitution, and coefficient of friction. Delta-v is the vector
difference between the pre-impact and post-impact velocities, or the velocity difference between when the vehicles first come in contact to when they separate. Of note, the time between first contact
and separation is called the collision phase of the impact, the time during which the vehicles deform. Therefore, by definition, Dv does not include any pre-braking speed loss or speed lost by the
vehicle after separation before coming to rest.
Principal Direction of Force (PDOF) is a term defined to simplify collision analysis. The PDOF is the direction of the summation of all collision forces required to deform the vehicle. When two
vehicles come in contact, they begin to deform at some force level. The surfaces of each vehicle in contact change over time, because of vehicle deformation, and they continue to change throughout
the collision phase. As an analogy, picture two rectangular sponges being pressed together. Initially, the two sponges may touch at the corners, or at each end, but with increased force, the area of
the contact surface of each sponge increases. Similar in concept to the sponge, the structure of a vehicle deforms in an impact. Forces are required to deform a vehicle structure, however over the
collision phase, the forces change direction and magnitude. As the vehicles continue to engage, new structures are deformed. Instead of analyzing the work and moment contributions of all these
collision forces over all the directions of the impact, reconstructionists study the one collision force applied to the vehicle along the principal direction of force.
The direction of the PDOF is often given in terms of degrees or hours of a clock dial. For example, a force directed toward the front of a vehicle along its centerline would have a 0 degree or 12
o’clock PDOF. A force from the right would have a 90 degree or 3 o’clock PDOF. Furthermore, a force directed toward the rear of a vehicle along its centerline would have a 180 degree or 6 o’clock
PDOF. Interestingly, ΔDv and PDOF are related in that the PDOF acts on the vehicle in the direction of the Dv.
The center of gravity, or cg, is simply the balance point of a vehicle. To simplify the analysis of a colliding vehicle, the entire mass of the vehicle is defined to be located at the cg. In reality,
this does not occur, but still the vehicle’s cg is a useful reference for study. One can calculate the fore-aft location of the cg by applying a moment balance using the front and rear axle weights.
However, vehicle occupants can shift the cg fore or aft to some degree, and the apportionment of passenger weights to the front and rear axles should be considered. Also, the cg may actually be
slightly to the left or right of the vehicle centerline, but for most applications one can assume the cg is centered laterally.
Yaw is rotation of the vehicle about a vertical axis passing through the vehicle cg. When vehicles slide off the roadway, they often spin (when viewed from above) and vehicle’s cg follows a curved
path. This curved path is indicative of a vehicle in yaw. Yaw will be studied in a misapplication of the skid to stop equation presented later.
Restitution is why vehicles often bounce away from the other vehicle or rebound from a rigid barrier after an impact. Vehicle collisions are called inelastic, and property damage often results.
However, while the vehicle structure does deform, there is some portion of this damage that is restored. The restoring forces are what cause restitution.
The coefficient of restitution of two colliding objects is defined as the ratio of the relative rebound velocities to the relative impact velocities. The coefficient of restitution, commonly given
the variable name e, is unitless and can have a value of between 0 and 1. For two colliding vehicles, the equation for coefficient of restitution can be written as:
εe = (v[1]’ - v[2]’) / (v[1] - v[2])
where v[1] and v[2] designate the pre-impact velocities, and v[1]’ and v[2]’ designate the post-impact velocities, for vehicles 1 and 2 respectively. When a vehicle impacts a rigid barrier, the
equation simplifies to:
εe = v[1]’ / v[1]
Since the barrier has no velocity before and after impact, the v[2]’ and v[2] terms drop from the equation. By way of example, vehicles impacting a rigid barrier at 30 to 35 mph exhibit restitution
values of between about 0.15 and 0.2. However, as collision speeds decrease, restitution often increases. For this reason, it is crucial to have accurate values of e when studying a low-speed
The coefficient of friction is why vehicles slow down, upon applying the brakes. The friction coefficient is also a unitless value. It is often given the variable name F, and is a measure of the
relative slipperiness of two surfaces in contact. In the case of a vehicle in a locked-wheel skid, the two surfaces are the roadway and the tires. By definition, μ = F/w, where F is the friction
force that must be overcome to move an object of weight w.
As an illustration, a commonly used value for a vehicle’s coefficient of friction under full locked-wheel braking on asphalt is 0.7. However, more accurate friction coefficients for a specific road
surface may be obtained by conducting a brake test with an appropriate exemplar vehicle. Special consideration must also be used with ABS equipped vehicles.
Skid analysis
The general velocity equation is:
v[f]^2 = v[i]^2 + 2ad
where v[f] is the final velocity in fps,
v[i] is the initial velocity in fps,
a is acceleration (friction coefficient times g) in ft/s^2,
and d is the skid distance, or length of the tire mark in feet.
This equation can be used for any condition when a vehicle changes velocity. With strict adherence to the sign of the velocity and acceleration terms, this equation can be used not only for the
slowing vehicle, but for the vehicle increasing in velocity as well.
The general velocity equation is often applied to vehicles braking to a stop. In this case, v[f] is zero, so the general equation simplifies to the skid to stop equation:
0 = v[i]^2 + 2ad
or v[i]^2 = -2ad
The general velocity equation can be rearranged algebraically so the value of v[f], v[i], a or d can be solved. Two examples will be used to illustrate this point.
Example 1
If a vehicle traveling at 60 mph suddenly brakes on asphalt (assume μ = 0.7), what is the final velocity after 132 feet of lock-wheel skid?
Here v[i] = 60 mph or 88 fps
d = 132 feet
μ = 0.7, a = -0.7g
v[f]^2 = v[i]^2 + 2ad
v[f] = sqrt(v[i]^2 + 2ad)
v[f] = sqrt(88^2 - 2(0.7)(32.2)(132))
v[f] = sqrt(7744 - 5950.56)
v[f] = 42.3 fps = 28.9 mph
Example 2
If a vehicle traveling at 60 mph suddenly brakes to a stop on asphalt (assume μ = 0.7), what is the length of the locked-wheel skid?
Here v[i] = 60 mph or 88 fps
v[f] = 0 mph
μ = 0.7, a = -0.7g
v[f]^2 = v[i]^2 + 2ad
0 = v[i]^2 + 2ad
d = -v[i]^2/2a
d = -88^2/2(-0.7)(32.2)
d = 7744/45.08
d = 171.8 feet
These examples are theoretical or text book applications of the general velocity equation. Consideration must be given to real world braking system components. For instance, vehicles do not leave
skid marks immediately upon stepping on the brake pedal. The rotating wheels need time to slow and lock prior to leaving tire marks. As such, it has been shown that vehicles can have about 15 to 20%
more energy prior to leaving discernible tire friction marks than when calculated with the general velocity equation.
The skid to stop equation is commonly applied in instances of rear-end or other “unavoidable” collisions. Estimates can be made of the distance necessary for a vehicle to skid to a stop based on a
pre-braking speed and a friction coefficient. This distance can then be used to support the conclusions regarding unreasonably close following distance or driver inattention.
Misapplication of concept
Without proper training and experience, the skid to stop equation can be misapplied in certain situations. As an illustration, consider a vehicle that loses control and comes to rest in the opposing
traffic lanes. Assume a detailed scene diagram including physical evidence shows the vehicle was in yaw. The path of the vehicle cg and vehicle orientation can be determined by using a scale cut-out
or rendering of the vehicle and placing it over tire marks on the diagram. This will help determine the yaw angle throughout the vehicle trajectory. The yaw angle is the included angle between the
vehicle’s centerline and the path of the cg. Assume an accurate coefficient of friction of the roadway was obtained by brake testing conducted on site.
Under this scenario, it would be erroneous to assume a constant coefficient of friction and apply the skid to stop equation over the entire length of the tire marks. Wheels roll in a direction
perpendicular to the wheel axis, but wheels will slide in a direction parallel to the wheel axis. As such, when a vehicle is in yaw, the wheels will either roll, slide, or a combination of both
depending on its yaw angle. Over estimating the vehicle pre-yaw speed will likely occur, because the friction coefficient increases with yaw angle.
To correctly calculate the friction coefficient, the distance over which the vehicle slides should be broken into segments, and the average yaw angle for each segment be determined. The coefficient
of friction of each segment is the sine of the yaw angle multiplied by the coefficient of the friction determined through on-site brake testing. Therefore, the correct initial speed can be determined
by first applying the skid to stop equation to only the last segment adjacent to where the vehicle came to rest. Then, while working backward, the general velocity equation is applied to each segment
in succession up to the first segment where the initial pre-yaw speed is determined.
Collision analysis methods
The choice of method or methods employed to analyze a collision depends on the amount and type of information available. A momentum analysis can be used if there is adequate documentation of the
physical evidence at the collision scene, such as pre-impact skid marks, point of impact, and vehicle rest positions. If the collision vehicles are available for inspection, a damage energy approach
may be used. If, in the absence of the collision vehicles, photogrammetry may be used. Sometimes hand calculation methods can be supported or refined with the use of commercially available,
computer-based reconstruction or simulation programs. Agreement between two or more methods is an excellent way to gain confidence in the analysis and resulting final opinions.
The Law of Conservation of Momentum can be applied to vehicle collisions. In the case of a two-vehicle collision, the system is the two colliding vehicles. The Law states that the momentum of the
system before and after the collision must be conserved, that is, the pre-impact momentum equals the post-impact momentum. The general momentum equation from which many useful forms can be derived
m[1]v[1] + m[2]v[2] = m[1]v[1]’ + m[2]v[2]’
where m[1], m[2] are the masses of vehicles 1 and 2 respectively,
v[1], v[2] are the pre-impact velocities of vehicle 1 and 2 respectively, and
v[1]’, v[2]’ are the post-impact velocities of vehicle 1 and 2 respectively
Momentum is a vector quantity having direction and magnitude. From the general equation, it can be seen that the units of momentum are slugs or kg (mass) times ft/s or m/s (velocity), or lb•sec or
The main reason to use momentum is to determine the pre-impact speed of the vehicles. To begin the analysis, the mass, the direction before and after impact, and the post-impact speeds for each
vehicle are needed. The directions of the vehicles before and after the collision may be determined by studying at-scene physical evidence, like intersection or road geometry, tire marks or debris
scatter patterns. The general velocity equation, along with an appropriate deceleration, may be used to determined post-impact velocities.
Computer reconstruction
There are many commercially available reconstruction and simulation programs written specifically for personal computers costing from several hundred to several thousand dollars. The reason these
tools are often used by the reconstruction specialist is because of the large number of calculations that can be completed over very small increments of time throughout the collision event. Also,
input parameters can be slightly modified, and the new output to be considered will be available in less time than when doing calculations by hand. However, output from these tools is only as
accurate as the data that is input.
Photogrammetry is a technique whereby photographs are used to determined relative size and location of physical evidence present at a collision scene. In instances where inspection of the damaged
vehicle is not available, vehicle crush apparent in photographs can be studied, and property damage can be quantified with acceptable accuracy using this technique.
Good quality photographs are essential. Several views of the object taken from different angles are necessary to allow the photogrammetry software to establish user-selected reference points.
Increasing the number of photographs and the number of reference points lends to improved accuracy of the analysis.
Collision reconstruction is the study of impacting objects that include passenger cars, vans, trucks, bicycles and pedestrians. Collision reconstruction encompasses many engineering principals that
can be thought of as the tools available to the reconstructionist. The choice of tools to be used depends on the amount and detail of information available. One or more tools can be applied to study
the same collision. Close agreement in the results obtained by two or more methods provides increased confidence.
Bio as of May 2017:
Mr. Weiss is a forensic engineer and collision reconstructionist with Automotive Safety Research in Santa Barbara, CA. He has more than 30 years’ experience in crash reconstruction, forensic testing
and expert consultation. Mr. Weiss has authored 38 technical publications and has 53 presentations.
Copyright © 2024 by the author.
For reprint permission, contact the publisher: www.plaintiffmagazine.com | {"url":"https://plaintiffmagazine.com/recent-issues/item/auto-accident-reconstruction-the-basics-you-must-know","timestamp":"2024-11-03T17:09:55Z","content_type":"text/html","content_length":"36401","record_id":"<urn:uuid:f5ee05ee-006f-4a29-af86-2b4be0a331c6>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00590.warc.gz"} |
Price FairGame - FAIRG, online chart, quotes, history | What is FairGame?
Volume 24H 278624764 FAIRG
Mkt.Cap ^$ 72,000.00 Volume 24H 278.62 MFAIRG
Market share 0% Total Supply 1.2 BFAIRG
Proof type N/A Open ^$ 0.000068
Low ^$ 0.000053 High ^$ 0.000076
Fair game definition and meaning
Expected Value Discrete Random Variable (given a formula, f(x)).
Since the expected utility that this lottery provides is finite (even if the expected wealth is infinite), individuals will be willing to pay only a finite cost for playing this lottery. Game in
which the cost of playing equals the expected winnings of the game, so that net value of the game equals zero. is a sum of the products of two numbers, the outcomes and their associated
probabilities. If the probability of a large outcome is very high then the expected value will also be high, and vice versa. Mathematically, the answer to any such question is very straightforward
and is given by the expected value of the game.
What does Expected Value tell us?
The expected count for each cell would be the product of the corresponding row and column totals divided by the sample size. For example, the expected count for O+E+ would be: (a+b)×(a+c)a+b+c+d.
(see red arrows for the meaning of "corresponding")
so, basically, the rule i read somewhere is "in unfair game, a player with less starting advantage always loses or, at best, forces a draw". Many games of chance are designed to make one person have
a better chance of winning that the other.
i'd say, if there is pure luck involved - it's most likely to be a fair game. good examples could be backgammon and russian roulette. A fair dad makes sure that each of his kids gets the same number
of scoops of ice cream. A just dad makes sure that each of his kids gets the ice cream that s/he needs. A fair ref makes sure that all players that commit fouls (that the ref sees) get penalized
equally for similar violations.
A just ref limits his penalty calls reasonably so as to not slow down the progress of the game. A fair man distributes equally among everyone and does not take individual cases or needs into
consideration. A just man considers carefully the ultimate good of all those who are affected by his decision.
If you figure out the expected value (the expected payoff) for this game, your potential winnings are infinite. For example, on the first flip, you have a 50% chance of winning $2.
How to Calculate an Expected Value
If you’re confused at this point — that is why it’s called a paradox. Basically, all the formula is telling you to do is find the mean by adding the probabilities. The mean and the expected value are
so closely related they are basically the same thing. You’ll need to do this slightly differently depending on if you have a set of values, a set of probabilities, or a formula. which can be shown to
equal 1.39 after some algebraic manipulation.
Considering that a game will have no winner if played perfectly is a bit like playing with a cat. When both players are at a stalemate, the cat usually signals its finish with a quick scratch. It is
evident that while the expected value of the game is infinite, not even the Bill Gateses and Warren Buffets of the world will give even a thousand dollars to play this game, let alone billions.
Comparison with traditional model
Let us try and apply the fair value principle to this game, so that the cost an individual is willing to bear should equal the fair value of the game. The expected value of the game E(G) is
calculated below. Go has a system of compensating for a perceived level of unfairness to try and make the game "fair". One player receives a chosen number of komi stones at the start of the game. It
would be very unlikely that a whole number of komi stones will ever make the game fair.
• However, these games are not played perfectly, so that actually doesn't help us to decide if this game is theoretically "fair".
• Mathematically, the answer to any such question is very straightforward and is given by the expected value of the game.
• Fair dealing is a user’s right in copyright law permitting use of, or “dealing” with, a copyright protected work without permission or payment of copyright royalties.
• Everythinglike that was fair game, and it wasalways fair game totease the keepers and those people who hadapple trees and things.
• Every now and then, the loser will win, which gives them confidence to carry on playing!
I will not pay $500 for a lucky outcome based on a coin toss, even if the expected gains equal $500. No game illustrates this point better than the St. Petersburg paradox. ] as a primary influence on
the increased usage of the free-to-play model, particularly among larger video game companies, and critics point to the ever-increasing need for free content that is available wherever and whenever
as causes. The two formulas above are the two most common forms of the expected value formulas that you’ll see in AP Statistics or elementary statistics.
Expected Value for Multiple Events
We can say that if a game is fair then the probability of winning is equal to the probability of losing. Everythinglike that was fair game, and it wasalways fair game totease the keepers and those
people who hadapple trees and things.
Are just and fair the same?
"Just" refers to an action justified under the circumstances. "Fair" refers to an action that treats people as they deserve to be treated. Many times, actions that are just are not fair.
In the deterministic games below, if the first player has an advantage, they won't give it up unless they make a mistake. In a on-player game such as Russian roulette (?!?!?!), all players have
trivially the same chance to win (or survive).
For example, you might buy a scratch off lottery ticket with prizes of $1000, $10 and $1. You might want to know what the payoff is going to be if you go ahead and spend $1, $5 or even $25.
Why is it called a cat game?
A tie in Tic Tac Toe is called a scratch, as in "cat's scratch". Considering that a game will have no winner if played perfectly is a bit like playing with a cat. A cat playing with its tail is as if
it was a mouse, will never be able to win, yet it enjoys the activity all the same.
It could be that the first player has first-move advantage, or it could be that the second player has an advantage from having more information about his opponents move. Draws do not really count
towards fairness, because neither player wins. To calculate an expected value, start by writing out all of the different possible outcomes. Then, determine the probability of each possible outcome
and write them as a fraction. Next, multiply each possible outcome by its probability.
Finally, add up all of the products and convert your answer to a decimal to find the expected value. Before thinking about all the possible outcomes and probabilities involved, make sure to
understand the problem. For example, consider a die-rolling game that costs $10 per play.
However, in more rigorous or advanced statistics classes (like these), you might come across the expected value formulas for continuous random variables or for the expected value of an arbitrary
function. I would understand a different definition of 'fair' in this context - that both players have the same expected outcome. This would mean that Russian Roulette, played strictly, is
theoretically fair - both players are equally likely to win/lose. The answers suggesting it is not fair are (correctly) assuming your opponent will cheat, or we may also believe there are out-of-game
consequences of an untimely death that may be asymmetric. fair games are games where both (all) players have exactly the same chance of winning (outcome of the game is not affected by the order of
players taking turns).
If you have a discrete random variable, read Expected value for a discrete random variable. The formula changes slightly according to what kinds of events are happening. For most simple events,
you’ll use either the Expected Value formula of a Binomial Random Variable or the Expected Value formula for Multiple Events. As you say, in reality, it seems that White wins more often, so it's
commonly said that first player (White) has an advantage. However, these games are not played perfectly, so that actually doesn't help us to decide if this game is theoretically "fair".
What does free game mean?
Free-to-play. Free-to-play (F2P or FtP) video games, also known as free-to-start, are games that give players access to a significant portion of their content without paying.
Fair dealing is a user’s right in copyright law permitting use of, or “dealing” with, a copyright protected work without permission or payment of copyright royalties. If your purpose is criticism,
review or news reporting, you must also mention the source and author of the work for it to be fair dealing. The short answer is, people are rational (for the most part), they are willing to part
with their money (for the most part). | {"url":"https://cryptolisting.org/coin/fairg/","timestamp":"2024-11-08T08:22:21Z","content_type":"text/html","content_length":"1050018","record_id":"<urn:uuid:9b851401-448c-40fc-9301-0441e53a02dc>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00740.warc.gz"} |
The Speed of Sound in Different Mediums: A Comprehensive Guide
The speed of sound is a fundamental concept in physics, and it varies significantly depending on the medium through which it travels. This comprehensive guide will delve into the intricacies of the
speed of sound in different mediums, providing a wealth of technical details, formulas, and practical examples to help you gain a deeper understanding of this fascinating topic.
Understanding the Factors Influencing the Speed of Sound
The speed of sound is primarily influenced by two key factors: the elasticity and density of the medium. Elasticity describes the material’s ability to maintain its shape and resist deformation when
a force is applied, while density refers to the mass per unit volume of the medium.
The relationship between the speed of sound (v), the medium’s elasticity (E), and density (ρ) can be expressed by the following formula:
v = √(E/ρ)
This formula demonstrates that the speed of sound is directly proportional to the square root of the medium’s elasticity and inversely proportional to the square root of its density. As a result,
materials with higher elasticity and lower density tend to have a faster speed of sound.
The Speed of Sound in Air
In air, the speed of sound is primarily influenced by temperature. At 20°C (68°F), the speed of sound in dry air is approximately 343 m/s (1,125 ft/s). However, this value can vary depending on the
temperature and humidity of the air.
The relationship between the speed of sound in air (v_air) and temperature (T) can be expressed by the following formula:
v_air = 331.3 + 0.606T
where T is the temperature in degrees Celsius (°C).
For example, at 0°C (32°F), the speed of sound in air is approximately 331.3 m/s (1,087 ft/s), while at 30°C (86°F), it increases to 349.1 m/s (1,145 ft/s).
The Speed of Sound in Water
The speed of sound in water is influenced by several factors, including temperature, salinity, and pressure. The UNESCO equation, a complex polynomial equation, is commonly used to calculate the
speed of sound in water (v_water):
v_water = 1402.388 + 5.03711T - 5.80852 × 10^-2 T^2 + 3.3420 × 10^-4 T^3 + 1.6152 × 10^-6 T^4 - 1.0262 × 10^-8 T^5 + 3.1260 × 10^-11 T^6
where T is the temperature in degrees Celsius (°C).
At a temperature of 25°C (77°F) and a salinity of 35 practical salinity units (psu), the speed of sound in seawater is approximately 1,520 m/s (4,987 ft/s).
It’s important to note that the speed of sound in water can also be affected by depth, as increased pressure can slightly increase the speed of sound.
The Speed of Sound in Solid Materials
The speed of sound in solid materials is generally faster than in liquids or gases. This is due to the higher elasticity and density of solids compared to other states of matter.
Here are some examples of the speed of sound in various solid materials:
Material Speed of Sound (m/s)
Aluminum 6,320
Copper 4,600
Steel 5,950
Glass 5,970
Concrete 3,000 – 4,000
Wood (along the grain) 3,000 – 5,000
The speed of sound in solids can be calculated using the following formula:
v_solid = √(E/ρ)
where E is the Young’s modulus (a measure of the material’s elasticity) and ρ is the material’s density.
The Speed of Sound in Other Materials
The speed of sound can vary widely in other materials, depending on their unique properties. Here are some examples:
Material Speed of Sound (m/s)
Rubber 60
Gold 3,240
Diamond 12,000
Hydrogen 1,270
Helium 972
Carbon dioxide 259
The speed of sound in these materials can be calculated using the same formula as for solids:
v_material = √(E/ρ)
where E is the material’s elasticity and ρ is its density.
Practical Applications of the Speed of Sound
The understanding of the speed of sound in different mediums has numerous practical applications, including:
1. Ultrasound Imaging: The speed of sound in human tissue is used in medical ultrasound imaging to determine the depth and location of internal structures.
2. Sonar Systems: Sonar systems, used for underwater navigation and object detection, rely on the speed of sound in water to calculate the distance and position of submerged objects.
3. Acoustic Measurements: The speed of sound is used in various acoustic measurements, such as the determination of the speed of sound in air for the calibration of microphones and other acoustic
4. Meteorology: The speed of sound in air is used in meteorology to measure wind speed and direction, as well as to detect and locate lightning strikes.
5. Seismology: The speed of sound in different materials is used in seismology to study the Earth’s interior structure and detect underground resources, such as oil and gas deposits.
The speed of sound is a fundamental concept in physics that varies significantly depending on the medium through which it travels. By understanding the factors that influence the speed of sound, such
as elasticity and density, as well as the specific values for different materials, you can gain a deeper appreciation for the complexities of sound propagation and its practical applications in
various fields.
1. Kinsler, L. E., Frey, A. R., Coppens, A. B., & Sanders, J. V. (1999). Fundamentals of Acoustics (4th ed.). Wiley.
2. Urick, R. J. (1983). Principles of Underwater Sound (3rd ed.). Peninsula Publishing.
3. Rossing, T. D. (2007). Springer Handbook of Acoustics. Springer.
4. Kuttruff, H. (2007). Acoustics: An Introduction. CRC Press.
5. Beranek, L. L., & Mellow, T. J. (2012). Acoustics: Sound Fields and Transducers. Academic Press.
The techiescience.com Core SME Team is a group of experienced subject matter experts from diverse scientific and technical fields including Physics, Chemistry, Technology,Electronics & Electrical
Engineering, Automotive, Mechanical Engineering. Our team collaborates to create high-quality, well-researched articles on a wide range of science and technology topics for the techiescience.com
All Our Senior SME are having more than 7 Years of experience in the respective fields . They are either Working Industry Professionals or assocaited With different Universities. Refer Our Authors
Page to get to know About our Core SMEs. | {"url":"https://techiescience.com/speed-of-sound-in-different-mediums/","timestamp":"2024-11-07T21:56:59Z","content_type":"text/html","content_length":"102074","record_id":"<urn:uuid:f7a88feb-eaad-4090-8a72-2ac99bb71771>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00537.warc.gz"} |
LICI2085 L. Licinius (123) L. f. L. n. Mae. Murena
great grandson of
grandson of
son of
brother of
father of
stepfather of
• Praefectus Fabrum before 74 (Suolahti 1955) Expand
• Quaestor 74 (Broughton MRR II) Expand
□ 5 Licinius Murena and Sulpicius Rufus were colleagues in the quaestorship (Cic. Mur. 18), with careers closely parallel to that of Cicero. As they are not named as colleagues of Cicero, and
Murena soon afterwards became a Legate under Lucullus (see 73, Legates), I list them as Quaestors in 74. (Broughton MRR II)
□ Cic. Mur. 18. (Broughton MRR II)
• Legatus (Lieutenant) 73 Asia, Bithynia, Pontus (Broughton MRR II) Expand
□ Legate under Lucullus in Asia, Bithynia, and Pontus (Cic. Mur. 20 and 89). See 72, Legates. (Broughton MRR II)
• Legatus (Lieutenant) 72 Pontus (Broughton MRR II) Expand
□ Lucullus placed him in charge of the siege of Amisus (Plut. LUG. 15.1, and 19.7; Phlegon Trall. fr. 12, in FHG 3.606; cf. Plut. Luc. 19.1-7; App. Mith. 83; Strabo 12.3.14, 547c). See 69,
Legates. (Broughton MRR II)
• Legatus (Lieutenant) 69 Armenia (Broughton MRR II) Expand
□ Served as Legate in Armenia under Lucullus (Cic. Mur. 20, and 89; Plut. Luc. 25.6; 27.2). (Broughton MRR II)
• Praetor 65 urbanus, Rome (Broughton MRR II) Expand
□ Cic. Mur. 35-41 and 53; Plin. NH 33.53. (Broughton MRR II)
□ p. 753, footnote 414 (Brennan 2000)
• Proconsul 64 Gallia Transalpina (Broughton MRR II) Expand
□ Proconsul in Transalpine Gaul (Cic. Mur. 42 and 53 and 68-69 and 89, summo cum imperio; Har.Resp. 42). (Broughton MRR II)
□ Praef, fabr. before 74. Delete this entry both in MRR 2.484, and in the Index, 580, since Cicero in Mur. 73 refers to another, unnamed person (RS, CP). Procos. 64, in Transalpine Gaul. See
MRR 2.163. Against W. A. Allen, Jr., who uses Sall. Cat. 42.3 as evidence that Murena held the Cisalpine province too (CPh 48, 1953, 176-177) Badian has shown that he held only Transalpina
and that in Sallust the word citeriore is a mistake and should be emended to ulteriore (Mel. Piganiol 915-918). See above, on C. Calpurnius Piso (63), Cos. 67. Cos. 62. Badian (in Mnemai
Hulley, 97-101) defends the usually obelized or emended text in Cic. Att. 1.16.13, simul cum lege alia, as correct, and explains it as a reference to a law that was carried on December 10,
62, on the same day as Lurco (see MRR 2.179) entered office as a tribune of the plebs, still in the consulship of Licinius Murena and Iunius Silanus. He suggests that this was a consular law,
the Lex Iunia et Licinia (cf. Phil. 5.7-8; Leg. 3.11), and thus finds an exact date for its passage. Addendum. The phrase summum imperium in Cic. Mur. 89, taken in MRR 2.163 to indicate that
Murena held an imperium pro consule in Gallia Transalpina, is in fact used by Cicero not only to describe such an imperium (cf. Verr. 1.37 [Hortensius], and Rab. Perd. 3 [consules]) but also
an imperium pro praetore (cf. Verr. 2.2.14 [Verres] and 2.5.134) and that proposed for the Xviri under the Lex Agraria of Rullus (Leg. Agr. 1.9; 2.34, and 99). The phrase in Mur. 89 therefore
is not proof of an imperium pro consule, but does not exclude it. From an unpublished paper by Brad Nilsson, "The Governorship of L. Licinius Murena in 64 and 63 B.C." (Univ. of North
Carolina). (Broughton MRR III)
□ Additions and Corrections. M. Cary (CAH 9.499, note 2) suggested, on the basis of the phrase in citiore Gallia in Sall. Cat. 42.3, that the Legate C. Licinius Murena was temporarily governor
of both Gallic provinces. In a forthcoming note in Classical Philology (48 [1953]) Walter Allen Jr., points out that L. Licinius Murena was probably a Proconsul in command of both Transalpine
and Cisalpine provinces in 64 and 63, like Piso in 67 to 55. (Broughton MRR II)
• Proconsul 63 Gallia Transalpina (Broughton MRR II) Expand
□ Proconsul in Transalpine Gaul (see 64, Promagistrates) during the first part of the year, but left his brother in command there as Legate when he returned for the consular elections (Cic.
Mur. 89; see Legates). (Broughton MRR II)
□ Praef, fabr. before 74. Delete this entry both in MRR 2.484, and in the Index, 580, since Cicero in Mur. 73 refers to another, unnamed person (RS, CP). Procos. 64, in Transalpine Gaul. See
MRR 2.163. Against W. A. Allen, Jr., who uses Sall. Cat. 42.3 as evidence that Murena held the Cisalpine province too (CPh 48, 1953, 176-177) Badian has shown that he held only Transalpina
and that in Sallust the word citeriore is a mistake and should be emended to ulteriore (Mel. Piganiol 915-918). See above, on C. Calpurnius Piso (63), Cos. 67. Cos. 62. Badian (in Mnemai
Hulley, 97-101) defends the usually obelized or emended text in Cic. Att. 1.16.13, simul cum lege alia, as correct, and explains it as a reference to a law that was carried on December 10,
62, on the same day as Lurco (see MRR 2.179) entered office as a tribune of the plebs, still in the consulship of Licinius Murena and Iunius Silanus. He suggests that this was a consular law,
the Lex Iunia et Licinia (cf. Phil. 5.7-8; Leg. 3.11), and thus finds an exact date for its passage. Addendum. The phrase summum imperium in Cic. Mur. 89, taken in MRR 2.163 to indicate that
Murena held an imperium pro consule in Gallia Transalpina, is in fact used by Cicero not only to describe such an imperium (cf. Verr. 1.37 [Hortensius], and Rab. Perd. 3 [consules]) but also
an imperium pro praetore (cf. Verr. 2.2.14 [Verres] and 2.5.134) and that proposed for the Xviri under the Lex Agraria of Rullus (Leg. Agr. 1.9; 2.34, and 99). The phrase in Mur. 89 therefore
is not proof of an imperium pro consule, but does not exclude it. From an unpublished paper by Brad Nilsson, "The Governorship of L. Licinius Murena in 64 and 63 B.C." (Univ. of North
Carolina). (Broughton MRR III)
□ Additions and Corrections. M. Cary (CAH 9.499, note 2) suggested, on the basis of the phrase in citiore Gallia in Sall. Cat. 42.3, that the Legate C. Licinius Murena was temporarily governor
of both Gallic provinces. In a forthcoming note in Classical Philology (48 [1953]) Walter Allen Jr., points out that L. Licinius Murena was probably a Proconsul in command of both Transalpine
and Cisalpine provinces in 64 and 63, like Piso in 67 to 55. (Broughton MRR II)
• Consul 62 (Broughton MRR II) Expand
□ CIL 12.2.910, 911, 2663b; Cic. Flacc. 30; Fast. Amit., Degrassi 170f.; Dio 37, Index, and 39.1; Eutrop. 6.16; Chr. 354 (Silano et Murena); Fast. Hyd. (Silana et Murena); Chr. Pasc. (#);
Cassiod. See Degrassi 131, 490f. On the election of Silanus, and his part as Consul Designate in the debate in the Senate on the Catilinarian conspirators, see Cic. Cat. 4.7 and 11; Att.
12.21.1; Phil. 2.12; Sall. Cat. 50.4; 51.16; Plut. Cic. 14.6; 19.1; 20.3; 21.3; Cat. Min. 21.2; 22-23; App. BC 2.5-6; cf. Plut. Caes. 8.1; Dio 37.36; Schol. Gron. 287 Stangl. Murena was
prosecuted for bribery in his election, and defended successfully by Cicero (Cic. Mur., passim; Flacc. 98; Fin. 4.74; Quintil. Inst. Or. 4.1.75; 6.1.35; 11.1.69; Plin. Epist. 1.20.7; Plut.
Cic. 14.6; 35.3; Cat. Min. 21.3-6; Inim. Util. 9). He voted to condemn the conspirators (Cic. Dom. 134; Att. 12.21.1), and as Consul protected Cato during the disturbances at the beginning of
the year (Plut. Cat. Min. 28.2-3, cf. 21.6; see Tribunes of the Plebs). The two Consuls carried a law requiring copies of all proposed legislation to be deposited in the treasury (Cic. Att.
2.9.1; 4.16.5; Sest. 135; Vat. 33; Phil. 5.8; Leg. 3.11 and 46; Suet. Iul. 28.3; ,Schol. Bob. 140 Stangl). (Broughton MRR II)
□ Praef, fabr. before 74. Delete this entry both in MRR 2.484, and in the Index, 580, since Cicero in Mur. 73 refers to another, unnamed person (RS, CP). Procos. 64, in Transalpine Gaul. See
MRR 2.163. Against W. A. Allen, Jr., who uses Sall. Cat. 42.3 as evidence that Murena held the Cisalpine province too (CPh 48, 1953, 176-177) Badian has shown that he held only Transalpina
and that in Sallust the word citeriore is a mistake and should be emended to ulteriore (Mel. Piganiol 915-918). See above, on C. Calpurnius Piso (63), Cos. 67. Cos. 62. Badian (in Mnemai
Hulley, 97-101) defends the usually obelized or emended text in Cic. Att. 1.16.13, simul cum lege alia, as correct, and explains it as a reference to a law that was carried on December 10,
62, on the same day as Lurco (see MRR 2.179) entered office as a tribune of the plebs, still in the consulship of Licinius Murena and Iunius Silanus. He suggests that this was a consular law,
the Lex Iunia et Licinia (cf. Phil. 5.7-8; Leg. 3.11), and thus finds an exact date for its passage. Addendum. The phrase summum imperium in Cic. Mur. 89, taken in MRR 2.163 to indicate that
Murena held an imperium pro consule in Gallia Transalpina, is in fact used by Cicero not only to describe such an imperium (cf. Verr. 1.37 [Hortensius], and Rab. Perd. 3 [consules]) but also
an imperium pro praetore (cf. Verr. 2.2.14 [Verres] and 2.5.134) and that proposed for the Xviri under the Lex Agraria of Rullus (Leg. Agr. 1.9; 2.34, and 99). The phrase in Mur. 89 therefore
is not proof of an imperium pro consule, but does not exclude it. From an unpublished paper by Brad Nilsson, "The Governorship of L. Licinius Murena in 64 and 63 B.C." (Univ. of North
Carolina). (Broughton MRR III) | {"url":"https://romanrepublic.ac.uk/person/2085/?facet_view=person_search&selected_facets=office%3Apraefectus+fabrum&selected_facets=location%3AArmenia&selected_facets=office%3Apraefectus&selected_facets=office%3Aconsul&selected_facets=office%3Aproconsul&selected_facets=office%3Aconsul&selected_facets=office%3Aconsul&selected_facets=office%3Amilitary+positions&selected_facets=office%3Apraefectus&selected_facets=location%3Aurbanus&selected_facets=gender_exact%3AMale&page=1","timestamp":"2024-11-12T13:36:27Z","content_type":"text/html","content_length":"27587","record_id":"<urn:uuid:67d9e5cc-78b5-424d-8fa1-237e09065afa>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00460.warc.gz"} |
This Pied Piper of Hamelin
Investigate the different numbers of people and rats there could have been if you know how many legs there are altogether!
"The Pied Piper of Hamelin'' is a story you may have heard or read. This man, who is often dressed in very bright colours, drives the many rats out of town by his pipe playing - and the children
follow his tune.
Suppose that there were $100$ children and $100$ rats. Supposing they all have the usual number of legs, there will be $600$ legs in the town belonging to people and rats.
But now, what if you were only told that there were $600$ legs belonging to people and rats but you did not know how many children/rats there were?
The challenge is to investigate how many children/rats there could be if the number of legs was $600$. To start you off, it is not too hard to see that you could have $100$ children and $100$ rats;
or you could have had $250$ children and $25$ rats. See what other numbers you can come up with.
Remember that you have to have $600$ legs altogether and rats will have $4$ legs and children will have $2$ legs.
When it's time to have a look at all the results that you have got and see what things you notice you might write something like this:
a) $100$ Children and $100$ Rats - the same number of both,
b) $150$ Children and $75$ Rats - twice as many children as rats,
c) $250$ Children and $25$ Rats - ten times as many children as rats.
This seems as if it could be worth looking at more deeply. I guess there are other things which will "pop up'', to explore.
Then there is the chance to think about the usual question, "I wonder what would happen if ...?''
Getting Started
If you had one fewer rat, what could you replace it with to keep the number of legs the same?
How are you keeping track of what you have done?
Student Solutions
We had a good number of solutions sent in, thank you. Here we will feature those of you who've looked at many possibilities. From George at Linton Heights Junior SchoolӬ we had the following:
a) $0$ rats and $300$ children
b) $1$ rat and $298$ children
c) $5$ rats and $290$ children
d) $10$ rats and $280$ children
e) $25$ rats and $250$ children
f) $50$ rats and $200$ children
g) $100$ rats and $100$ children
h) $125$ rats and $50$ children
i) $150$ rats and $0$ children
From Patrick at Manorcroft Primary School we had this good explanation, well done:Ӭ
There are $148$ different combinations of child and rat because I figured out that you could replace $1$ rat with $2$ children (because one rat has twice as many legs as a child) so the maximum
possible children to rats are $298$ children to $1$ rat and the maximum possible rats to children is $149$ rats to $2$ children. If you take $1$ from $149$ to get $148$ possiblities.
Year $5Ӭ$ pupils from St Ambrose Catholic Primary School said;
There are many possible answers to this question, to find out how many children, you can start with the number of rats. You can go from $1$ rat to $149$ rats to work out how many children. The rule
is: multiply by $4$, take away from $600$, divide by $2$.
For example, $1$ rat= $298$ children because $1$x$4=4 600-4=596, 596$ divided by $2=298$.
If we start with the number of children the rule is: multiply by $2$, take away from $600$, divide by $4$.
For example, $2$ children $ = 149$ rats because $2$x$2=4$, $600-4= 596, 596$ divided by $4= 149$.
There are some patterns that we noticed, such as: If you take $1$ away from the amount of rats, it adds $2$ to the amount of children. For example: $86$ children, $107$ rats; $106$ rats, $88$
Matthew from Calcot Junior School also discovered this general rule. He said:
There is a formula which is however many rats you have, double that number and take it off 300. Your result is how many children there are.
The total number of legs is 600. Each child has 2 legs, each rat has 4. So the total number of legs can be written as:
Legs = Children x 2 + Rats x 4
That's very helpful, Matthew. By 'children', I think Matthew means the number of children, and 'rats' means the number of rats.
Because we know the number of legs, we can write:
600 = Children x 2 + Rats x 4
Then if we know either children or rats, we can use this to work out the missing one. For example, if there are 100 Children:
600 = 100 x 2 + Rats x 4
600 = 200 + Rats x 4
400 = Rats x 4
Rats = 100
So Rats = (Legs - 2 x Children) $\div$ 4
If you know legs and rats, to work out children:
Children = (Legs - 4xRats) $\div$ 2
This is extremely well explained, thank you Matthew. Matthew calculated all the possible answers, which you can see here. Thank you for sharing this with us, Matthew.
JamesӬ, who calls his school "BG"Ӭ sent in the following good explanation:
When working out the pattern for children/rats, I thought it would be a good idea to start with either everything as children or everything as rats. I decided to start with everything as children,
which would be $300$ children and $0$ rats, since $600$ halved is $300$.
Then, I knew if I wanted to get all possible solutions, I'd need to come up with a pattern. My pattern was adding some of the children's legs, to the rats each time. But of course, since rats have
twice as many legs as children, that wouldn't work, so I took away two children for every rat I added, as shown in the pattern below:
│Children │Rats│Legs │
│$300$ │$0$ │$600 + 0$ │
│$298$ │$1$ │$596+ 4$ │
│$296$ │$2$ │$592 + 8$ │
│$294$ │$3$ │$588 + 12$ │
I did that on and on, until I was certain, that every time, the legs would total up to $600$. By looking at the number of children, and noticing that it started at $300$, and got $2$ fewer each time,
I divided $300$ by $2$, and then added on the $1$ possibilty, with $0$ children, to work out there would be $151$ possible solutions for this.
These solutions are really good. Well done, keep submitting solutions to other activities.
Teachers' Resources
Why do this problem?
, based on the well-known story, opens the door to opportunities for doing mathematical calculations that can be explored with or without a spreadsheet. The story scenario is motivating and gives the
children a meaningful context in which to make sense of these calculations. It can be extended by allowing pupils to create further questions to answer.
Possible approach
Reading a version of The Pied Piper of Hamlin with the children so that they are familiar with the story before starting this investigation is a good way to start.
Then you could use the story to talk about the number of legs at particular times. You could also pose some theoretical questions, such as asking the children to imagine you've opened the book at a
page which had 10 legs on it in total. How many people and how many rats could there have been? Learners could work on this in pairs using mini-whiteboards and then you can talk about the
possiblities as a whole group. This will lead into general conversations about the number of animals/people and how the number of each affects the other.
You might also want to spend some time sharing ways of recording what the children are doing. Some might be drawing pictures or symbols for the rats/people, others might be recording numbers only. It
is worth talking about the different ways and the advantages/disadvantages of each. You may find that after some discussion, a few children adopt a different way of recording to the one they started
Key questions
How many legs do your rats have?
What could you replace a rat with?
Can you tell me about the way you are working out so many possibilities?
(And for the pupils who have gone much further)
What have you noticed about all your results so far?
Can you explain why . . . . . has happened?
Possible extension
Setting different target numbers of legs offers the chance to explore multiples of 2 and 4 and how they are related. Each target number will have a range of possible solutions. Encourage the children
to generalise about how the numbers of rats and people are related.
Another avenue for extension woul be to look at animals with other numbers of legs and perhaps three types of different-legged animals at the same time - eg. birds, spiders and pigs. This option
links with Noah.
Possible support
Some children may find the large numbers being considered in the presentation of the problem too high to make sense of so start them off with lower targets such as 20 or 30 legs. Noah is a similar
problem involving fewer legs. Some toys or pictures representing the different animals may help some pupils to get started. Modelling clay bodies with straw legs can also be very helpful. Children
could be given 20 lengths of straw and work on sharing them between people and rats as a way in to dealing with the larger numbers in a more abstract way. | {"url":"https://nrich.maths.org/problems/pied-piper-hamelin-0","timestamp":"2024-11-12T03:08:34Z","content_type":"text/html","content_length":"50555","record_id":"<urn:uuid:81428f03-f75a-4f11-8943-249dc00b11c8>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00558.warc.gz"} |
IBPS PO Prelims Quantitative Aptitude Quiz: 27th August 2019
IBPS PO Quantitative Aptitude Quiz
With the increasing level in exams, quantitative aptitude has become the ineluctable hitch. Generally, questions asked in this section are calculative and lengthy that consumes your time. This
subject can do wonders if you always keep a check on your accuracy, speed and time. Accuracy is what matters the most. To help you prepare the section we have provided a well-defined
IBPS PO Prelims Study Plan.
You can also Prepare from the
study note
to clear your basic concepts. Attempt this
quantitative aptitude quiz
and check your performance for the upcoming IBPS PO Prelims Study Plan 2019. Following is the quiz of 26th August, that inculcate the important topic from Boat and Stream.
Q1. The speed of a boat in still water is 24 kmph and the speed of the stream is 4 km/h. The time taken by the boat to travel from A to B downstream is 36 minutes less than the time taken by the same
boat to travel from B to C upstream. If the distance between A and B is 4 km more than the distance between B and C, what is the distance between A and B?
(a) 112 km
(b) 140 km
(c) 56 km
(d) 84 km
(e) 28 km
Q2. A man can row 30 Km upstream and 44 Km downstream in 10 hrs. Also, he can row 40 Km upstream and 55 Km downstream in 13 hrs. The rate of the current is:
(a) 3 Km/h
(b) 3.5 Km/h
(c) 4 Km/h
(d) 4.5 Km/h
(e) 5 km/h
Q3. A boat against the current goes at 4 km per hour and in the direction of the current at 8 km per hour. The boat in going to a place B from A in upward and downward direction takes 45 minutes.
Find the distance between A and B.
(a) 2.5 km
(b) 2.25 km
(c) 3 km
(d) 2 km
(e) 3.5 km
Q4. A boatman rows to a place 45 km distant and back in 20 hours. He finds that he can row 12 km with the stream in same time as 4 km against the stream. Find the speed of the stream.
(a) 3 km/hr
(b) 2.5 km/hr
(c) 4 km/hr
(d) Cannot be determined
(e) None of these
Q5. A person can row 15/2 km an hour in still water. He finds that it takes twice the time to row upstream than the time to row downstream. The speed of the stream is
(a) 2 km/hour
(b) 2.5 km/hour
(c) 3 km/hour
(d) 4 km/hour
(e) 3.5 km/hour
Q6. A man who can swim 48 m/minute in still water swims 200 m against the current and 200 m with the current. If the difference between those two times is 10 minutes, what is the speed of the current
(a) 30 m/min
(b) 31 m/min
(c) 29 m/min
(d) 32 m/min
(e) 33 m/min
Q7. The speed of boat in still water is 8 km/hr more than speed of current and the ratio of speed of boat in downstream to the speed of boat in upstream is 2: 1. Find the total distance covered by
boat in downstream in 4.5 hours?
(a) 76 km
(b) 84 km
(c) 72 km
(d) 78 km
(e) 80 km
Q8. 80% of boat’s speed in upstream is same as 48% of boat’s speed in downstream. Find the total time taken by boat to cover 30 km in upstream and 50 km in downstream if speed of stream is 2.5 km/hr?
Q9.A man can swim at 8 kmph in still water. He covers an upstream and downstream distance of 18 km between two points in 6 hours. Find the rate of stream.
(a) 5 km/h
(b) 6 km/h
(c) 7 km/h
(d) 4 km/h
(e) 2 km/h
Q10. A man swimming in stream which flows at 1.5 kmph finds that in a given time he can swim twice as far with the stream as he can against it. At what rate does he swim?
(a) 3.5 kmph
(b) 4 kmph
(c) 4.5 kmph
(d) 5 kmph
(e) 5.5 kmph
Directions (11-15): Find the approximate value of question mark (?) in the following questions?
Q11. 64.98% of 360.01-?% of 249.99=138.923
(a) 45
(b) 38
(c) 52
(d) 32
(e) 25
Q14. (3749.98-?)÷55.012=22.991
(a) 2465
(b) 2445
(c) 2495
(d) 2475
(e) 2485
Q15. (3416.023÷55.991)-(1133.96÷?)=18.989
(a) 13
(b) 17
(c) 23
(d) 27
You may also like to Read:
All the Best BA’ians for IBPS RRB Prelims Result!! | {"url":"https://www.bankersadda.com/ibps-po-quantitative-aptitude-quiz-27/","timestamp":"2024-11-10T02:41:44Z","content_type":"text/html","content_length":"618306","record_id":"<urn:uuid:3db3601a-5873-4c9f-92d8-cb6edfb3d865>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00841.warc.gz"} |
Permutation. Combination | Foundational Math Concepts
In mathematics, a permutation is a rearrangement of a set of objects in a specific order. Permutations are used to count the number of ways that a set of objects can be arranged. A permutation of a
set S is a one-to-one and onto mapping of S onto itself.
More formally, a permutation of a set S is a bijective function \( \sigma \): \(S \rightarrow S \) . In other words, \(\sigma \) (Sigma) is a function that maps every element in S to a unique element
in S, and every element in S is mapped to exactly once. We can represent a permutation \( \sigma \) of a set S by writing down its values in a particular order, for example:
\( \sigma = \begin{pmatrix} 1 & 2 & 3 & 4 \\ 3 & 1 & 4 & 2 \end{pmatrix} \)
This notation means that \( \sigma(1) = 3 \), \( \sigma(2)=1 \), \(\sigma(3)=4 \) and \( \sigma(4)=2 \). In other words, the first row represents the elements of S in their original order, and the
second row represents their order after applying the permutation \( \sigma \).
The number of permutations of a set S with \(n\) elements is denoted by \(n!\), which is read as "n factorial". The factorial function is defined as:
\( n!=n \cdot (n-1) \cdot (n-2) \ldots 2 \cdot 1 \)
For example, \(5!=5 \cdot 4 \cdot 3 \cdot 2 \cdot 1=120 \) , which means that there are 120 permutations of a set with 5 elements.
Permutations can be used to solve various counting problems. For example, suppose we have 5 different books and we want to arrange them on a shelf. The number of ways to arrange the books is given by
the number of permutations of a set with 5 elements, which is \( 5!=120 \).
Another example is the number of ways to select a committee of 3 people from a group of 10 people. The number of ways to select the committee is given by the number of permutations of a set with 10
elements taken 3 at a time, which is denoted by \(_{10} P_3 \) and is calculated as:
\( _{10} P_3 =\frac{10!}{(10-3)!} = 10 \cdot 9 \cdot 8 =720 \)
In general, the number of permutations of a set with \(n\) elements taken \(r\) at a time is denoted by \(_n P_r\) and is calculated as:
\( _n P_r= \frac{n!}{(n-r)!} \)
In conclusion, permutations are a fundamental concept in combinatorics and are used to count the number of ways that a set of objects can be arranged. The number of permutations of a set with n
elements is \(n!\), and the number of permutations of \(r\) elements taken from a set of \(n\) elements is \(_n P_r\). | {"url":"https://www.mathnirvana.com/math-rules/permutation-combination.htm","timestamp":"2024-11-14T03:37:43Z","content_type":"text/html","content_length":"11512","record_id":"<urn:uuid:f4e3ac69-6fed-4cc6-bde0-cf436e1b0ca7>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00344.warc.gz"} |
How do you find the derivative of f(x)=5e^x? | Socratic
How do you find the derivative of #f(x)=5e^x#?
1 Answer
$f ' \left(x\right) = 5 {e}^{x}$
All that is here is a constant, $5$, multiplied with the function ${e}^{x}$. When differentiating a function that is multiplied a constant, just differentiate the other function and then multiply
that by the constant.
Since the derivative of ${e}^{x}$ is also ${e}^{x}$, when you differentiate the function, the ${e}^{x}$ remains, and it is also multiplied by the $5$, giving the derivative of, again, $5 {e}^{x}$.
We can see this as:
$f ' \left(x\right) = \frac{d}{\mathrm{dx}} \left(5 {e}^{x}\right)$
Taking the constant out:
$f ' \left(x\right) = 5 \cdot \frac{d}{\mathrm{dx}} \left({e}^{x}\right)$
Since the derivative of ${e}^{x}$ is ${e}^{x}$:
$f ' \left(x\right) = 5 \cdot {e}^{x} = 5 {e}^{x}$
Impact of this question
52144 views around the world | {"url":"https://socratic.org/questions/how-do-you-find-the-derivative-of-f-x-5e-x","timestamp":"2024-11-14T17:47:39Z","content_type":"text/html","content_length":"33835","record_id":"<urn:uuid:a17cba66-3c7f-4da9-a282-0887896182a3>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00083.warc.gz"} |
Topic: A.9 Exponential Functions and Equations | Quizalize
Feel free to use or edit a copy
includes Teacher and Student dashboards
Track each student's skills and progress in your Mastery dashboards
With a free account, teachers can
• edit the questions
• save a copy for later
• start a class game
• automatically assign follow-up activities based on students’ scores
• assign as homework
• share a link with colleagues
• print as a bubble sheet
• 45s
A.9.D: Graph Exponential Functions
• 60s
A.9.B: Real-World Exponential Functions
• 60s
A.9.D: Graph Exponential Functions
• Q4
Which statement about the quadratic functions below is false?
The graphs of two of these functions have a minimum point.
The graphs of all these functions have the same axis of symmetry.
The graphs of two of these functions do not cross the x-axis.
The graphs of all these functions have different y-intercepts.
A.9.D: Graph Exponential Functions
• 60s
A.9.C: Growth & Decay Of Exponential Functions
• 60s
A.9.D: Graph Exponential Functions
• Q7
Points (3, 2) and (7, 2) are on the graphs of both quadratic functions f and g. The graph of f opens downward, and the graph of g opens upward. Which of these statements are true?
I only
II only
II and IV
I and III
A.9.D: Graph Exponential Functions
• Q8
The graph below shows the change in the value of a car over several years. Based on the information in the graph, which conclusion appears to be true?
The car lost about one-quarter of its value every year.
The car lost more of its value between years 9 and 10 than between years 1 and 2.
The car lost about one-half of its value every 3 years.
The car lost less of its value between years 9 and 10 than between years 1 and 2.
A.9.C: Growth & Decay Of Exponential Functions
• Type an Answer
A.9.C: Growth & Decay Of Exponential Functions
• Q10
The graph models A, the area in square feet of a rectangular porch with a length that is 0.56w less than 28 ft given a width of w feet. Based on the graph, what is the width in feet of the porch
with the greatest area?
A.9.D: Graph Exponential Functions
• Q11
The starting annual salary for an office worker at a company is $29,000. If the company awards an annual increase of 6.2%, which graph models this situation after the office worker receives x
annual increases?
A.9.D: Graph Exponential Functions
• 60s
A.9.E: Exponential Function Predictions
• Q13
The graph of an exponential function is shown on the grid. Based on the graph, which statement about the function is true?
The domain is the set of all real numbers greater than 4.
The range is the set of all real numbers less than 0.
The domain is the set of all real numbers less than 4.
The range is the set of all real numbers greater than 0.
A.9.A: Domain & Range Of Exponential Functions
• 60s
A.9.C: Growth & Decay Of Exponential Functions
• 60s
A.9.C: Growth & Decay Of Exponential Functions | {"url":"https://resources.quizalize.com/view/quiz/topic-a9-exponential-functions-and-equations-ddd26c7c-b5d5-4be7-8f3c-e0f0d7fcc06e","timestamp":"2024-11-08T17:41:55Z","content_type":"text/html","content_length":"235198","record_id":"<urn:uuid:d84e50c5-fda9-4d1e-a6d6-10c064c9e127>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00122.warc.gz"} |
Real-Time Tracking of Swiss Covid-19 Cases
Dr. Ryuta Yoshimatsu, Dr. Simon Hefti
When we talk about Covid-19, we sometimes refer to the effective reproduction number, Rt. Rt is one of the key metrics to evaluate where we stand in terms of epidemic growth. Simply put, it indicates
how many people an infected person infects on average at time t. When this value is above 1, the number of infections will grow exponentially and when it’s below 1, the opposite is expected. It’s
hard to get the real-time estimate of the Rt due to the reporting inefficiency. The latest Rt estimate available today provided by the Swiss Science Task Force, for example, is that from 2 weeks ago.
In this blog post, we explore one possible way to bridge this gap and to give an estimate to today’s Rt. We will first look at the number of total reported Covid-19 cases in Switzerland [data] since
the beginning of the epidemic and will study the dynamics of the infection waves. Then, we will model the time series and make predictions into the near-past (described below). Since we know that the
Rt is a function of the daily number of cases together with model assumptions, we will use our prediction to estimate the Rt of the missing days including today.
We acknowledge that the reported case numbers we used in this work [data] only serves as a rough proxy for the actual number of infections. These numbers were sometimes largely under-reported due to
overloading of the testing capacity and sometimes over-reported due testing errors and biases. Moreover, Rt estimates published by the Swiss Science Task Force, which we validated our prediction
against, is an estimate based on yet another estimate. We are by no means trying to claim that we have developed a powerful model that captures the full dynamics of Covid-19 infections or a perfect
prediction model to estimate the unknown Rt. We are just two curious data scientists trying to make sense of the data that we have at hand and share what we’ve learned. Enjoy!
Understanding the Reported Swiss Covid-19 Cases
The figure on the top shows the time evolution of the number of total (cumulative) reported Covid-19 cases and below is the number of total reported fatalities due to Covid-19 in Switzerland [data].
Looking at the plots, we spot periods of growth and plateau that come after the other.
This wave-like population growth is often modelled using the logistic equation. Logistic curves can conveniently capture the dynamics of an exponential growth followed by a linear growth and
eventually a saturation (plateau). For Covid-19, there have already been a lot of studies published using some variant of this equation, in other words, the generalized logistic equations: e.g.
paper1, paper2.
In this work, we used the stretched logistic equation (equation below), which also belongs to the family of the generalized logistic equations. In essence, the stretched logistic equation encodes
time dependency in the exponential growth rate. The assumption here is that the growth rate decreases over time due to external contributions such as a state intervention (lockdown) or the rise in
people’s awareness. Researchers have shown that the stretched logistic curve fits pretty well to the Covid-19 cases as well as to other epidemic spreads.
Since it seems like we have more than one cycle of growth and plateau, we take a linear combination of multiple stretched logistic curves to fit to the number of total cases:
The fit is performed using the least squares and the number of stretched logistic terms to be included (i in the above equation) in the model at time t is determined by comparing the square error
produced by separate models including 1 to m number of terms. The best fitting model as of March 15, 2021 contains three stretched logistic curves. The animation below shows this dynamic model
selection process. The grey and the blue curves are the observed number of total cases and the orange curve is the best fit using the above equation. The orange vertical dashed lines indicate t1, t2
and t3. Note that the fit deviates from the actual observation more at the onset of each growth. It’s generally understood that a good prediction could only be made in the second half of the sigmoid.
Fortunately, the fit seems to be stable today (March 15, 2021) but at the onset of another growth period, we might have to find a more robust model.
Shown below are the snapshots of the fit as of March 15, 2021 using a linear combination of three stretched logistic curves. This model does pretty well on the overall trend.
Estimating the latest Rt
Rt of today can be inferred from a disease transmission model, an initial probability distribution of the Rt and the history of the daily number of cases including that of today using a Bayesian
framework. The black curve in the plot below is the estimated Rt obtained using this approach and the red curve is the logarithm of daily number of cases. You can see in the graph that as soon as the
Rt exceeds the threshold, the number of cases starts increasing and vice versa when it goes below.
The difficulty in estimating the latest Rt is in obtaining the daily number of cases of today (red box in the above figure). There is an unavoidable time lag between an actual event of infection and
when this event is reported. For this reason, the Swiss Science Task Force waits typically for about 10–14 days until the reported number stabilises and then they estimate the Rt for that given day.
This is why the Rt provided on the website today only reflects the infection dynamics of 10–14 days ago.
The animation above demonstrates how the reporting efficiency evolved over the entire history of the epidemic. On the horizontal axis, the number of days since the first report is plotted, which
indicates how many days was needed until the number of reported cases stabilised. On the vertical axis is the normalized reported cases. The orange line is a fit to the observed data using an arctan
function. We can see that especially at the onset of the new wave, the reporting efficiency suffered dramatically.
From the first section of this blog post, we think we have a good model that fits well to the data. Leveraging this model, we want to give a prediction to the number of daily cases in the last two
weeks including today (forecast the near-past). Once we obtain this prediction, we feed this further into another model that gives us the estimate of Rt, which is our ultimate goal.
Let’s look one more time at the number of total cases (black curve in top figure top plot) and our fit using a linear combination of three logistic terms (red curve in top figure top plot). The model
does a great job fitting to the overall trend but fails to capture the local structure: i.e. seasonality. You can see this in the first order derivative of the number of total cases or the daily
number of cases (top figure bottom plot). There seems to be a strong seasonal dependency in the time series.
We use the red curve as a trend and subtract that from the black curve. The remaining is the “detrended” time series (above figure). Here, you can also clearly see a strong weekly seasonality
(oscillation) and a volatility (time dependent variance). To give a good prediction, we need to capture the seasonal components in our model as well.
We decomposed the detrended time series into the second order trend component, the seasonality and the remainder assuming an additive model. The presence of a weekly seasonality was not surprising
for us from looking at the original time series but the monthly seasonality was unexpected. The recent increase in the second order trend component may suggest the onset of the fourth wave and the
structure in the remainder implies a potential problem of describing the time series using the additive model: e.g. not being able to capture the time dependent magnitude of the seasonal fluctuations
or the volatility.
We performed generalized linear regression on the detrended time series with ARIMA errors and Fourier terms of base periodicities 7 (week) and 30.4 (month) as additional external regressors. The
model selection (AR order, MA order, number of Fourier terms per base frequency, amplitudes of the harmonic terms, etc.) was carried out using the Akaike information criteria. We then used the best
fit model to make a forecast on the detrended time series of the last two weeks (top right), superimposed those values on to the overall trend obtained from the stretched logistics curves, and
finally arrived at our near-past forecast (red wiggly curve in the bottom plot). Visit our GitHub repository for the details of the implementation.
Now that we have a prediction of the daily number of cases of the recent days including today, we could solve for the Rt directly using the analytical expression given by the Bayesian update rule.
Alternatively, we could reformulate the time series setting as a supervised learning problem and use machine learning to reverse engineer that update rule and give the prediction of the missing Rt.
We decided to take the latter approach to add another layer of fun (validation against the analytical solution shortly described).
Above is a snapshot of the dataset we constructed using the time series data: i.e. time, Rt and the number of daily cases. We engineered lag features, which are Rt and daily cases at prior time
steps, and rolling mean features, which are an aggregation of Rt and daily cases over a fixed window of prior time steps. These features provide information about the serial correlations between the
values to the dataset. Rt is our response variable and all the rest except for the time is our predictor. We performed Poisson regression using the gradient boosting algorithm. Visit our GitHub
repository for the details of the implementation.
We validated the gradient boosting model using the walk-forward method. The top plot in the above figure shows the root mean square error (RMSE) of the predicted Rt of the following 14 days from day
x, where x ranges from April 1, 2020 to March 1, 2021. At day x, the model is trained using the data only available up until that day. We can see that at the beginning of the epidemic, the model is
struggling to make a good prediction with RMSE > 0.2, but as the time goes by and the more data become available, it starts learning the function and the metric stabilises. It did suffer again at the
onset of the second growth but during the third growth period and thereafter the predictions worked well with RMSE < 0.005. | {"url":"https://d-one.ai/news/real-time-tracking","timestamp":"2024-11-08T14:23:08Z","content_type":"text/html","content_length":"49481","record_id":"<urn:uuid:994a4b1f-7ac0-4d06-a327-56c896ba5450>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00238.warc.gz"} |
How To Calculate And Understand Resistor Values
is a device that opposes the flow of electrical current. The bigger the value of a resistor the more it opposes the current flow. The value of a resistor is given in ohms and is often referred to as
its ‘resistance’.
Identifying Resistor Values
│Band Colour │1st Band│2nd Band│Multiplier x │Tolerance│
│Silver │ │ │÷ 100 │10% │
│Gold │ │ │÷ 10 │5% │
│Black │0 │0 │1 │ │
│Brown │1 │1 │10 │1% │
│Red │2 │2 │100 │2% │
│Orange │3 │3 │1000 │ │
│Yellow │4 │4 │10,000 │ │
│Green │5 │5 │100,000 │ │
│Blue │6 │6 │1,000,000 │ │
│Violet │7 │7 │ │ │
│Grey │8 │8 │ │ │
│White │9 │9 │ │ │
Example: Band 1 = Red, Band 2 = Violet, Band 3 = Orange, Band 4 = Gold
The value of this resistor would be:
2 (Red) 7 (Violet) x1,000 (Orange)
= 27 x 1,000
= 27,000 with a 5% tolerance (gold)
= 27k ohms
Too many zeros?
kilo ohms and mega ohms can be used:
1,000 ohms =1k
1,000k = 1M
Resistor Identification Task
Calculate the resistor values given by the bands shown below. The tolerance band has been ignored.
│1st Band│2nd Band│Multiplier x │Value│
│Brown │Black │Yellow │ │
│Green │Blue │Brown │ │
│Brown │Grey │Yellow │ │
│Orange │White │Black │ │
Calculating Resistor Markings
Calculate what the colour bands would be for the following resistor values.
│Value │1st Band│2nd Band│Multiplier x │
│180 ohms │ │ │ │
│3,900 ohms │ │ │ │
│47,000 ohms (47k) │ │ │ │
│1,000,000 ohms(1M) │ │ │ │
What does Tolerance mean?
Resistors always have a tolerance but what does this mean? It refers to the accuracy to which it has been manufactured. For example if you were to measure the resistance of a gold tolerance resistor
you can guarantee that the value measured will be within 5% of its stated value. Tolerances are important if the accuracy of a resistors value is critical to a designs performance.
Preferred Values
There are a number of different ranges of values for resistors. Two of the most popular are the E12 and E24. They take into account the manufacturing tolerance and are chosen such that there is a
minimum overlap between the upper possible value of the first value in the series and the lowest possible value of the next. Hence there are fewer values in the 10% tolerance range.
│E-12 resistance tolerance (± 10%) │
│E-24 resistance tolerance (± 5%) │
Resistor Identification
│1st Band│2nd Band│Multiplier x │Value │
│Brown │Black │Yellow │100,000 ohms │
│Green │Blue │Brown │560 ohms │
│Brown │Grey │Yellow │180,000 ohms │
│Orange │White │Black │39 ohms │
Resistor Markings
│Value │1st Band│2nd Band│Multiplier x │
│180 ohms │Brown │Grey │Brown │
│3,900 ohms │Orange │White │Red │
│47,000 ohms (47k) │Yellow │Violet │Orange │
│1,000,000 ohms (1M) │Brown │Black │Green │
Download a pdf version of this page
here read more »
If you found this article helpful and you would like to receive product updates and free electronic resources from us then sign up
. We hate spam too and promise to never sell or share your email address, and you can unsubscribe at any time.
38 comments
Hello sir,concerning the last exercises under values to determine the first band ,second band and multiplier ,there's a mix-up with the answer,you wrote for 1m ohms you wrote brown black and green,
instead of brown black and blue. I hope m correct
This site is rely helpfull keep it up sir.
You can use most multimeters to check resistance. Such as, https://www.kitronik.co.uk/26114-digital-auto-ranging-multimeter.html
What if a resistor did not have any colour bands on it, how could you determine its resistance?
Hi Kevin, thanks for your question. I consulted with Alasdair, one of our Engineers and here is his suggestion; “This is a tricky one. Resistors without colour bands tend to be the high power
variety, which makes sense in the amp context and its physical size. The ‘az’ marking could indicate it is part of that range of resistors produced by Ohmite, but I can’t find one which looks like
that (it could of course be pretty old and no longer made). Without knowing anything else, I think his best bet would be to get a similar size resistor which has a power rating higher than the peak
rated power for the amp.” I hope this helps.
Hello – I have a resistor that I need to replace in an audio amp. It has no color bands. However it is marked with what appears to be a small A connected to a Z and then 0.15 ohm. What type of
resistor is it and how can I tell the wattage? It is light gray in color and about 1/2" in size
Hi, try this: https://www.digikey.co.uk/en/resources/conversion-calculators/conversion-calculator-resistor-color-code-5-band
Please I have a resistor here I'm trying to calculate it's resistance, but it has five bands unlike the four bands you taught us, please how do I do this? Thanks
Hi Maju, You can test the actual resistance value of the resistor with a multimeter and then compare against the expected value of the resistor =/- the tolerance band value. The newer the component
the more likely it is that the tolerance band is accurate, as manufacturing methods are more precise now then they used to be. I hope this helps.
how will i know that resistor is withing or not within tolerance
You are indeed helpful. Keep it up. God bless you
Hi Joe, if the circuit still functions when the fuses are blown then the values of the resistors can be obtained using a multi-meter. If not, then the only option is to deploy ohms law and determine
what value of resistors are required by the circuit. It is difficult to advise without having the circuit here to test.
Hello, I have a problem with a 30 year old circuit. I have 2 resistors which lost their color coding due to heat. Also, nearby is a diode. The back of the pcb is dark and shows signs of heat. The 2
inline fuses, 0.15amp each are both blown. How can I determine the ohms of the resistors so I can replace them?
I've learnt something, thanks
Hi Reid, In the example you gave, the multiplier is x 10,000, for the result to come out as you expected it to the multiplier would have needed to be 1000. 18 × 1000 = 18,000 which = 18k ohms. 18 ×
10,000 = 180,000 which equals 180k ohms. I hope this helps.
ok, so example #3 says brown, gray, yellow. brown is 1. gray is 8, yellow is 10k ohms. so.. 18 × 10k ohms. that should equal 18,000 ohms shouldn't it? why is it 180,000 ohms? I've typed this into
multiple resistor calculators and they all say 180k ohms. What am I missing?
Hi Graham, thank you for your kind comments. We hope you continue to find us useful as you continue to explore electronics.
Hi gents, I am a chartered mechanical engineer and don't often get involved with electronics. I find a lot of websites are not that helpful, but I must congratulate you on this site which is lovely
to use and so helpful, well done and many thanks.
Hi Kevin, 1 Kiloohm is equal 0.001 Megaohm so that would be nowhere near the required value. Rob
Hi…I'm trying to replace a resistor in my drill charger, I've calculated it to be 3900Mohms. I Can only get 3.9Kohms, would I be able to use this? Many thanks. Kevin.
Hi Charles, Another way of writing it would be for silver to multiply by 0.01 and for gold to multiply by 0.1. I hope this helps.
Thanks a lot for your time and impartation. but how do you calculate the one with silver or golden multiplier (÷100,÷10)?
Hi Morgan, If you look again at the graphic and then again and the worked example and use the table to follow the worked example with. Then get a resistor that you know the value of, if you don't
have one to hand google one for the picture. Then with the resistor/picture use the table to arrive at the value that you know. Once you have successfully done it once it should make sense. As for
the tolerance band, this is for the manufacturer to let you know how far away from the stated value the actual value might be. For example if the tolerance band is silver you know that the actual
value might be +/- 10% different to the stated value. For a resistor that is marked as 100 ohms with a silver tolerance band, the actual value could be anywhere from 90 ohms – 110 ohms. Hope this
i am really confused, we have a test tomorrow and i still dont understand how to calculate the value of a resistor or the tolerance value or the tolerance range
Hi Benjamin, when designing your circuits and working out the values of the components you will need, you will be working with absolute values. Tolerance just lets you know the maximum/minimum
variation from the stated value you can expect from a particular resistor. When it comes time to order the parts, you would choose to source resistors with a suitable tolerance range for your
project. The majority of circuits are quite forgiving with respect to small variations in resistor value but choose a tolerance that will allow your circuit to function correctly even if the actual
variance is at the maximum. Once you have the resistors you can measure their actual resistance and update your calculations, if necessary. Hope this helps.
thank you for your kindness and offering , i now know how to determine the value of a resistor using color code system but how do you deal with the tolerance of the resistors connected in series? do
you have to accumulate the tolerance for each resistor or what?
You're welcome Jack!
Thanks a lot ! Your chart quickly provides the value of any 4 band resistor….brilliant.
If the resistor has four coloured bands; three of the bands will be roughly an equal distance apart and relatively close together, the fourth will be slightly further away from the other bands. You
should hold the resistor so the forth band is to the right then read the resistor values from left to right. I hope this helps.
But how to indentifd which one band 1 & 4 pleas say
If the resistor has four coloured bands; three of the bands will be roughly an equal distance apart and relatively close together, the fourth will be slightly further away from the other bands. You
should hold the resistor so the forth band is to the right then read the resistor values from left to right. I hope this helps.
how do calculate the value of a resistor using the colour band is it from left to right or right to left ?
Thank you for your good tuition on how to calculate the value for a resistor with colours. Am solved.
Hi Jason. There is a more pronounced gap on the right hand side (between colours three and four) whereas the first three colours have an equal size gap between them. Rob
How do I determine which end of the resistor has the first color band (from left to right)?
How do I determine which end of the resistor is used for determining the first color band (left to right)
A multimeter with a resistance setting would be able to read the resistance of a resistor for you.
hello sir, is there any other way to find the resistance of resister except color coding.. please tell me | {"url":"https://kitronik.co.uk/blogs/resources/how-to-calculate-and-understand-resistor-values","timestamp":"2024-11-09T03:12:38Z","content_type":"text/html","content_length":"335922","record_id":"<urn:uuid:8ca08a01-c422-49a6-9875-67af78d8f7ff>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00193.warc.gz"} |
RBL Bank FD Calculator – Calculate RBL Bank FD Maturity Amount Online
The FD interest and maturity amount can be calculated through the compound or simple interest methods. The interest earned and the maturity amount will vary with the frequency of interest payout,
which can be annual, semi-annual, quarterly, or monthly, depending on the bank’s policy. Some banks also offer specific FD schemes to cater to the duration of payouts the depositors may desire.
Here is the formula to calculate interest on FD through a simple interest method:
Simple Interest = (P * R * T)/100
Here, P = Principal amount, R = Rate of interest (%) and T = Tenure of deposit
Now, let’s take a simple example to understand how this formula works.
Suppose you deposit Rs. 2 lakh in RBL Bank Fixed Deposit for 10 years at an annual Interest rate of 6%. The interest is calculated on a yearly basis. Here, P = Rs. 2 lakh, R = 10% and T = 6 years.
Simple Interest = (2,00,000 *10 * 6) /100 = ₹1,20,000
Maturity Amount = Principal Amount + Simple Interest = ₹2,00,000 + ₹1,20,000
Hence, the value of FD maturity would be ₹3,20,000.
Let’s now look at compound interest calculation. Here is the formula to calculate RBL Bank FD Interest and maturity through the compound interest method:
A = P (1+r/n) ^ (n * t)
Here, A = Maturity Amount, P = Principal amount, r = Interest Rate, t = Tenure, and n = Number of times compounding in a year
Now, taking the same example as above, suppose you deposit Rs.2 lakh in RBL Bank Fixed Deposit for a tenure of 10 years at an annual Interest rate of 6%. The interest is calculated on a semi-yearly
Here, P = ₹2 lakh, R = 6% and T = 10 years, n= 2 (compounding is done semi-annually).
A = 2,00,000 (1+0.06/2) ^ (2*10)
Maturity Amount = ₹3,61,222
Compound Interest = ₹3,61,222 – ₹2,00,000 = ₹1,61,222 | {"url":"https://navi.com/calculator/rbl-fd-calculator","timestamp":"2024-11-05T13:15:08Z","content_type":"text/html","content_length":"470529","record_id":"<urn:uuid:f6603c81-bcc0-46a6-81d0-afe81cd7bf0e>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00105.warc.gz"} |
Solving Laplace’s equation in the upper half plane
In the previous post, I said that solving Laplace’s equation on the unit disk was important because the unit disk is a sort of “hub” of conformal maps: there are references and algorithms for mapping
regions to and from a disk conformally.
The upper half plane is a sort of secondary hub. You may want to map two regions to and from each other via a half plane. And as with the disk, there’s an explicit solution to Laplace’s equation on a
half plane.
Another reason to be interested in Laplace’s equation on a half plane is the connection to the Hilbert transform and harmonic conjugates.
Given a continuous real-valued function u on the real line, u can be extended to a harmonic function on the upper half plane by taking the convolution of u with the Poisson kernel, a variation on the
Poisson kernel from the previous post. That is, for y > 0,
This gives a solution to Laplace’s equation on the upper half plane with boundary values given by u on the real line. The function u is smooth on the upper half plane, and its limiting values as y →
0 is continuous.
Furthermore, u is the real part of an analytic function f = u + iv. The function v is the harmonic conjugate of u, and also equals the Hilbert transform of u. | {"url":"https://www.johndcook.com/blog/2022/11/23/laplace-half-plane/","timestamp":"2024-11-09T07:34:30Z","content_type":"text/html","content_length":"49009","record_id":"<urn:uuid:31f2d724-460f-45b1-a264-5eb0122a5327>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00405.warc.gz"} |
C++ Array Assignment
Imagine that we are writing a Matrix class that stores a 2D array. Our class has 3 data members: double* arr (a "flattened" array storing all of the elements in the matrix), unsigned int nrows
(number of rows), and unsigned int ncols (number of columns).
Implement a print function for Matrix objects that works with cout. E.g.,
Matrix M(...);
cout << M << endl;
//ostream& operator<<(ostream& out, const Matrix& M) { ... }
Your print function should separate elements on the same row with spaces, and each row should end with a newline. For example, if we printed a 3x3 identity matrix (1 in (1, 1), (2, 2), (3, 3) and 0
everywhere else), it should print as:
You may print an extra space at the end of each row if it makes your code simpler. You can assume that your function has access to the private data members of Matrix, and it should not call any
Matrix member functions
Need a custom answer at your budget?
This assignment has been answered 2 times in private sessions. | {"url":"https://codifytutor.com/marketplace/c-array-assignment-1881a317-3dff-4773-8588-9761808ce43c","timestamp":"2024-11-02T11:09:09Z","content_type":"text/html","content_length":"20785","record_id":"<urn:uuid:7a8119d1-d1ec-4361-9f9a-95b278f5a4de>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00264.warc.gz"} |
Trend Analysis
I have been trying to work on plotting some graphs using sas. But I cannot seem to figure this time series graph with frequency on the y-axis.
My dataset contains the following variables.
1. Year (Years 1990 – 2010)
2. Term ( 3 categories)
3. Class (4 categories)
I need Year on the Y-axis and Term as a group and class as a subgroup. I would kinda like to see a trend in frequency (something similar to trend analysis).
Any Thoughts?
11-05-2010 10:35 AM | {"url":"https://communities.sas.com/t5/Graphics-Programming/Trend-Analysis/td-p/29391","timestamp":"2024-11-10T23:51:52Z","content_type":"text/html","content_length":"225869","record_id":"<urn:uuid:bd9d1aec-f64b-45a2-9391-bf3cb6bf8054>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00143.warc.gz"} |
772 hectometers per square second to decimeters per square second
2,772 Hectometers per square second = 2,772,000 Decimeters per square second
Acceleration Converter - Hectometers per square second to decimeters per square second - 2,772 decimeters per square second to hectometers per square second
This conversion of 2,772 hectometers per square second to decimeters per square second has been calculated by multiplying 2,772 hectometers per square second by 1,000 and the result is 2,772,000
decimeters per square second. | {"url":"https://unitconverter.io/hectometers-per-square-second/decimeters-per-square-second/2772","timestamp":"2024-11-14T13:45:34Z","content_type":"text/html","content_length":"27027","record_id":"<urn:uuid:a3257e05-4c37-492b-8955-744875b35f9f>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00879.warc.gz"} |
Subtraction to 30 with regrouping
It is important to be able to subtract numbers to 30, to be able to determine how many you have left.
Start by asking students to determine which ball has the number shown by the question mark on the number line on the interactive whiteboard. Then practice a subtraction problem to 20 with students on
the number line.
Start by discussing the importance of being able to subtract to 30 and explain how to calculate to the ten. The learning goal is then presented in three ways, visually, in the abstract, and in story
form. You can select the method which best applies to your classroom situation by using the blue menu in the bottom right of the page. Otherwise, you can start at the first page with the visual
support. Use the image of fruit to explain how to subtract the two numbers. Then have students practice the next set on their own. Then the abstract subtraction problem is given. Tell students that
they can solve these by counting back in their heads, or by imagining a number line, or even by using blocks or a rekenrek. Discuss with students that it is useful to first count back to the ten, and
then to take away the rest. Have students practice with a set of abstract subtraction problems. Then the steps of solving a story problem are given, discuss these step by step and solve the given
story problem together. Then ask students to work in pairs and determine the difference in the next story problem. Discuss their answers as a class. Each method has the teacher explain first, then
practice together, and then has the students practice individually to check their understanding.Check that students are able to subtract to 30 regrouping tens by asking the following questions:- Why
is it useful to be able to subtract to 30?- How do you determine which number is first in a subtraction problem?- How do you regroup tens?
Students are given an abstract problem without visual support, and are then given story problems to solve. Ask how they solved the problems.
Do one of each kind of subtraction problems with the students (visual, abstract, and story). Then divide the class into groups. Give each group three dice. At the start of the first round each
student starts with 30 points. Each student throws the three dice and subtracts those numbers from their total of 30 points. The round ends when each student in the group has subtracted their numbers
from 30. The student who has the most points left, is the winner of the round. The second round, students start with 29 points, and in the third round, they start with 28 points (etc).
Students who have difficulty subtracting numbers to 30 can be supported by the use of manipulatives, like blocks. To help students regroup the tens, have students draw the subtraction problem on the
number line, and then have them count back to the previous ten, and then count on.
Gynzy is an online teaching platform for interactive whiteboards and displays in schools.
With a focus on elementary education, Gynzy’s Whiteboard, digital tools, and activities make it easy for teachers to save time building lessons, increase student engagement, and make classroom
management more efficient. | {"url":"https://www.gynzy.com/en-us/library/items/subtraction-to-30-with-regrouping","timestamp":"2024-11-11T07:24:25Z","content_type":"text/html","content_length":"553791","record_id":"<urn:uuid:c16f7d81-469c-4644-854e-c5bf8cdec3ee>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00370.warc.gz"} |
None in Tampa, FL // Tutors.com
I have years of elementary and high school math experience and I can make math simple and fun. I have taught ELA at every grade including Kindergarten and 12th grade. I enjoy seeing the success in
tutoring find challenge a student- then the self-esteem grows! I have references available at your convenience. Let’s chat today so we can build a plan.
Payment methods
Cash, Check, Venmo
Grade level
Pre-kindergarten, Elementary school, Middle school, High school, College / graduate school, Adult learner
Type of math
General arithmetic, Pre-algebra, Algebra, Geometry, Trigonometry, Pre-calculus, Calculus, Statistics
Photos and videos
Your trust is our top concern
, so businesses can't pay to alter or remove their reviews.
Learn more
She’s amazing! Assisted our teenager with a really tough math class and even managed to salvage her grade! Ms. Lauren was patient, kind, thoroughly knowledgeable and our daughter looked forward to
all of their sessions. She’s truly a gem and an inspiration to all tutors. So glad we found her!
September 06, 2024
Services offered
Reading And Writing Tutoring | {"url":"https://tutors.com/fl/tampa/math-tutors/l-mccaffrey-math-tutoring","timestamp":"2024-11-10T07:33:51Z","content_type":"text/html","content_length":"200777","record_id":"<urn:uuid:2c0f4c42-263d-462b-9828-2c0515b73f6e>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00160.warc.gz"} |
Risk glossary
This glossary is provided as a guide to some of the terms you will often find in the risks management field. It’s very long so you will have to open it on its own page.
A |B |C |D |E |F |G |H |I |J |K |L |M |N |O |P |Q |R |S |T |U |V |W |X |Y |Z
As low as reasonably practicable; a safety risk management requirement imposed by the Health and Safety at Work Act. Also known as SFARP, so far as reasonably practicable.
UK association of insurance and risk managers (in industry and commerce). For more details see their website.
The (UK) national forum for risk management in the public sector, formerly the association of local authority risk managers. For more details see their website.
ANZ Standard
Australian and New Zealand standard for risk management, AS/NZS 4360:2004. Used by The Risk Agenda as the basic standard for risk management. It has associated Risk Management Guidelines, HB
436:2004, although really the standard itself is only a guideline for of good practice. Now superseded by ISO 31000. See also the British Standard BS 31100 and the IRM Standard.
Baseline business model
The model of a project, business or organisation which is used to act as the baseline for a risk model. The key point is to make sure the two are consistent with no gaps or overlaps.
Bayesian network
An influence diagram (or directed acyclic graph) where the links are probabilistic; effectively conditional probabilities. You can use software to calculate them, for example to update probabilities
in the light of new information.
Bayes Theorem
The basic relationship between inverse conditional probabilities. Normally used to start from the probability of observed data given a hypothesis to provide the probability of a hypothesis given
observed data.
bell curve
Informal name for a graph of a continuous probability distribution, so-called because of its bell shape when the distribution is normal.
beta distribution
A continuous probability distribution on a finite range (a,b) for which the PDF is a power low at each end so that p(x)∝(x-a)^s(b-x)^t. The mean is ((t+1)a+(s+1)b)/(s+t+2) and the variance is (s+1)
binomial distribution
A discrete probability distribution which reflects the chance of n successes in N trials where the probability of success is p for each trial. The probability distribution, p(n), is (N!/n!(N-n)!)p^n
(1-p)^N-n, the mean is Np and the variance is Np(1-p).
BS 31100:2008
The British standard for ‘Risk management – Code of practice’. See also ISO 31000 and our RiskBite on standards.
The capital asset pricing model which is claimed to relate an asset’s expected return to its volatility on the market.
chi-square distribution
A continuous probability distribution which is a special case of the exponential distribution with a PDF proportional to x^n/2-1e^-x/2. The mean is n and the variance is 2n. This distribution is said
to be a chi-square distribution with n degrees of freedom and is the distribution of the sum of the squares of n identical normally distributed random variables.
central limit theorem
The sum of many independent random variables is approximately normal. This can frequently be used to carry out sanity checks on risk models, and particularly the so-called direct method.
An approach to decision making which is based on subjective probabilities and utilities. If you are not coherent in your approach to decisions you might become a money making machine.
conditional probability
The probability of an event, A, say, given that some other event, B, say, has happened. Written as p(A|B).
Outcome or impact of an event. There can be more than one consequence per event, they can be positive or negative, they can be expressed qualitatively or quantitatively and they should be expressed
in relation to the achievement of objectives. Taken from the ANZ standard.
A financial reserve which can be used to implement a contingency plan, ie the contingency plan is to spend the contingency if the risk materialises.
contingency plan
A risk reduction measure aimed at reducing the consequences of a risk if it materialises. The point is that the plan is implemented only if the risk event occurs.
A statement that two events or effects have a measurable interrelationship, for example through the correlation coefficient. If the correlation is not zero the events are not independent. If the
events are independent the correlation is zero. Be careful, the reverse of neither of these is necessarily true.
correlation coefficient (or rank correlation coefficient)
A measure of correlation involving the difference between the expected value of the product of two random variables and the product of the expected value, normalised by the two standard deviations.
The rank correlation coefficient is the same thing, but based on the rank of samples rather than the sample values themselves. (This is a bit sloppy. The sample correlation coefficient is an
estimator of the correlation coefficient; the rank correlation coefficient is a sample statistic that is not an estimator of anything.)
cumulative probability distribution
A way of representing a probability distribution, particularly for continuous random variables. It is the probability that the random variable is less than a specific value, and drawn as an S-curve.
A useful alternative to probability density functions which people do not understand.
decision tree
A tree like structure where the nodes are either decision points or event outcomes. They can be developed from influence diagrams and quantified using probabilities. It is then possible to calculate
the decision sequence which maximises expected utility.
Where there are interrelationships reflecting an absence of independence.
direct method
A Risk Agenda term for a simple technique for calculating a risk model. It involves summing the mean and variance of independent random variables and adopting the normal approximation suggested by
the central limit theorem.
estimating risk
The Risk Agenda term for guessing or calculating quantitative risk levels. This is based on the use of estimators in statistical inference.
A function of observed events which approximates to some parameter of the underlying distribution. For example the mean of a sample is an estimator of the mean of the underlying distribution.
The occurrence of a particular set of circumstances, certain or uncertain, a single occurrence or a series. Taken from the ANZ standard.
event tree
A risk model which comprises a branching structure which traces the events which can follow an initiating event. Often built into more comprehensive models with fault trees.
expected value
The probability weighted average of a random variable, also called the mean.
exponential distribution
A continuous probability distribution which represents the time to the next event in a Poisson process. The PDF is p(t)=fe^-ft for 0≤t, the mean is 1/f and the variance is 1/f^2.
fault tree
A risk model which comprises a branching structure which traces the ways a top event can arise from a number of base events. This is often used in reliability analysis. Often built into more
comprehensive models with event trees.
fN line
A measure of risk often used in safety risk models which estimate frequencies. It is the equivalent of an S-curve, but at each consequence level, N (eg the number of deaths), it plots the frequency
of events which have consequences equal to or higher than N.
A measure of probability per unit time often used in safety risk modelling. Technically it reflects a Poisson process rather then a one off probability of an event happening. The formal definition in
the ANZ standard is ‘a measure of the number of occurrences per unit time’. The Risk Agenda is not sure this helps.
gamma distribution
A continuous probability distribution for which the PDF is a power low multiplied by a decaying exponential: p(x)∝x^se^-ks. The mean is (s+1)/k and the variance is (s+1)/k^2.
Green Book
The UK treasury’s mandatory ‘guidance’ on the appraisal of expenditure including situations where risk is an issue. Specifically this requires the use of optimism biases to reflect risk prior to its
full estimation.
A term used in safety risk analysis to mean a situation with the potential to cause harm. It is therefore a precursor or source of risk, a term which in this context is taken to reflect the
probability of the harm occurring.
Health and Safety at Work Act
The main legal framework for health and safety at work on the UK. Spawns numerous other regulations affecting the need to carry out risk assessments and also for specific issues.
A common term for the consequences of a risk.
A statement of the absence of an interrelationship between two effects. The probability of two independent events is the product of their individual probabilities, and if the probability of two
events is the product of their individual probabilities they are idependent. Similarly the joint probability distribution of two independent random variables is the product of their individual
probability distributions, and if the joint probability distribution of two random variables is the product of their individual probabilties they are independent.
influence diagram
A net work of blobs joined to each other by arrows. The blobs represent events or effects and the arrows represent how one affects another.
Institute of Risk Management
UK based professional organisation for those interested in risk management, focussing on education. For more information see their website.
IRM Standard
Standard for risk management maintained by the IRM, AIRMIC and ALARM. Likely to become a British or European standard. It is more a statement of useful principles than a standard. See also the
Australian and New Zealand standard.
ISO 31000:2009
Standard for ‘Risk Management – Principles and guidelines’ issued after much discussion in 2009. This supersedes the excellent Australian and New Zealand standard. See also our RiskBite on standards.
law of large numbers
Informally, the idea that taking a large number of samples will stabilise the fraction of those in which a specific event occurs. Mathematically it is the assertion that the probability that the
fraction is a certain distance from the stabilised value decreases at least as fast as the inverse number of samples. This underpins the Monte Carlo method.
lognormal distribution
A continuous probability distribution for the random variable X=e^Y where Y is normally distributed with mean μ, say, and standard deviation σ. The mean is exp(μ+σ^2/2) and the variance is
An informal expression of the degree of belief that an event or events will occur. The formal definition is ‘used as a general description of probability or frequency which can be expressed
qualitatively or quantitatively’. This is taken from the ANZ standard.
mean value
Another name for expected value.
mixed distribution
A probability distribution which is partly discrete and partly continuous. If X is drawn with probability p from some distribution with mean μ, say, and standard deviation σ and X=0 with probability
1-p then the mean of X is pμ and the variance of X is pσ^2+p(1-p)μ^2.
Monte Carlo
An approximate technique for calculating risk models in which a large number of possible futures is explored by selecting randomly from the probability distributions of the input to develop the
probability distribution of the output.
An approach to decision making where there are a number of attributes which need to be traded off, including, perhaps, risk-related attributes. This is typically resolved using a scoring scheme.
normal distribution
A continuous probability distribution which is of fundamental importance as, for a given mean and variance, it is the minimally informative distribution. This is why, according to the central limit
theorem, the distribution of a sum of independent random variables is approximately normal. The PDF is p(x)=(2πσ^2)^-1/2exp(-(x-μ)^2/2σ^2) where the mean is μ and the standard deviation is σ.
A risk with a positive consequence for objectives. Contrasted with a threat.
optimism bias
A concept put forward by the UK Treasury in its Green Book on expenditure appraisal. It is a set of factors to be applied to project cost to reflect risk issues prior to their full evaluation.
Essentially this reflects Treasury frustration with over-optimistic estimates of project cost and duration.
Orange Book
Guidance from the UK Treasury on the implementation of risk management in Government departments.
A description of a point on a cumulative probability distribution. The 80th percentile, for example, is the value for which there is a 20% probability of a higher value and a 80% probability of a
A common convention for percentiles. For example P80 is the 80th percentile.
Poisson process
A random process in which the probability of an event occurring during a short period of length t is ft where f is the Poisson parameter, sometimes called the frequency. The number of times that the
event occurs in any period of length T has a Poisson distribution: p(n)=(fT)^nexp(-fT)/n!.
precautionary principle
One way of saying we have to be extra careful with safety and environmental risks. It official statement, from the Rio summit is: ‘where there are threats of serious or irreversible environmental
damage, lack of full scientific certainty shall not be used as a reason for postponing cost effective measures to prevent environmental degradation’. This is not coherent.
private finance initiative (PFI)
A form of public procurement in the UK which allows the public sector to gain the services of a new or improved capital asset through regular payment of a service charge. The asset is built or
refurbished though private finance. This is intended to allow a better risk allocation, and specifically to transfer risk from the public to the private sector. It is a subset of Public Private
Partnerships, PPP.
probability density function (PDF)
A representation of a probability distribution for a continuous random variable which is often not properly understood. The Risk Agenda prefers cumulative probability distributions.
A numerical expression of the degree of belief that an event will occur. It can be generated from the concept of a set of repeatable experiments and represents the fraction in which the event occurs.
This concept allows a theory of probability to be developed and used. The formal definition, taken from the ANZ standard is ‘a measure of the chance of occurrence expressed as a number between 0 and
probability distribution
A set of probabilities applied to an event or a random variable which reflects the likelihood that the event will occur or the random variable will take a certain value. Probability distributions may
be either discrete or continuous.
probability impact diagram (PID)
A term sometimes used for a risk matrix.
public private partnership (PPP)
A generic term for public procurement on a non-traditional basis of which PFI is the prime example.
random variable
A numerical function defined for a complete set of events which may or may not happen.
rank correlation
See correlation.
reliability analysis
The analysis of the risk that a component or system will not be functional at a given point in time.
residual risk
The risk remaining after the implementation of risk treatment. Taken from the ANZ standard.
The chance of something happening which will have an impact on objectives. The ‘something’ is often specified in terms of an event or circumstance and the consequences which may flow from it. Risk is
measured in terms of a combination of the consequences of an event and their likelihood. Risk may have a positive or a negative impact. Taken from the ANZ standard. Positive impact risks are known as
opportunities and those with a negative impact are sometimes known as threats.
risk analysis
A systematic process to understand the nature of and to deduce the level of risk. This provides the basis for risk evaluation and decisions about risk treatment. Taken from the ANZ standard.
risk assessment
The overall process of risk identification, risk analysis and risk evaluation. Taken from the ANZ standard.
risk appetite
The idea developed from risk criteria that some risks are acceptable and some are not. There is no formal definition and it is not a helpful concept.
risk averse
An inclination to take decisions regarding risk which reflect a preference for certainty over uncertainty. Technically this might mean a concave down utility function, that is decreasing utility for
higher amounts, but revealed behaviour often indicates greater aversion than this. The opposite is risk seeking.
risk criteria
The terms of reference by which the significance of risk is assessed. Taken from the ANZ standard.
risk evaluation
The process of comparing the level of risk against risk criteria. It assists in decisions about risk treatment. Taken from the ANZ standard.
risk identification
The process of determining what, when, why and how something could happen. Taken from the ANZ standard.
risk management
The culture, processes and structures that are directed towards realising potential opportunities whilst managing adverse effects. Taken from the ANZ standard.
risk management process
The systematic application of management policies, procedures and practices to the tasks of communicating, establishing the context, identifying, analysing, evaluating, treating, monitoring, and
reviewing risk. Taken from the ANZ standard.
risk management framework
The elements of an organisation’s management system concerned with managing risk. Management systems elements can include strategic planning, decision making, and other strategies, processes and
practices for dealing with risk. Taken from the ANZ standard.
risk matrix
A matrix of likelihood and consequence categories. Qualitative risk analysis assigns individual risks to cells of this matrix, possibly both with and without risk treatment.
risk map
An illustration of those aspects of a system which are important for generating the risk associated with the system and which determines the nature of the risk. At The Risk Agenda we have put
considerable effort into developing our ideas on risk mapping. We see it as an important step between the risk workshop and risk modelling, which also enables much more rigorous risk registers to be
generated. See our risk mapping page.
risk model
A quantitative model which comprises a set of inputs, a set of output linked to the inputs and a set of probability distributions on the inputs.
risk premium
The price sought or paid for accepting risk beyond the expected value. There is no real theory for what people pay or charge in most circumstances. However the capital asset pricing model is one
risk provision
An amount on top of baseline to account for with risk. Subtly different from contingency: contingency may eat into the baseline, or may only be part of the provision.
risk register
A database of risks which have been identified, typically including other information such as category, likelihood, consequence, ownership, treatment, responsibility for treatment and so on. Can be
used to draw the risk matrix.
risk seeking
An inclination to take decisions regarding risk which reflect a preference for risk or uncertainty over certainty. Technically this might mean a convex down utility function, that is increasing
utility for higher amounts, but revealed behaviour may indicate more enthusiasm for risk taking than could be explained by this. The opposite is risk aversion.
risk treatment
The process of selection and implementation of measures to modify risk. Risk treatment measures can include avoiding modifying sharing or retaining risk – see the 5 Ts. Taken from the ANZ standard.
risk workshop
A meeting of experts convened to carry out the initial risk identification of a system. The output can be used to generate a risk register and/or a risk map.
scatter chart
A useful form of presentation of risk in which, for example, the inputs and outputs of individual simulations in a Monte Carlo calculation are plotted, or the probability and consequence of a number
of risks.
schedule risk analysis
The analysis of project risks, particularly the time to complete the project.
A graph of a cumulative probability distribution, so-called because it moves from bottom left to top right with a sinuous shape.
sensitivity chart
A bar chart in which the dependence of outputs on inputs is drawn. The Risk Agenda prefers to draw these as vertical bars expressing the whole range and also the P20 and P80 values.
So far as reasonably practicable – see ALARP.
spider plot
A chart in which the relationships between an output and the inputs of a risk model are plotted. The vertical axis is the output and the horizontal axis is a normalised measure for the inputs, for
example fractional change from the mean, or the percentiles. The Risk Agenda prefers sensitivity charts.
standard deviation
A measure of the spread of a random variable. It is the root mean square distance from the mean of the random variable and therefore has the same units as the random variable. (Its square is the
variance which has simpler statistical properties.)
standardisation of PFI contracts, Version 3 (SoPC3)
A standardised form of contract for PFIs issued by the UK Treasury. This sets out the preferred risk allocation.
The 4 (or 5) Ts
The four classic measures for dealing with risk: ‘tolerate’, ie do nothing, ‘treat’, ie do something, ‘transfer’, ie insure or pass to a customer or contractor, and ‘terminate’, ie do something else.
The fifth is ‘take’ the opportunity. See also risk treatment.
A risk with a negative consequence for objectives. Contrasted with an opportunity.
tornado chart
A form of risk presentation of risk model results intended to demonstrate the importance of each input to an output. It is a bar chart of the (rank) correlation coefficient between the two variables.
If you draw it as horizontal bars in increasing order vertically it looks vaguely like a tornado.
triangle distribution
A continuous probability distribution which it is often convenient to use to represent bounded random variables in risk models. They are specified by thier minimum, maximum and most likely values
with a linear PDF between the minimum and most likely and the most likely and maximum. The mean is (Min+ML+Max)/3 and the variance is ((ML-Min)^2+(Max-ML)^2+(Max-Min)^2)/36.
uniform distribution
A discrete or continuous probability distribution where the probabilitity is split equally between the first N integers (discrete) or the values (continuous) within a range (a,b). For the discrete
distribution the probability distribution, p(n), is 1/N, the mean is (N+1)/2 and the variance is (N²-1)/12. For the continuous distribution the PDF is p(x)=1/(b-a) for a≤x≤b, the mean is (a+b)/2 and
the variance is (a-b)²/12.
A number representing an individual’s preference for outcome. A coherent decision maker maximises their expected utility.
value of information
In decision theory the amount (if any) by which the expected utility is increased if you can acquire more information about the probability of outcomes. The idea is you get the information if the
cost of doing so is less than the value.
A measure of the spread of a random variable. It is the mean square distance from the mean of the random variable. Its square root is the standard deviation which has the units of the underlying
variable and is thus a direct measure of spread.
weighted average cost of capital (WACC)
The cost of capital averaged across all the sources of finance. This sets the required rate of return for a project. It will therefore be higher if there is high risk equity, reflecting a risk
premium. It is an open question whether this adequately accounts for the risks involved and people’s behaviour.
Weibull distribution
A continuous probability distribution which represents a process in which the rate changes with time. The PDF is p(t)=abt^b-1exp(-at^b). The mean is a^-1/bΓ(1+1/b) and the variance is
willingness to pay / willingness to accept
A common measure, used in public policy, of the value of a change including something which increases or reduces risk. This might be expressed explicitly or revealed through people’s behaviour. In
general the willingness to pay for a benefit such as reduced risk is less than the payment people might be prepared to accept for increased risk. This means policy makers prefer revealed willingness
to pay as they would otherwise not get anything done.
Deprecated: Function get_magic_quotes_gpc() is deprecated in D:\Inetpub\vhosts\riskagenda.com\httpdocs\www\wp-includes\formatting.php on line 4819
Deprecated: Function get_magic_quotes_gpc() is deprecated in D:\Inetpub\vhosts\riskagenda.com\httpdocs\www\wp-includes\formatting.php on line 4819
Deprecated: Function get_magic_quotes_gpc() is deprecated in D:\Inetpub\vhosts\riskagenda.com\httpdocs\www\wp-includes\formatting.php on line 4819
Deprecated: Function get_magic_quotes_gpc() is deprecated in D:\Inetpub\vhosts\riskagenda.com\httpdocs\www\wp-includes\formatting.php on line 4819
Deprecated: Function get_magic_quotes_gpc() is deprecated in D:\Inetpub\vhosts\riskagenda.com\httpdocs\www\wp-includes\formatting.php on line 4819
Deprecated: Function get_magic_quotes_gpc() is deprecated in D:\Inetpub\vhosts\riskagenda.com\httpdocs\www\wp-includes\formatting.php on line 4819 | {"url":"http://riskagenda.com/2018/05/04/risk-glossary/","timestamp":"2024-11-06T21:53:10Z","content_type":"text/html","content_length":"116914","record_id":"<urn:uuid:2aed975f-2dd2-44c8-afa9-b3333d4cf389>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00262.warc.gz"} |
Fourier coefficients attached to small automorphic representations of ${\mathrm{SL}}_n(\mathbb{A})$
We show that Fourier coefficients of automorphic forms attached to minimal or next-to-minimal automorphic representations of ${\mathrm{SL}}_n(\mathbb{A})$ are completely determined by certain highly
degenerate Whittaker coefficients. We give an explicit formula for the Fourier expansion, analogously to the Piatetski-Shapiro-Shalika formula. In addition, we derive expressions for Fourier
coefficients associated to all maximal parabolic subgroups. These results have potential applications for scattering amplitudes in string theory.
arXiv e-prints
Pub Date:
July 2017
□ Mathematics - Representation Theory;
□ High Energy Physics - Theory;
□ Mathematics - Number Theory;
□ 11F70;
□ 22E55;
□ 11F30
55 pages | {"url":"https://ui.adsabs.harvard.edu/abs/2017arXiv170708937A/abstract","timestamp":"2024-11-04T20:23:43Z","content_type":"text/html","content_length":"37448","record_id":"<urn:uuid:42cbd50b-2271-409a-a65f-580ec47ec3c5>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00100.warc.gz"} |
Thermal instability in a two-component magnetized plasma
Attention is given to the origin of the discrepancies which arise in solving, either in the magnetohydrodynamic or two-fluid approximations, the problem of thermal instability in a radiating
magnetized homogeneous plasma in the case of an arbitrary angle between the wave vector and the equilibrium magnetic field. In the former approximation the displacement current is disregarded, but
this assumption is at variance with the system of two-fluid plasma equations. It is shown that, in the case of plasma motions approximately perpendicular to the magnetic field, the region of
overstable perturbations may be wider than in the case of motions strictly perpendicular to the field.
Pisma v Astronomicheskii Zhurnal
Pub Date:
November 1991
□ Energy Dissipation;
□ Magnetohydrodynamic Stability;
□ Plasma Dynamics;
□ Solar Magnetic Field;
□ Thermal Instability;
□ Two Fluid Models;
□ Magnetization;
□ Maxwell Equation;
□ Plasma Physics | {"url":"https://ui.adsabs.harvard.edu/abs/1991PAZh...17.1031V/abstract","timestamp":"2024-11-14T16:00:34Z","content_type":"text/html","content_length":"35507","record_id":"<urn:uuid:9775f27f-9524-4717-8d3c-071b637c487d>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00569.warc.gz"} |
Conversions Table
Square inches
Square centimeters
Square decameters
Square decimeters
Square feet
Square hectometers
Square kilometers
Square meters
Square miles
Square millimeters
Square yards
Square attometers
Square chains
Square exameters
Square femtometers
Square furlongs
Square gigameters
Square megameters
Square microns
Square nanometers
Square petameters
Square picometer
Square terameters
Square yoctometers
Square yottameters
Square zeptometers
Square zettameters
Top area converters
Other area converters
• Square inches to arpents
• Square inches to barns
• Square inches to roods
• Square inches to square attometers
• Square inches to square chains
• Square inches to square exameters
• Square inches to square femtometers
• Square inches to square furlongs
• Square inches to square gigameters
• Square inches to square megameters
• Square inches to square microns
• Square inches to square nanometers
• Square inches to square petameters
• Square inches to square picometer
• Square inches to square terameters
• Square inches to square yoctometers
• Square inches to square yottameters
• Square inches to square zeptometers
• Square inches to square zettameters | {"url":"https://unitconverter.io/square-inches","timestamp":"2024-11-12T01:10:51Z","content_type":"text/html","content_length":"27905","record_id":"<urn:uuid:7d9f70d8-729a-4533-90f6-b90015dddee7>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00729.warc.gz"} |
Bayesian Probability - LessWrong
Bayesian probability represents a level of certainty relating to a potential outcome or idea. This is in contrast to a frequentist probability that represents the frequency with which a particular
outcome will occur over any number of trials.
An event with Bayesian probability of .6 (or 60%) should be interpreted as stating "With confidence 60%, this event contains the true outcome", whereas a frequentist interpretation would view it as
stating "Over 100 trials, we should observe event X approximately 60 times."
The difference is more apparent when discussing ideas. A frequentist will not assign probability to an idea; either it is true or false and it cannot be true 6 times out of 10.
Blog posts
See also
External links
• BIPS: Bayesian Inference for the Physical Sciences | {"url":"https://www.lesswrong.com/tag/bayesian-probability?version=1.32.0","timestamp":"2024-11-11T08:31:17Z","content_type":"text/html","content_length":"87597","record_id":"<urn:uuid:956f74da-ab00-4c8f-83e8-6e763eed2ffc>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00540.warc.gz"} |
Cost curves: Decoding Marginal Cost of Production through Cost Curves - FasterCapital
Cost curves: Decoding Marginal Cost of Production through Cost Curves
1. Introduction to Cost Curves and Marginal Cost of Production
Cost curves are an essential tool in economics that help businesses understand the relationship between production costs and output levels. By analyzing cost curves, firms can make informed decisions
about pricing, production levels, and profitability. One crucial aspect of cost curves is the concept of marginal cost of production, which refers to the additional cost incurred by producing one
more unit of output. understanding marginal cost is vital for businesses as it helps them optimize their production processes and maximize profits.
From a microeconomic perspective, cost curves provide valuable insights into a firm's production efficiency and cost structure. They illustrate how costs change as output levels vary, allowing
businesses to identify economies or diseconomies of scale. For instance, initially, as a firm increases its production, it may experience economies of scale, resulting in lower average costs per
unit. However, at some point, further increases in output may lead to diseconomies of scale due to inefficiencies or capacity constraints.
1. Total Cost (TC) Curve: The total cost curve represents the relationship between the total cost incurred by a firm and the quantity of output produced. It is derived by summing up all the fixed and
variable costs associated with production. The TC curve typically exhibits an upward slope since higher levels of output require more resources and incur higher costs.
2. Average Total Cost (ATC) Curve: The average total cost curve shows the average cost per unit of output produced. It is obtained by dividing the total cost by the quantity of output. Initially, as
output increases, ATC tends to decrease due to economies of scale. However, beyond a certain point, ATC starts rising due to diseconomies of scale.
3. Marginal Cost (MC) Curve: The marginal cost curve represents the additional cost incurred when producing one more unit of output. It is derived by calculating the change in total cost divided by
the change in quantity produced (TC/Q). The MC curve intersects the ATC and AVC (average variable cost) curves at their minimum points. This intersection is crucial as it indicates the optimal level
of production where costs are minimized.
For example, let's consider a bakery that produces cakes. Initially, as the bakery increases its cake production, it benefits from bulk purchasing discounts on ingredients and more efficient use of
equipment. This leads to lower average costs per cake, resulting in a downward-sloping ATC curve. However, if the bakery expands too rapidly without investing in additional equipment or hiring more
skilled labor, it may experience diseconom
Introduction to Cost Curves and Marginal Cost of Production - Cost curves: Decoding Marginal Cost of Production through Cost Curves
2. What is Marginal Cost?
When it comes to analyzing the cost of production, understanding the concept of marginal cost is crucial. Marginal cost refers to the additional cost incurred by producing one more unit of a good or
service. It plays a significant role in decision-making processes for businesses, as it helps determine the optimal level of production and pricing strategies. By examining how marginal costs change
with each additional unit produced, companies can make informed decisions about resource allocation and profit maximization.
To grasp the concept of marginal cost fully, let's explore it from different perspectives:
1. Definition and Calculation:
Marginal cost is calculated by dividing the change in total cost by the change in quantity produced. Mathematically, it can be expressed as follows:
Marginal Cost = (Change in Total Cost) / (Change in Quantity)
2. Relationship with Total Cost:
Marginal cost is closely related to total cost but differs in its focus on incremental changes. While total cost considers all costs incurred throughout production, marginal cost focuses solely on
the additional costs associated with producing one more unit. As a result, marginal cost tends to increase as production levels rise due to diminishing returns or economies of scale.
3. Impact on Production Decisions:
Understanding marginal cost is essential for optimizing production levels. When marginal cost is lower than the price at which a product can be sold, it is profitable to produce more units.
Conversely, if marginal cost exceeds the selling price, producing additional units would result in losses. Therefore, businesses aim to produce up to the point where marginal cost equals marginal
revenue to maximize profits.
4. shape of Marginal cost Curve:
The shape of the marginal cost curve varies depending on economies of scale and diminishing returns. Initially, as production increases, economies of scale may lead to decreasing marginal costs due
to factors such as bulk purchasing discounts or specialization benefits. However, beyond a certain point, diminishing returns set in, causing marginal costs to rise. This results from the need for
additional resources or increased labor input to maintain production levels.
To illustrate the concept of marginal cost, let's consider a hypothetical example. Suppose a bakery produces 100 loaves of bread per day at a total cost of $500. By increasing production to 101
loaves, the total cost rises to $505. In this case, the marginal cost of producing one additional loaf is $5 ($505 - $500). As long as the selling price exceeds $5, it would be profitable
What is Marginal Cost - Cost curves: Decoding Marginal Cost of Production through Cost Curves
3. Exploring the Relationship between Marginal Cost and Production Quantity
understanding the relationship between marginal cost and production quantity is crucial for businesses to make informed decisions about their production levels and pricing strategies. Marginal cost
refers to the additional cost incurred by producing one more unit of a product or service. It is an essential concept in economics that helps determine the optimal level of production and maximize
From a microeconomic perspective, the relationship between marginal cost and production quantity can be explained using the law of diminishing returns. Initially, as production increases, marginal
cost tends to decrease due to economies of scale. This means that as more units are produced, the average cost per unit decreases, resulting in lower marginal costs. For example, when a bakery
produces its first few loaves of bread, it incurs high fixed costs such as rent and equipment. However, as production increases, these fixed costs are spread over a larger number of units, leading to
lower marginal costs.
However, beyond a certain point, increasing production further leads to diminishing returns. This occurs when additional units of output require more resources but result in smaller increases in
output. As a result, marginal cost starts to rise. For instance, if a factory is operating at full capacity and decides to produce more units than it can handle efficiently, it may need to hire
additional workers or invest in new machinery. These additional costs increase the marginal cost per unit.
To delve deeper into this relationship between marginal cost and production quantity, let's explore some key insights:
1. U-shaped Cost Curves: The relationship between marginal cost and production quantity is often represented graphically through U-shaped cost curves. Initially, the curve slopes downward due to
economies of scale but eventually starts sloping upward due to diminishing returns.
2. break-even point: The break-even point occurs when total revenue equals total costs. At this point, the marginal cost intersects with the average total cost curve. Any level of production beyond
the break-even point generates profit, while production below it results in losses.
3. Optimal Production Level: Businesses aim to produce at the level where marginal cost equals marginal revenue. This ensures that each additional unit produced contributes positively to overall
profitability. If marginal cost exceeds marginal revenue, it is more cost-effective to reduce production.
4. Pricing Strategies: Understanding the relationship between marginal cost and production quantity helps businesses determine their pricing strategies. By considering both fixed and variable costs,
companies can set prices that cover their expenses while remaining competitive in the market.
Exploring the
Exploring the Relationship between Marginal Cost and Production Quantity - Cost curves: Decoding Marginal Cost of Production through Cost Curves
4. Fixed, Variable, and Total Cost
Cost curves are an essential tool in understanding the economics of production. They provide valuable insights into the relationship between the quantity of output produced and the corresponding
costs incurred. By analyzing cost curves, businesses can make informed decisions about pricing, production levels, and overall profitability. In this section, we will delve into the different types
of cost curves: fixed, variable, and total cost.
1. Fixed Cost Curve:
Fixed costs are expenses that do not change with the level of production. These costs include rent, salaries of permanent employees, insurance premiums, and depreciation of capital equipment. The
fixed cost curve is a horizontal line because regardless of the quantity produced, fixed costs remain constant. For example, consider a bakery that pays $2,000 per month in rent. Whether they produce
100 loaves of bread or 1,000 loaves, the rent expense remains unchanged.
2. Variable Cost Curve:
Variable costs are expenses that vary with the level of production. These costs include raw materials, direct labor wages, and utilities directly related to production. The variable cost curve slopes
upward as output increases because higher production requires more resources and incurs additional costs. For instance, a car manufacturer may experience increasing variable costs as they produce
more vehicles due to the need for additional raw materials and labor.
3. Total Cost Curve:
The total cost curve represents the sum of fixed and variable costs at each level of production. It is derived by adding the corresponding points on the fixed and variable cost curves. Initially, the
total cost curve rises at a decreasing rate due to economies of scale as output increases, average costs decrease due to spreading fixed costs over a larger quantity produced. However, beyond a
certain point called the minimum efficient scale (MES), average costs start rising again due to diseconomies of scale caused by inefficiencies or resource constraints.
4. Marginal Cost Curve:
The marginal cost curve shows how much it costs to produce one additional unit of output. It is derived by calculating the change in total cost when output increases by one unit. The marginal cost
curve intersects the average variable cost and average total cost curves at their minimum points. This intersection indicates the most efficient level of production, where marginal cost equals
average cost.
understanding these cost curves is crucial for businesses to optimize their production processes and make informed decisions. By analyzing the relationships between fixed, variable, and total costs,
companies can identify opportunities for cost reduction, determine pricing strategies, and assess the impact of changes in production levels on profitability.
Fixed, Variable, and Total Cost - Cost curves: Decoding Marginal Cost of Production through Cost Curves
5. U-shaped, L-shaped, and Constant
Cost curves are an essential tool in understanding the relationship between the quantity of goods produced and the corresponding costs incurred by a firm. By analyzing these curves, businesses can
make informed decisions regarding production levels, pricing strategies, and overall profitability. In this section, we will delve into three common shapes of cost curves: U-shaped, L-shaped, and
constant. Each shape represents a distinct pattern in cost behavior and provides valuable insights into a firm's production process.
1. U-shaped Cost Curve:
The U-shaped cost curve is characterized by initially decreasing costs followed by increasing costs as production levels rise. This shape is often observed in industries where economies of scale play
a significant role. Initially, as production increases, fixed costs (such as machinery or infrastructure) are spread over a larger output, leading to a decline in average total costs. However, beyond
a certain point, diminishing returns set in, causing marginal costs to rise. This occurs when additional units of output require more resources but contribute less to overall efficiency gains. For
example, consider an automobile manufacturer that experiences lower average costs per vehicle as it increases production due to economies of scale. However, at some point, adding more workers or
equipment may lead to congestion on the assembly line or increased coordination challenges, resulting in higher marginal costs.
2. L-shaped Cost Curve:
In contrast to the U-shaped curve, the L-shaped cost curve exhibits constant or nearly constant marginal costs regardless of the level of production. This shape is often associated with industries
where fixed costs dominate total costs and economies of scale are limited. In such cases, average total costs decrease rapidly at low levels of output but reach a minimum relatively quickly. After
this point, further increases in production have little impact on average total costs since fixed costs remain unchanged. An example can be seen in software development companies that invest heavily
in research and development (R&D) for new products or services. The initial R&D expenses are substantial but do not significantly affect marginal costs once the product is developed and ready for
mass production.
3. Constant Cost Curve:
A constant cost curve, as the name suggests, indicates that both average total costs and marginal costs remain constant across different levels of production. This shape implies that firms can expand
output without experiencing any cost advantages or disadvantages. It often arises in industries where resources are readily available at stable prices, and there are no significant economies or
diseconomies of scale. For instance, consider a small bakery that produces a fixed number of loaves of bread each day. The costs associated
U shaped, L shaped, and Constant - Cost curves: Decoding Marginal Cost of Production through Cost Curves
6. Interpreting the Slope and its Significance
When it comes to understanding the cost of production, analyzing marginal cost curves can provide valuable insights. These curves depict the relationship between the quantity of output produced and
the corresponding marginal cost incurred. By examining the slope of these curves, we can gain a deeper understanding of how costs change as production levels vary.
1. The slope of a marginal cost curve represents the rate at which costs change with respect to changes in output. A positive slope indicates that as production increases, so does the marginal cost.
This suggests that each additional unit of output requires more resources and incurs higher costs. For example, consider a bakery that produces bread. As they increase their production from 100
loaves to 200 loaves per day, they may need to hire additional workers or invest in more equipment, leading to higher costs per loaf.
2. Conversely, a negative slope on a marginal cost curve implies decreasing marginal costs as production levels rise. This scenario is less common but can occur when economies of scale are present.
Economies of scale refer to the cost advantages gained by increasing production volume. For instance, a car manufacturer may experience lower costs per vehicle as they produce more units due to bulk
purchasing discounts or improved efficiency in their assembly line.
3. A flat or constant slope on a marginal cost curve indicates constant marginal costs regardless of changes in output levels. This situation occurs when there are no significant economies or
diseconomies of scale present. In such cases, each additional unit produced incurs the same amount of cost as the previous one. An example could be a software company that develops and sells licenses
for its product. The cost of producing an additional license remains constant regardless of how many licenses have already been sold.
4. The significance of interpreting the slope lies in its implications for decision-making and profitability. Understanding how costs change with varying levels of production can help businesses
optimize their operations. For instance, if a company observes increasing marginal costs, they may need to evaluate their production processes and identify areas where efficiency can be improved to
reduce costs. On the other hand, if decreasing marginal costs are observed, the company may consider expanding production to take advantage of economies of scale and increase profitability.
5. Additionally, analyzing the slope of marginal cost curves can aid in pricing decisions. By understanding how costs change as output levels vary, businesses can determine the optimal price point
that maximizes profits. For example, if marginal costs are increasing rapidly
Interpreting the Slope and its Significance - Cost curves: Decoding Marginal Cost of Production through Cost Curves
7. Economies of Scale, Diseconomies, and Break-even Point
When it comes to understanding the cost structure of a business, one key concept that plays a crucial role is the marginal cost of production. Marginal cost refers to the additional cost incurred by
producing one more unit of a good or service. It is an essential metric for businesses as it helps them make informed decisions about pricing, production levels, and profitability. However, several
factors can influence the marginal cost, making it imperative for businesses to comprehend these factors thoroughly.
1. Economies of Scale: One of the primary factors affecting marginal cost is economies of scale. Economies of scale occur when a company experiences a decrease in average costs as its production
volume increases. This phenomenon arises due to various reasons such as spreading fixed costs over a larger output, bulk purchasing discounts, and improved operational efficiency. As a result, the
marginal cost decreases with each additional unit produced. For example, consider a car manufacturer that invests in new machinery and technology to increase its production capacity. As the company
produces more cars, it can spread its fixed costs (e.g., machinery maintenance) over a larger number of units, leading to lower marginal costs.
2. Diseconomies of Scale: On the other hand, diseconomies of scale can also impact the marginal cost. Diseconomies of scale occur when a company experiences an increase in average costs as its
production volume expands beyond a certain point. This situation often arises due to inefficiencies in operations, coordination challenges, or increased bureaucracy within the organization. When
diseconomies of scale occur, the marginal cost starts to rise with each additional unit produced. For instance, imagine a restaurant that becomes extremely popular and experiences high demand for its
food. If the restaurant does not have sufficient kitchen space or staff to handle this increased demand efficiently, it may face higher costs per additional meal served.
3. Break-even Point: The break-even point is another crucial factor that influences the marginal cost. The break-even point refers to the level of production or sales at which a company neither makes
a profit nor incurs a loss. It is the point where total revenue equals total costs, including both fixed and variable costs. understanding the break-even point helps businesses determine the minimum
level of production required to cover all costs and start generating profits. Once a company surpasses the break-even point, its marginal cost becomes positive, indicating that each additional unit
produced contributes to profitability. For example,
Economies of Scale, Diseconomies, and Break even Point - Cost curves: Decoding Marginal Cost of Production through Cost Curves
8. Using Cost Curves for Decision Making and Pricing Strategies
Cost curves are not just theoretical concepts used in economics textbooks; they have practical applications that can greatly benefit businesses in decision making and pricing strategies. By
understanding the relationship between costs and production levels, companies can make informed choices about their operations, optimize resource allocation, and set competitive prices in the market.
From a managerial perspective, cost curves provide valuable insights into the cost structure of a business, allowing managers to identify areas of inefficiency and implement cost-saving measures. On
the other hand, from a marketing standpoint, cost curves help determine optimal pricing strategies that maximize profitability while remaining competitive.
1. cost-volume-profit analysis: Cost curves play a crucial role in cost-volume-profit (CVP) analysis, which helps businesses understand how changes in production volume affect costs and profits. By
plotting the total cost curve alongside the total revenue curve, managers can determine the breakeven point the level of production at which total revenue equals total cost. This information is vital
for decision making, as it allows managers to assess the feasibility of different production levels and make informed choices about expanding or contracting operations.
2. Pricing decisions: Cost curves also aid in setting optimal prices for products or services. By incorporating marginal cost data into pricing strategies, businesses can ensure that prices cover
both variable and fixed costs while maximizing profit margins. For example, if a company's marginal cost is lower than its average variable cost at a particular production level, it may consider
lowering prices to increase sales volume and overall profitability.
3. Economies of scale: Cost curves provide insights into economies of scale the phenomenon where per-unit costs decrease as production volume increases. By analyzing the shape of the average total
cost curve, businesses can identify the optimal scale of production that minimizes costs per unit. This knowledge helps companies determine whether it is more advantageous to produce on a larger
scale or maintain smaller-scale operations.
4. Resource allocation: Understanding cost curves enables businesses to allocate resources efficiently by identifying cost drivers and areas of inefficiency. By analyzing the shape of the cost
curves, managers can identify which inputs or processes contribute most to overall costs. This information allows them to prioritize resource allocation, invest in cost-saving technologies, or
streamline operations to reduce expenses.
5. Competitive pricing: Cost curves are instrumental in determining competitive pricing strategies. By comparing their cost curves with those of competitors, businesses can assess their cost
advantage or disadvantage in the market. For instance, if a company's cost curve is lower than its competitors', it may have room to lower prices while still maintaining profitability
Using Cost Curves for Decision Making and Pricing Strategies - Cost curves: Decoding Marginal Cost of Production through Cost Curves
9. Limitations and Criticisms of Cost Curves in Production Analysis
Cost curves are a fundamental tool in production analysis, providing valuable insights into the relationship between costs and output levels. They help businesses make informed decisions about
production quantities, pricing strategies, and profit maximization. However, it is important to acknowledge that cost curves have their limitations and face criticisms from various perspectives.
Understanding these limitations is crucial for a comprehensive understanding of production analysis and its practical implications.
1. Assumptions: Cost curves are based on certain assumptions that may not always hold true in real-world scenarios. For instance, they assume that all inputs are variable in the short run and fixed
in the long run. However, this assumption may not be valid for industries with long-term contracts or those facing regulatory constraints. Additionally, cost curves assume perfect competition, which
may not accurately reflect market conditions in many industries.
2. fixed vs variable Costs: Cost curves often simplify costs into two categories: fixed costs and variable costs. While this categorization provides a useful framework for analysis, it oversimplifies
the complex nature of costs in reality. In practice, costs can exhibit both fixed and variable elements simultaneously, making it challenging to accurately represent them using traditional cost
3. Economies of Scale: Cost curves assume that economies of scale exist, meaning that as output increases, average costs decrease. While this is generally true for many industries due to factors such
as specialization and increased efficiency, there are cases where diseconomies of scale occur. For example, when a firm becomes too large to effectively manage its operations or experiences
diminishing returns to scale, average costs may start to rise with increased output.
4. Short-Run Focus: Cost curves primarily focus on short-run analysis, assuming that firms have limited flexibility to adjust their inputs and production processes. However, in the long run, firms
can make significant changes to their operations by adjusting their capital investments or adopting new technologies. This long-run perspective is often overlooked by cost curves but can have a
substantial impact on production costs and efficiency.
5. External Factors: Cost curves do not account for external factors that can influence production costs, such as changes in input prices, government regulations, or technological advancements. These
factors can significantly impact a firm's cost structure and render cost curves less reliable in predicting actual production costs.
While cost curves provide valuable insights into the relationship between costs and output levels, they have limitations that must be considered. Understanding these limitations is crucial for
businesses to make informed decisions based on a comprehensive understanding of production analysis. By recognizing the assumptions made
Limitations and Criticisms of Cost Curves in Production Analysis - Cost curves: Decoding Marginal Cost of Production through Cost Curves | {"url":"https://fastercapital.com/content/Cost-curves--Decoding-Marginal-Cost-of-Production-through-Cost-Curves.html","timestamp":"2024-11-05T18:19:21Z","content_type":"text/html","content_length":"89224","record_id":"<urn:uuid:6b12a3ad-f9ec-4242-b4c2-35d71a2f4a7c>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00248.warc.gz"} |
The Magical Number 6174 - Mathematical Explorations
The Magical Number 6174: Think of a four-digit number (using at least two different digits). Rearrange the digits to form the largest and smallest numbers possible, and then subtract the smaller
number from the larger one. Repeat this process with the result. Surprisingly, after a few iterations, you’ll always end up with the number 6174. Why does this happen?
The phenomenon you described is known as the “Kaprekar’s Routine,” and the resulting number, 6174, is often referred to as “Kaprekar’s Constant.” This intriguing mathematical curiosity was discovered
by Indian mathematician D.R. Kaprekar.
Let’s walk through the steps with an example to see why it always leads to 6174:
1. Start with a four-digit number: Let’s use 4321.
2. Rearrange the digits to form the largest and smallest numbers: 4321 (largest) and 1234 (smallest).
3. Subtract the smaller number from the larger one: 4321 – 1234 = 3087.
4. Repeat the process with the result: Rearrange 3087 to get 8730 (largest) and 0378 (smallest).
5. Subtract the smaller number from the larger one: 8730 – 0378 = 8352.
6. Repeat the process: 8532 – 2358 = 6174.
From this point onwards, the process will keep resulting in 6174 every time.
Now, the interesting question is: Why does this always happen?
The reason lies in the nature of the number 6174 and how it behaves under this particular process. When you follow the steps described above, you’re essentially arranging the digits of the original
number in ascending and descending order and subtracting the two resulting numbers. The key observation is that this process eventually leads to the number 6174 for all four-digit numbers (except for
some specific exceptions, like numbers with identical digits).
The reason behind this phenomenon is related to the nature of four-digit numbers and the properties of the subtraction process. I’ll provide a brief explanation:
1. The subtraction always results in a three-digit number: Regardless of the starting four-digit number, the result after subtraction will always be a three-digit number. This is because the
difference between the largest and smallest arrangements of four digits will always have three digits.
2. Adding leading zeros: To ensure that you always have four digits for the subsequent iterations, you add leading zeros if necessary. For example, after the first iteration, you get 3087. To
maintain four digits, you add a leading zero to get 3087.
3. Cycling process: As you continue the process, you’ll observe that after a finite number of iterations, you’ll reach a cycle of four-digit numbers that continuously repeats. This cycle consists of
eight numbers: 6174, 6174, …, and so on.
$$t_{L}=t_{0}-t_{z}=\int_{a}^{a_{0}} \frac{dt}{\dot{a}}$$
Eventually, all four-digit numbers will fall into this cycle, and the process will keep repeating 6174. This intriguing property has fascinated mathematicians and enthusiasts for years and continues
to be a remarkable mathematical curiosity.
Add a Comment | {"url":"https://mathematicalexplorations.co.in/the-magical-number-6174-mathematical-explorations/","timestamp":"2024-11-08T15:25:44Z","content_type":"text/html","content_length":"221731","record_id":"<urn:uuid:713b414c-14d9-4da8-a54e-99e0e1ede648>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00182.warc.gz"} |
Generate Genotyping Error Matrix — ErrToM
Generate Genotyping Error Matrix
Make a vector or matrix specifying the genotyping error pattern, or a function to generate such a vector/matrix from a single value Err.
with the probabilities of observed genotypes (columns) conditional on actual genotypes (rows), or return a function to generate such matrices (using a single value Err as input to that function).
estimated genotyping error rate, as a single number, or 3x3 or 4x4 matrix, or length 3 vector. If a single number, an error model is used that aims to deal with scoring errors typical for SNP
arrays. If a matrix, this should be the probability of observed genotype (columns) conditional on actual genotype (rows). Each row must therefore sum to 1. If Return='function', this may be NA.
If a vector, these are the probabilities (observed given actual) hom|other hom, het|hom, and hom|het.
vector-generating or matrix-generating function, or one of 'version2.9', 'version2.0', 'version1.3' (='SNPchip'), 'version1.1' (='version111'), referring to the sequoia version in which it was
used as default. Only used if Err is a single number.
output, 'matrix' (default), 'vector', 'function' (matrix-generating), or 'v_function' (vector-generating)
Depending on Return, either:
• 'matrix': a 3x3 matrix, with probabilities of observed genotypes (columns) conditional on actual (rows)
• 'function': a function taking a single value Err as input, and generating a 3x3 matrix
• 'vector': a length 3 vector, with the probabilities (observed given actual) hom|other hom, het|hom, and hom|het.
By default (flavour = "version2.9"), Err is interpreted as a locus-level error rate (rather than allele-level), and equals the probability that an actual heterozygote is observed as either homozygote
(i.e., the probability that it is observed as AA = probability that observed as aa = Err/2). The probability that one homozygote is observed as the other is (Err/2\()^2\).
The inbuilt 'flavours' correspond to the presumed and simulated error structures, which have changed with sequoia versions. The most appropriate error structure will depend on the genotyping
platform; 'version0.9' and 'version1.1' were inspired by SNP array genotyping while 'version1.3' and 'version2.0' are intended to be more general.
This function, and throughout the package, it is assumed that the two alleles \(A\) and \(a\) are equivalent. Thus, using notation \(P\)(observed genotype |actual genotype), that \(P(AA|aa) = P(aa|
AA)\), \(P(aa|Aa)=P(AA|Aa)\), and \(P(aA|aa)=P(aA|AA)\).
version hom|hom het|hom hom|het
2.9 \((E/2)^2\) \(E-(E/2)^2\) \(E/2\)
2.0 \((E/2)^2\) \(E(1-E/2)\) \(E/2\)
1.3 \((E/2)^2\) \(E\) \(E/2\)
1.1 \(E/2\) \(E/2\) \(E/2\)
0.9 \(0\) \(E\) \(E/2\)
or in matrix form, Pr(observed genotype (columns) | actual genotype (rows)):
0 \(1-E\) \(E -(E/2)^2\) \((E/2)^2\)
1 \(E/2\) \(1-E\) \(E/2\)
2 \((E/2)^2\) \(E -(E/2)^2\) \(1-E\)
0 \((1-E/2)^2\) \(E(1-E/2)\) \((E/2)^2\)
1 \(E/2\) \(1-E\) \(E/2\)
2 \((E/2)^2\) \(E(1-E/2)\) \((1-E/2)^2\)
0 \(1-E-(E/2)^2\) \(E\) \((E/2)^2\)
1 \(E/2\) \(1-E\) \(E/2\)
2 \((E/2)^2\) \(E\) \(1-E-(E/2)^2\)
0 \(1-E\) \(E/2\) \(E/2\)
1 \(E/2\) \(1-E\) \(E/2\)
2 \(E/2\) \(E/2\) \(1-E\)
version0.9 (not recommended)
0 \(1-E\) \(E\) \(0\)
1 \(E/2\) \(1-E\) \(E/2\)
2 \(0\) \(E\) \(1-E\)
When Err is a length 3 vector, or if Return = 'vector' these are the following probabilities:
• hom|hom: an actual homozygote is observed as the other homozygote (\(E_1\))
• het|hom: an actual homozygote is observed as heterozygote (\(E_2\))
• hom|het: an actual heterozygote is observed as homozygote (\(E_3\))
and Pr(observed genotype (columns) | actual genotype (rows)) is then:
0 \(1-E_1-E_2\) \(E_2\) \(E_1\)
1 \(E_3\) \(1-2E_3\) \(E_3\)
2 \(E_1\) \(E_2\) \(1-E_1-E_2\)
When the SNPs are scored via sequencing (e.g. RADseq or DArTseq), the 3rd error rate (hom|het) is typically considerably higher than the other two, while for SNP arrays it tends to be similar to P
ErM <- ErrToM(Err = 0.05)
#> obs-0|act obs-1|act obs-2|act
#> act-0 0.950000 0.049375 0.000625
#> act-1 0.025000 0.950000 0.025000
#> act-2 0.000625 0.049375 0.950000
ErrToM(ErM, Return = 'vector')
#> hom|hom het|hom hom|het
#> 0.000625 0.049375 0.025000
# use error matrix from Whalen, Gorjanc & Hickey 2018
funE <- function(E) {
matrix(c(1-E*3/4, E/2, E/4,
E/4, 1-2*E/4, E/4,
E/4, E/2, 1-E*3/4),
3,3, byrow=TRUE) }
ErrToM(Err = 0.05, flavour = funE)
#> obs-0|act obs-1|act obs-2|act
#> act-0 0.9625 0.025 0.0125
#> act-1 0.0125 0.975 0.0125
#> act-2 0.0125 0.025 0.9625
# equivalent to:
ErrToM(Err = c(0.05/4, 0.05/2, 0.05/4))
#> obs-0|act obs-1|act obs-2|act
#> act-0 0.9625 0.025 0.0125
#> act-1 0.0125 0.975 0.0125
#> act-2 0.0125 0.025 0.9625 | {"url":"https://jiscah.github.io/reference/ErrToM.html","timestamp":"2024-11-13T23:06:24Z","content_type":"text/html","content_length":"23019","record_id":"<urn:uuid:cb393672-70f0-4a8e-a1a9-ebb144e1e40f>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00242.warc.gz"} |
Model Theory for Predicate Logic
Model theory for predicate logic is the sub-branch of model theory that focuses on predicate logic.
In particular, the formal semantics of structures for predicate logic is studied.
Also known as
Historically, model theory for predicate logic has received the main attention within the larger field of model theory in general.
Therefore, there is a good chance that any source referring to model theory actually refers to the subfield of model theory for predicate logic.
Also see
• Results about model theory for predicate logic can be found here. | {"url":"https://proofwiki.org/wiki/Definition:Model_Theory_for_Predicate_Logic","timestamp":"2024-11-04T10:34:03Z","content_type":"text/html","content_length":"39535","record_id":"<urn:uuid:cc3f0e34-f2a4-4ced-9d11-b1d87426a449>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00197.warc.gz"} |
Topic 24 : 22 shrutis – Rational Behind the Ratios
There are people who find it difficult to understand how complicated numbers like 729/512 (m2) or 243/128 (N2) arise at all, to represent frequencies of shrutis.
The following table will explain the same.
Let me explain how is a 'Ratio' created. Atikomal Rishabh has a frequency of 105.3497942. Which is the number when multiplied by 100 will give this frequency ? It is 256/243. 256 divided by 243 and
multiplied by 100 gives 105.3497942. Hence, Ratio of Atikomal Rishabh is 256/243. This is how frequencies are expressed as 'Ratios'.
We can also see the derivation of ratios another way in the following table.
It can be seen that there are simply 5 groupes of 4 shrutis each arising sequentially at a distance of 5.3497942 %, 6.66%, 11.11%, and 12.5 %, and this gives rise to the resulting frequencies and
ratios of 22 shrutis.
{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"} | {"url":"https://22shruti.com/research_topic_24.asp","timestamp":"2024-11-07T12:21:17Z","content_type":"text/html","content_length":"323422","record_id":"<urn:uuid:45c663ce-7ae2-4824-8e60-299ab2927f59>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00891.warc.gz"} |
Programming with Constraints Reading Group
Winter 2016 — Tuesday, 2:30pm — CSE 203
Subscribe to the calendar:
Google Calendar
We’ll be reading and discussing papers relating to programming models for working with search procedures and solvers, solver technology, and reactive programming. In addition, this quarter we will
also have some sessions on local work in progress, and other sessions devoted to working with different tools (solvers and languages), in which participants who are using specific tools will give
informal talks/demos/code walkthroughs of the application.
Some paper links may point into the ACM Digital Library or the Springer online collection. Using a UW IP address, or the UW libraries off-campus access, should provide access. To receive
announcements and news, please subscribe to the 591R mailing list.
Programming Models
Constraint Solver Technology
Reactive Programming | {"url":"https://uwplse.org/meet/pcrg/16wi.html","timestamp":"2024-11-11T01:01:17Z","content_type":"text/html","content_length":"19267","record_id":"<urn:uuid:ef2ea563-cb06-4583-b4c9-c767adea9fe0>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00730.warc.gz"} |
Virtual Anthropology
Reference based reconstruction
Since rich digital data is available in Virtual Anthropology, the process of reconstruction can be enhanced in ways to which physical reconstruction has no access. Geometrical or statistical
information can be integrated from an individual reference specimens or even whole reference populations.
Although the traditionally operating researcher has a reference in his mind as well when he sets out to reconstruct a particular specimen, the formalized approach in Virtual Anthropology forces one
to uncover the underlying presumptions, which in turn leads to a more transparent and reproducible way of doing science. Reference based reconstructions also go far beyond the possibilities of simple
mirror imaging or assembling of fragments of one or more specimens:
Geometric reconstruction
It is possible to predict missing parts based on geometric properties of a single specimen, such as smoothness of curvature. For this purpose the thin plate spline interpolation function is used:
Missing data is estimated by mapping a complete case to the specimen with missing landmarks - using the thin plate spline interpolation based on the subset of observable landmarks. This reference
specimen can be the Procrustes average of all complete cases or a single specimen that matches the specimen with missing data in some other variables such as age, size, sex or species.
If the dataset includes semilandmarks, it makes sense to combine the step of missing data estimation with a thin plate spline relaxation of the available semilandmarks against the reference specimen.
While available semilandmarks slide on their tangents and tangent planes, missing landmarks are "fully relaxed" so as to minimize the bending energy between the reference and the target specimen. The
whole idea of sliding landmarks is at root a missing data algorithm. In the case of curve-semilandmarks, one estimates the missing coordinate along a tangent vector; sliding on a surface estimates
two coordinates. With "full relaxation" all three dimensions are estimated. To allow "full relaxation" all these missing coordinates are allowed to slide at the same time.
Statistical reconstruction
Simulation of a statistical reconstruction. Missing data was estimated in various regions of 52 modern human crania using 388 landmarks and semilandmarks. Three different approaches were used. The
chart illustrates that regression is always the best method (except in the first set), mean substitution always performs worst.
Another approach, statistical reconstruction, regards any landmark location as correlated with all other locations in the landmark set. Using the complete cases, one can use multiple multivariate
regression to estimate the location of the missing points. This process requires of course the existence of a reference sample rather than the single reference specimen underlying the thin plate
spline interpolation described above. Using the complete specimens, one predicts the location of the missing landmarks with the minimum sum-of-squares given the other data. | {"url":"https://www.virtual-anthropology.com/virtual-anthropology/reconstruct/reference-based-reconstruction/","timestamp":"2024-11-02T05:32:47Z","content_type":"text/html","content_length":"41924","record_id":"<urn:uuid:d7665678-9cc9-437c-a619-b47912bfde3b>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00346.warc.gz"} |
Chennai Mathematical Institute
Public viva-voce Notification
Date: Friday, 12 January 2024
Time: 11:00 AM
Venue: Seminar Hall
On the structure of certain non-commutative $C^{\ast}$-algebras
Malay Mandal
Chennai Mathematical Institute.
One of the most intriguing $C^{\ast}$-algebra is the non-commutative unitary algebra $U^{nc}_n$, which is defined as a universal $C^{\ast}$-algebra $A$ generated by $n^2$ elements that generate a
unitary matrix in $M_n(A)$. In this thesis, we study three aspects of this $C^{\ast}$-algebra $U^{nc}_n$.
In the first part, we study several structural properties of this $C^{\ast}$-algebra. In particular, we show that it possesses the Lifting property and is primitive. Also, we discuss the $RFD$
property of this $C^{\ast}$-algebra.
In the second part, we study the compact quantum semigroup structure of this $C^{\ast}$-algebra $U^{nc}_n$ and the resulting compact semigroup structure of its state space, characterize all of its
invertible elements.
In the third part, we study the $KK$-theory of this $C^{\ast}$-algebra. In particular, we discuss the $KK$-equivalnce of the $C^{\ast}$-algebras $U^{nc}_n$ and $U^{nc}_{n,red}$, the reduced $C^{\ast}
$-algebra with respect to its natural state.
In the last part, we study the simplicity property of mixed $q$-Gaussian $C^{\ast}$-algebra and the mixed $q$-deformed Araki-Woods $C^{\ast}$-algebra. More precisely, we discuss the simplicity of the
mixed $q$-deformed Araki-Woods $C^{\ast}$-algebra $\Gamma_{T}(\mathcal{H}_{\mathbb{R}}, U_t)$ for a bounded infinitesimal generator $A$ of the one-parameter group of orthogonal representation $(U_t)_
{t\in\mathbb{R}}$ and $U_t=A^{it},\ t\in\mathbb{R}$.
All are invited to attend the viva-voce examination. | {"url":"https://www.cmi.ac.in/activities/show-abstract.php?absyear=2024&absref=7&abstype=sem","timestamp":"2024-11-13T01:40:48Z","content_type":"text/html","content_length":"8205","record_id":"<urn:uuid:e1179d06-8492-4428-a3ad-44639fa91100>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00570.warc.gz"} |
After I posted the last two parts of this series (which I thought wrapped it up…) I received an email asking about whether there’s a similar thing happening if you remove the reconstruction
(low-pass) filter in the digital-to-analogue part of the signal path.
The answer to this question turned out to be more interesting than I expected… So I wound up turning it into a “Part 3” in the series.
Let’s take a case where you have a 1 kHz signal in a 48 kHz system. The figure below shows three plots. The top plot shows the individual sample values as black circles on a red line, which is the
analogue output of a DAC with a reconstruction filter.
The middle plot shows what the analogue output of the DAC would look like if we implemented a Sample-and-hold on the sample values, and we had an infinite analogue bandwidth (which means that the
steps have instantaneous transitions and perfect right angles).
The bottom plot shows what the analogue output of the DAC would look like if we implemented the signal as a pulse wave instead, but if we still we had an infinite analogue bandwidth. (Well… sort of….
Those pulses aren’t infinitely short. But they’re short enough to continue with this story.)
Figure 1. A 1 kHz sine wave
If we calculate the spectra of these three signals , they’ll look like the responses shown in Figure 2.
Figure 2. The spectra of the signals in Figure 1.
Notice that all three have a spike at 1 kHz, as we would expect. The outputs of the stepped wave and the pulsed wave have much higher “noise” floors, as well as artefacts in the high frequencies.
I’ve indicated the sampling rate at 48 kHz as a vertical black line to make things easy to see.
We’ll come back to those artefacts below.
Let’s do the same thing for a 5 kHz sine wave, still in a 48 kHz system, seen in Figures 3 and 4.
Figure 3. A 5 kHz sine wave
Figure 4. The spectra of the signals in Figure 3.
Compare the high-frequency artefacts in Figure 4 to those in Figure 2.
Now, we’ll do it again for a 15 kHz sine wave.
Figure 5. A 15 kHz sine wave
Figure 6. The spectra of the signals in Figure 5.
There are three things to notice, comparing Figures 2, 4, and 6.
The first thing is that artefacts for the stepped and pulsed waves have the same frequency components.
The second thing is that those artefacts are related to the signal frequency and the sampling rate. For example, the two spikes immediately adjacent to the sampling rate are Fs ± Fc where Fs is the
sampling rate and Fc is the frequency of the sine wave. The higher-frequency artefacts are mirrors around multiples of the sampling rate. So, we can generalise to say that the artefacts will appear
n * Fs ± Fc
where n is an integer value.
This is interesting because it’s aliasing, but it’s aliasing around the sampling rate instead of the Nyquist Frequency, which is what happens at the ADC and inside the digital domain before the DAC.
The third thing is a minor issue. This is the fact that the level of the fundamental frequency in the pulsed wave is lower than it is for the stepped wave. This should not be a surprise, since
there’s inherently less energy in that wave (since, most of the time, it’s sitting at 0). However, the artefacts have roughly the same levels; the higher-frequency ones have even higher levels than
in the case of the stepped wave. So, the “signal to THD+N” of the pulsed wave is lower than for the stepped wave.
Aliasing is Weird: Part 2
In Part 1, we looked at what happens when you try to record a signal whose frequency is higher than 1/2 the sampling rate (which, from now on, I’ll call the Nyquist Frequency, named after Harry
Nyquist who was one of the people that first realised that this limit existed). You record a signal, but it winds up having a different frequency at the output than it had at the input. In addition,
that frequency is related to the signal’s frequency and the sampling rate itself.
In order to prevent this from happening, digital recording systems use a low-pass filter that hypothetically prevents any signals above the Nyquist frequency from getting into the analogue-to-digital
conversion process. This filter is called an anti-aliasing filter because it prevents any signals that would produce an alias frequency from getting into the system. (In practice, these filters
aren’t perfect, and so it’s typical that some energy above the Nyquist frequency leaks into the converter.)
So, this means that if you put a signal that contains high frequency components into the analogue input of an analogue-to-digital converter (or ADC), it will be filtered. An example of this is shown
in Figure 1, below. The top plot is a square wave before filtering. The bottom plot is the result of low-pass filtering the square wave, thus heavily attenuating its higher harmonics. This results in
a reduction in the slope when the wave transitions between low and high states.
Figure 1: A square wave before and after low-pass filtering.
This means that, if I have an analogue square wave and I record it digitally, the signal that I actually record will be something like the bottom plot rather than the top one, depending on many
things like the frequency of the square wave, the characteristics of the anti-aliasing filter, the sampling rate, and so on. Don’t go jumping to conclusions here. The plot above uses an aggressively
exaggerated filter to make it obvious that we do something to prevent aliasing in the recorded signal. Do NOT use the plots as proof that “analogue is better than digital” because that’s a
one-dimensional and therefore very silly thing to claim.
… just because we keep signals with frequency content above the Nyquist frequency out of the input of the system doesn’t mean that they can’t exist inside the system. In other words, it’s possible to
create a signal that produces aliasing after the ADC. You can either do this by
• creating signals from scratch (for example, generating a sine tone with a frequency above Nyquist)
• by producing artefacts because of some processing applied to the signal (like clipping, for example).
Let’s take a sine wave and clip it after it’s been converted to a digital signal with a 48 kHz sampling rate, as is shown in Figure 2.
Figure 2: The red curve is a clipped version of the black curve.
When we clip a signal, we generate high-frequency harmonics. For example, the signal in Figure 2 is a 1 kHz sine wave that I clipped at ±0.5. If I analyse the magnitude response of that, it will look
something like Figure 3:
Figure 3: The magnitude response of Figure 2, showing the upper harmonics that I created by clipping.
The red curve in Figure 2 is not a ‘perfect’ square wave, so the harmonics seen in Figure 3 won’t follow the pattern that you would expect for such a thing. But that’s not the only reason this plot
will be weird…
Figure 3 is actually hiding something from you… I clipped a 1 kHz sine wave, which makes it square-ish. This means that I’ve generated harmonics at 3 kHz, 5 kHz, 7 kHz, and so on, up to ∞ Hz..
Notice there that I didn’t say “up to the Nyquist frequency”, which, in this example with a sampling rate of 48 kHz, would be 24 kHz.
Those harmonics above the Nyquist frequency were generated, but then stored as their aliases. So, although there’s a new harmonic at 25 kHz, the system records it as being at 48 kHz – 25 kHz = 23
kHz, which is right on top of the harmonic just below it.
In other words, when you look at all the spikes in the graph in Figure 3, you’re actually seeing at least two spikes sitting on top of each other. One of them is the “real” harmonic, and the other is
an alias (there are actually more, but we’ll get to that…). However, since I clipped a 1 kHz sine wave in a 48 kHz world, this lines up all the aliases to be sitting on top of the lower harmonics.
So, what happens if I clip a sine wave with a frequency that isn’t nicely related to the sampling rate, like 900 Hz in a 48 kHz system, for example? Then the result will look more like Figure 4,
which is a LOT messier.
Figure 4: The magnitude response of a 900 Hz square wave, plotted with a logarithmic frequency axis in the top axis and a linear axis in the bottom.
A 900 Hz square wave will have harmonics at odd multiples of the fundamental, therefore at 2.7 kHz, 4.5 kHz, and so on up to 22.5 kHz (900 Hz * 25).
The next harmonic is 24.3 kHz (900 Hz * 27), which will show up in the plots at 48 kHz – 24.3 kHz = 23.7 kHz. The next one will be 26.1 kHz (900 Hz * 29) which shows up in the plots at 21.9 kHz. This
will continue back DOWN in frequency through the plot until you get to 900 Hz * 53 = 47.7 kHz which will show up as a 300 Hz tone, and now we’re on our way back up again… (Take a look at Figure 7,
below for another way to think of this.)
The next harmonic will be 900 Hz * 55 = 49.5 kHz which will show up in the plot as a 1.5 kHz tone (49.5 kHz – 48 kHz).
Depending on the relationship between the square wave’s frequency and the sampling rate, you either get a “pretty” plot, like for the 6 kHz square wave in a 48 kHz system, as shown in Figure 5.
Figure 5: the magnitude response of a 6 kHz square wave in a 48 kHz system
Or, it’s messy, like the 7 kHz square wave in a 48 kHz system in Figure 6.
Figure 6: The magnitude response of a 7 kHz square wave in a 48 kHz system.
The moral of the story
There are three things to remember from this little pair of posts:
• Some aliased artefacts are negative frequencies, meaning that they appear to be going backwards in time as compared to the original (just like the wheel appearing to rotate backwards in Part 1).
• Just because you have an antialiasing filter at the input of your ADC does NOT protect you from aliasing, because it can be generated internally, after the signal has been converted to the
digital domain.
• Once this aliasing has happened (e.g. because you clipped the signal in the digital domain), then the aliases are in the signal below the Nyquist frequency and therefore will not be removed by
the reconstruction low-pass filter in the DAC. Once they’re mixed in there with the signal, you can’t get them out again.
Figure 7: This is the same as Figure 4, but I’ve removed the first set of mirrored alias artefacts and plotted them on the left side as being mirrored in a “negative frequency” alternate universe.
One additional, but smaller problem with all of this is that, when you look at the output of an FFT analysis of a signal (like the top plot in Figure 7, for example), there’s no way for you to know
which components are “normal” harmonics, and which are aliased artefacts that are actually above the Nyquist frequency. It’s another case proving that you need to understand what to expect from the
output of the FFT in order to understand what you’re actually getting.
Aliasing is weird: Part 1
One of the best-known things about digital audio is the fact that you cannot record a signal that has a frequency that is higher than 1/2 the sampling rate.
Now, to be fair, that statement is not true. You CAN record a signal that has a frequency that is higher than 1/2 the sampling rate. You just won’t be able to play it back properly, because what
comes out of the playback will not be the original frequency, but an alias of it.
If you record a one-spoked wheel with a series of photographs (in the old days, we called this ‘a movie’), the photos (the frames of the movie) might look something like this:
As you can see there, the wheel happens to be turning at a speed that results in it rotating 45º every frame.
The equivalent of this in a digital audio world would be if we were recording a sine wave that rotated (yes…. rotated…) 45º every sample, like this:
Notice that the red lines indicating the sample values are equivalent to the height of the spoke at the wheel rim in the first figure.
If we speed up the wheel’s rotation so that it rotated 90º per frame, it looks like this:
And the audio equivalent would look like this:
Speeding up even more to 135º per frame, we get this:
and this:
Then we get to a magical speed where the wheel rotated 180º per frame. At this speed, it appears when we look at the playback of the film that the wheel has stopped, and it now has two spokes.
In the audio equivalent, it looks like the result is that we have no output, as shown below.
However, this isn’t really true. It’s just an artefact of the fact that I chose to plot a sine wave. If I were to change the phase of this to be a cosine wave (at the same frequency) instead, for
example, then it would definitely have an output.
At this point, the frequency of the audio signal is 1/2 the sampling rate.
What happens if the wheel goes even faster (and audio signal’s frequency goes above this)?
Notice that the wheel is now making more than a half-turn per frame. We can still record it. However, when we play it back, it doesn’t look like what happened. It looks like the wheel is going
backwards like this:
Similarly, if we record a sine wave that has a frequency that is higher than 1/2 the sampling rate like this:
Then, when we play it back, we get a lower frequency that fits the samples, like this:
Just a little math
There is a simple way to calculate the frequency of the signal that you get out of the system if you know the sampling rate and the frequency of the signal that you tried to record.
Let’s use the following abbreviations to make it easy to state:
• Fs = Sampling rate
• F_in = frequency of the input signal
• F_out = frequency of the output signal
F_in < Fs/2
F_out = F_in
Fs > F_in > Fs/2
F_out = Fs/2 – (F_in – Fs/2) = Fs – F_in
Some examples:
If your sampling rate is 48 kHz, and you try to record a 25 kHz sine wave, then the signal that you will play back will be:
48000 – 25000 = 23000 Hz
If your sampling rate is 48 kHz, and you try to record a 42 kHz sine wave, then the signal that you will play back will be:
48000 – 42000 = 6000 Hz
So, as you can see there, as the input signal’s frequency goes up, the alias frequency of the signal (the one you hear at the output) will go down.
There’s one more thing…
Go back and look at that last figure showing the playback signal of the sine wave. It looks like the sine wave has an inverted polarity compared to the signal that came into the system (notice that
it starts on a downwards-slope whereas the input signal started on an upwards-slope). However, the polarity of the sine wave is NOT inverted. Nor has the phase shifted. The sine wave that you’re
hearing at the output is going backwards in time compared to the signal at the input, just like the wheel appears to be rotating backwards when it’s actually going forwards.
In Part 2, we’ll talk about why you don’t need to worry about this in the real world, except when you REALLY need to worry about it.
Vinyl simulation
Tokyo Dawn Records has released a vinyl mastering simulator, which includes a prediction of the end result of the mastering process.
If you have a plan to becoming a mastering engineer for vinyl, you can start practicing immediately.
Distortion effects on Linear measurements, Part 4
In Part 3, I showed that a magnitude responses calculated from impulse responses produced by the MLS and swept sine methods produce different results when the measurement signals themselves are
In this posting, I’ll focus on the swept sine method which showed that the apparent magnitude response of the system looked like a strange version of a low shelving filter, but there’s a really easy
explanation for this that goes back to something I wrote in Part 1.
The way these systems work is to cross-correlate the signal that comes back from the DUT with the signal that was sent to it. Cross-correlation (in this case) is a bit of math that tells you how
similar two signals are when they’re compared over a change in time (sort of…). So, if the incoming signal is identical to the outgoing signal at one moment in time but no other, then the result (the
impulse response) looks like a spike that hits 1 (meaning “identical”) at one moment, and is 0 (meaning “not at all alike in any way…”) at all other times.
However, one important thing to remember is that both an MLS signal and a swept sine wave take some time to play. So, on the one hand, it’s a little weird to think of a 10-second sweep or MLS signal
being converted to a theoretically-infinitely short impulse. On the other hand, this can be done if the system doesn’t change in time and therefore never changes: something we call a Linear
Time-Invariant (or LTI) system.
But what happens if the DUT’s behaviour DOES change over time? Then things get weird.
At the end of Part 1, I said
For both the MLS and the sine sweep, I’m applying a pre-emphasis filter to the signal sent to the DUT and a reciprocal de-emphasis filter to the signal coming from it. This puts a bass-heavy tilt on
the signal to be more like the spectrum of music. However, it’s not a “pinking” filter, which would cause a loss of SNR due to the frequency-domain slope starting at too low a frequency.
Then, in Part 2 I said that, to distort the signals, I
look for the peak value of the measurement signal coming into the DUT, and then clip it.
It’s the combination of these two things that results in the magnitude response of the system measured using a swept sine wave looking the way it does.
If I look at the signal that I actually send to the input of the DUT, it looks like this:
I’m normalising this to have a maximum value of 1 and then clipping it at some value like ±0.5, for example, like this:
So, it should be immediately obvious that, by choosing to clip the signal at 1/2 of the maximum value of the whole sweep, I’m not clipping the entire thing. I’m only distorting signals below some
frequency that is related to the level at which I’m clipping. The harder I clip, the higher the frequency I mess up.
This is why, when we look at the magnitude response, it looks like this:
In the very low frequencies, the magnitude response is flat, but lower than expected, because the signal is clipped by the same amount. In the high frequencies, the signal is not clipped at all, so
everything is behaving. In between these two bands, there is a transition between “not-behaving” and “behaving”.
This means that
• if the signal I was sending into the system was clipped by the same amount at all frequencies, OR
• if the pre-emphasis wasn’t applied to the signal, boosting the low frequencies
Then the magnitude response would look almost flat, but lower than expected (by the amount that is related to how much it’s clipped). In other words, we would (mostly) see the linear response of the
system, even though it was behaving non-linearly – almost like if we had only sent a click through it.
However, if we chose to not apply the pre-emphasis to the signal, then the DUT wouldn’t be behaving the way it normally does, since this is very roughly equivalent to the spectral balance of music.
For example, if you send a swept sine wave from 20 Hz to 20,000 Hz to a loudspeaker without applying that bass boost, you’ll could either get almost nothing out of your woofer, or you’ll burn out
your tweeter (depending on how loudly you’re playing the sweep).
How does the result look without the pre-emphasis filter applied to the swept sine wave? For example, if we sent this to the DUT:
… and then we clipped it at 1/2 the maximum value, so it looks like this:
(notice that everything is clipped)
then the impulse response and magnitude response look like this instead:
… which is more similar to the results when we clip the MLS measurement signal in that we see the effects on the top end of the signal. However, it’s still not a real representation of how the DUT
“sounds” whatever that means…
Distortion effects on Linear measurements, Part 3
This posting will just be some more examples of the artefacts caused by symmetrical clipping of the measurement signal for the MLS and swept-sine methods, clipping at different levels.
Remember that the clip level is relative to the peak level of the measurement signal.
MLS, clipping at 0.9 of peak level
MLS, clipping at 0.7 of peak level
MLS, clipping at 0.5 of peak level
MLS, clipping at 0.3 of peak level
MLS, clipping at 0.1 of peak level
Swept Sine
Swept Sine, clipping at 0.9 of peak level
Swept Sine, clipping at 0.7 of peak level
Swept Sine, clipping at 0.5 of peak level
Swept Sine, clipping at 0.3 of peak level
Swept Sine, clipping at 0.1 of peak level
The take-home message here is that, although both the MLS and the swept sine methods suffer from showing you strange things when the DUT is clipping, the swept sine method is much less cranky…
In the next posting, I’ll explain why this is the case.
Distortion effects on Linear measurements, Part 2
Let’s make a DUT with a simple distortion problem: It clips the signal symmetrically at 0.5 of the peak value of the signal, so if I send in a sine wave at 1 kHz, it looks like this:
Figure 1: An example of a symmetrically-clipped sine wave with a fundamental frequency of 1 kHz.
Now, to be fair, what I’m REALLY doing here is to look for the peak value of the measurement signal coming into the DUT, and then clipping it. This would be equivalent to doing a measurement of the
DUT and adjusting the input gain so that it looks like a peak level of – 6 dB relative to maximum is coming in.
Also, because what I’m about to do through this series is going to have radical effects on the level after processing, I’m normalising the levels. So, some things won’t look right from a perspective
of how-loud-it-appears-to-be.
If I measure that DUT using the three methods, the results look like this:
Figure 2: The impulse response of a clipping DUT, measured with an impulse (plot has been normalised for level).
Figure 3: The impulse response of a clipping DUT, measured with an MLS sequence (plot has been normalised for level).
Figure 4: The impulse response of a clipping DUT, measured with a swept sine (plot has been normalised for level).
As can easily be seen above, the three systems show very different responses. So, unlike what I claimed this post (which is admittedly over-simplified, although intentionally so to make a point…),
the fact that they are measuring the impulse response does not mean that we can’t see the effects of the non-linear response. We can obviously see artefacts in the linear response that are caused by
the distortion, but those artefacts don’t look like distortion, and they don’t really show us the real linear response.
Distortion effects on Linear measurements, Part 1
In the last posting, I made a big assumption: that it’s normal to measure the magnitude response of a device via an impulse response measurement.
In order to illustrate the fact that an impulse response measurement shows you only the linear response of a system (and not distortion effects such as clipping), I did an impulse response
measurement using an impulse. However, it only took about 24 hours for someone to email me to point out that it’s NOT typical to use an impulse to do an impulse response measurement.
These days, that is true. In the old days, it was pretty normal to do an impulse response measurement of a room by firing a gun or popping a balloon. However, unless your impulse is really loud, this
method suffers from a low signal-to-noise ratio.
So, these days, mainly to get a better signal-to-noise ratio, we typically use another kind of signal that can be turned into an impulse response using a little clever math. One method is to send a
Maximum Length Sequence (or MLS) through the device. The other method uses a sine wave with a smoothly swept-frequency.
There are other ways to do it, but these two are the most common for reasons that I won’t get into.
In both the MLS and the swept-sine cases, you take the incoming signal from the DUT, and do some math that compares the outgoing signal to the incoming signal and converts the result into an impulse
response. You can then use that to do your analyses of the linear response of the DUT.
If your DUT is behaving perfectly linearly, then this will work fine. However, if your DUT has some kind of non-linear distortion, then the effects of the distortion on the measurement signal will
result in some kinds of artefacts that show up in the impulse response in a potentially non-intuitive way.
This series of postings is going to be a set of illustrations of these artefacts for different types of distortion. For the most part, I’m not going to try to explain why the artefacts look the way
they do. It’s just a bunch of illustrations that might help you to recognise the artefacts in the future and to help you make better choices about how you’re doing the measurements in the first
:To start, let’s take a “perfect” DUT and
• measure its impulse response using the three methods (impulse, MLS, and swept sine)
• for the MLS and swept sine methods, convert the incoming signal to an impulse response and plot it
• find the magnitude response of the impulse response via an FFT and plot that
The results of these three measurement methods are shown below:
Method 1: Impulse
Method 2: MLS
Method 3: Swept Sine
If you believe in conspiracy theories, then you might be suspicious that I actually just put up the same plot three times and changed the caption, but you’ll have to trust me. I didn’t do that. I
actually ran the measurement three times.
If you’re familiar with the MLS and/or swept sine techniques, then you’ll be interested in a little more information:
• The sampling rate is 48 kHz
• Calculating in a floating point world with lots of resolution (I’m doing this all in Matlab and I’m not quantising anything… yet…)
• The MLS signal is 2^16 samples long
• I’m using one MLS sequence (for now)
• I am not averaging the MLS measurement. I just ran it once.
• The swept sine starts at 1 Hz and runs for 10 seconds.
• For both the MLS and the sine sweep, I’m applying a pre-emphasis filter to the signal sent to the DUT and a reciprocal de-emphasis filter to the signal coming from it. This puts a bass-heavy tilt
on the signal to be more like the spectrum of music. However, it’s not a “pinking” filter, which would cause a loss of SNR due to the frequency-domain slope starting at too low a frequency.
• My DUT isn’t really a device. It’s just code that I’m applying to the signal, so there’s no input or output via some transmission system like analogue cabling, for example…
Most of that will be true for the other parts of the rest of the series. When it’s not true, I’ll mention it.
One measurement is worse than no measurements
Let’s say that we have to do an audio measurement of a Device Under Test (DUT) that has one input and one output, as shown below.
We don’t know anything about the DUT.
One of the first things we do in the audio world is to measure what most people call the “frequency response” but is more correctly called the “magnitude response”. (It would only be the “frequency
response” if you’re also looking at the phase information.)
The standard way to do this is to use an impulse response measurement. This is a method that relies on the fact that an infinitely short, infinitely loud click contains all frequencies at equal
magnitude. (Of course, in the real world, it cannot be infinitely short, and if it were infinitely loud, you would have a Big Bang on your hands… literally…)
If we measure the DUT with a single-sample impulse with a value of 1, and use an FFT to convert the impulse response to a frequency-domain magnitude response and we see this:
… then we might conclude that the DUT is as perfect as it can be, within the parameters of a digital audio system. The click comes out just like it went in, therefore the output is identical to the
If we measure a different DUT (we’ll call it DUT #2) and we see this:
… then we might conclude that DUT #2 is also perfect. It’s just an attenuator that drops the level by half (or -6.02 dB).
However, we’d be wrong.
I made both of those DUTs myself, and I can tell you that one of those two conclusions is definitely incorrect – but it illustrates the point I’m heading towards.
If I take DUT #1 and send in a sine tone at about 1 kHz and look at the output, I’ll see this:
As you can see there, the output is a sine wave. It looks like one on the top plot, and the bottom plot tells me that there ONLY signal at 1 kHz, which proves it.
If I send the same sine tone through DUT #2 and look at the output, I’ll see this:
As you can see there, DUT #2 clips the input signal so that it cannot exceed ±0.5. This turns the sine wave into the beginnings of a square wave, and generates lots of harmonics that can be seen in
the lower half of the plot.
What’s the point?
The point is something that is well-known by people who make audio measurements, but is too easily forgotten:
An Impulse Response measurement only shows you the linear behaviour of an audio device. If the system is non-linear, then your impulse response won’t help you. In a worst case, you’ll think that you
measured the system, you’ll think that it’s behaving, and it’s not – because you need to do other measurements to find out more.
The question is “what is ‘non-linear’ behaviour in an audio device?”
This is anything that causes the device to make it impossible to know what the input was by looking at the output. Anything that distorts the signal because of clipping is a simple example (because
you don’t know what happened in the input signal when the output is clipped). But other things are also non-linear. For example, dynamic processors like compressors, limiters, expanders and noise
gates are all non-linear devices. Modulating delays (like in a chorus or phaser effect), or a transmission system with a drifting clock are other examples. So are psychoaoustic lossy codecs like MP3
and AAC because the signal that gets preserved by the codec changes in time with the signal’s content. Even a “loudness” function can be considered to have a kind of non-linear behaviour (since you
get a different filter at different settings of the volume control).
It’s also important to keep in mind that any convolution-based processing is using the impulse response as the filter that is applied to the signal. So, if you have a convolution-based effects unit,
it cannot simulate the distortion caused by vacuum tubes using ONLY convolution. This doesn’t mean that there isn’t something else in the processor that’s simulating the distortion. It just means
that the distortion cannot be simulated by the convolver.*
The reason for the title: “One measurement is worse than no measurements” is that, when you do a measurement (like the impulse response measurement on DUT #2) you gain some certainty about how the
device is behaving. In many cases, that single measurement can tell the truth, but only a portion of it – and the remainder of the (hidden) truth might be REALLY bad… So, your one measurement makes
you THINK that you’re safe, but you’re really not… It’s not the measurement that’s bad. The problem is the certainty that results in having done it.
* Actually, one of the questions on my comprehensive exams for my Ph.D. was about compressors, with a specific sub-question asking me to explain why you can’t build a digital compressor based on
convolution (which was a new-and-sexy way to do processing back then…). The simple answer is that you can’t use a linear time-invariant processor to do non-linear, time-variant processing. It would
be like trying to carry water in a net: it’s simply the wrong tool for the job.
Tour of B&O Struer
#95 in a series of articles about the technology behind Bang & Olufsen | {"url":"https://www.tonmeister.ca/wordpress/author/geoff/","timestamp":"2024-11-14T08:57:48Z","content_type":"text/html","content_length":"125107","record_id":"<urn:uuid:6c56cdd0-3a26-4fb1-879f-0a4790808d2e>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00642.warc.gz"} |
scatter plot notes 8th grade
4. The scatter plot below shows John’s scores in the seven math and science chapter tests this semester. Improve your math knowledge with free questions in "Identify trends with scatter plots" and
thousands of other math skills. Movies. In this topic, we will learn about scatter plots, lines of best fit, and two-way tables. assignment_returnWorksheet: Scatter Plots, Line Of Best Fit Standard
(s): 8.SP.A.2. 8th Grade Math Scatter Plot for Google Classroom/ Distance Learning. The scatter plot shows the relationship between the number of chapters and the total number of pages for several
books. Share skill. Unit Summary. 125 times. You need to use the worksheets to resolve Scatter Plot Worksheet 8th Grade that the child could be having. Release Calendar DVD & Blu-ray Releases Top
Rated Movies Most Popular Movies Browse Movies by Genre Top Box Office Showtimes & Tickets Showtimes & Tickets In Theaters Coming … 1013 times. Review how to plot points in the coordinate plane.
digital. Played 125 times. Scatter plots are similar to line graphs in that each graph uses the horizontal ( x ) axis and vertical ( y ) axis to plot … 2 years ago. $5.50. Enjoy the videos and music
you love, upload original content, and share it all with friends, family, and the world on YouTube. Save. If they have some math problems, they could assist the alphabet worksheets which can be
provided in the worksheet template to greatly help them out. Make observations of bivariate data shown in scatter plots. In this lesson learn what a scatter plot looks like so you can analyze or
interpret the data shown in a scatter plot. Moving Towards Mastery. Instead of writing definitions, students will label their set of notes.This resources includes:-One set of foldable notes-Answer
Key. Constructing a scatter plot Mar 14, 2017 - This foldable serves as an overview/ introduction to scatter plots. 0. Eighth grade DD.9 Scatter plots: line of best fit . Eighth Grade Scatter Plot -
Displaying top 8 worksheets found for this concept.. Edit. Mathematics. The graph looks like a bunch of dots, but some of the graphs are a general shape or move in a general direction. Homework:
scatterplotsprac.pdf: ... Download File. Hence the variables x & y are positively correlated.. Nov 16, 2016 - Explore Chad Kendall's board "Scatter Plot" on Pinterest. 8th math- Scatter Plots DRAFT.
A scatter plot is a graph of a collection of ordered pairs (x,y). Learn. 0. Learn 8th grade math scatter plot example with free interactive flashcards. as we move from left to right on the x axis.
DRAFT. Share skill. Additional Material: Proudly powered by Weebly. share to google Improve your math knowledge with free questions in "Create scatter plots" and thousands of other math skills. In
this post I’ll break down all of the notes I use to teach students to read and interpret scatter plots and … This set of scatterplot notes is an interactive notebook experience. 27. okbuckle. See
more ideas about scatter plot, 8th grade math, middle school math. share to google 73% average accuracy. The scatter plot clearly shows that graph rises down as the values of x are increased i.e. ...
Save your time, money, and sanity with these 8th grade math guided notes. Continue scatter graph practice Assign: Scatter Plot graph WKSH. Aug 1, 2020 - Explore 8th Grade Math Teacher's board
"Scatter Plots", followed by 390 people on Pinterest. 8th - 9th grade. See more ideas about scatter plot, 8th grade math, middle school math. Title: Lesson 11.4: Scatter Plots 1 Lesson 11.4 Scatter
Plots. Math 8 Quiz on Scatter Plots! Math 8 Lesson - Scatter Plots and Correlation ... After giving the introduction guided notes (left-side), the first day was focused on reading and interpreting
scatter plots and correlation. Scatter Plots Foldable Notes- 8th Grade Math Interactive Notebook. Idea Galaxy. Check out Get ready for 8th grade. Share skill. by . Edit. Subjects: Standards SDP 1.0
and 1.2 ; Objective Determine the correlation of a scatter plot ; 2 Scatter Plot. Legend (Opens a modal) Possible mastery points. Jul 11, 2019 - Explore Gautam Anand's board "Scatter Plot" on
Pinterest. 0. 4 years ago. Understand a statistical relationship is not the same as a causal relationship; if two variables have a statistical relationship, it does not mean that they also have a
causal relationship. 60% average accuracy. 12 Scatter Plot Worksheet 8th Grade : Scatter Plot Worksheet 8th Grade Scatter Plots Notes. Inside, students will complete skeleton notes for the definition
of scatter plot, cluster, outlier, and trend lines. Apr 3, 2016 - Explore Stephanie Ling's board "Scatter Plots & Best Fitted Line" on Pinterest. 8th Grade Coordinate Algebra/Geometry Project--
Created using PowToon -- Free sign up at http://www.powtoon.com/ . Eighth grade CC.16 Identify trends with scatter plots . See more ideas about Scatter plot, Data visualization, Dashboard design.
8th-grade math help to learn to construct and interpret scatter plots for bivariate measurement data to investigate patterns of association between two quantities. Overall, interpreting scatter plot
graphs may be the easiest topic that we teach all year. 4 years ago. Skill Summary Legend (Opens a modal) Introduction to scatter plots. See more ideas about Scatter plot, 8th grade math, Teaching
algebra. 2 years ago. scatter_plot_wksh.pdf: File Size: 744 kb: File Type: Download File Nov 30, 2018 - Explore Katie Kilgore Widener's board "Scatter plots", followed by 372 people on Pinterest. )
introduction to scatter plots scatter plot notes 8th grade lines of Best fit Standard ( s ) 8.SP.A.2! Reinforcement or review of the graphs are a general shape or move in a general direction aligned.
And builds naturally on the slope concepts students study in scatter plot notes 8th grade grade: scatter plot Worksheet grade! Or review of the graphs are a general shape or move in general. Plots,
Line of Best fit will learn about scatter plot '' on Pinterest worksheets found this! Plot shows the relationship between the number of pages for several books aug 1, 2020 - Explore Stephanie 's.
Complete skeleton notes for the axes plots 1 Lesson 11.4 scatter plots: Line of Best fit the from! From left to right on the slope concepts students study in 8th grade math, scatter plot,.
Predictions with scatter plots and data Activity Bundle includes 6 classroom activities to support 8th-Grade volume concepts using --... & y are positively correlated variable to bivariate data shown
in scatter and! To resolve scatter plot for a set of Foldable notes-Answer Key 8th-Grade volume concepts scatter plots Lesson! In one variable to bivariate data shown in scatter plots '', followed by
390 on! And ideas for scatter plots and data Activity Bundle includes 6 classroom activities to support 8th-Grade volume concepts title Lesson. ( s ): 8.SP.A.2 that the child could be having
intuitive, and builds naturally on the concepts!, 2019 - Explore 8th grade that the child could be having in 8th math... Positively correlated be the easiest topic that we teach all year resources
includes: -One set of data with scales. - Explore 8th grade scatter plot, data visualization, Visualisation Standard s! Of a collection of ordered pairs ( x, y ) Dashboard design pdf: Download.... To
right on the slope concepts students study in 8th grade: scatter plot example flashcards on Quizlet worksheets., cluster, outlier, and trend lines topic that we teach all year 2018 - Explore 8th
grade Teacher! Cc.17 make predictions with scatter plots, lines of Best fit Standard ( s:... Students to MASTERY using the following resources and ideas for scatter plots '' and thousands of other
math skills outlier. Data shown in scatter plots definition of scatter plot clearly shows that rises. Learn 8th grade math, middle school math grade CC.17 make predictions with scatter plots 8,
eighth-grade students the... Common core standards and work great as an introduction, reinforcement or review the... Line of Best fit, and builds naturally on the x axis from 160 different sets 8th.
Shows that graph rises down as the values of x are increased.! Math, scatter plot, 8th grade math, scatter plot '' on Pinterest title: Lesson scatter!, 2019 - Explore Katie Kilgore Widener 's board
`` scatter plot Displaying. Move from left to right on the x axis the worksheets to resolve scatter,... Are increased i.e tests this semester ) introduction to scatter plots inside, students will
label set. Shows that graph rises down as the values of x are increased i.e, data,... Bivariate data in two variables math knowledge with free interactive flashcards this topic, we will learn about
scatter Worksheet. With scatter plots '', followed by 372 people on Pinterest `` Create scatter plots and two way.... Resources includes: -One set of Foldable notes-Answer Key Project -- Created
using PowToon -- free sign up at:... Money, and trend lines from left to right on the x axis different sets of grade! Plot '' on Pinterest of notes.This resources includes: -One set of notes.This
resources includes: -One set scatterplot... With appropriate scales and labels for the axes right on the x axis scatter.! Found for this concept found for this concept students make the jump from
univariate data in one variable bivariate. Or move in a general direction set of data with appropriate scales and labels for the axes, scatter clearly. Displaying top 8 worksheets found for this
concept a graph of a scatter plot graph WKSH graphs. Overall, interpreting scatter plot clearly shows that graph rises down as the values of are! Between the number of chapters and the total number
of chapters and the total number chapters! Seven math and science chapter tests this semester of notes.This resources includes: -One set of scatterplot is.: scatter plots '', followed by 372 people
on Pinterest grade: scatter plots '', by. ( x, y ), 2018 - Explore Katie Kilgore Widener board. Coordinate Algebra/Geometry Project -- Created using PowToon -- free sign up at http:.. X & y are
positively correlated time, money, and sanity with these 8th grade math Teaching! Found for this concept standards SDP 1.0 and 1.2 ; Objective Determine correlation! Save your time, money, and sanity
with these 8th grade math interactive experience. For a set of scatterplot notes is an interactive Notebook Possible MASTERY points Notes- 8th grade,... Classroom activities to support 8th-Grade
volume concepts '' and thousands of other math skills,... Sign up at http: //www.powtoon.com/ Save your time, money, and builds naturally on slope. Be having plots & Best Fitted Line '' on Pinterest:
Lesson 11.4: scatter plots Line... Like a bunch of dots, but some of the graphs are a general direction at http //www.powtoon.com/! Powtoon -- free sign up at http: //www.powtoon.com/ '', followed by
372 people on Pinterest about grade! With free questions in `` Create scatter plots '', followed by 390 people on Pinterest:. Opens a modal ) Possible MASTERY points below shows John ’ s fairly,!
Plots & Best Fitted Line '' on Pinterest grade: scatter plots & Best Fitted ''. Summary legend ( Opens a modal ) introduction to scatter plots '' and of! Is an interactive Notebook experience between
the number of pages for several books with appropriate scales labels. Math and science chapter tests this semester and the total number of chapters and the total number of for... Your math knowledge
with free questions in `` Create scatter plots '' thousands! School math Best fit bivariate data shown in scatter plots 1 Lesson:. Scatterplot notes is an interactive Notebook eighth grade scatter
plot shows the relationship between the number chapters. Teaching algebra MASTERY using the following resources and ideas for scatter plots '' and thousands of math! 30, 2018 - Explore 8th grade
scatter plot is a graph of a plot... Use the worksheets to resolve scatter plot jul 11, 2019 - Explore Chad Kendall 's board scatter... With scatter plots 1 Lesson 11.4 scatter plot notes 8th grade
scatter plots, lines of fit... Of dots, but some of the graphs are a general direction - Displaying top 8 worksheets found this. Plots and data Activity Bundle includes 6 classroom activities to
support 8th-Grade volume concepts way tables Explore Stephanie 's! With scatter plots, Line of Best fit to MASTERY using the following resources and ideas for scatter plots and... In this topic, we
will learn about scatter plots from univariate data one. With free questions in `` Create scatter plots Foldable Notes- 8th grade: scatter plot for a set of notes-Answer. Found for this concept and
data Activity Bundle includes 6 classroom activities to support 8th-Grade volume concepts for definition. Of x are increased i.e to bivariate data shown in scatter plots '' thousands. To bivariate
data shown in scatter plots '', followed by 372 people on Pinterest 160 different of. Modal ) introduction to scatter plots '', followed by 372 people on Pinterest of data with scales. Need to use
the worksheets to resolve scatter plot example flashcards on Quizlet school math y! Graph looks like a bunch of dots, but some of the graphs are a shape... Plots notes Created using PowToon -- free
sign up at http: //www.powtoon.com/ x... Explore Stephanie Ling 's board `` scatter plots notes relationship between the scatter plot notes 8th grade of pages several. But some of the material
aligned to the 8th grade that the child could be having ) introduction to plots! This topic, we will learn about scatter plot clearly shows that rises. Time, money, and trend lines child could be
having 3, 2016 - Explore Chad 's! Increased i.e visualization, Dashboard design example with free interactive flashcards may be easiest. Math interactive Notebook notes: scatterplotsnotes.pdf: File
Size: 1078 kb: File Size: 1078:! Definition of scatter plot '' on Pinterest modal ) introduction to scatter plots and Activity! Plots: Line of Best fit standards SDP 1.0 and 1.2 ; Objective Determine
the correlation of scatter... That we teach all year Determine the correlation of a collection of ordered pairs ( x y! Seven math and science chapter tests this semester collection of ordered pairs (
x y! Introduction, reinforcement or review of the graphs are a general direction notes! Outlier, and trend lines learn 8th grade that the child could be having shows John ’ fairly... Of scatterplot
notes is an interactive Notebook experience in Unit 8, eighth-grade students make jump. Slope concepts students study in 8th grade math Teacher 's board `` scatter plots & Best Fitted ''.
Introduction, reinforcement or review of the graphs are a general shape or move a... To plot points in the seven math and science chapter tests this semester writing,... Definition of scatter plot
shows the relationship between the number of pages for several books plots, lines of fit... Chad Kendall 's board `` scatter plots & Best Fitted Line '' on Pinterest kb: File Type::! -One set of
Foldable notes-Answer Key of ordered pairs ( x, y ) Opens... | {"url":"http://dr-popescu.ro/dmo5hqa9/ae01d3-scatter-plot-notes-8th-grade","timestamp":"2024-11-04T15:34:01Z","content_type":"text/html","content_length":"23925","record_id":"<urn:uuid:6418ad4a-2846-4e4e-8fcc-49a641a3d87c>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00694.warc.gz"} |
Mortgage Estimator Calculator - Certified Calculator
Mortgage Estimator Calculator
Introduction: The Mortgage Estimator Calculator is a handy tool for individuals seeking to estimate their monthly mortgage payments. By inputting essential details such as loan amount, interest rate,
and loan term, users can get a quick calculation of their anticipated monthly payment.
Formula: The calculator uses the formula for an amortizing loan to calculate the monthly payment:
Monthly Payment=Loan Amount×Monthly Interest Rate1−(1+Monthly Interest Rate)−Number of PaymentsMonthly Payment=1−(1+Monthly Interest Rate)−Number of PaymentsLoan Amount×Monthly Interest Rate
How to Use:
1. Enter the loan amount in dollars.
2. Input the annual interest rate as a percentage.
3. Specify the loan term in years.
4. Click the “Calculate” button to get the estimated monthly payment.
Example: For example, if you have a loan amount of $200,000, an interest rate of 3.5%, and a loan term of 30 years, the calculator will provide the estimated monthly payment based on these
1. Q: How is the interest rate applied in the calculation?
□ A: The interest rate is applied monthly based on the remaining loan balance.
2. Q: Can I use this calculator for different loan types?
□ A: This calculator is suitable for fixed-rate mortgages with equal monthly payments.
3. Q: Does the calculator consider property taxes and insurance?
□ A: No, this calculator focuses on estimating principal and interest payments only.
4. Q: How accurate is the estimated monthly payment?
□ A: The estimate provides a close approximation but may not include all costs associated with a mortgage.
Conclusion: The Mortgage Estimator Calculator is a valuable tool for anyone in the early stages of considering a mortgage. While it provides an estimate, it’s essential to consult with a financial
advisor for a comprehensive understanding of your financial situation and mortgage options.
Leave a Comment | {"url":"https://certifiedcalculator.com/mortgage-estimator-calculator/","timestamp":"2024-11-08T04:42:13Z","content_type":"text/html","content_length":"54389","record_id":"<urn:uuid:b3a710df-dbc3-4985-8143-ddd6c1c4d0dc>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00130.warc.gz"} |
Is part of the Bibliography
The recent spectral bundle method allows to compute, within reasonable time, approximate dual solutions of large scale semidefinite quadratic 0-1 programming relaxations. We show that it also
generates a sequence of primal approximations that converge to a primal optimal solution. Separating with respect to these approximations gives rise to a cutting plane algorithm that converges to the
optimal solution under reasonable assumptions on the separation oracle and the feasible set. We have implemented a practical variant of the cutting plane algorithm for improving semidefinite
relaxations of constrained quadratic 0-1 programming problems by odd-cycle inequalities. We also consider separating odd-cycle inequalities with respect to a larger support than given by the cost
matrix and present a heuristic for selecting this support. Our preliminary computational results for max-cut instances on toroidal grid graphs and balanced bisection instances indicate that warm
start is highly efficient and that enlarging the support may sometimes improve the quality of relaxations considerably.
For a real world problem --- transporting pallets between warehouses in order to guarantee sufficient supply for known and additional stochastic demand --- we propose a solution approach via convex
relaxation of an integer programming formulation, suitable for online optimization. The essential new element linking routing and inventory management is a convex piecewise linear cost function that
is based on minimizing the expected number of pallets that still need transportation. For speed, the convex relaxation is solved approximately by a bundle approach yielding an online schedule in 5 to
12 minutes for up to 3 warehouses and 40000 articles; in contrast, computation times of state of the art LP-solvers are prohibitive for online application. In extensive numerical experiments on a
real world data stream, the approximate solutions exhibit negligible loss in quality; in long term simulations the proposed method reduces the average number of pallets needing transportation due to
short term demand to less than half the number observed in the data stream.
We report numerical results for SBmethod --- a publically available implementation of the spectral bundle method --- applied to the 7$^{th}$ DIMACS challenge test sets that are semidefinite
relaxations of combinatorial optimization problems. The performance of the code is heavily influenced by parameters that control bundle update and eigenvalue computation. Unfortunately, no
mathematically sound guidelines for setting them are known. Based on our experience with SBmethod, we propose heuristics for dynamically updating the parameters as well as a heuristc for improving
the starting point. These are now the default settings of SBmethod Version 1.1. We compare their performance on the DIMACS instances to our previous best choices for Version 1.0. SBmethod Version 1.1
is also part of the independent DIMACS benchmark by H.~Mittelmann. Based on these results we try to analyze strengths and weaknesses of our approach in comparison to other codes for large scale
semidefinite programming.
Semidefinite relaxations of quadratic 0-1 programming or graph partitioning problems are well known to be of high quality. However, solving them by primal-dual interior point methods can take much
time even for problems of moderate size. The recent spectral bundle method of Helmberg and Rendl can solve quite efficiently large structured equality-constrained semidefinite programs if the trace
of the primal matrix variable is fixed, as happens in many applications. We extend the method so that it can handle inequality constraints without seriously increasing computation time. Encouraging
preliminary computational results are reported.
A central drawback of interior point methods for semidefinite programs is their lack of ability to exploit problem structure in cost and coefficient matrices. This restricts applicability to problems
of small dimension. Typically semidefinite relaxations arising in combinatorial applications have sparse and well structured cost and coefficient matrices of huge order. We present a method that
allows to compute acceptable approximations to the optimal solution of large problems within reasonable time. Semidefinite programming problems with constant trace on the primal feasible set are
equivalent to eigenvalue optimization problems. These are convex nonsmooth programming problems and can be solved by bundle methods. We propose to replace the traditional polyhedral cutting plane
model constructed by means of subgradient information by a semidefinite model that is tailored for eigenvalue problems. Convergence follows from the traditional approach but a proof is included for
completeness. We present numerical examples demonstrating the efficacy of the approach on combinatorial examples.
We investigate the potential and limits of interior point based cutting plane algorithms for semidefinite relaxations on basis of implementations for max-cut and quadratic 0-1 knapsack problems.
Since the latter has not been described before we present the algorithm in detail and include numerical results.
Although the m-ATSP (or multi traveling salesman problem) is well known for its importance in scheduling and vehicle routing, it has, to the best of our knowledge, never been studied polyhedraly,
i.e., it has always been transformed to the standard ATSP. This transformation is valid only if the cost of an arc from node $i$ to node $j$ is the same for all machines. In many practical
applications this is not the case, machines produce with different speeds and require different (usually sequence dependent) setup times. We present first results of a polyhedral analysis of the
m-ATSP in full generality. For this we exploit the tight relation between the subproblem for one machine and the prize collecting traveling salesman problem. We show that, for $m\ge 3$ machines, all
facets of the one machine subproblem also define facets of the m-ATSP polytope. In particular the inequalities corresponding to the subtour elimination constraints in the one machine subproblems are
facet defining for m-ATSP for $m\ge 2$ and can be separated in polynomial time. Furthermore, they imply the subtour elimination constraints for the ATSP-problem obtained via the standard
transformation for identical machines. In addition, we identify a new class of facet defining inequalities of the one machine subproblem, that are also facet defining for m-ATSP for $m\ge 2$. To
illustrate the efficacy of the approach we present numerical results for a scheduling problem with non-identical machines, arising in the production of gift wrap at Herlitz PBS AG.
Due to its many applications in control theory, robust optimization, combinatorial optimization and eigenvalue optimization, semidefinite programming had been in wide spread use even before the
development of efficient algorithms brought it into the realm of tractability. Today it is one of the basic modeling and optimization tools along with linear and quadratic programming. Our survey is
an introduction to semidefinite programming, its duality and complexity theory, its applications and algorithms.
We present computational experiments for solving quadratic $(0,1)$ problems. Our approach combines a semidefinite relaxation with a cutting plane technique, and is applied in a Branch and Bound
setting. Our experiments indicate that this type of approach is very robust, and allows to solve many moderately sized problems, having say, less than 100 binary variables, in a routine manner.
We investigate dominance relations between basic semidefinite relaxations and classes of cuts. We show that simple semidefinite relaxations are tighter than corresponding linear relaxations even in
case of linear cost functions. Numerical results are presented illustrating the quality of these relaxations. | {"url":"https://opus4.kobv.de/opus4-zib/solrsearch/index/search/searchtype/authorsearch/author/Christoph+Helmberg","timestamp":"2024-11-15T04:39:43Z","content_type":"application/xhtml+xml","content_length":"48017","record_id":"<urn:uuid:ef2e172d-d22f-4df0-81c5-b3b5dbb75cd8>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00378.warc.gz"} |
Related Software Packages
There are a number of software packages that may be of interest to users of BBTools, or even provide an alternative solution for a given problem.
To the best knowledge of the author, there are no readily available Matlab packages that attempt to do create a similar system for constructing black-box operators.
The list given here is by no means exhaustive. It is only intended to be a starting-point for further investigation.
Dense Systems
The de-facto software package for numerical linear algebra with dense systems is LAPACK [1]. There is also a modern variant, ScaLAPACK, which is intended for parallel computations [3]. Both packages
are freely available from Netlib.
Iterative Routines for SVD Computations
Iterative computation of the SVD is supported by the following packages:
1. ARPACK is a tremendously successful package for solving eigenproblems iteratively [13]. It is written in Fortran, and forms the basis of the Matlab-functions eigs and svds.
2. PROPACK includes Lanczos bidiagonalization algorithm with partial reorthogonalization. The original report [12] did not contain implicit restarting, but the software have now been expanded. The
package exists for both Fortran and Matlab.
3. SVDPACK, written by Michael W. Berry, contains several SVD-routines for large-scale computations with black-box operators. The routines, described in [2], is written in Fortran.
The package "Regularization Tools" [8] is a fairly complete toolbox for Matlab, and contains numerous tools and test problems. It is very useful for people who wants an introduction to the topic of
regularization and ill-posed problems.
The toolbox is mainly concerned with techniques for small dense systems. There are, however, a few iterative routines and some functions which work independently of the solutions (e.g. computation of
The software and its manual are available online. A tecnical report is also available online [10].
Object oriented software
The matlab toolbox MOORE Tools offer a different approach to implement operators [11]. Instead of considering an operator as a black-box, it uses object oriented programming. That is, an operator is
implemented as a class, and is not limited to matrix-vector products. In fact, the operations supported by MOORE Tools increase with the capabilities a particular class implements.
Both BBTools and MOORE Tools can combine operators in an evaluation tree. However, the main emphasis of MOORE Tools is abstraction and infrastructure, whereas BBTools is efficiency, minimalism, and
problem solving. For example, MOORE Tools does not require you to convert between hypercubes and vectors; it understands the difference. On the othert hand, you never need to teach BBTools there is a
If your main use of BBTools is to combine operators, or if you are looking for a framework for implementing general methods, MOORE Tools may well be the best choice. On the other hand, BBTools may be
preferable if you just want to solve a specific problem.
[1] E. Anderson, Z. Bai, C. Bischof, S. Blackford, J. Demmel, J. Dongarra, J. Du Croz, A. Greenbaum, S. Hammarling, A. McKenney, and D. Sorensen. LAPACK Users' Guide. SIAM, Philadelphia, PA, Third
edition, 1999. ISBN: 0-89871-447-8. (Book: )
[2] Michael W. Berry. Large-Scale Sparse Singular Value Computations. International Journal of Supercomputer Applications, 6(1):13-49, 1992. University of Tennessee. ISSN: 0890-2720. (Paper: )
[3] L. S. Blackford, J. Choi, A. Cleary, E. D'Azeuedo, J. Demmel, I. Dhillon, S. Hammarling, G. Henry, A. Petitet, K. Stanley, D. Walker, and R. C. Whaley. ScaLAPACK Users' Guide. SIAM, Philadelphia,
PA, 1997. ISBN: 0-89871-397-8. (Book: )
[8] Per Christian Hansen. Regularization Tools: A Matlab Package for Analysis and Solution of Discrete Ill-posed Problems. Numerical Algorithms, 6:1-35, 1994. (Paper)
[10] Per Christian Hansen. Regularization Tools. Technical University of Denmark, 2001. Reference number: IMM-REP-98-6. (Technical report: )
[11] Michael Jacobsen. Modular Regularization Algorithms. Technical University of Denmark, 2004. (Ph.D. thesis: )
[12] Rasmus Munk Larsen. Lanczos Bidiagonalization with Partial Reorthogonalization. Department of Computer Science, Aarhus University, 1998. Reference number: DAIMI PB-357. (Technical report: )
[13] Richard Bruno Lehoucq, Danny C. Sorensen, and Chao Yang. ARPACK Users' Guide: Solution of Large-Scale Eigenvalue Problems with Implicitly Restarted Arnoldi Methods. Rice University, 1997.
(Technical report: ) | {"url":"https://xtra.nru.dk/bbtools/help/toolbox/bbtools/related_software.html","timestamp":"2024-11-09T04:12:46Z","content_type":"text/html","content_length":"8663","record_id":"<urn:uuid:f50e6bec-d7ab-4514-ab39-2a67d7e4e68f>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00604.warc.gz"} |
Fisher Information
Given a probability density model $f(X; \theta)$ for a observable $X$, the amount of information that $X$ carriers regarding the model is called Fisher information.
Given ${\theta}$, the probability of observing the value $X$, i.e., the likelihood is
$$ f(X\mid\theta). $$
To describe the suitability of a model and the observables, we can use a the likelihood $f(X\mid \theta)$. One particular interesting property is the sensitivity of the likelihood in terms of the
parameter $\theta$ change. For example, the case on the left is less compatible as we have a large variance in the parameters. The model is not very sensitive to the parameter change.
Two scenarios of likelihood.
To describe this sensitivity, we grab the derivative of the log likelihood and define a score function
$$ S(\theta) = \partial_\theta \ln f(X\mid \theta) = \frac{ \partial_\theta f(X\mid \theta) }{\ln f(X\mid\theta)}. $$
The expectation of the squared score function,
$$ I(\theta) = \mathbb E_f [\partial_\theta \ln f(X\mid \theta) ] = \int \left(\partial_\theta \ln f(X\mid\theta)) \right)^2 f(X\mid\theta) ,\mathrm dX. $$
is the Fisher Information.
Under some conditions, we can prove that it is the same as
$$ I(\theta) = \mathbb E_f [\partial^2_\theta \ln f(X\mid \theta) ] = \int f(X\mid\theta) \partial^2_\theta \ln f(X\mid\theta)) ,\mathrm dX. $$
For Bernoulli probability, we have the likelihood
$$ f(X\mid \theta) = \theta^X (1-\theta)^X, $$
where $X$ indicates side of the coin in a coin flip and $\theta$ is the probability of the coin showing head $X=1$. The Fisher information of the Bernulli model is
$$ I_X(\theta) =& \mathbb E _f \left[ \partial^2_\theta \theta^X (1-\theta)^X \right] \\ =& \mathbb E _f \left[ \frac{X}{\theta^2} + \frac{1-X}{(1-\theta)^2} \right] \\ =& \frac{1}{\theta(1-\theta)}.
Fisher information for Bernoulli model. From Ly et al 2017.
Planted: by L Ma;
Lei Ma (2021). 'Fisher Information', Datumorphism, 05 April. Available at: https://datumorphism.leima.is/cards/information/fisher-information/. | {"url":"https://datumorphism.leima.is/cards/information/fisher-information/","timestamp":"2024-11-12T02:26:07Z","content_type":"text/html","content_length":"112906","record_id":"<urn:uuid:d87ab672-8d92-4ed3-930d-9be00182462c>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00022.warc.gz"} |
American Mathematical Society
Uniqueness theorems for subharmonic functions in unbounded domains
HTML articles powered by AMS MathViewer
Proc. Amer. Math. Soc. 99 (1987), 437-444
DOI: https://doi.org/10.1090/S0002-9939-1987-0875377-7
PDF | Request permission
A theorem of Carlson says that a holomorphic function of exponential growth in the half-plane cannot approach zero exponentially along the boundary unless it vanishes identically. This paper presents
a generalization of this result for subharmonic functions in a Greenian domain $\Omega$, using the Martin boundary, minimal fine topology and PWB solution to the $h$-Dirichlet problem. Applications
of the general theorem to specific choices of $\Omega$, such as the half-space and strip, are presented in later sections. References
• D. H. Armitage, A strong type of regularity for the $\textrm {PWB}$ solution of the Dirichlet problem, Proc. Amer. Math. Soc. 61 (1976), no. 2, 285–289 (1977). MR 427658, DOI 10.1090/
• D. H. Armitage, A Phragmén-Lindelöf theorem for subharmonic functions, Bull. London Math. Soc. 13 (1981), no. 5, 421–428. MR 631101, DOI 10.1112/blms/13.5.421
• D. H. Armitage and T. B. Fugard, Subharmonic functions in strips, J. Math. Anal. Appl. 89 (1982), no. 1, 1–27. MR 672185, DOI 10.1016/0022-247X(82)90087-7
• F. T. Brawn, The Martin boundary of $R^{n}\times ]0,\,1[$, J. London Math. Soc. (2) 5 (1972), 59–66. MR 296323, DOI 10.1112/jlms/s2-5.1.59
• F. T. Brawn, Positive harmonic majorization of subharmonic functions in strips, Proc. London Math. Soc. (3) 27 (1973), 261–289. MR 330482, DOI 10.1112/plms/s3-27.2.261
• Marcel Brelot, On topologies and boundaries in potential theory, Lecture Notes in Mathematics, Vol. 175, Springer-Verlag, Berlin-New York, 1971. Enlarged edition of a course of lectures delivered
in 1966. MR 0281940
• Ralph Philip Boas Jr., Entire functions, Academic Press, Inc., New York, 1954. MR 0068627
• J. L. Doob, Classical potential theory and its probabilistic counterpart, Grundlehren der mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], vol. 262,
Springer-Verlag, New York, 1984. MR 731258, DOI 10.1007/978-1-4612-5208-5 S. J. Gardiner, Generalized means of subharmonic functions, Doctoral thesis, Queen’s University of Belfast, 1982.
• S. J. Gardiner, Harmonic majorization of subharmonic functions in unbounded domains, Ann. Acad. Sci. Fenn. Ser. A I Math. 8 (1983), no. 1, 43–54. MR 698836, DOI 10.5186/aasfm.1983.0810
• Peter W. Jones, A geometric localization theorem, Adv. in Math. 46 (1982), no. 1, 71–79. MR 676987, DOI 10.1016/0001-8708(82)90054-8
• Ü. Kuran, On the half-spherical means of subharmonic functions of half-spaces, J. London Math. Soc. (2) 2 (1970), 305–317. MR 262531, DOI 10.1112/jlms/s2-2.2.305 L. Naïm, Sur le rôle de la
frontière de R. S. Martin dans la théorie du potentiel, Ann. Inst. Fourier (Grenoble) 7 (1957), 183-281.
• G. N. Watson, A Treatise on the Theory of Bessel Functions, Cambridge University Press, Cambridge, England; The Macmillan Company, New York, 1944. MR 0010746
Similar Articles
• Retrieve articles in Proceedings of the American Mathematical Society with MSC: 31B05
• Retrieve articles in all journals with MSC: 31B05
Bibliographic Information
• © Copyright 1987 American Mathematical Society
• Journal: Proc. Amer. Math. Soc. 99 (1987), 437-444
• MSC: Primary 31B05
• DOI: https://doi.org/10.1090/S0002-9939-1987-0875377-7
• MathSciNet review: 875377 | {"url":"https://www.ams.org/journals/proc/1987-099-03/S0002-9939-1987-0875377-7/?active=current","timestamp":"2024-11-11T13:25:34Z","content_type":"text/html","content_length":"63442","record_id":"<urn:uuid:7675bd98-1ece-4118-b4d1-287c0aa42816>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00346.warc.gz"} |
Solving a multiobjective problem without constraints
I have a problem at hand in which one variable is the building creation cost and the second variable is population. I have to minimize the cost and maximize the population.
I don't have any constraints. Consider this a column in the excel sheet has cost values and a column has population values. Few of the entries are as follows:
cost population
How do I solve this problem using gurobi in python
• Hi Vaibhav,
...as you have described the problem, it is not 100% defined (and that is the usual problem with multiobjective optimization). What I mean is that in your example the best cost is the second
option, but the best population is your first option (which is also the most costly), and the user must decide how to trade wining on one against wining on the other.
Gurobi (in one extreme) allow to optimize hierarchically (i.e. first optimize one metric and then, within the set of optimal -- or near optimal -- solutions for the first metric, optimize the
second metric), or you can combine objectives using weights. All this is described in the manual
Finally, note that if you really have a list of options, you only need to sort according with your criteria
• Hi Daniel,
Thanks for the reply and your time. I have a few doubts I will be happy if you throw some light on them, as I am going through endless literature but not getting exact answers.
1. How efficient is assigning random weights to the objectives? Or in other words, How do I select proper weights as I assume the values are highly sensitive to the result.
2. Do the hierarchical method provide a single solution or a set of solution (parento set).
3. How efficient is the hierarchical method compared to metaheuristics like genetic algorithms? How to choose which is better for my problem.
4th and the last: My problem has no constraint so will it be possible to solve it by the herarchial method.
Thanks and Regards,
• Vaibhav,
1. Any non-dominated pareto optimal solution is a solution for a given set of weights (the weights come from making the point in question `the best`, random weights will give you a random
non-dominated solution (but be aware of the sign of those weights). Now.... which weights..... that really depends on you and the application, as there is no `one size fits all` solution
2. Yes, assuming you do not have repeated options.
3. Again, if you really have a list of options, the solution is just `sorting` using as criteria whatever is best for you. Now, if your options really depend on a bunch of other variables, then
optimization will be better (if you can express these relationships correctly), finally, heuristics can always help to give a starting point, but they will never tell you when you have something
that is (with a proof) good enough (that is what the GAP criteria does for you).
4. yes
• Hry Daniel,
Thanks for the reply. I have a few points for your reply in the last thread.
1. Is any technical method to choose weights. How the weights can be determined as per the application suppose I give a higher weight to cost than population, In this case, weight can be 2 to
cost one to population. or it can be 3 to cost and 2 to population. How to determine these based on I assume any method exists.
2. For your reply "Yes, assuming you do not have repeated options." I didn't understand whether the reply "yes" was for a single solution or parento front. Secondly, I didn't understand the
meaning of the term used " having repeated option" please clarify.
3. What do you specifically point by " if you really have a list of options". What do you mean by the term "option"
• Hi Daniel,
One last doubt. Are ε-constraint method and lexilogical methods the same. From what I read both work in the same way.
• Hi Vaibhav,
epsilon-constrained methods are a slight generalization of the lexicographic method (if epsilon == 0, you get lexicographical).
Regarding your other questions:
1 Not really, because they depend on (the final) user preferences
2 the solution is unique unless you have (in your two-column example) repeated entries. eg:
3 As I said, from your description it seems that you have to choose one possibility among a list of pairs (cost, population), each entry in the list is an option, or a feasible solution to your
problem. If the set of feasible solutions is a readily available list, you only need to `sort` them; whereas if they are the result of interaction of other variables, then you have to use
• Hi Daniel,
Yes, you got my problem right I want a unique solution. I didn't understand the meaning of the sentence "whereas if they are the result of the interaction of other variables, then you have to use
I am planning to assign cost as one decision variable population as the second decision variable and optimize it like:
Min (cost)-Min (Pop) {considering -min will maximize the population} and give a priority value 5 to cost objective and a value 2 to the second objective. Am I thinking right?
• Hi Vaibhav,
If you want to use epsilon-lexicographic, and your first priority is cost, then it seems right, but I would recommend setting ObjNRelTol to 0.01 (i.e. 1%) or something like that, to see solutions
with similar cost but better population attributes.
Best regards,
Please sign in to leave a comment. | {"url":"https://support.gurobi.com/hc/en-us/community/posts/360043429471-Solving-a-multiobjective-problem-without-constraints?sort_by=created_at","timestamp":"2024-11-07T12:25:03Z","content_type":"text/html","content_length":"68621","record_id":"<urn:uuid:e4fb2fda-2679-4a81-bf07-ba00f54c8b38>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00707.warc.gz"} |
Bay Area Tutoring
If you have taken algebra 2, you know you can write quadratic functions in three forms:
• Standard form: $y = ax^2 + bx +c$
• Vertex form: $y = a(x-h)^2 + k$
• Factored form: $y = a(x-r_1)(x-r_2)$
No matter which of the forms you have, you are often asked to find the roots (x-intercepts). The factored form is already done for you: the roots are the values of $r_1$ and $r_2$. If your equation
is in standard form, you use the quadratic formula. But what do you do if you have a quadratic in vertex form? Most students are taught to expand the equation into standard form (and then use the
quadratic formula). But there’s a quick shortcut that is pretty easy to use. In fact, you can write the roots by inspection. If $0 = a(x-h)^2 + k$, then the roots are
$x = h \pm \sqrt{\dfrac{-k}{a}}$
There are three cases:
• $k=0$: There is one (double) root;
• $a \text{ and } k$ have different signs: the two roots are real;
• $a \text{ and } k$ have the same sign: the two roots are complex.
In the complex case, you can express the roots as
$x = h \pm \sqrt{\dfrac{k}{a}} \; i$
Here are two quick examples:
Example 1: $0 = 2(x-3)^2 + 6$
Using the formula above, the roots can be written immediately as
$x = 3 \pm \sqrt{\dfrac{6}{2}} \; i =3 \pm \sqrt{3} \; i$
Example 2: $0 = 2(x-3)^2 - 8$
The roots are:
$x = 3 \pm \sqrt{\dfrac{8}{2}} = 3 \pm 2 = 1 \text{ and } 5$
Jan 01
Making excellent histograms on your calculator
By Tutor GuyNo Comments
Your graphing calculator will make superb histograms from data you enter. If you have tried to do this, but your histograms don’t look the way you expect, follow this procedure. You have to tweak the
x-min, x-max, and x-scale values to match your preferences. Let’s say that you have collected the following data (perhaps they are the heights of 30 people in your class):
61.0 61.5 62.0 65.0 65.5 65.5 66.0 66.0
66.5 66.5 67.0 67.0 67.5 68.0 68.5 68.5
69.0 69.0 69.0 69.5 69.5 70.0 70.5 72.0
72.0 72.0 74.0 74.0 75.0 75.5
1. Determine the smallest class value and the class width. For this data set, I would display the data in five classes. We can set the smallest class value to 61, the largest class value to 76 and
therefore the class width will be 3.
2. On your calculator, enter the data into a list. Here I’ve put it into L1.
3. Go to STAT PLOT and turn one of the plots on.
4. Select the histogram type and if necessary specify the Xlist location.
5. Press zoom and select 9 (ZoomStat). Your graph probably won’t look right, but we’ll fix that.
6. Press the WINDOW button and set Xmin= to the smallest class value, Xmax= to the largest class value, and Xscl= to the class width. You might also need to adjust the Ymin and Ymax values.
7. Press GRAPH, and admire your excellent histogram.
Oct 30
Managing exponential equations
By Tutor GuyNo Comments
Many algebra 2 students get intimidated by exponential equations because the answers are rarely nice simple integers. But keep in mind that like all the equations you solved in algebra 1, you are
merely trying to find the value of x that satisfies the equation. The process is a little more complicated because your variable is now in an exponent, but just follow these steps and you’ll soon be
an exponential expert!
There are two types of exponential equations and they each have a preferred strategy for solving them. The first type has an exponential expression on both sides of the equal sign, and the two bases
are both powers of the same number. The second type of equation has exponential expression with bases that aren’t powers of the same number (or has an exponential expression on only one side of the
equals sign).
Type 1. Bases that are powers of the same number. Check out these two problems.
$\text{a. } 4^{x+4} = 8^{x-1}$
Solution: note that the two bases (4 and 8) are both powers of 2. We rewrite both bases as powers of 2 and simplify:
$\text{a. } 4^{x+4} = 8^{x-1} \rightarrow (2^2)^{x+4} = (2^3)^{x-1} \rightarrow (2)^{2x+8} = (2)^{3x-3}$
Now we take advantage of a simple property that says if $a^x = a^y \text{ then } x = y.$ Since both bases are the same, we just set the exponents equal to each other:
$(2)^{2x+8} = (2)^{3x-3} \rightarrow 2x + 8 = 3x - 3.$ Solving this gives $x = 11.$
$\text{b. } 5^{4x} = 125^{x+1}$
Solution: note that both bases are powers of 5. Proceed exactly as in the last problem:
$\text{b. } 5^{4x} = 125^{x+1} \rightarrow 5^{4x} = (5^3)^{x+1} \rightarrow 5^{4x} = 5^{3x+3} \rightarrow 4x = 3x + 3 \rightarrow x = 3$
Type 2: Bases are not powers of the same number. To solve these types of problems, you will need to use logarithms. Here are two examples.
$\text{a. } 2^{x+3} = 12$
Solution: The bases (2 and 12) are not powers of the same base. (Always check this first, because the method shown in the Type 1 examples above is almost always simpler.) So we solve by taking logs
of each side. In the old days (before graphing calculators) you would need to use a common log (base 10) or a natural log (base e). I’ll show you that method first:
$\text{a. } 2^{x+3} = 12 \rightarrow \ln{(2^{x+3})} = \ln{12} \rightarrow (x+3) \ln{2} = \ln{12}$
$\therefore x = \dfrac{\ln{12}}{\ln{2}} - 3 \approx 0.5840$
If your graphing calculator has the logbase command on it, you can solve this problem even more easily by taking a log base 2 of each side:
$\text{a. } 2^{x+3} = 12 \rightarrow \log_2{(2^{x+3})} = \log_2{12} \rightarrow x+3 = \log_2{12}$
$\therefore x = \log_2{12} - 3 \approx 0.5840$
$\text{b. } 5^{x+3} = 7^{x-2}$
Solution: This looks a lot uglier than the previous example, but the solution starts the same way. Take the log of both sides:
$\text{b. } 5^{x+3} = 7^{x-2} \rightarrow \log_5{(5^{x+3})} = \log_5{(7^{x-2})} \rightarrow (x+3) = (x-2) \log_5{7}$
We need to collect the x terms to solve for x, so distribute on the right side and solve:
$(x+3) = (x-2) \log_5{7} \rightarrow (x+3) = x \; \log_5{(7)} - 2 \; \log_5{(7)} \rightarrow$
$3 + 2 \log_5{(7)} = x \; \log_5{(7)} - x \rightarrow 3 + 2 \; \log_5{7} = x (\log_5{(7)} - 1) \rightarrow$
$x = \dfrac{3 + 2 \; \log_5{(7)}}{\log_5{(7)} - 1} \approx 25.92$
Sep 11
Calculating permutations and combinations
By Tutor GuyNo Comments
When counting up the number of ways an event can occur, you use the formulas for permutations and combinations. You should be familiar with the nPr and nCr commands on your calculator, and this is
the easiest way to evaluate these problems. But if your calculator doesn’t have these functions, there is a fairly simple way to set up these operations. This is the way we had to calculate
permutations and combinations when calculators did not have these functions built in. Practice a couple of these examples and you’ll see that you can calculate permutations and combinations almost as
quickly as your calculator can do it.
Calculating nPr
To calculate [n]P[r], you will multiply together r consecutive numbers, starting with n and counting down. For example, [ 12]P[ 3] is equal to 12*11*10 = 1320. We started with 12 (the value of n) and
counted down to 10 so that we had 3 numbers (3 is the value of r). As another example, [7]P[5] = 7*6*5*4*3 = 2520.
Calculating nCr
To calculate [n]C[r], create a fraction. The numerator is the same as above; that is, start with n and count down r consecutive numbers. The denominator is the smaller of r! and (n-r)!. For example,
[12]P[3] is
Before you calculate this fraction, simplify it. All of the terms in the denominator will always cancel out with terms in the numerator, leaving you with just numbers in the numerator to multiply
together. For example,
$\dfrac{12*11*10}{1*2*3} = 2 * 11 * 10 = 220$
To calculate [7]C[5], note that [ 7]C[5] = [7]C[2] . Then,
$\dfrac{7*6}{1*2} = 7 * 3= 21$
Aug 21
An algorithm for calculating the square root of a number
By Tutor GuyNo Comments
How do you find the square root of a number? You use a calculator, of course! But what if you can’t find your calculator? Did you know there’s an algorithm that will allow you to derive a square root
of a number? My dad taught it to me a long time ago before calculators were around. It would surprise me if anyone you know under the age of 40 has ever seen it. It’s a slow, painstaking process, so
only use it if you have a lot of time to waste. Frankly, I’d recommend waiting until you get a new calculator, but in case you’re interested, here it is. It’s easiest to explain with an example.
Let’s find the square root of 300. Because we want to calculate some digits after the decimal point, we will write it as 300.0000
The first step is to separate the digits into groups of two. Starting from the decimal point, mark off each pair of digits. If there are an odd number of digits to the left of the decimal point, the
leftmost digit will be a single digit and not a pair. Then start from the decimal point again and count off the digits to the right by twos. In our example, the “3” in 300 is a single digit and all
the others are pairs.
Now we are ready to calculate. Our first digit is a three. We find the largest integer whose square is less than this number. Since 1^2 = 1 < 3 and 2^2 = 4 >3, our number is 1. We place a 1 above the
3, just like we are doing a long division problem.
Next, copy this digit on the line below the 300.
This next step looks a lot like a long division problem. Multiply the (red) 1 by the (tan) 1 and put the product under the 3. Then subtract, and bring down the next two digits. Our example will look
Now it gets a little strange. Take the (red) number above the 300 and double it. Write this number on the next line down on the left and add an underscore. 1 doubled is 2, so our example now looks
The underscore is a place holder for an unknown digit. We need to find a single digit that we will place above the line (over the “00”) and in the placeholder. We want the product of these two
numbers to be as large as possible without being larger than the current remainder. Let’s say we decide the digit is 6. Then 6 ·26 = 156. If the digit is 7, then 7 ·27 = 189. If the digit is 8, then
8 ·28 = 224. This is larger than 200, so our digit is 7. We place it above the radical and in the placeholder as shown below. Do the multiplication and subtraction as before. Write the remainder and
bring down the next two digits. Our example now looks like this:
Now we repeat this process over and over for each new digit. Double the number above the radical and add a placeholder. 17 ·2 = 34, so our problem now looks like this:
Again, we need a digit above the line and in the placeholder so that the product is less than the remainder. 3·343 = 1029 < 1100. 4·344 = 1376 >1100. So the digit we want is 3. Do the
multiplication and subtraction and bring down the next two digits. Our example looks like this:
Let’s do it one more time. Double the number over the radical and add a placeholder:
The next digit we need is a 2. (2·3462 = 6924 < 7100; while 3·3463 = 10389 > 7100). Multiply and subtract and bring down the next two digits.
17.32 is a pretty good approximation of the square root of 300. You can repeat this process as often as you want to get even more digits in your solution. The next number we would write on the left
would be 3464_. You can see that the number on the left gets bigger with each step, so the process gets pretty unwieldy. If you need more than three or four digits in your square root, make sure you
have a lot of paper, or go find that calculator!
Aug 14
Graphing sine and cosine functions like a pro
By Tutor GuyNo Comments
When graphing a sine or cosine curve, the first thing you must do is determine the amplitude, period, phase shift and vertical shift. See my previous post (Graphing Sine and Cosine Functions – Intro)
if you need help with this analysis. In this post, we will graph the function
$\displaystyle f(x) = -3 \sin (2x + \frac{\pi}{2}) -1$
We quickly determine the four values we need:
Amplitude = |-3| = 3
Period = $2 \pi /2 = \pi$
Phase shift = $-(\pi /2)/2 = - \pi/4$ (that is, $\pi /4$ units to the left)
Vertical shift = -1
This is all the information we need in order to complete the graph. Just follow this procedure step-by-step.
1. Put values on the coordinate axes. On the y-axis, you typically make each square equal to one unit, but you can change this if you want. To determine the scale on the x-axis, take the period and
divide by 4. This will be the scale on the x-axis. In our example, the period is $\pi$, so each square will be $\pi /4$. The vertical axis will be one unit per square. What do you do if your teacher
gives you a grid with the numbers already in place? You should get a blank piece of graph paper and do your own grid!
2. Use the vertical shift to draw a dashed line across the figure. This is the location of the midline of your graph. In our example, the vertical shift is -1, so we draw a dashed line at y= -1.
3. Use the amplitude to draw two more dashed lines—one above the midline and one below. These represent the maximum and minimum values of your function. In our example, the amplitude is 3. Three
units above -1 is 2—that’s our maximum dashed line. Three units below -1 is -4—that’s where our minimum is located.
4. Plot the starting point of your graph, using the vertical shift and phase shift as a guide. Our function is a sine curve, which starts at the midline. The phase shift is $\pi /4$ to the left, so
our initial point is $\pi /4$ units left of the y-axis. If our function had been a cosine curve, our initial point would be plotted on the maximum line instead of the midline (or on the minimum line
if A is negative). It’s hard to see, but note that I’ve placed a green dot at the “start” point; the coordinates are $( - \pi /4, -1).$
5. Moving one square to the right at a time (because each square is one quarter of a period), plot points at the maximum, midline, minimum and midline. This is one period of your function. If you
want to graph more than one period, continue the process. In our example, we’ve plotted points for two complete periods. Note that because A is a negative number (-3), our first point after the
starting point is at the minimum instead of the maximum. Look closely, and you will see that I’ve placed a green dot every square to the right of our first point.
6. Connect the dots with a nice smooth curve. You’ve graphed the sine curve like a pro!
Aug 07
Graphing sine and cosine functions– an Intro
By Tutor GuyNo Comments
One of the most complicated skills you need to learn in your trig class is how to graph sine and cosine functions. This scares a lot of students, but you can tame this process if you make one simple
observation: Every sine and cosine curve has exactly the same shape! No matter the amplitude or period or phase shift, the curve looks just like this:
You only need to place the graph in its proper position on the coordinate axes. This is (mostly) easy to accomplish if you can remember only two things about the sine and cosine parent curves:
1) The sine curve y = sin x “starts” at the origin and goes up to its maximum, while the cosine curve y = cos x “starts” at its maximum.
2) For either curve, you can break one period into four equal intervals. At each interval, the curve moves from its midline to the maximum to the midline to the minimum to the midline to the
maximum to… over and over again. So all you need to do is find the starting point, and plot the points on the curve at each ¼-period interval.
We will always write our functions in standard form:
$f(x) = A \; sin(Bx+C) + D \; or \; f(x)= A \; cos(Bx+C) + D$
(Note that some textbooks prefer to write the formula in a slightly different form:
$f(x) = A \; sin(B(x+C)) + D \; or \; f(x)= A \; cos(B(x+C)) + D$
We will discuss how that affects your work below.)
Each of the constants A, B, C & D affects the position of the curve and you need to analyze this before you graph the curve. Let’s look at each of them in turn:
A: The absolute value of this number tells you the amplitude of your curve.
B: The period of your curve is determined by dividing $2\pi$ by B.
C: The phase shift is found by dividing -C by B. A positive value means the phase shift is to the right. A negative value means the phase shift is to the left. (If your class uses the version of the
equation above with the B factored out, then the phase shift is equal to C.)
D: The vertical shift is equal to D.
Here’s an example to show how you would calculate all these values.
$\displaystyle f(x) = -3 \sin (2x + \frac{\pi}{2}) -1$
Here, A = -3; B = 2; C = $\pi/2$; and D = -1. Therefore,
Amplitude = |-3| = 3
Period = $2 \pi /2 = \pi$
Phase shift = $- (\pi /2)/2 = - \pi /4$ (that is, $\pi /4$ units to the left)
Vertical shift = -1
When you need to graph a sine or cosine curve, always determine these four values first. Then you are ready to graph the function. We’ll do that in our next post.
Aug 22
Using tree diagrams to find conditional probabilities
By Tutor GuyNo Comments
Those problems that ask you to find the probability of a series of events “without replacement” can be scary because the probabilities of each event keep changing. (These are known as conditional
probability problems.) If the number of possible outcomes isn’t too large, you can tame these problems by using a tree diagram to simplify your calculations.
1. For the first event, draw a tree branch for each possible outcome.
2. At the end of each branch, draw a tree branch for each possible outcome of the second event.
3. Continue until you have a column for every event.
4. For every branch on the tree, write down the probability of that event occurring at that location.
5. Then multiply all the branches from first event to last event to find the probability of any one outcome.
6. Add various events together to get the probability of any compound outcome.
Here’s a simple example that shows how this process works. Let’s say you have a candy dish with 10 red candies, 15 green candies and 20 blue candies. You want to know the probability that you draw at
least two red candies or at least two blue candies. There are a lot of different possibilities here, but a tree diagram simplifies everything greatly. Start by drawing a tree with every possible
outcome (R, G and B in this example). Then from each outcome, draw another tree representing each outcome for the second draw. Repeat for the third draw. Your tree will look like this:
(You can see that this process will get pretty unwieldy if there are too many outcomes or too many events.)
Next, label each branch with the probability for that outcome. Note that the probabilities change depending on which outcomes have already occurred. For our example, the tree would now look like
Finally, for each of the branch ends at the right, multiply together all the probabilities leading to that endpoint. For example, the very top branch, which represents RRR, you would multiply 10/45*9
/44*8/43 to find the probability of getting a red candy on all three draws. The final table looks like this (to make the table easier to read, we have calculated only those branches that represent at
least two reds or at least two blues):
The probability of our desired event is then the sum of all of listed probabilities: 0.5352.
Mar 23
Using multiplicity of factors to characterize graphs of rational functions
By Tutor GuyNo Comments
Rational functions can be scary because there are so many details to manage. Check other posts on this website for information on how to graph rational functions. In this post, I look at one small
clue that can help you figure out the behavior of a rational function as it approaches the vertical asymptotes. All you need to do is check the multiplicity of the factor in the denominator.
If the multiplicity of the factor is even, then the graph approaches +∞ from both sides of the asymptote, or it approaches -∞ from both sides of the asymptote.
If the multiplicity of the factor is odd, then the graph approaches +∞ on one side of the asymptote and approaches -∞ on the other side.
Here is an example that demonstrates this property:
$\text{Graph } \dfrac {(x-2)(x+1)}{(x-1)(x+2)^2}$
There are two vertical asymptotes for this function, at $x=-2$ and at $x=1.$ The $(x+2)$ factor is multiplicity 2 (even), so the graph approaches the same limit from both sides of the asymptote. The
$(x-1)$ factor is multiplicity 1 (odd), so the graph approaches opposite limits on either side of the asymptote. Here is the graph of the function, demonstrating this property:
Mar 16
Using multiplicity of factors to characterize graphs of polynomials
By Tutor GuyNo Comments
When you are asked to sketch the graph of a polynomial, you do not want to make a tree to calculate the values of various points. You don’t know where the “turning points” are, so you won’t be able
to connect the dots for the points you plot. Instead, you need to fully factor the polynomial and use the zeroes you find to draw the polynomial. In addition, the multiplicity of each factor tells
you whether the polynomial crosses the $x$-axis at that zero or “bounces”. The rule is very simple: If the factor has an odd multiplicity, the graph crosses the $x$-axis. If the multiplicity is even,
the graph bounces.
│multiplicity │behavior at $x$ │
│ │‑axis │
│odd │crosses │
│even │bounces │
Example: Sketch the graph of
Solution: First of all, plot the zeroes. For this problem, the zeroes are at $x=-1, x=0, \text{ and } x=1.$
Next, determine the degree of the polynomial. In this case, it is degree $6$. (Add the exponents of all the factors: $3+1+2=6.$) The degree tells you the end behavior, and you can draw arrows to show
that the function will go to positive infinity on the left and the right.
Now you can sketch the graph. At $x=-1,$ the zero is multiplicity 1, so the graph crosses the $x$-axis. At $x=0,$ the zero is multiplicity 3, so the graph also crosses the $x$-axis. Note that for
multiplicity 3, the graph doesn’t cross straight through the axis, but flattens out as it goes through. At $x=1,$ the zero is multiplicity 2, so the graph bounces at the $x$-axis. The final sketch is
shown below: | {"url":"http://tutoringsanjose.net/?cat=5","timestamp":"2024-11-07T19:55:02Z","content_type":"application/xhtml+xml","content_length":"93490","record_id":"<urn:uuid:ce4c1c91-9531-47f1-ac88-fc58540aaea9>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00617.warc.gz"} |
An Intuitive Graphical Approach to Understanding the Split-Plot Experiment
Timothy J. Robinson
University of Wyoming
William A. Brenneman and William R. Myers
The Proctor & Gamble Company
Journal of Statistics Education Volume 17, Number 1 (2009), jse.amstat.org/v17n1/robinson.html
Copyright © 2009 by Timothy J. Robinson, William A. Brenneman and William R. Myers, all rights reserved. This text may be freely shared among individuals, but it may not be republished in any medium
without express written consent from the authors and advance notification of the editor.
Key Words: Hard to change factors; Restricted randomization; Whole-plot Factors; Sub-plot Factors.
While split-plot designs have received considerable attention in the literature over the past decade, there seems to be a general lack of intuitive understanding of the error structure of these
designs and the resulting statistical analysis. Typically, students learn the proper error terms for testing factors of a split-plot design via expected mean squares. This does not provide any true
insight as far as why a particular error term is appropriate for a given factor effect. We provide a way to intuitively understand the error structure and resulting statistical analysis in split-plot
designs through building on concepts found in simple designs, such as completely randomized and randomized complete block designs, and then provide a way for students to "see" the error structure
graphically. The discussion is couched around an example from paper manufacturing.
1. Introduction
Many industrial and agricultural experiments involve two types of factors, some with levels hard or costly to change and others with levels that are relatively easy to change. Examples of
hard-to-change factors include mechanical set-ups, environmental factors, and many others. When hard-to-change factors exist, it is in the practitioner’s best interest to minimize the number of times
the levels of these factors are changed. A common strategy is to run all combinations of the easy-to-change factors for a given setting of the hard-to-change factors. This restricted randomization of
the experimental run order results in a split-plot design (SPD).
Although great technical strides have been made in terms of the design and analysis of SPDs, there seems to be a general lack of intuitive understanding of the error terms and the resulting
statistical analysis. Typically, students learn the proper error terms for testing factors of a split-plot design via expected mean squares. While this context is certainly important, we have found
in our own consulting and teaching experience that the expected mean square framework does not provide any true insight as far as why a particular error term is appropriate for a given factor effect.
In this manuscript, we hope to improve the fundamental understanding of SPDs by taking a first-principles approach to describing the error structure through building on concepts already familiar to
students in simple designs and provide a way for students to "see" the SPD error structure graphically. The examples provided here are all of the industrial variety and while they may be more
interesting to those who teach and consult with engineers, the discussion is valid within any context of a split-plot design.
Since SPDs are essentially two or more error control experimental designs superimposed on top of one another, we follow the notation of Hinkelmann and Kempthorne (1994) by denoting a given split-plot
design as SPD(D[w],D[s]) where D[w] and D[s] refer to the designs in the whole-plot and sub-plot factors, respectively. An extensive, but not exhaustive list of references on the design and analysis
of SPDs includes Letsinger, Myers, and Lentner (1996); Christensen (1996); Huang, Chen and Voelkel (1998); Rao (1998); Bingham and Sitter (2001); Webb, Lucas and Borkowski (2004); Federer and King
(2007); Smith and Johnson (2007); and Kowalski, Parker and Vining (2007).
Two common SPDs are designs in which the whole-plot factor levels are assigned via a completely randomized design (CRD) and the sub-plot factors are assigned via a randomized complete block design
(RCBD) [i.e. SPD(CRD,RCBD)] and designs in which both the whole-plot factor levels and sub-plot factor levels are randomly assigned within a RCBD [i.e. SPD(RCBD, RCBD)]. We begin in Section 2 with a
review of the CRD and then move along to the RCBD in Section 3. In Section 4 we extend our discussion to split-plot designs. Throughout we show how the error structure dictated by the experimental
design can be explored through graphical methods. A common example from paper manufacturing will be discussed in all settings in order to unify the presentation.
2. Completely Randomized Designs CRD's
The most commonly assigned design structure for experiments is the CRD. The CRD assumes the availability of a set of homogenous experimental units (EUs). Experimental units are the physical entities
to which a factor level combination is applied. The experimental unit, upon exposure to a factor level combination is considered a replicate of the treatment combination. Replication or replicated
design refers to the occurrence of two or more replicates for a given treatment combination. To illustrate terminology, we refer to a modified version of the tensile strength example from Montgomery
(2001). In this example, a paper manufacturer is interested in determining the effect of three different preparation (henceforth referred to as prep) methods (Z) on the tensile strength of paper. For
simplicity, we will refer to the levels of Z as 1, 2, and 3. We will assume that there are enough resources to produce nine batches of pulp (three batches for each level of Z). Since the levels of Z
are randomly assigned to the batches, the batches are the experimental units. Consider the replicated 3-level design provided in Table 1. The notion of a CRD is that the order in which the prep
methods are utilized to produce batches of material is randomized.
Table 1. A replicated 3-level design for paper manufacturing example
│ │ Replicate │
│ │ 1 │1 │ 2 │ 1 │ 3 │ 2 │ 2 │ 3 │ 3 ││
│ Prep │ 2 │3 │ 2 │ 1 │ 2 │ 1 │ 3 │ 3 │ 1 ││
│ Method │ │ │ │ │ │ │ │ │ ││
│Tensile Strength │38.75│31│37.25│34.5│39.5│35.25│37.5│33.25│37.25││
A possible model for the 3-level CRD is
y[ij] = μ + τ[i] + ε[ij], i = 1,2,3; j = 1,2,3, (1)
where y[ij] is the ijth observation, μ is the overall mean, τ[i], is the ith treatment effect and ε[ij] is the experimental error component. Experimental error describes the variation among
identically and independently treated experimental units. For the CRD, it is typically assumed that that the ε[ij] are
i.i.d. N(0, σ^2). The experimental error variance, σ^2, describes the variance of observations on experimental units, for which the differences among the observations can only be attributed to
experimental error. The magnitude of σ^2 is a function of a variety of sources, including 1. natural variation among EUs; 2. observation/measurement error; 3. inability to reproduce the treatment
combinations exactly from one replicate to another; 4. interaction of treatments and replicates; and 5. other unaccounted for sources of variation.
In the CRD, the experimental error variance will be determined by the differences associated with the replicates nested within treatment. Specifically, one would look at the three prep method i
replicates (y[i1], y[i2], y[i3]) and see how their tensile strength values differ from their group mean
The error degrees of freedom (df error in (2) above) for the CRD arise from the fact that there are generally n replicates for each treatment level and one degree of freedom is used for estimation of
the treatment mean. Thus, for each of the t treatment levels there are n-1 degrees of freedom and pooling we have
error degrees of freedom. For the CRD in Table 1 with three replicates and three treatment levels we have 3*(3-1) = 6 df for the experimental error.
Figure 1a provides a graphical representation of the experimental error upon noting the dispersion among the three prep method replicates, y[i1], y[i2], y[i3], within the i^th prep method, averaged
across the prep methods. One can gain intuition regarding the treatment effect by visualizing the dispersion among the Table 2.
Figure 1. Experimental error representation CRD (a). Experimental error representation for SPD[CRD,RCBD] (b).
Table 2. Analysis of Variance for the CRD in Table 1.
Source DF SS MS F Prob>F
Prep Method 2 32.097 16.048 3.384 0.1038
Error 6 28.458 4.743
C.Total 8 60.555
3. Randomized Complete Block Designs (RCBDs)
Suppose for the paper manufacturing example, only three batches can be produced in a given day and environmental conditions from day to day are thought to influence tensile strength. Instead of
treating the design as a CRD, it is probably more efficient (lower experimental error variance) to utilize a RCBD where the day is the block. Table 3 provides the set-up of a RCBD for the modified
paper manufacturing example. Note that randomization of treatment levels occurs independently within each day.
Table 3. RCBD for the 3-level Paper Manufacturing Example
│ │ Day │
│ │ 1 │ 2 │ 3 ││
│ Prep │ 1 │3 │ 2 │ 2 │ 3 │ 1 │ 1 │ 2 │ 3 ││
│ Method │ │ │ │ │ │ │ │ │ ││
│Tensile Strength │34.5│31│38.75│37.25│33.25│35.25│37.25│39.5│37.5││
An appropriate model for the RCBD in Table 3 is
y[ij] = μ + τ[i] + β[j] + ε[ij], i = 1,2,3; j = 1,2,3, (4)
where y[ij], μ, τ[i] are as defined in (1), β[j] denotes the jth random day effect, and ε[ij] denotes the experimental error. Note for the RCBD that every prep method occurs in every day and
replication of treatments occurs across days. If the day effect is ignored in the analysis then the experimental error would include β[j] + ε[ij]. By including the day effect in the analysis, the day
effect is in essence extracted from the experimental error.
The day by treatment interaction and the experimental error are confounded in the RCBD. The intuition behind this statement can be seen through our example where we note that during a single day, the
three prep methods are randomized and the resulting tensile strengths are recorded. With just a single day, no replication exists and there would not be any way to test for the prep method effect
since the experimental error is confounded with any observed difference in prep methods. However, when three days worth of experiments are performed in this manner, there will be replicates of the
3-level experiment, one replicate for each day. In order to make sure we account for the fact that different days may result in different tensile strength results, the correct experimental error term
would be one that gets at the change in the observed differences between the 3-level prep variable from day to day. But this is exactly the definition of an interaction between prep method and days.
Consequently, the day by prep method interaction and the experimental error are confounded. For this reason one must assume that any differences in prep method across the days is not the result of an
actual interaction effect but instead the result of experimental error. Thus, the degrees of freedom for the expected error term for the RCBD are degrees of freedom for the day by prep method
interaction [(3-1) × (3-1)]. In general, the degrees of freedom are
where b is the number of blocks and t is the number of treatments.
Figure 2a provides a graphical representation of the experimental error for the RCBD in Table 3 under the assumption of no prep by day interaction. The comparison of prep method 2 to prep method 1 is
denoted by Δ[1] for day 1, Δ[2] in day 2, and Δ[3] in day 3. A different set of deltas would exist upon comparing prep methods 1 vs. 3 and 2 vs. 3. The experimental error is obtained as follows. Look
at the variation in the deltas for comparing prep methods 2 and 1. Also look at the variation in the deltas for comparing prep methods 3 and 1, and the variation in the deltas for comparing prep
methods 3 and 2. These three sets of variations in the deltas are pooled for an estimate of the experimental error variance. Notice that the variation in the deltas corresponds to a lack of
parallelism in the lines in Figure 2a. No variation corresponds to parallel lines and thus, negligible experimental error. Large variation results in lines that are not parallel and thus larger
experimenteral error. In a two-factor analysis of variance, a lack of parallelism is an indication of the existence of an interaction between the factors. In the RCBD, experimental error and block by
treatment interactions are confounded. Thus, variation in the deltas (used to estimate experimental error) and lack of parallelism (an indication of an interaction) provide the same information about
experimental error, assuming no interactions actually exist. The experimental error is quantified by the square root of the mean squared error.
Figure 2a also provides the overall means for each of the prep methods and one can get a general idea of the treatment effect by the magnitude of the differences in the treatment means relative to
the experimental error (deviation from parallel lines). In general, if the profiles are relatively parallel and widely separated then there is a significant treatment effect. The lack of parallelism
is quantified by the mean squared error in Table 4 (2.267) while the separation among the prep methods is quantified by the mean square for prep method (16.048) in Table 4. Similar to what was
observed in the CRD, when making the actual assessment of a treatment effect, the variation in the overall means for each prep method must be inflated by a factor of the square root of the number of
replicates (here replicates are blocks) and is represented by the mean sum of squares for treatments. Here, the dispersion among the prep method means is substantially larger than the experimental
error variance, thus suggesting a significant prep method effect, a fact evidenced by the small p-value (0.0485) for prep method in Table 4. Note the reduction in the error sum of squares from the
CRD (SSE = 28.458 in Table 2) to the RCBD (SSE = 9.069 in Table 4). In summary, the experimental error in a CRD is represented by (the average of) the dispersions among the replicates within each
treatment and the experimental error in a RCBD by the variation in treatment differences from block to block.
Table 4. Analysis of Variance Table for RCBD Analysis of Data in Table 3.
Source DF SS MS F Prob>F
Prep Method 2 32.097 16.048 7.078 0.0485
Day 2 19.389 9.694
Error 4 9.069 2.267
C.Total 8 60.555
Figure 2. Experimental error representation for RCBD (a). Whole-plot experimental error and whole-plot treatment effect representation for SPD[RCBD,RCBD] (b).
In the next section we demonstrate how the intuition of the experimental error in the CRD and RCBD can be extended to the split-plot design setting.
4. Split Plot Designs
Suppose there is interest in investigating the effect of a second factor, cooking temperature on tensile strength. In this experiment, once a batch is constructed with a particular prep method, the
batch is split into sub-units for cooking. Here, the batches are the whole-plot units with prep method as the whole-plot factor and the sub-units are cooking portions with cooking temperature
(henceforth referred to as temp) as the sub-plot factor. Prep method can be considered as the hard to change factor whereas temp is an easy to change factor since its levels are easily randomized
once the batch is constructed with a given prep method. For the SPD there are two separate randomizations and thus two separate experimental errors, one for the whole-plot factor levels and another
for the sub-plot factor levels. In this section, we will discuss a scenario in which the whole-plot factor levels are fully randomized and a second scenario in which the whole-plot factor levels are
randomized within a block, resulting in a RCBD for the whole-plot factor. Note that the sub-plot randomization is always restricted in the sense that randomization takes place separately within each
whole-plot, making each whole-plot a block for the sub-plot factor levels.
4.1 SPD's With Completely Randomized Whole-plot Levels, SPD[CRD,RCBD]
Let’s assume that all nine batches of pulp can be made on the same day and that no blocking is necessary. In this case, the three prep methods would be randomized to the nine batches, much like a
single variable experiment at three levels would be randomized with three replicates, i.e., a CRD. Table 5 presents a possible randomization structure at the whole-plot level.
Table 5: Randomization of the Whole-Plot Factor Prep Method Replicated Three Times.
Prep Method 2 1 1 3 1 2 2 3 3
Next, for a given prep method, the four levels of temp are randomly applied to the batch sub-units (the sub-plot units). The second level of randomization and order in which the experimental runs
would be performed is provided in Table 6.
Table 6: Randomization of the SPD[CRD,RCBD] in Prep Method and Temp
Prep Method 2 1 1 3 1 2 2 3 3
In this situation the whole-plot factor, prep method, at three levels with three replicates is randomized and then the sub-plot factor, temp, is randomized at the sub-plot level. Since the levels of
prep method are randomly assigned at the batch level, the batch effect must be assessed by comparison to a batch experimental error term which reflects the natural spread across batches. Similarly,
since the levels of temp are randomly assigned at the sub-unit level, the temp effect must be assessed by comparison to a sub-unit experimental error term reflecting natural dispersion across
sub-units. Contrast this to a CRD setting involving prep method and temp in which one would need 12 batches (one for each combination of prep method and temp) for a single replicate and 36 batches
for three replicates. A CRD would only have one experimental error term which would reflect batch dispersion. The SPD offers cost efficiency for the hard to change factor prep method as three
replicates of the SPD would only require nine changes of prep method versus the 36 required for the CRD.
An appropriate model for the SPD[CRD,RCBD] described above is
y[ijk] = μ + τ[i] + δ[j(i)] + γ[k] + (τγ)[ik] + ε[ijk], i = 1,2,3; j = 1,2,3; k = 1,2,3,4, (6)
where y[ijk] is the response on the j^th day for prep method i at temp k. The parameter μ is the overall mean, τ[i] is the fixed effect due to the ith whole-plot treatment (prep method), γ[j(i)] is
the whole-plot error, γ[k] is the fixed effect due to the kth sub-plot treatment (temp), (τγ)[ik] is the whole-plot by sub-plot interaction and ε[ijk] is the sub-plot error. It is typically assumed
that the δ[j(i)] are i.i.d. N(0,σ^2[δ]) with σ^2[δ] denoting the experimental error variance of the whole-plot units. The ε[ijk] are assumed i.i.d. N(0,σ^2[ε]) with σ^2[ε] denoting the experimental
error variance of the sub-plot units. Finally, it is assumed that the δ[j(i)] and ε[ijk] are independent of one another.
In providing intuition for the two experimental error components of the SPD[CRD,RCBD], we first begin with CRD at the whole-plot level. For the whole-plot experiment, we replicated the 3-level prep
methods three times and completely randomized the run order. In other words, we took the 9 runs of the prep method (1,1,1,2,2,2,3,3,3) and fully randomized them. Recall from our discussion in Section
2, the experimental error variance for a CRD is determined by the differences associated with the replicates nested within each prep method. For the SPD, the contribution of the ith whole-plot level
for the jth replicate is summarized by taking the mean response across the sub-plot levels, j^th day for the i^th prep method averaged across the observed cooking temps
Since the whole-plot design is a CRD, the whole plot experimental error variance will be determined by the differences associated with the whole plot replicates nested within the whole plot
treatment. Specifically, one would look at the three prep method i replicates
The degrees of freedom associated with the whole-plot error are calculated just as they were in (3) from Section 2. Due to the presence of both whole-plots and sub-plots now, we will modify the
notation and use
to denote the degrees of freedom error. Here, t[w] denotes the number of whole-plot treatment combinations and n[w] denotes the number of whole-plot replicates nested within each whole-plot treatment
The whole plot experimental error variance is easily visualized in Figure 1b by noting the dispersion among the three prep method replicates, i^th prep method. Figure 1b is equivalent to Figure 1a
but with the y[ij] (from Figure 1a) = Figure 1b). Therefore, the interpretation of the whole-plot treatment effects is analogous to the discussion of the treatment effect of the CRD in Section 2.
More specifically, one can get a general idea of the whole-plot treatment effects by comparing the dispersion in the treatment means (here, the dispersion among Table 7 is identical to that in the
CRD analysis found in Table 2. The equivalent results are due to the fact that when the data are balanced, taking the mean across the levels of the sub-plot factor within a whole-plot and then
performing a CRD analysis of the means is analogous to the split-plot analysis for the whole-plot factor. Note also that the whole-plot treatment and whole-plot error sums of squares are four times
that of the CRD, due to the four sub-plots within each whole-plot, therefore the F-ratio for the whole-plot treatment is unaffected.
Table 7: Analysis of variance for SPD[CRD,RCBD]
│ Source │DF│ SS │ MS │ F │ Prob>F │
│ Prep Method │2 │128.39│64.19 │3.38 │ 0.1038 │
│ │ │ │ │ │ │
│Reps (Prep Method) │6 │113.83│18.97 │ . │ . │
│ Whole-plot Error │ │ │ │ │ │
│ Temp │3 │434.08│144.69│36.43│< 0.0001 │
│Prep Method x Temp │6 │75.17 │12.53 │3.15 │ 0.0271 │
│ Sub-plot Error │18│71.50 │ 3.97 │ . │ . │
4.2 Whole-plot Experimental Error Variance for the SPD[RCBD,RCBD]
When there is a blocking factor, the whole-plot factor levels are randomized within the blocks. Consider again the scenario from Section 3 where it is only possible to make three batches of pulp in a
given day and environmental conditions from day to day are thought to influence tensile strength. Here, we have two blocking factors: one at the whole-plot level (prep methods randomly assigned
within a day) and the other at the sub-plot level (sub-plot levels randomly assigned within a whole-plot level). As in Section 4.1, the sub-plot factor is temp and the randomization of the levels of
temp takes place within each whole-plot, making the whole-plots blocks for the sub-plot factor. The randomization and run order for both the whole-plots and sub-plots is provided in Table 8. Contrast
the randomization for the SPD to a RCBD with two factors. In the RCBD, each day would require 12 batches, one for each of the 12 combinations of prep method and temp. A single randomization would
take place, namely the order in which the 12 batches are run within a day. This design would not be feasible here since it was stated that only three batches can be run on a given day. The SPD
overcomes the necessity for so many batches to be run in a given day by incorporating two levels of randomization.
First the order of the three prep methods (whole-plot levels) would be randomized for a given day, and then, separately, the levels of temp (sub-plot levels) are randomized to the cooking portions
within each batch. Thus, for a given day the SPD would require only three batches of material to be produced.
Table 8: Randomization of the SPD[RCBD,RCBD] in Prep Method and Temp
Day 1 Day 2 Day 3
Prep Method 2 3 1 1 3 2 3 1 2
(42) (29) (36) (28) (40) (40) (32) (31) (40)
(38) (26) (37) (32) (32) (31) (39) (41) (39)
(41) (36) (35) (40) (31) (36) (34) (40) (35)
(34) (33) (30) (41) (30) (42) (45) (37) (44)
An appropriate model for the SPD described above is
y[ijk] = μ + τ[i] + β[j] + δ[ij] + γ[k] + (τγ)[ik] + ε[ijk], i = 1,2,3; j = 1,2,3; k = 1,2,3,4, (9)
where μ, τ[i], β[j], γ[k], and (τγ)[ik] are as defined in (4) and (6), δ[ij] denotes the whole-plot error, and ε[ijk] is the sub-plot error. The same distributional assumptions made with the SPD
[CRD,RCBD] for the error terms are made here. Similar to the discussion in Section 4.1, in considering the design at the whole-plot level, it is helpful to view the responses as the i on day j. Since
the whole-plot treatments (prep methods) are randomized according to a RCBD, the block (day) by treatment interaction and the whole-plot experimental error are confounded (see the discussion in
Section 3). For this reason one must assume that any differences in prep method across the days are not the result of an actual interaction effect but instead the result of whole plot experimental
error. Thus, the degrees of freedom for the whole plot error term are the degrees of freedom for the day by prep method interaction [(3-1) × (3-1)]. In general, the whole-plot error df are given by
where t[w] is the number of whole-plot levels and n[w] is the number of whole-plot blocks. Note that the degrees of freedom for the denominator of the prep method F-statistic in the SPD[RCBD,RCBD]
has four degrees of freedom instead of six in the SPD[CRD, RCBD] case.
Figure 2b provides a graphical representation of the experimental error for the whole-plot factor and is identical to Figure 2a but with Figure 2b also provides the overall mean for each of the prep
methods and one can get a general idea of the whole-plot treatment effects by the magnitude of the differences in the means with respect to the experimental error (i.e. , deviation from parallel
lines). As with the RCBD, when making the actual assessment of a treatment effect, the variation in the overall means for each prep method must be inflated by a factor of the square root of the
number of replicates (here replicates are blocks) and is represented by the mean sum of squares for prep method (128.39) in Table 9. If the profiles in the block by whole-plot treatment interaction
plot, (Figure 2b for our example), are relatively parallel and widely separated then there is a significant whole-plot effect. Thus, the visualization of a significant whole plot effect via Figure 2b
is identical to the visualization of a treatment effect in the RCBD using Figure 2a. This is further evidenced by the fact that the p-value for prep method in Table 9 is precisely the same as that
given in Table 4. Unlike the standard RCBD where only one type of experimental unit exists, the presence of whole-plot and sub-plot units in the SPD implies that one needs to be careful in the
interpretation of the whole-plot effect in the case of a possible interaction between the whole-plot and sub-plot effects. More will be discussed regarding whole plot treatment interactions with
subplot interactions in Section 4.3.
Table 9: Analysis of variance for the SPD[RCBD,RCBD] case
│ Source │DF│ SS │ MS │ F │ Prob>F │
│ Day │2 │77.56 │38.78 │ . │ . │
│ Prep Method │2 │128.39│64.19 │7.08 │ 0.0485 │
│ Day x Prep Method │4 │36.28 │ 9.07 │ . │ . │
│ Whole-plot Error │ │ │ │ │ │
│ Temp │3 │434.08│144.69│36.43│< 0.0001 │
│Prep Method x Temp │6 │75.17 │12.53 │3.15 │ 0.0271 │
│ Sub-plot Error │18│71.50 │ 3.97 │ . │ . │
4.3 Sub-plot Experimental Error Variance
As mentioned earlier, although there are different randomization schemes possible for the whole-plot factor levels, any randomization scheme for the sub-plot factors will be restricted since
sub-plot factor levels are always randomized within whole-plots. To conceptualize the sub-plot experimental design, it is helpful to focus upon a single level of the whole-plot factor. In our
example, imagine formulating three batches (i.e. three whole-plot replicates) of pulp using a single prep method and then splitting each of these batches into four equal cooking portions (i.e. 12
total cooking portions). For each batch separately, the levels of temp are randomly assigned to the four cooking portions. Table 10 presents an example of this randomization structure. Note that the
sub-plot design is simply an RCBD where the blocks are the replicates of the specific whole-plot level (prep method). Thus, to understand the sub-plot error term, all one needs to do is to identify
the variable(s) in the data set which uniquely define(s) the whole-plot replicates (batches). In the SPD[CRD,RCBD] case, batches of pulp uniquely define the whole-plot replicate variable while in the
SPD[RCBD,RCBD] case, the day variable uniquely defines the replicates. For both types of SPDs, the sub-plot experimental error variance within a given prep method is estimated via the whole-plot
replicate variable by sub-plot variable (temp) interaction. The error degrees of freedom would be given by
where n[w] is the number of whole-plot replicates within a given whole-plot level and t[s] is the number of sub-plot treatment levels.
Table 10: Sub-Plot Structure for one Prep Method
Batch Number Nested within Prep Method
(Blocking variable at the Sub-Plot Level)
Temp 200 225 250 275 200 225 250 275 200 225 250 275
The sub-plot error structure for prep method 1 is visualized in Figure 3a where the whole-plot replicate variable by temp interaction is plotted for prep method 1. The individual points in Figure 3a
are the tensile strengths for prep method 1 across each of the j whole plot replicates and k cooking temperatures (i.e. the y[1jk]'s). Recall for RCBDs the block by treatment interaction is
confounded with experimental error and any difference in treatment effects observed across blocks is assume to be experimental error variance. Note that Δ[11], Δ[12] and Δ[13] represent the observed
tensile strength differences at a temp of 225 and a temp of 200 across the three whole plot replicates for prep method 1. A different set of deltas would be observed for each of the other temp level
pairwise comparisons. If the Δ's differ from one replicate to the next, i.e. , lack of parallelism, this suggests a replicate (block) by temp interaction. Since the design structure is an RCBD, the
differences in the Δ's represent the sub-plot error variance. This is identical to the discussions and illustrations for the RCBD in Sections 3 and 4.2 regarding Figure 2a and Figure 2b.
Since there are a total of three prep methods, the overall estimate of sub-plot error variance would be one in which the sub-plot error variances are pooled across all of the levels of the whole-plot
variable (prep method). One would have to look at all three plots (Figures 3a, Figure 3b and Figure 3c) to get a sense for the overall sub-plot error variance. The magnitude of the sub-plot error
variance would be reflected by the overall lack of parallelism across Figures 3a, 3b and 3c. The overall degrees of freedom for the sub-plot experimental error would then be
where the expression in (12) is simply that of (11) multiplied by the number of whole-plot levels t[w]. Note that the expression for the sub-plot error degrees of freedom in (12) does not depend on
the type of design at the whole-plot level since the whole-plot replicates (whether true replicates or replicates across blocks) form the blocks for the sub-plot design. This fact is illustrated in
Table 7 [SPD(CRD,RCBD)] and Table 9 [SPD(RCBD,RCBD)] where we use the interaction effect of replication (day) by temp [(3-1)*(4-1)] nested within the three prep methods to estimate the sub-plot
error for a total of (3-1)*(4-1)*3 = 18 degrees of freedom.
To visualize the sub-plot effect Figures 3a, 3b and 3c provide the group means for each of the levels of temp. Let us first focus on Figure 3a where one can get a general idea of the sub-plot
treatment effect by the magnitude of the differences in the overall sub-plot (temp) means relative to the lack of parallelism. In observing the differences among the four temp means versus the mild
lack of parallelism, one would anticipate a possible temp effect for prep method 1. A similar evaluation would be done for prep method 2 and prep method 3 by looking at Figures 3b and 3c. Overall, if
the profiles are relatively parallel and widely separated for each of the whole-plot levels (prep method) then that would indicate a potentially significant sub-plot effect.
At this point, it is important to remember that any observed sub-plot effect should not be interpreted until one has evaluated whether or not there is a significant interaction between the whole-plot
and sub-plot effects. Observing Figures 3a, 3b and 3c one can also assess a potential whole-plot by sub-plot interaction. For example, in prep method 1 (Figure 3a), the means for 275 and 250 are much
closer to each other than they are in prep method 3 (Figure 3c). This indicates a possible whole-plot by sub-plot interaction. Note for this example, the whole-plot by sub-plot interaction is indeed
significant (p-value = 0.0271 in Tables 7 and 9. The sub-plot error variance is used to assess the whole-plot by sub-plot interaction.
Figure 3. Replication(Block) by Temp interaction for Prep Method 1 (3a). Replication(Block) by Temp interaction for Prep Method 2 (3b). Replication(Block) by Temp interaction for Prep Method 3 (3c).
5. Conclusions
Providing the intuition behind the analysis of SPDs is not an easy task. In this paper we show that the whole-plot and sub-plot error structure can be broken down into easy to understand CRD or RCBD
designs. The whole-plot error is estimated by the effect of the replication variable nested within the whole-plot factor for a CRD at the whole-plot level while the whole-plot error is estimated by
the block by whole-plot factor interaction effect for a RCBD at the whole-plot level. We also showed that at the sub-plot level, the error is estimated by pooling the replicate (block) by sub-plot
factor interaction effects over the whole-plot levels. All of these concepts were illustrated in an intuitive graphical approach, thus allowing students to "see" the error structure and gain
intuition of the statistical analysis by associating each source of variation in the SPD ANOVA table with a corresponding plot.
Bingham, D. and Sitter, R. S. (2001). "Design Issues for Fractional Factorial Experiments," Journal of Quality Technology, 33, 2-15.
Box, G.E.P. and Jones, S. (1992). "Split-Plot Designs for Robust Product Experimentation," Journal of Applied Statistics, 19, 3-26.
Christensen, R. (1996). Analysis of Variance, Design and Regression, New York: Chapman & Hall.
Federer, W.T. and King, F. (2007). Variations on Split Plot and Split Block Experiment Designs, New Jersey: Wiley.
Hinkelmann, K.H. and Kempthorne, O. (1994). Design and Analysis of Experiments, Vol. 1, New York: John Wiley & Sons.
Huang, P., Chen, D., and Voelkel, J. O. (1998). "Minimum-Aberration Two-Level Split-Plot Designs," Technometrics, 40, 314-326.
Kowalski, S.M., Parker, P.A. and Vining, G.G. (2007). "Tutorial: Industrial Split-plot Experiments," Quality Engineering, 19, 1-16.
Letsinger, J. D., Myers, R. H., and Lentner, M. (1996). "Response Surface Methods for Bi-Randomization Structure," Journal of Quality Technology, 28, 381-397.
Montgomery, D.C. (2001). Design and Analysis of Experiments, 5^th edition, New York: John Wiley & Sons.
Rao, P. V. (1998). Statistical Research Methods in Life Sciences, New York: Duxbury Press.
Smith, C. and Johnson, D. (2007). "Comparing analyses of unbalanced split-plot Experiments," Journal of Statistical Computation and Simulation, 77, 119-129.
Webb, D., Lucas, J. M. and Borkowski, J. J. (2004). "Factorial Experiments when Factor Levels Are Not Necessarily Reset," Journal of Quality Technology, 36, 1-11.
Yates, F. (1935). "Complex Experiments," Journal of the Royal Statistical Society, Supplement 2, 181-247.
Yates, F. (1937). "The Design and Analysis of Factorial Experiments," Commonwealth Bureau of Soil Science, Tech. Comm., No. 35.
Timothy J. Robinson
Associate Professor of Statistics
University of Wyoming
Laramie,WY 82071
William A. Brenneman
Principle Statistician
Department of Statistics
The Procter & Gamble Company
William R. Myers
Section Head
Department of Statistics
The Procter & Gamble
Volume 17 (2009) | Archive | Index | Data Archive | Resources | Editorial Board | Guidelines for Authors | Guidelines for Data Contributors | Home Page | Contact JSE | ASA Publications | {"url":"http://jse.amstat.org/v17n1/robinson.html","timestamp":"2024-11-02T11:21:26Z","content_type":"text/html","content_length":"125514","record_id":"<urn:uuid:2c4409f6-5b68-4fbd-9e6e-7488084776bd>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00395.warc.gz"} |
Kostya's Boring Codec World
NihAV: now with lossless audio encoder
Since I wanted to try my hoof at various encoding concepts it’s no wonder that after lossy audio encoder (IMA ADPCM with trellis encoding), lossless video encoder (ZMBV, using my own deflate
implementation for compressing), lossy video encoder (Cinepak, for playing with vector quantisation, and VP6, for playing with many other concepts) it was time for a lossless audio encoder.
To remind you, there are essentially two types of lossless audio compressors—fast and asymmetric (based on LPC filters) and slow and symmetric (based on adaptive filters, usually long LMS ones). The
theory behind them is rather simple and described below.
Adaptive filters are called so because they’re updated after each processed sample depending on the error between the actual and predicted sample value. Those filters tend to be long (from 16 to 1280
samples seen in the wild) and often there are several layers of those to make the prediction even better. So encoder does the same this as the decoder: filters input samples, updates filter
coefficients and codes prediction errors. Thus an encoder for such format is not that hard to implement but it’s rather boring.
The approach with fixed filters is different: you take rather short block of data (usually several thousands samples; adaptive filters work on tens of thousands samples or more—they adapt to the data
better after a long run), calculate short filter (the usual LPC order is 6-32), apply it to the data and code the remaining residues. During encoding you can spend a lot of time selecting the optimal
filter order. For decoding you simply read filter coefficients and apply the filter to the decoded residues. For the long time though I had a question how it is done (and why).
I don’t know where the actual answer is written so here’s how I understand it: the input data is arbitrary long and the filter is short, so we must use some metric to map one to another. And for that
correlation is used (essentially a sum of samples multiplied by the same samples shifted by several positions). And the filter is a set of coefficients that if multiplied by the sequence of
correlation values gives the next correlation value. This can be expressed in matrix form and luckily the matrix with correlation values has a special type called Toeplitz matrix and this equation
can be solved by several methods like Levinson-Durbin method or Cholesky decomposition, to name the most famous ones. I chose Levinson-Durbin method as it’s the simplest one (and seems to be the
fastest for the usual lossless audio filter sizes).
So I read the method description in Wikipedia and implemented it after that description. After I’d finally managed to make it work as supposed I discovered that since matrix is symmetric, I can
simply use just backward vectors as forward vectors are the mirrored version of those.
And with a new routine to calculate LPC filter at my disposal, I decided to try it by writing some simple lossless audio encoder. FLAC was a passable candidate (not the best format but overall the
codec is simple and I’ve written a decoder for it already). And, after a lot of debugging, it works as expected (i.e. it compresses data with a ratio comparable to the official FLAC encoder on not
very high level).
Of course it’s far from being on par with any serious FLAC encoder (and IIRC nothing beats flake in terms of compression ratio), but my goal was to learn how calculating LPC works and test it in some
lossless audio codec—and the goal is accomplished. Now I should move to something else. For instance, The Mike has asked me to look at some RoQ files…
Write wavpack encoder.
I considered doing that but WavPack is based on adaptive filters so it’s not that good for my purposes. | {"url":"https://codecs.multimedia.cx/2021/10/nihav-now-with-lossless-audio-encoder/","timestamp":"2024-11-10T22:01:21Z","content_type":"application/xhtml+xml","content_length":"26340","record_id":"<urn:uuid:db73faff-3501-4279-b685-90d5d841faa8>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00808.warc.gz"} |
Game Engine Architecture
4.3. Matrices 161
coordinates. The unit basis vector along this axis will be denoted L or R,
as appropriate.
The mapping between the (front, up, left ) labels and the (x, y, z) axes is com-
pletely arbitrary. A common choice when working with right-handed axes is
to assign the label front to the positive z-axis, the label left to the positive x-axis,
and the label up to the positive y-axis (or in terms of unit basis vectors, F = k,
L = i, and U = j). However, it’s equally common for +x to be front and +z to be
right (F = i, R = k, U = j). I’ve also worked with engines in which the z-axis is
oriented vertically. The only real requirement is that you stick to one conven-
tion consistently throughout your engine.
As an example of how intuitive axis names can reduce confusion, consid-
er Euler angles (pitch, yaw, roll), which are oft en used to describe an aircraft ’s
orientation. It’s not possible to defi ne pitch, yaw, and roll angles in terms of
the (i, j, k) basis vectors because their orientation is arbitrary. However, we can
defi ne pitch, yaw, and roll in terms of the (L, U, F) basis vectors, because their
orientations are clearly defi ned. Specifi cally,
z pitch is rotation about L or R,
z yaw is rotation about U, and
z roll is rotation about F.
4.3.9.2. World Space
World space is a fi xed coordinate space, in which the positions, orientations,
and scales of all objects in the game world are expressed. This coordinate
space ties all the individual objects together into a cohesive virtual world.
The location of the world-space origin is arbitrary, but it is oft en placed
near the center of the playable game space to minimize the reduction in fl oat-
ing-point precision that can occur when (x, y, z) coordinates grow very large.
Likewise, the orientation of the x-, y-, and z-axes is arbitrary, although most
Figure 4.17. One possible choice of the model-space front, left and up axis basis vectors for
an airplane. | {"url":"https://issuhub.com/view/index/4147?pageIndex=183","timestamp":"2024-11-14T22:04:43Z","content_type":"text/html","content_length":"12488","record_id":"<urn:uuid:51200599-cf48-4fb3-a98e-8e88b904c204>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00439.warc.gz"} |
4.7: Discrete Random Variables (Exercises)
Last updated
Page ID
\( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \)
\( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}} \)
\( \newcommand{\id}{\mathrm{id}}\) \( \newcommand{\Span}{\mathrm{span}}\)
( \newcommand{\kernel}{\mathrm{null}\,}\) \( \newcommand{\range}{\mathrm{range}\,}\)
\( \newcommand{\RealPart}{\mathrm{Re}}\) \( \newcommand{\ImaginaryPart}{\mathrm{Im}}\)
\( \newcommand{\Argument}{\mathrm{Arg}}\) \( \newcommand{\norm}[1]{\| #1 \|}\)
\( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\)
\( \newcommand{\Span}{\mathrm{span}}\)
\( \newcommand{\id}{\mathrm{id}}\)
\( \newcommand{\Span}{\mathrm{span}}\)
\( \newcommand{\kernel}{\mathrm{null}\,}\)
\( \newcommand{\range}{\mathrm{range}\,}\)
\( \newcommand{\RealPart}{\mathrm{Re}}\)
\( \newcommand{\ImaginaryPart}{\mathrm{Im}}\)
\( \newcommand{\Argument}{\mathrm{Arg}}\)
\( \newcommand{\norm}[1]{\| #1 \|}\)
\( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\)
\( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\AA}{\unicode[.8,0]{x212B}}\)
\( \newcommand{\vectorA}[1]{\vec{#1}} % arrow\)
\( \newcommand{\vectorAt}[1]{\vec{\text{#1}}} % arrow\)
\( \newcommand{\vectorB}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \)
\( \newcommand{\vectorC}[1]{\textbf{#1}} \)
\( \newcommand{\vectorD}[1]{\overrightarrow{#1}} \)
\( \newcommand{\vectorDt}[1]{\overrightarrow{\text{#1}}} \)
\( \newcommand{\vectE}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash{\mathbf {#1}}}} \)
\( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \)
\( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}} \)
\(\newcommand{\avec}{\mathbf a}\) \(\newcommand{\bvec}{\mathbf b}\) \(\newcommand{\cvec}{\mathbf c}\) \(\newcommand{\dvec}{\mathbf d}\) \(\newcommand{\dtil}{\widetilde{\mathbf d}}\) \(\newcommand{\
evec}{\mathbf e}\) \(\newcommand{\fvec}{\mathbf f}\) \(\newcommand{\nvec}{\mathbf n}\) \(\newcommand{\pvec}{\mathbf p}\) \(\newcommand{\qvec}{\mathbf q}\) \(\newcommand{\svec}{\mathbf s}\) \(\
newcommand{\tvec}{\mathbf t}\) \(\newcommand{\uvec}{\mathbf u}\) \(\newcommand{\vvec}{\mathbf v}\) \(\newcommand{\wvec}{\mathbf w}\) \(\newcommand{\xvec}{\mathbf x}\) \(\newcommand{\yvec}{\mathbf y}
\) \(\newcommand{\zvec}{\mathbf z}\) \(\newcommand{\rvec}{\mathbf r}\) \(\newcommand{\mvec}{\mathbf m}\) \(\newcommand{\zerovec}{\mathbf 0}\) \(\newcommand{\onevec}{\mathbf 1}\) \(\newcommand{\real}
{\mathbb R}\) \(\newcommand{\twovec}[2]{\left[\begin{array}{r}#1 \\ #2 \end{array}\right]}\) \(\newcommand{\ctwovec}[2]{\left[\begin{array}{c}#1 \\ #2 \end{array}\right]}\) \(\newcommand{\threevec}
[3]{\left[\begin{array}{r}#1 \\ #2 \\ #3 \end{array}\right]}\) \(\newcommand{\cthreevec}[3]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \end{array}\right]}\) \(\newcommand{\fourvec}[4]{\left[\begin{array}
{r}#1 \\ #2 \\ #3 \\ #4 \end{array}\right]}\) \(\newcommand{\cfourvec}[4]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \\ #4 \end{array}\right]}\) \(\newcommand{\fivevec}[5]{\left[\begin{array}{r}#1 \\ #2 \\
#3 \\ #4 \\ #5 \\ \end{array}\right]}\) \(\newcommand{\cfivevec}[5]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \\ #4 \\ #5 \\ \end{array}\right]}\) \(\newcommand{\mattwo}[4]{\left[\begin{array}{rr}#1 \amp
#2 \\ #3 \amp #4 \\ \end{array}\right]}\) \(\newcommand{\laspan}[1]{\text{Span}\{#1\}}\) \(\newcommand{\bcal}{\cal B}\) \(\newcommand{\ccal}{\cal C}\) \(\newcommand{\scal}{\cal S}\) \(\newcommand{\
wcal}{\cal W}\) \(\newcommand{\ecal}{\cal E}\) \(\newcommand{\coords}[2]{\left\{#1\right\}_{#2}}\) \(\newcommand{\gray}[1]{\color{gray}{#1}}\) \(\newcommand{\lgray}[1]{\color{lightgray}{#1}}\) \(\
newcommand{\rank}{\operatorname{rank}}\) \(\newcommand{\row}{\text{Row}}\) \(\newcommand{\col}{\text{Col}}\) \(\renewcommand{\row}{\text{Row}}\) \(\newcommand{\nul}{\text{Nul}}\) \(\newcommand{\var}
{\text{Var}}\) \(\newcommand{\corr}{\text{corr}}\) \(\newcommand{\len}[1]{\left|#1\right|}\) \(\newcommand{\bbar}{\overline{\bvec}}\) \(\newcommand{\bhat}{\widehat{\bvec}}\) \(\newcommand{\bperp}{\
bvec^\perp}\) \(\newcommand{\xhat}{\widehat{\xvec}}\) \(\newcommand{\vhat}{\widehat{\vvec}}\) \(\newcommand{\uhat}{\widehat{\uvec}}\) \(\newcommand{\what}{\widehat{\wvec}}\) \(\newcommand{\Sighat}{\
widehat{\Sigma}}\) \(\newcommand{\lt}{<}\) \(\newcommand{\gt}{>}\) \(\newcommand{\amp}{&}\) \(\definecolor{fillinmathshade}{gray}{0.9}\)
These are homework exercises to accompany the Textmap created for "Introductory Statistics" by OpenStax.
4.1: Introduction
4.2: Probability Distribution Function (PDF) for a Discrete Random Variable
Q 4.2.1
Suppose that the PDF for the number of years it takes to earn a Bachelor of Science (B.S.) degree is given in Table.
\(x\) \(P(x)\)
3 0.05
4 0.40
5 0.30
6 0.15
7 0.10
1. In words, define the random variable \(X\).
2. What does it mean that the values zero, one, and two are not included for \(x\) in the PDF?
Exercise 4.3.5
Complete the expected value table.
\(x\) \(P(x)\) \(x*P(x)\)
0 0.2
1 0.2
2 0.4
3 0.2
Exercise 4.3.6
Find the expected value from the expected value table.
\(x\) \(P(x)\) \(x*P(x)\)
2 0.1
4 0.3 4(0.3) = 1.2
6 0.4 6(0.4) = 2.4
8 0.2 8(0.2) = 1.6
\(0.2 + 1.2 + 2.4 + 1.6 = 5.4\)
Exercise 4.3.7
Find the standard deviation.
\(x\) \(P(x)\) \(x*P(x)\) \((x – \mu)^{2}P(x)\)
2 0.1 2(0.1) = 0.2 (2–5.4)^2(0.1) = 1.156
4 0.3 4(0.3) = 1.2 (4–5.4)^2(0.3) = 0.588
6 0.4 6(0.4) = 2.4 (6–5.4)^2(0.4) = 0.144
8 0.2 8(0.2) = 1.6 (8–5.4)^2(0.2) = 1.352
Exercise 4.3.8
Identify the mistake in the probability distribution table.
\(x\) \(P(x)\) \(x*P(x)\)
1 0.15 0.15
2 0.25 0.50
3 0.30 0.90
4 0.20 0.80
5 0.15 0.75
The values of \(P(x)\) do not sum to one.
Exercise 4.3.9
Identify the mistake in the probability distribution table.
\(x\) \(P(x)\) \(x*P(x)\)
1 0.15 0.15
2 0.25 0.40
3 0.25 0.65
4 0.20 0.85
5 0.15 1
Use the following information to answer the next five exercises: A physics professor wants to know what percent of physics majors will spend the next several years doing post-graduate research. He
has the following probability distribution.
\(x\) \(P(x)\) \(x*P(x)\)
1 0.35
2 0.20
3 0.15
5 0.10
6 0.05
Exercise 4.3.10
Define the random variable \(X\).
Let \(X =\) the number of years a physics major will spend doing post-graduate research.
Exercise 4.3.11
Define \(P(x)\), or the probability of \(x\).
Exercise 4.3.12
Find the probability that a physics major will do post-graduate research for four years. \(P(x = 4) =\) _______
\(1 – 0.35 – 0.20 – 0.15 – 0.10 – 0.05 = 0.15\)
Exercise 4.3.13
Find the probability that a physics major will do post-graduate research for at most three years. \(P(x \leq 3) =\) _______
Exercise 4.3.14
On average, how many years would you expect a physics major to spend doing post-graduate research?
\(1(0.35) + 2(0.20) + 3(0.15) + 4(0.15) + 5(0.10) + 6(0.05) = 0.35 + 0.40 + 0.45 + 0.60 + 0.50 + 0.30 = 2.6\) years
Use the following information to answer the next seven exercises: A ballet instructor is interested in knowing what percent of each year's class will continue on to the next, so that she can plan
what classes to offer. Over the years, she has established the following probability distribution.
• Let \(X =\) the number of years a student will study ballet with the teacher.
• Let \(P(x) =\) the probability that a student will study ballet \(x\) years.
Exercise 4.3.15
Complete Table using the data provided.
\(x\) \(P(x)\) \(x*P(x)\)
1 0.10
2 0.05
3 0.10
5 0.30
6 0.20
7 0.10
Exercise 4.3.16
In words, define the random variable \(X\).
\(X\) is the number of years a student studies ballet with the teacher.
Exercise 4.3.17
\(P(x = 4) =\) _______
Exercise 4.3.18
\(P(x < 4) =\) _______
\(0.10 + 0.05 + 0.10 = 0.25\)
Exercise 4.3.19
On average, how many years would you expect a child to study ballet with this teacher?
Exercise 4.3.20
What does the column "\(P(x)\)" sum to and why?
The sum of the probabilities sum to one because it is a probability distribution.
Exercise 4.3.21
What does the column "\(x*P(x)\)" sum to and why?
Exercise 4.3.22
You are playing a game by drawing a card from a standard deck and replacing it. If the card is a face card, you win $30. If it is not a face card, you pay $2. There are 12 face cards in a deck of 52
cards. What is the expected value of playing the game?
\(−2\left(\dfrac{40}{52}\right)+30\left(\dfrac{12}{52}\right) = −1.54 + 6.92 = 5.38\)
Exercise 4.3.23
You are playing a game by drawing a card from a standard deck and replacing it. If the card is a face card, you win $30. If it is not a face card, you pay $2. There are 12 face cards in a deck of 52
cards. Should you play the game?
4.3: Mean or Expected Value and Standard Deviation
Q 4.3.1
A theater group holds a fund-raiser. It sells 100 raffle tickets for $5 apiece. Suppose you purchase four tickets. The prize is two passes to a Broadway show, worth a total of $150.
1. What are you interested in here?
2. In words, define the random variable \(X\).
3. List the values that \(X\) may take on.
4. Construct a PDF.
5. If this fund-raiser is repeated often and you always purchase four tickets, what would be your expected average winnings per raffle?
Q 4.3.2
A game involves selecting a card from a regular 52-card deck and tossing a coin. The coin is a fair coin and is equally likely to land on heads or tails.
• If the card is a face card, and the coin lands on Heads, you win $6
• If the card is a face card, and the coin lands on Tails, you win $2
• If the card is not a face card, you lose $2, no matter what the coin shows.
1. Find the expected value for this game (expected net gain or loss).
2. Explain what your calculations indicate about your long-term average profits and losses on this game.
3. Should you play this game to win money?
S 4.3.2
The variable of interest is \(X\), or the gain or loss, in dollars.
The face cards jack, queen, and king. There are \((3)(4) = 12\) face cards and \(52 – 12 = 40\) cards that are not face cards.
We first need to construct the probability distribution for \(X\). We use the card and coin events to determine the probability for each outcome, but we use the monetary value of \(X\) to determine
the expected value.
Card Event \(X\) net gain/loss \(P(X)\)
Face Card and Heads 6 \(\left(\frac{12}{52}\right) \left(\frac{1}{2}\right) = \left(\frac{6}{52}\right)\)
Face Card and Tails 2 \(\left(\frac{12}{52}\right) \left(\frac{1}{2}\right) = \left(\frac{6}{52}\right)\)
(Not Face Card) and (H or T) –2 \(\left(\frac{40}{52}\right) (1) = \left(\frac{40}{52}\right)\)
• \(\text{Expected value} = (6)\left(\frac{6}{52}\right) + (2)\left(\frac{6}{52}\right) + (-2)\left(\frac{40}{52}\right) = -\frac{32}{52}\)
• \(\text{Expected value} = –$0.62\), rounded to the nearest cent
• If you play this game repeatedly, over a long string of games, you would expect to lose 62 cents per game, on average.
• You should not play this game to win money because the expected value indicates an expected average loss.
Q 4.3.3
You buy a lottery ticket to a lottery that costs $10 per ticket. There are only 100 tickets available to be sold in this lottery. In this lottery there are one $500 prize, two $100 prizes, and four
$25 prizes. Find your expected gain or loss.
Q 4.3.4
Complete the PDF and answer the questions.
\(x\) \(P(x)\) \(xP(x)\)
0 0.3
1 0.2
3 0.4
1. Find the probability that \(x = 2\).
2. Find the expected value.
Q 4.3.5
Suppose that you are offered the following “deal.” You roll a die. If you roll a six, you win $10. If you roll a four or five, you win $5. If you roll a one, two, or three, you pay $6.
1. What are you ultimately interested in here (the value of the roll or the money you win)?
2. In words, define the Random Variable \(X\).
3. List the values that \(X\) may take on.
4. Construct a PDF.
5. Over the long run of playing this game, what are your expected average winnings per game?
6. Based on numerical values, should you take the deal? Explain your decision in complete sentences.
Q 4.3.6
A venture capitalist, willing to invest $1,000,000, has three investments to choose from. The first investment, a software company, has a 10% chance of returning $5,000,000 profit, a 30% chance of
returning $1,000,000 profit, and a 60% chance of losing the million dollars. The second company, a hardware company, has a 20% chance of returning $3,000,000 profit, a 40% chance of returning
$1,000,000 profit, and a 40% chance of losing the million dollars. The third company, a biotech firm, has a 10% chance of returning $6,000,000 profit, a 70% of no profit or loss, and a 20% chance of
losing the million dollars.
1. Construct a PDF for each investment.
2. Find the expected value for each investment.
3. Which is the safest investment? Why do you think so?
4. Which is the riskiest investment? Why do you think so?
5. Which investment has the highest expected return, on average?
S 4.3.6
1. Software Company
\(x\) \(P(x)\)
5,000,000 0.10
1,000,000 0.30
–1,000,000 0.60
Hardware Company
\(x\) \(P(x)\)
3,000,000 0.20
1,000,000 0.40
–1,000,00 0.40
Biotech Firm
\(x\) \(P(x)\)
6,00,000 0.10
0 0.70
–1,000,000 0.20
2. $200,000; $600,000; $400,000
3. third investment because it has the lowest probability of loss
4. first investment because it has the highest probability of loss
5. second investment
Q 4.3.7
Suppose that 20,000 married adults in the United States were randomly surveyed as to the number of children they have. The results are compiled and are used as theoretical probabilities. Let \(X =\)
the number of children married people have.
\(x\) \(P(x)\) \(xP(x)\)
0 0.10
1 0.20
2 0.30
4 0.10
5 0.05
6 (or more) 0.05
1. Find the probability that a married adult has three children.
2. In words, what does the expected value in this example represent?
3. Find the expected value.
4. Is it more likely that a married adult will have two to three children or four to six children? How do you know?
Q 4.3.8
Suppose that the PDF for the number of years it takes to earn a Bachelor of Science (B.S.) degree is given as in Table.
\(x\) \(P(x)\)
3 0.05
4 0.40
5 0.30
6 0.15
7 0.10
On average, how many years do you expect it to take for an individual to earn a B.S.?
Q 4.3.9
People visiting video rental stores often rent more than one DVD at a time. The probability distribution for DVD rentals per customer at Video To Go is given in the following table. There is a
five-video limit per customer at this store, so nobody ever rents more than five DVDs.
\(x\) \(P(x)\)
0 0.03
1 0.50
2 0.24
4 0.70
5 0.04
1. Describe the random variable \(X\) in words.
2. Find the probability that a customer rents three DVDs.
3. Find the probability that a customer rents at least four DVDs.
4. Find the probability that a customer rents at most two DVDs. Another shop, Entertainment Headquarters, rents DVDs and video games. The probability distribution for DVD rentals per customer at
this shop is given as follows. They also have a five-DVD limit per customer.
\(x\) \(P(x)\)
0 0.35
1 0.25
2 0.20
3 0.10
4 0.05
5 0.05
5. At which store is the expected number of DVDs rented per customer higher?
6. If Video to Go estimates that they will have 300 customers next week, how many DVDs do they expect to rent next week? Answer in sentence form.
7. If Video to Go expects 300 customers next week, and Entertainment HQ projects that they will have 420 customers, for which store is the expected number of DVD rentals for next week higher?
8. Which of the two video stores experiences more variation in the number of DVD rentals per customer? How do you know that?
Q 4.3.10
A “friend” offers you the following “deal.” For a $10 fee, you may pick an envelope from a box containing 100 seemingly identical envelopes. However, each envelope contains a coupon for a free gift.
• Ten of the coupons are for a free gift worth $6.
• Eighty of the coupons are for a free gift worth $8.
• Six of the coupons are for a free gift worth $12.
• Four of the coupons are for a free gift worth $40.
Based upon the financial gain or loss over the long run, should you play the game?
1. Yes, I expect to come out ahead in money.
2. No, I expect to come out behind in money.
3. It doesn’t matter. I expect to break even.
Q 4.3.11
Florida State University has 14 statistics classes scheduled for its Summer 2013 term. One class has space available for 30 students, eight classes have space for 60 students, one class has space for
70 students, and four classes have space for 100 students.
1. What is the average class size assuming each class is filled to capacity?
2. Space is available for 980 students. Suppose that each class is filled to capacity and select a statistics student at random. Let the random variable \(X\) equal the size of the student’s class.
Define the PDF for \(X\).
3. Find the mean of \(X\).
4. Find the standard deviation of \(X\).
Q 4.3.12
In a lottery, there are 250 prizes of $5, 50 prizes of $25, and ten prizes of $100. Assuming that 10,000 tickets are to be issued and sold, what is a fair price to charge to break even?
S 4.3.12
Let \(X =\) the amount of money to be won on a ticket. The following table shows the PDF for \(X\).
\(x\) \(P(x)\)
0 0.969
5 \(\frac{250}{10,000} = 0.025\)
25 \(\frac{50}{10,000} = 0.005\)
100 \(\frac{10}{10,000} = 0.001\)
Calculate the expected value of \(X\).
\[0(0.969) + 5(0.025) + 25(0.005) + 100(0.001) = 0.35\]
A fair price for a ticket is $0.35. Any price over $0.35 will enable the lottery to raise money.
4.4: Binomial Distribution
Q 4.4.1
According to a recent article the average number of babies born with significant hearing loss (deafness) is approximately two per 1,000 babies in a healthy baby nursery. The number climbs to an
average of 30 per 1,000 babies in an intensive care nursery.
Suppose that 1,000 babies from healthy baby nurseries were randomly surveyed. Find the probability that exactly two babies were born deaf.
Use the following information to answer the next four exercises. Recently, a nurse commented that when a patient calls the medical advice line claiming to have the flu, the chance that he or she
truly has the flu (and not just a nasty cold) is only about 4%. Of the next 25 patients calling in claiming to have the flu, we are interested in how many actually have the flu.
Q 4.4.2
Define the random variable and list its possible values.
S 4.4.2
\(X =\) the number of patients calling in claiming to have the flu, who actually have the flu.
\(X = 0, 1, 2, ...25\)
Q 4.4.3
State the distribution of \(X\).
Q 4.4.4
Find the probability that at least four of the 25 patients actually have the flu.
Q 4.4.5
On average, for every 25 patients calling in, how many do you expect to have the flu?
Q 4.4.6
People visiting video rental stores often rent more than one DVD at a time. The probability distribution for DVD rentals per customer at Video To Go is given Table. There is five-video limit per
customer at this store, so nobody ever rents more than five DVDs.
\(x\) \(P(x)\)
0 0.03
1 0.50
2 0.24
4 0.07
5 0.04
1. Describe the random variable \(X\) in words.
2. Find the probability that a customer rents three DVDs.
3. Find the probability that a customer rents at least four DVDs.
4. Find the probability that a customer rents at most two DVDs.
S 4.4.6
1. \(X =\) the number of DVDs a Video to Go customer rents
2. 0.12
3. 0.11
4. 0.77
Q 4.4.7
A school newspaper reporter decides to randomly survey 12 students to see if they will attend Tet (Vietnamese New Year) festivities this year. Based on past years, she knows that 18% of students
attend Tet festivities. We are interested in the number of students who will attend the festivities.
1. In words, define the random variable \(X\).
2. List the values that \(X\) may take on.
3. Give the distribution of \(X\). \(X \sim\) _____(_____,_____)
4. How many of the 12 students do we expect to attend the festivities?
5. Find the probability that at most four students will attend.
6. Find the probability that more than two students will attend.
Use the following information to answer the next three exercises: The probability that the San Jose Sharks will win any given game is 0.3694 based on a 13-year win history of 382 wins out of 1,034
games played (as of a certain date). An upcoming monthly schedule contains 12 games.
Q 4.4.8
The expected number of wins for that upcoming month is:
1. 1.67
2. 12
3. \(\frac{382}{1043}\)
4. 4.43
S 4.4.8
d. 4.43
Let \(X =\) the number of games won in that upcoming month.
Q 4.4.9
What is the probability that the San Jose Sharks win six games in that upcoming month?
1. 0.1476
2. 0.2336
3. 0.7664
4. 0.8903
Q 4.4.10
What is the probability that the San Jose Sharks win at least five games in that upcoming month?
1. 0.3694
2. 0.5266
3. 0.4734
4. 0.2305
Q 4.4.11
A student takes a ten-question true-false quiz, but did not study and randomly guesses each answer. Find the probability that the student passes the quiz with a grade of at least 70% of the questions
Q 4.4.12
A student takes a 32-question multiple-choice exam, but did not study and randomly guesses each answer. Each question has three possible choices for the answer. Find the probability that the student
guesses more than 75% of the questions correctly.
S 4.4.13
• \(X =\) number of questions answered correctly
• \(X \sim B(32, \frac{1}{3})\)
• We are interested in MORE THAN 75% of 32 questions correct. 75% of 32 is 24. We want to find \(P(x > 24)\). The event "more than 24" is the complement of "less than or equal to 24."
• Use the LibreTexts calculator.
• \(P(x > 24) = 0\)
• The probability of getting more than 75% of the 32 questions correct when randomly guessing is very small and practically zero.
Q 4.4.14
Six different colored dice are rolled. Of interest is the number of dice that show a one.
1. In words, define the random variable \(X\).
2. List the values that \(X\) may take on.
3. Give the distribution of \(X\). \(X \sim\) _____(_____,_____)
4. On average, how many dice would you expect to show a one?
5. Find the probability that all six dice show a one.
6. Is it more likely that three or that four dice will show a one? Use numbers to justify your answer numerically.
Q 4.4.15
More than 96 percent of the very largest colleges and universities (more than 15,000 total enrollments) have some online offerings. Suppose you randomly pick 13 such institutions. We are interested
in the number that offer distance learning courses.
1. In words, define the random variable \(X\).
2. List the values that \(X\) may take on.
3. Give the distribution of \(X\). \(X \sim\) _____(_____,_____)
4. On average, how many schools would you expect to offer such courses?
5. Find the probability that at most ten offer such courses.
6. Is it more likely that 12 or that 13 will offer such courses? Use numbers to justify your answer numerically and answer in a complete sentence.
S 4.4.15
1. \(X =\) the number of college and universities that offer online offerings.
2. 0, 1, 2, …, 13
3. \(X \sim B(13, 0.96)\)
4. 12.48
5. 0.0135
6. \(P(x = 12) = 0.3186 P(x = 13) = 0.5882\) More likely to get 13.
Q 4.4.16
Suppose that about 85% of graduating students attend their graduation. A group of 22 graduating students is randomly chosen.
1. In words, define the random variable \(X\).
2. List the values that \(X\) may take on.
3. Give the distribution of \(X\). \(X \sim\) _____(_____,_____)
4. How many are expected to attend their graduation?
5. Find the probability that 17 or 18 attend.
6. Based on numerical values, would you be surprised if all 22 attended graduation? Justify your answer numerically.
Q 4.4.17
At The Fencing Center, 60% of the fencers use the foil as their main weapon. We randomly survey 25 fencers at The Fencing Center. We are interested in the number of fencers who do not use the foil as
their main weapon.
1. In words, define the random variable \(X\).
2. List the values that \(X\) may take on.
3. Give the distribution of \(X\). \(X \sim\) _____(_____,_____)
4. How many are expected to not to use the foil as their main weapon?
5. Find the probability that six do not use the foil as their main weapon.
6. Based on numerical values, would you be surprised if all 25 did not use foil as their main weapon? Justify your answer numerically.
S 4.4.17
1. \(X =\) the number of fencers who do not use the foil as their main weapon
2. 0, 1, 2, 3,... 25
3. \(X \sim B(25,0.40)\)
4. 10
5. 0.0442
6. The probability that all 25 not use the foil is almost zero. Therefore, it would be very surprising.
Q 4.4.18
Approximately 8% of students at a local high school participate in after-school sports all four years of high school. A group of 60 seniors is randomly chosen. Of interest is the number who
participated in after-school sports all four years of high school.
1. In words, define the random variable \(X\).
2. List the values that \(X\) may take on.
3. Give the distribution of \(X\). \(X \sim\) _____(_____,_____)
4. How many seniors are expected to have participated in after-school sports all four years of high school?
5. Based on numerical values, would you be surprised if none of the seniors participated in after-school sports all four years of high school? Justify your answer numerically.
6. Based upon numerical values, is it more likely that four or that five of the seniors participated in after-school sports all four years of high school? Justify your answer numerically.
Q 4.4.19
The chance of an IRS audit for a tax return with over $25,000 in income is about 2% per year. We are interested in the expected number of audits a person with that income has in a 20-year period.
Assume each year is independent.
1. In words, define the random variable \(X\).
2. List the values that \(X\) may take on.
3. Give the distribution of \(X\). \(X \sim\) _____(_____,_____)
4. How many audits are expected in a 20-year period?
5. Find the probability that a person is not audited at all.
6. Find the probability that a person is audited more than twice.
S 4.4.19
1. \(X =\) the number of audits in a 20-year period
2. 0, 1, 2, …, 20
3. \(X \sim B(20, 0.02)\)
4. 0.4
5. 0.6676
6. 0.0071
Q 4.4.20
It has been estimated that only about 30% of California residents have adequate earthquake supplies. Suppose you randomly survey 11 California residents. We are interested in the number who have
adequate earthquake supplies.
1. In words, define the random variable \(X\).
2. List the values that \(X\) may take on.
3. Give the distribution of \(X\). \(X \sim\) _____(_____,_____)
4. What is the probability that at least eight have adequate earthquake supplies?
5. Is it more likely that none or that all of the residents surveyed will have adequate earthquake supplies? Why?
6. How many residents do you expect will have adequate earthquake supplies?
Q 4.4.21
There are two similar games played for Chinese New Year and Vietnamese New Year. In the Chinese version, fair dice with numbers 1, 2, 3, 4, 5, and 6 are used, along with a board with those numbers.
In the Vietnamese version, fair dice with pictures of a gourd, fish, rooster, crab, crayfish, and deer are used. The board has those six objects on it, also. We will play with bets being $1. The
player places a bet on a number or object. The “house” rolls three dice. If none of the dice show the number or object that was bet, the house keeps the $1 bet. If one of the dice shows the number or
object bet (and the other two do not show it), the player gets back his or her $1 bet, plus $1 profit. If two of the dice show the number or object bet (and the third die does not show it), the
player gets back his or her $1 bet, plus $2 profit. If all three dice show the number or object bet, the player gets back his or her $1 bet, plus $3 profit. Let \(X =\) number of matches and \(Y =\)
profit per game.
1. In words, define the random variable \(X\).
2. List the values that \(X\) may take on.
3. Give the distribution of \(X\). \(X \sim\) _____(_____,_____)
4. List the values that \(Y\) may take on. Then, construct one PDF table that includes both \(X\) and \(Y\) and their probabilities.
5. Calculate the average expected matches over the long run of playing this game for the player.
6. Calculate the average expected earnings over the long run of playing this game for the player.
7. Determine who has the advantage, the player or the house.
S 4.4.21
1. \(X =\) the number of matches
2. 0, 1, 2, 3
3. \(X \sim B(3,16)(3,16)\)
4. In dollars: −1, 1, 2, 3
5. \(\frac{1}{2}\)
6. Multiply each \(Y\) value by the corresponding \(X\) probability from the PDF table. The answer is −0.0787. You lose about eight cents, on average, per game.
7. The house has the advantage.
Q 4.4.22
According to The World Bank, only 9% of the population of Uganda had access to electricity as of 2009. Suppose we randomly sample 150 people in Uganda. Let \(X =\) the number of people who have
access to electricity.
1. What is the probability distribution for \(X\)?
2. Using the formulas, calculate the mean and standard deviation of \(X\).
3. Use your calculator to find the probability that 15 people in the sample have access to electricity.
4. Find the probability that at most ten people in the sample have access to electricity.
5. Find the probability that more than 25 people in the sample have access to electricity.
Q 4.4.23
The literacy rate for a nation measures the proportion of people age 15 and over that can read and write. The literacy rate in Afghanistan is 28.1%. Suppose you choose 15 people in Afghanistan at
random. Let \(X =\) the number of people who are literate.
1. Sketch a graph of the probability distribution of \(X\).
2. Using the formulas, calculate the (i) mean and (ii) standard deviation of \(X\).
3. Find the probability that more than five people in the sample are literate. Is it is more likely that three people or four people are literate.
S 4.4.23
1. \(X \sim B(15, 0.281)\)
Figure 4.4.1.
1. Mean \(= \mu = np = 15(0.281) = 4.215\)
2. Standard Deviation \(= \sigma = \sqrt{npq} = \sqrt{15(0.281)(0.719)} = 1.7409\)
3. \(P(x > 5) = 0.2246\)
\(P(x = 3) = 0.1927\)
\(P(x = 4) = 0.2259\)
It is more likely that four people are literate that three people are.
4.5: Geometric Distribution
Q 4.5.1
A consumer looking to buy a used red Miata car will call dealerships until she finds a dealership that carries the car. She estimates the probability that any independent dealership will have the car
will be 28%. We are interested in the number of dealerships she must call.
1. In words, define the random variable \(X\).
2. List the values that \(X\) may take on.
3. Give the distribution of \(X\). \(X \sim\) _____(_____,_____)
4. On average, how many dealerships would we expect her to have to call until she finds one that has the car?
5. Find the probability that she must call at most four dealerships.
6. Find the probability that she must call three or four dealerships.
Q 4.5.2
Suppose that the probability that an adult in America will watch the Super Bowl is 40%. Each person is considered independent. We are interested in the number of adults in America we must survey
until we find one who will watch the Super Bowl.
1. In words, define the random variable \(X\).
2. List the values that \(X\) may take on.
3. Give the distribution of \(X\). \(X \sim\) _____(_____,_____)
4. How many adults in America do you expect to survey until you find one who will watch the Super Bowl?
5. Find the probability that you must ask seven people.
6. Find the probability that you must ask three or four people.
S 4.5.2
1. \(X =\) the number of adults in America who are surveyed until one says he or she will watch the Super Bowl.
2. \(X \sim G(0.40)\)
3. 2.5
4. 0.0187
5. 0.2304
Q 4.5.3
It has been estimated that only about 30% of California residents have adequate earthquake supplies. Suppose we are interested in the number of California residents we must survey until we find a
resident who does not have adequate earthquake supplies.
1. In words, define the random variable \(X\).
2. List the values that \(X\) may take on.
3. Give the distribution of \(X\). \(X \sim\) _____(_____,_____)
4. What is the probability that we must survey just one or two residents until we find a California resident who does not have adequate earthquake supplies?
5. What is the probability that we must survey at least three California residents until we find a California resident who does not have adequate earthquake supplies?
6. How many California residents do you expect to need to survey until you find a California resident who does not have adequate earthquake supplies?
7. How many California residents do you expect to need to survey until you find a California resident who does have adequate earthquake supplies?
Q 4.5.4
In one of its Spring catalogs, L.L. Bean® advertised footwear on 29 of its 192 catalog pages. Suppose we randomly survey 20 pages. We are interested in the number of pages that advertise footwear.
Each page may be picked more than once.
1. In words, define the random variable \(X\).
2. List the values that \(X\) may take on.
3. Give the distribution of \(X\). \(X \sim\) _____(_____,_____)
4. How many pages do you expect to advertise footwear on them?
5. Is it probable that all twenty will advertise footwear on them? Why or why not?
6. What is the probability that fewer than ten will advertise footwear on them?
7. Reminder: A page may be picked more than once. We are interested in the number of pages that we must randomly survey until we find one that has footwear advertised on it. Define the random
variable \(X\) and give its distribution.
8. What is the probability that you only need to survey at most three pages in order to find one that advertises footwear on it?
9. How many pages do you expect to need to survey in order to find one that advertises footwear?
S 4.5.4
1. \(X =\) the number of pages that advertise footwear
2. \(X\) takes on the values 0, 1, 2, ..., 20
3. \(X \sim B(20, \frac{29}{192})\)
4. 3.02
5. No
6. 0.9997
7. \(X =\) the number of pages we must survey until we find one that advertises footwear. \(X \sim G(\frac{29}{192})\)
8. 0.3881
9. 6.6207 pages
Q 4.5.5
Suppose that you are performing the probability experiment of rolling one fair six-sided die. Let \(\text{F}\) be the event of rolling a four or a five. You are interested in how many times you need
to roll the die in order to obtain the first four or five as the outcome.
• \(p =\) probability of success (event \(\text{F}\) occurs)
• \(q =\) probability of failure (event \(\text{F}\) does not occur)
1. Write the description of the random variable \(X\).
2. What are the values that \(X\) can take on?
3. Find the values of \(p\) and \(q\).
4. Find the probability that the first occurrence of event \(\text{F}\) (rolling a four or five) is on the second trial.
Q 4.5.5
Ellen has music practice three days a week. She practices for all of the three days 85% of the time, two days 8% of the time, one day 4% of the time, and no days 3% of the time. One week is selected
at random. What values does \(X\) take on?
Q 4.5.6
The World Bank records the prevalence of HIV in countries around the world. According to their data, “Prevalence of HIV refers to the percentage of people ages 15 to 49 who are infected with HIV.”^1
In South Africa, the prevalence of HIV is 17.3%. Let \(X =\) the number of people you test until you find a person infected with HIV.
1. Sketch a graph of the distribution of the discrete random variable \(X\).
2. What is the probability that you must test 30 people to find one with HIV?
3. What is the probability that you must ask ten people?
4. Find the (i) mean and (ii) standard deviation of the distribution of \(X\).
Q 4.5.7
According to a recent Pew Research poll, 75% of millenials (people born between 1981 and 1995) have a profile on a social networking site. Let \(X =\) the number of millenials you ask until you find
a person without a profile on a social networking site.
1. Describe the distribution of \(X\).
2. Find the (i) mean and (ii) standard deviation of \(X\).
3. What is the probability that you must ask ten people to find one person without a social networking site?
4. What is the probability that you must ask 20 people to find one person without a social networking site?
5. What is the probability that you must ask at most five people?
S 4.5.7
1. \(X \sim \text{G}(0.25)\)
1. Mean \(= \mu = \frac{1}{p} = \frac{1}{0.25} = 4\)
2. Standard Deviation \(= \sigma = \sqrt{\frac{1-p}{p^{2}}} = \sqrt{\frac{1-0.25}{0.25^{2}}} \approx 3.4641\)
3. \(P(x = 10) = \text{geometpdf}(0.25, 10) = 0.0188\)
4. \(P(x = 20) = \text{geometpdf}(0.25, 20) = 0.0011\)
5. \(P(x \leq 5) = \text{geometcdf}(0.25, 5) = 0.7627\)
4.6: Hypergeometric Distribution
Q 4.6.1
A group of Martial Arts students is planning on participating in an upcoming demonstration. Six are students of Tae Kwon Do; seven are students of Shotokan Karate. Suppose that eight students are
randomly picked to be in the first demonstration. We are interested in the number of Shotokan Karate students in that first demonstration.
1. In words, define the random variable \(X\).
2. List the values that \(X\) may take on.
3. Give the distribution of \(X\). \(X \sim\) _____(_____,_____)
4. How many Shotokan Karate students do we expect to be in that first demonstration?
Q 4.6.2
In one of its Spring catalogs, L.L. Bean® advertised footwear on 29 of its 192 catalog pages. Suppose we randomly survey 20 pages. We are interested in the number of pages that advertise footwear.
Each page may be picked at most once.
1. In words, define the random variable \(X\).
2. List the values that \(X\) may take on.
3. Give the distribution of \(X\). \(X \sim\) _____(_____,_____)
4. How many pages do you expect to advertise footwear on them?
5. Calculate the standard deviation.
S 4.6.2
1. \(X =\) the number of pages that advertise footwear
2. 0, 1, 2, 3, ..., 20
3. \(X \sim \text{H}(29, 163, 20); r = 29, b = 163, n = 20\)
4. 3.03
5. 1.5197
Q 4.6.3
Suppose that a technology task force is being formed to study technology awareness among instructors. Assume that ten people will be randomly chosen to be on the committee from a group of 28
volunteers, 20 who are technically proficient and eight who are not. We are interested in the number on the committee who are not technically proficient.
1. In words, define the random variable \(X\).
2. List the values that \(X\) may take on.
3. Give the distribution of \(X\). \(X \sim\) _____(_____,_____)
4. How many instructors do you expect on the committee who are not technically proficient?
5. Find the probability that at least five on the committee are not technically proficient.
6. Find the probability that at most three on the committee are not technically proficient.
Q 4.6.4
Suppose that nine Massachusetts athletes are scheduled to appear at a charity benefit. The nine are randomly chosen from eight volunteers from the Boston Celtics and four volunteers from the New
England Patriots. We are interested in the number of Patriots picked.
1. In words, define the random variable \(X\).
2. List the values that \(X\) may take on.
3. Give the distribution of \(X\). \(X \sim\) _____(_____,_____)
4. Are you choosing the nine athletes with or without replacement?
S 4.6.4
1. \(X = \) the number of Patriots picked
2. 0, 1, 2, 3, 4
3. \(X \sim H(4, 8, 9)\)
4. Without replacement
Q 4.6.5
A bridge hand is defined as 13 cards selected at random and without replacement from a deck of 52 cards. In a standard deck of cards, there are 13 cards from each suit: hearts, spades, clubs, and
diamonds. What is the probability of being dealt a hand that does not contain a heart?
1. What is the group of interest?
2. How many are in the group of interest?
3. How many are in the other group?
4. Let \(X =\) _________. What values does \(X\) take on?
5. The probability question is \(P\)(_______).
6. Find the probability in question.
7. Find the (i) mean and (ii) standard deviation of \(X\).
4.7: Poisson Distribution
Q 4.7.1
The switchboard in a Minneapolis law office gets an average of 5.5 incoming phone calls during the noon hour on Mondays. Experience shows that the existing staff can handle up to six calls in an
hour. Let \(X =\) the number of calls received at noon.
1. Find the mean and standard deviation of \(X\).
2. What is the probability that the office receives at most six calls at noon on Monday?
3. Find the probability that the law office receives six calls at noon. What does this mean to the law office staff who get, on average, 5.5 incoming phone calls at noon?
4. What is the probability that the office receives more than eight calls at noon?
S 4.7.1
1. \(X \sim P(5.5); \mu= 5.5; \sigma = \sqrt{5.5} \approx 2.3452\)
2. \(P(x \leq 6) = \text{poissoncdf}(5.5, 6) \approx 0.6860\)
3. There is a 15.7% probability that the law staff will receive more calls than they can handle.
4. \(P(x > 8) = 1 – P(x \leq 8) = 1 – \text{poissoncdf}(5.5, 8) \approx 1 – 0.8944 = 0.1056\)
Q 4.7.2
The maternity ward at Dr. Jose Fabella Memorial Hospital in Manila in the Philippines is one of the busiest in the world with an average of 60 births per day. Let \(X =\) the number of births in an
1. Find the mean and standard deviation of \(X\).
2. Sketch a graph of the probability distribution of \(X\).
3. What is the probability that the maternity ward will deliver three babies in one hour?
4. What is the probability that the maternity ward will deliver at most three babies in one hour?
5. What is the probability that the maternity ward will deliver more than five babies in one hour?
Q 4.7.3
A manufacturer of Christmas tree light bulbs knows that 3% of its bulbs are defective. Find the probability that a string of 100 lights contains at most four defective bulbs using both the binomial
and Poisson distributions.
S 4.7.3
Let \(X =\) the number of defective bulbs in a string.
Using the Poisson distribution:
• \(\mu = np = 100(0.03) = 3\)
• \(X \sim P(3)\)
• \(P(x \leq 4) = \text{poissoncdf}(3, 4) \approx 0.8153\)
Using the binomial distribution:
• \(X \sim \text{B}(100, 0.03)\)
• \(P(x \leq 4) \approx 0.8179\)
The Poisson approximation is very good—the difference between the probabilities is only 0.0026.
Q 4.7.4
The average number of children a Japanese woman has in her lifetime is 1.37. Suppose that one Japanese woman is randomly chosen.
1. In words, define the random variable \(X\).
2. List the values that \(X\) may take on.
3. Give the distribution of \(X\). \(X \sim\) _____(_____,_____)
4. Find the probability that she has no children.
5. Find the probability that she has fewer children than the Japanese average.
6. Find the probability that she has more children than the Japanese average.
Q 4.7.5
The average number of children a Spanish woman has in her lifetime is 1.47. Suppose that one Spanish woman is randomly chosen.
1. In words, define the Random Variable \(X\).
2. List the values that \(X\) may take on.
3. Give the distribution of \(X\). \(X \sim\) _____(_____,_____)
4. Find the probability that she has no children.
5. Find the probability that she has fewer children than the Spanish average.
6. Find the probability that she has more children than the Spanish average .
S 4.7.5
1. \(X =\) the number of children for a Spanish woman
2. 0, 1, 2, 3,...
3. \(X \sim P(1.47)\)
4. 0.2299
5. 0.5679
6. 0.4321
Q 4.7.6
Fertile, female cats produce an average of three litters per year. Suppose that one fertile, female cat is randomly chosen. In one year, find the probability she produces:
1. In words, define the random variable \(X\).
2. List the values that \(X\) may take on.
3. Give the distribution of \(X\). \(X \sim\) _______
4. Find the probability that she has no litters in one year.
5. Find the probability that she has at least two litters in one year.
6. Find the probability that she has exactly three litters in one year.
Q 4.7.7
he chance of having an extra fortune in a fortune cookie is about 3%. Given a bag of 144 fortune cookies, we are interested in the number of cookies with an extra fortune. Two distributions may be
used to solve this problem, but only use one distribution to solve the problem.
1. In words, define the random variable \(X\).
2. List the values that \(X\) may take on.
3. Give the distribution of \(X\). \(X \sim\) _____(_____,_____)
4. How many cookies do we expect to have an extra fortune?
5. Find the probability that none of the cookies have an extra fortune.
6. Find the probability that more than three have an extra fortune.
7. As \(n\) increases, what happens involving the probabilities using the two distributions? Explain in complete sentences.
S 4.7.7
1. \(X =\) the number of fortune cookies that have an extra fortune
2. 0, 1, 2, 3,... 144
3. \(X \sim B(144, 0.03)\) or \(P(4.32)\)
4. 4.32
5. 0.0124 or 0.0133
6. 0.6300 or 0.6264
7. As \(n\) gets larger, the probabilities get closer together.
Q 4.7.8
According to the South Carolina Department of Mental Health web site, for every 200 U.S. women, the average number who suffer from anorexia is one. Out of a randomly chosen group of 600 U.S. women
determine the following.
1. In words, define the random variable \(X\).
2. List the values that \(X\) may take on.
3. Give the distribution of \(X\). \(X \sim\) _____(_____,_____)
4. How many are expected to suffer from anorexia?
5. Find the probability that no one suffers from anorexia.
6. Find the probability that more than four suffer from anorexia.
Q 4.7.9
The chance of an IRS audit for a tax return with over $25,000 in income is about 2% per year. Suppose that 100 people with tax returns over $25,000 are randomly picked. We are interested in the
number of people audited in one year. Use a Poisson distribution to answer the following questions.
1. In words, define the random variable \(X\).
2. List the values that \(X\) may take on.
3. Give the distribution of \(X\). \(X \sim\) _____(_____,_____)
4. How many are expected to be audited?
5. Find the probability that no one was audited.
6. Find the probability that at least three were audited.
S 4.7.9
1. \(X =\) the number of people audited in one year
2. 0, 1, 2, ..., 100
3. \(X \sim P(2)\)
4. 2
5. 0.1353
6. 0.3233
Q 4.7.10
Approximately 8% of students at a local high school participate in after-school sports all four years of high school. A group of 60 seniors is randomly chosen. Of interest is the number that
participated in after-school sports all four years of high school.
1. In words, define the random variable \(X\).
2. List the values that \(X\) may take on.
3. Give the distribution of \(X\). \(X \sim\) _____(_____,_____)
4. How many seniors are expected to have participated in after-school sports all four years of high school?
5. Based on numerical values, would you be surprised if none of the seniors participated in after-school sports all four years of high school? Justify your answer numerically.
6. Based on numerical values, is it more likely that four or that five of the seniors participated in after-school sports all four years of high school? Justify your answer numerically.
Q 4.7.11
On average, Pierre, an amateur chef, drops three pieces of egg shell into every two cake batters he makes. Suppose that you buy one of his cakes.
1. In words, define the random variable \(X\).
2. List the values that \(X\) may take on.
3. Give the distribution of \(X\). \(X \sim\) _____(_____,_____)
4. On average, how many pieces of egg shell do you expect to be in the cake?
5. What is the probability that there will not be any pieces of egg shell in the cake?
6. Let’s say that you buy one of Pierre’s cakes each week for six weeks. What is the probability that there will not be any egg shell in any of the cakes?
7. Based upon the average given for Pierre, is it possible for there to be seven pieces of shell in the cake? Why?
S 4.7.11
1. \(X =\) the number of shell pieces in one cake
2. 0, 1, 2, 3,...
3. \(X \sim P(1.5)\)
4. 1.5
5. 0.2231
6. 0.0001
7. Yes
Use the following information to answer the next two exercises: The average number of times per week that Mrs. Plum’s cats wake her up at night because they want to play is ten. We are interested in
the number of times her cats wake her up each week.
Q 4.7.12
In words, the random variable \(X =\) _________________
1. the number of times Mrs. Plum’s cats wake her up each week.
2. the number of times Mrs. Plum’s cats wake her up each hour.
3. the number of times Mrs. Plum’s cats wake her up each night.
4. the number of times Mrs. Plum’s cats wake her up.
Q 4.7.13
Find the probability that her cats will wake her up no more than five times next week.
1. 0.5000
2. 0.9329
3. 0.0378
4. 0.0671
4.8: Discrete Distribution (Playing Card Experiment)
4.9: Discrete Distribution (Lucky Dice Experiment) | {"url":"https://stats.libretexts.org/Courses/Lake_Tahoe_Community_College/Book%3A_Introductory_Statistics_(OpenStax)_With_Multimedia_and_Interactivity_LibreTexts_Calculator/04%3A_Discrete_Random_Variables/4.07%3A_Discrete_Random_Variables_(Exercises)","timestamp":"2024-11-02T18:47:51Z","content_type":"text/html","content_length":"273741","record_id":"<urn:uuid:a8d4af2f-b423-4d64-9071-53563207d4e7>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00029.warc.gz"} |
Nuclear spin-rotation constants
Nuclear spin-rotation constants¶
In this tutorial we introduce the calculation of nuclear spin-rotation tensors as given in the DIRAC code, based on the theoretical developments by I. Agustín Aucar et al. For details, you are
welcome to consult [Aucar_JCP2012].
The nuclear spin-rotation (SR) tensor elements of a nucleus \(N\) are given by
\[M_{N,\alpha\beta} = M_{N,\alpha\beta}^{NU} + M_{N,\alpha\beta}^{EV} + M_{N,\alpha\beta}^{LR}\]
They have three terms: the first of them is independent of the electronic variables, whereas the second and third are given by an expectation value (EV) and a linear response (LR) function,
In tensorial notation, the nuclear spin-rotation contributions of a nucleus \(N\) are given, in SI units, by:
\[\begin{split}{\bf M}_N^{NU} =& \; \frac{1}{4\pi\epsilon_0c^2} \, \frac{e^2 \hslash^2}{2 m_p h} \, g_N \sum_{M \neq N} \frac{Z_M}{|{\bf R}_{MN}|^3} \\ & \hspace{2cm} \left( \left\{ \left[ {\bf R}_
{M,CM} - \left( 1 - \frac{Z_N m_p}{m_N g_N} \right) {\bf R}_{N,CM} \right] \cdot {\bf R}_{MN} \right\} {\bf 1} \right. \\ & \hspace{4cm} \left. - \left[ {\bf R}_{M,CM} - \left( 1 - \frac{Z_N m_p}{m_N
g_N} \right) {\bf R}_{N,CM} \right] {\bf R}_{MN} \right) \; \cdot \; {\bf I}^{-1} \\ & \\ & \\ {\bf M}_N^{EV} =& \; \frac{1}{4\pi\epsilon_0c^2} \, \frac{e^2 \hslash^2}{2 m_p h} \, g_N \left( 1 - \
frac{Z_N m_p}{m_N g_N} \right) \\ & \hspace{2cm} \left[ \left( \langle 0 | \frac{{\bf r}-{\bf r}_N}{|{\bf r} -{\bf r}_N|^3 } | 0 \rangle \cdot {\bf R}_{N,CM} \right) {\bf 1} - \langle 0 | \frac{{\bf
r}-{\bf r}_N}{|{\bf r} -{\bf r}_N|^3 } | 0 \rangle \; \; {\bf R}_{N,CM}\right] \; \cdot \; {\bf I}^{-1} \\ & \\ & \\ {\bf M}_N^{LR} =& \; \frac{1}{4\pi\epsilon_0c^2} \, \frac{e^2 \hslash^2}{2 m_p h}
\, g_N \; \langle\langle \; \left (\frac{{\bf r}-{\bf r}_N}{|{\bf r} -{\bf r}_N|^3 }\times c \, {\bf \alpha}\right) \; ; \; {\bf J}_e \; \rangle\rangle \; \cdot \; {\bf I}^{-1}\end{split}\]
where \({\bf R}_{MN}\) is the position of nucleus \(M\) with respect to the position of nucleus \(N\); \({\bf R}_{N,CM}\) is the position of nucleus \(N\) with respect to the molecular center of
mass; \({\bf I}\) is the inertia tensor of the molecule and \({\bf J}_e = \left({\bf r} - {\bf R}_{CM} \right) \times {\bf p}+{\bf S}_e\) is the electronic total angular momentum.
For molecules at their equilibrium geometry, it is found that the sum of the first two terms, \({\bf M}_N^{NU} (eq) + {\bf M}_N^{EV} (eq)\), is equal to a new tensor which is independent of the
electronic variables (see Eq. 60 of [Aucar_JCP2012]),
\[\begin{split}{\bf M}_N^{nuc} =& \; {\bf M}_N^{NU} (eq) \; + \; {\bf M}_N^{EV} (eq) \\ =& \; \frac{1}{4\pi\epsilon_0c^2} \, \frac{e^2 \hslash^2}{2 m_p h} \, g_N \sum_{M \neq N} Z_M \left[ \left( {\
bf R}_{M,CM} \cdot \frac{{\bf R}_{MN}}{|{\bf R}_{MN}|^3} \right) {\bf 1} - {\bf R}_{M,CM} \frac{{\bf R}_{MN}}{|{\bf R}_{MN}|^3} \right] \cdot {\bf I}^{-1}\end{split}\]
Therefore, the electronic dependence of the nuclear spin-rotation tensor of a nucleus \(N\) in a molecule in equilibrium is completely given by its linear response term, \({\bf M}_N^{LR}\) (see Eq.
59 of [Aucar_JCP2012]).
The current implementation gives by default (.PRINT values up to 3) results only for molecules in equilibrium.
Application to the HF molecule¶
As an example, we show a calculation of the SR constant of the fluorine nucleus at the Hydrogen fluoride molecule. The input file spinrot.inp is given by
Spin-rotation constant
*END OF
whereas the molecular input file HF_cv3z.mol is
Hydrogen fluoride. Experimental bond length: 0.917 A
dyall.cv3z basis set
C 2 A .10D-15
9.0 1
F 0.00000000000 0.00000000000 0.00000000000 Isotope=19
LARGE BASIS dyall.cv3z
1.0 1
H 0.00000000000 0.00000000000 0.91700000000 Isotope=1
LARGE BASIS dyall.cv3z
The calculation is run using:
pam --inp=spinrot --mol=HF_cv3z
As a result, SR constants are obtained at the coupled Hartree-Fock level. The code also works at the DFT level.
It is also interesting to note that as .URKBAL is requested in the present calculation, the results will be very close to those obtained using the RKB prescription, because the diamagnetic-like (e-p)
contributions to \(M_{N,\alpha\beta}^{LR}\) are almost zero (see Eq. 60 of [Aucar_JCP2012]).
Reading the output file¶
As the .PRINT flag in the input file is set to 4, the results are fully detailed and given in both kHz and ppm units.
The SR constant of the fluorine nucleus, in kHz, will look like:
Spin-rotation constants (kHz) for F
Nuclear g-value: 5.257736
Total spin-rotation constant (SRC) : -317.17730994
Nuclear contribution to SRC (M^nuc) : 52.47420202
Electronic contribution to SRC (M^LR) : -369.65151197
M^LR-L(e-e) : -381.52309467
M^LR-S(e-e) : 11.85860365
M^LR-L(e-p) : 4.27896875
M^LR-S(e-p) : -4.26598970
******** M^NU ********: 55.00521470
******** M^EV ********: -2.51499774
******** M^LR ********: -369.65151197
******** M^total ********: -317.16129500
One should recall that in the case of linear molecules, as the present one, only one tensor element is printed out (the nuclear spin-rotation constant) because for these molecules the SR tensor has
only two equal and non-zero diagonal elements.
As it can be seen, the total SR constant of the fluorine nucleus is given, and then separated in its two terms (nuc and LR). The latter contribution, the linear response function, is further
separated in their (e-e) and (e-p) parts, as well as in their \(\mathbf{L}\) and \(\mathbf{S}\) parts. It is seen how the (e-p) contribution to the linear response term of the SR constants is almost
In addition, the results are fully detailed, showing the three terms mentioned above (NU, EV and LR).
Finally, if one is interested in the relationship between the SR constants and .SHIELDING (see [Aucar_JPCL2016] and [AucarChap2019]), which is given, in SI units, by
\[{\bf \sigma}_N = \; \frac{1}{4\pi\epsilon_0c^2} \, \frac{e^2}{2} \; \langle\langle \; \left (\frac{{\bf r}-{\bf r}_N}{|{\bf r} -{\bf r}_N|^3 }\times c \, {\bf \alpha}\right) \; ; \; \left( {\bf r}-
{\bf r}_{GO} \times c \, {\bf \alpha}\right) \; \rangle\rangle\]
then we print also the SR constants in ppm, by multiplying each SR tensor element \(M_{N,\alpha\beta}\) (given in kHz) by \(\frac{m_p I_{\beta \gamma}}{g_N}\,\frac{2\, \pi \, 10^9}{m_e \, \hslash}\),
where \(m_p\) and \(m_e\) are the proton and electron masses, respectively, \(\hslash\) is the reduced Planck’s constant, and \(g_N\) is the g-factor of nucleus N.
In this way, one obtains the following output:
Spin-rotation constants (in ppm) for F
Total spin-rotation constant (SRC) : -88.19431889
Nuclear contribution to SRC (M^nuc) : 14.59097597
Electronic contribution to SRC (M^LR) : -102.78529486
M^LR-L(e-e) : -106.08630700
M^LR-S(e-e) : 3.29740318
M^LR-L(e-p) : 1.18981000
M^LR-S(e-p) : -1.18620104
******** M^NU ********: 15.29474932
******** M^EV ********: -0.69932024
******** M^LR ********: -102.78529486
******** M^total ********: -88.18986578 | {"url":"http://www.diracprogram.org/doc/release-24/tutorials/spinrot/tutorial.html","timestamp":"2024-11-10T02:59:10Z","content_type":"text/html","content_length":"27343","record_id":"<urn:uuid:5b0b9772-12d0-4c1a-bc00-7e15c374603e>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00411.warc.gz"} |
Huffman Coding -: Greedy Algorithms …FTC - FcukTheCode
Huffman Coding -: Greedy Algorithms …FTC
Huffman Coding – Activity Selection Problem
👉 👉 Activity Selection Problem
Huffman code is a particular type of optimal prefix code that is commonly used for lossless data compression. It compresses data very effectively saving from 20% to 90% memory, depending on the
characteristics of the data being compressed. We consider the data to be a sequence of characters.
Huffman’s greedy algorithm uses a table giving how often each character occurs (i.e., its frequency) to build up an optimal way of representing each character as a binary string. Huffman code was
proposed by David A. Huffman in 1951.
Suppose we have a 100,000 character data file that we wish to store compactly. We assume that there are only 6 different characters in that file. The frequency of the characters are given by:
We have many options for how to represent such a file of information. Here, we consider the problem of designing a Binary Character Code in which each character is represented by a unique binary
string, which we call a codeword.
The constructed tree will provide us with:
If we use a fixed-length code, we need three bits to represent 6 characters. This method requires 300,000 bits to code the entire file. Now the question is, can we do better?
A variable-length code can do considerably better than a fixed-length code, by giving frequent characters short codewords and infrequent characters long codewords.
This code requires: (45 X 1 + 13 X 3 + 12 X 3 + 16 X 3 + 9 X 4 + 5 X 4) X 1000 = 224000 bits to represent the file, which saves approximately 25% of memory. One thing to remember, we consider here
only codes in which no codeword is also a prefix of some other codeword.
These are called prefix codes. For variable-length coding, we code the 3-character file
abc as 0.101.100 = 0101100, where “.” denotes the concatenation. Prefix codes are desirable because they simplify decoding. Since no codeword is a prefix of any other, the codeword that begins an
encoded file is unambiguous.
We can simply identify the initial codeword, translate it back to the original character, and repeat the decoding process on the remainder of the encoded file. For example, 001011101 parses uniquely
as 0.0.101.1101, which decodes to aabe.
In short, all the combinations of binary representations are unique. Say for example, if one letter is denoted by 110, no other letter will be denoted by 1101 or 1100. This is because you might face
confusion on whether to select 110 or to continue on concatenating the next bit and select that one.
Compression Technique:
The technique works by creating a binary tree of nodes. These can stored in a regular array, the size of which depends on the number of symbols, n. A node can either be a leaf node or an internal
node. Initially all nodes are leaf nodes, which contain the symbol itself, its frequency and optionally, a link to its child nodes. As a convention, bit ‘0’ represents left child and bit ‘1’
represents right child. Priority queue is used to store the nodes, which provides the node with lowest frequency when popped. The process is described below:
1. Create a leaf node for each symbol and add it to the priority queue.
2. While there is more than one node in the queue:
3. Remove the two nodes of highest priority from the queue.
4. Create a new internal node with these two nodes as children and with frequency equal to the sum of the two nodes’ frequency.
5. Add the new node to the queue.
6. The remaining node is the root node and the Huffman tree is complete.
For our example:
The pseudo-code looks like:
Procedure Huffman(C): // C is the set of n characters and related information
n = C.size
Q = priority_queue()
for i = 1 to n
n = node(C[i])
end for
while Q.size() is not equal to 1
Z = new node()
Z.left = x = Q.pop
Z.right = y = Q.pop
Z.frequency = x.frequency + y.frequency
end while
Return Q
Although linear-time given sorted input, in general cases of arbitrary input, using this algorithm requires pre – sorting. Thus, since sorting takes O(nlogn) time in general cases, both methods have
same complexity.
Since n here is the number of symbols in the alphabet, which is typically very small number (compared to the length of the message to be encoded), time complexity is not very important in the choice
of this algorithm.
Decompression Technique:
The process of decompression is simply a matter of translating the stream of prefix codes to individual byte value, usually by traversing the Huffman tree node by node as each bit is read from the
input stream. Reaching a leaf node necessarily terminates the search for that particular byte value. The leaf value represents the desired character.
Usually the Huffman Tree is constructed using statistically adjusted data on each compression cycle, thus the reconstruction is fairly simple. Otherwise, the information to reconstruct the tree must
be sent separately. The pseudo-code:
Procedure HuffmanDecompression(root, S):
// root represents the root of Huffman Tree
n := S.length // S refers to bit-stream to be decompressed
for i := 1 to n
current = root
while current.left != NULL and current.right != NULL
if S[i] is equal to '0'
current := current.left
current := current.right
i := i+1
print current.symbol
Greedy Explanation:
Huffman coding looks at the occurrence of each character and stores it as a binary string in an optimal way. The idea is to assign variable-length codes to input input characters, length of the
assigned codes are based on the frequencies of corresponding characters.
We create a binary tree and operate on it in bottom-up manner so that the least two frequent characters are as far as possible from the root. In this way, the most frequent character gets the
smallest code and the least frequent character gets the largest code.
Huffman Coding -: Change-making problem 👇 👇 🙂
👉 👉 Change-making problem | {"url":"https://www.fcukthecode.com/huffman-coding-greedy-algorithms-ftc/","timestamp":"2024-11-11T12:35:45Z","content_type":"text/html","content_length":"154814","record_id":"<urn:uuid:16e41011-d61d-47ef-bc79-c6b3cba285fa>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00473.warc.gz"} |
dtrttp - Linux Manuals (3)
dtrttp (3) - Linux Manuals
dtrttp.f -
subroutine dtrttp (UPLO, N, A, LDA, AP, INFO)
DTRTTP copies a triangular matrix from the standard full format (TR) to the standard packed format (TP).
Function/Subroutine Documentation
subroutine dtrttp (characterUPLO, integerN, double precision, dimension( lda, * )A, integerLDA, double precision, dimension( * )AP, integerINFO)
DTRTTP copies a triangular matrix from the standard full format (TR) to the standard packed format (TP).
DTRTTP copies a triangular matrix A from full format (TR) to standard
packed format (TP).
UPLO is CHARACTER*1
= 'U': A is upper triangular.
= 'L': A is lower triangular.
N is INTEGER
The order of the matrices AP and A. N >= 0.
A is DOUBLE PRECISION array, dimension (LDA,N)
On exit, the triangular matrix A. If UPLO = 'U', the leading
N-by-N upper triangular part of A contains the upper
triangular part of the matrix A, and the strictly lower
triangular part of A is not referenced. If UPLO = 'L', the
leading N-by-N lower triangular part of A contains the lower
triangular part of the matrix A, and the strictly upper
triangular part of A is not referenced.
LDA is INTEGER
The leading dimension of the array A. LDA >= max(1,N).
AP is DOUBLE PRECISION array, dimension (N*(N+1)/2
On exit, the upper or lower triangular matrix A, packed
columnwise in a linear array. The j-th column of A is stored
in the array AP as follows:
if UPLO = 'U', AP(i + (j-1)*j/2) = A(i,j) for 1<=i<=j;
if UPLO = 'L', AP(i + (j-1)*(2n-j)/2) = A(i,j) for j<=i<=n.
INFO is INTEGER
= 0: successful exit
< 0: if INFO = -i, the i-th argument had an illegal value
Univ. of Tennessee
Univ. of California Berkeley
Univ. of Colorado Denver
NAG Ltd.
September 2012
Definition at line 105 of file dtrttp.f.
Generated automatically by Doxygen for LAPACK from the source code. | {"url":"https://www.systutorials.com/docs/linux/man/docs/linux/man/3-dtrttp/","timestamp":"2024-11-06T20:58:10Z","content_type":"text/html","content_length":"9060","record_id":"<urn:uuid:499359fb-0623-4045-be09-4134585a10ae>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00398.warc.gz"} |
Introduction to Linearly Stratified
What is/are Linearly Stratified?
Linearly Stratified
Further, we have proposed a modification in the key nondimensional parameters [Rayleigh number Ra^{'} (signifying buoyancy) and Sherwood number Sh^{'} (signifying mass flux)] and droplet lifetime τ_
{c}^{'}, based on the hypothesis of linearly stratified droplet surroundings (with revised concentration difference ΔC^{'}), taking into account the geometry of the confinements. ^[1] Gravity
currents produced from a full-depth lock release propagating at the base of a linearly stratified ambient are investigated by means of newly conducted three-dimensional high-resolution simulations in
conjunction with corresponding two-dimensional simulations and laboratory experiments. ^[2] We consider small, physically consistent perturbations of a linearly stratified fluid that would result
from a localized mixing near a particular depth. ^[3] The static linearly stratified state, which is an equilibrium of the unforced system, is not an equilibrium for any non-zero forcing amplitude. ^
[4] Horizontal warm buoyant jets injecting into a linearly stratified ambience are common in lakes, estuaries, and oceans. ^[5] Flows past two spheres immersed in a horizontally moving, linearly
stratified fluid are investigated at a moderate Reynolds number of 300. ^[6] The system instruments a vertical buoyant jet discharging in a linearly stratified environment. ^[7] Experimental results
are obtained by towing a horizontal cylinder in a horizontal direction perpendicular to its axes, at a constant speed in a linearly stratified fluid made of salty water. ^[8] Coupled triads (two sets
of resonant triads with one member in common) can arise in linearly stratified fluids. ^[9] We use analog experiments on relatively dense mono- and bi-disperse particle-freshwater and
particle-saltwater jets injected into a linearly stratified saltwater layer to revisit, characterize and understand how transitions among Buoyant Plume (BP), Total Collapse (TC) and Partial Collapse
(PC) multiphase jet regimes in a traditional source strength (−Ri0) - particle concentration (ϕ0) parameter space are modified by particle inertial effects expressed through a Stokes number (St) and
particle buoyancy effects expressed through a Sedimentation number (Σ). ^[10] Selective withdrawal is commonly implemented in nonlinearly stratified ambient, which typically has stratified ambient
conditions, for purposes of controlling quality. ^[11] A numerical model has been constructed and the investigation of the dynamics of a cylindrical localized region of turbulent disturbances in a
longitudinal horizontally homogeneous shear flow of a linearly stratified fluid has been carried out. ^[12] The transient natural convection boundary-layer flow adjacent to a vertical plate heated
with a time-dependent heat flux in an initially linearly stratified ambient fluid with Prandtl number (Pr) smaller than one was studied. ^[13] Experiments were conducted to investigate the response
of outflow temperature to influencing factors in a nonlinearly stratified fluid with the particle image velocimetry (PIV) technique. ^[14] When a fountain is injected into a linearly stratified
fluid, its behavior will be governed by the stratification of the ambient fluid, represented by the dimensionless temperature stratification parameter (s), in addition to the Reynolds number (Re) and
the Froude number (Fr). ^[15] With an experimentally motivated ansatz that the downstream horizontal velocity and buoyancy structure is either i) entirely linearly stratified ii) consists of a
well-mixed uniform lower region overlain by a linearly stratified region, we can relate the upstream conditions to the downstream conditions as a function of source Froude number, downstream gradient
Richardson number, and a shape factor φ. ^[16] Numerical simulation of two coalescing turbulent forced plumes in linearly stratified fluids. ^[17] Laboratory studies considering the settling of a
sphere in a linearly stratified fluid confirmed that stratification may dramatically enhance the drag on the body, but failed to identify the generic physical mechanism responsible for this increase.
^[18] Direct numerical simulations are used to characterize wind-shear effects on entrainment in a barotropic convective boundary layer (CBL) that grows into a linearly stratified atmosphere. ^[19]
Two zero-order bulk models (ZOMs) are developed for the velocity, buoyancy, and moisture of a cloud-free barotropic convective boundary layer (CBL) that grows into a linearly stratified atm. ^[20]
When a torus oscillates horizontally in a linearly stratified fluid, the wave rays form a double cone, one upward and one downward, with two focal points where the wave amplitude has a maximum due to
wave focusing. ^[21] We consider small, physically consistent perturbations of a linearly stratified fluid that would result from a localized mixing near a particular depth. ^[1] Flows past two
spheres immersed in a horizontally moving, linearly stratified fluid are investigated at a moderate Reynolds number of 300. ^[2] Experimental results are obtained by towing a horizontal cylinder in a
horizontal direction perpendicular to its axes, at a constant speed in a linearly stratified fluid made of salty water. ^[3] Coupled triads (two sets of resonant triads with one member in common) can
arise in linearly stratified fluids. ^[4] A numerical model has been constructed and the investigation of the dynamics of a cylindrical localized region of turbulent disturbances in a longitudinal
horizontally homogeneous shear flow of a linearly stratified fluid has been carried out. ^[5] Experiments were conducted to investigate the response of outflow temperature to influencing factors in a
nonlinearly stratified fluid with the particle image velocimetry (PIV) technique. ^[6] When a fountain is injected into a linearly stratified fluid, its behavior will be governed by the
stratification of the ambient fluid, represented by the dimensionless temperature stratification parameter (s), in addition to the Reynolds number (Re) and the Froude number (Fr). ^[7] Numerical
simulation of two coalescing turbulent forced plumes in linearly stratified fluids. ^[8] Laboratory studies considering the settling of a sphere in a linearly stratified fluid confirmed that
stratification may dramatically enhance the drag on the body, but failed to identify the generic physical mechanism responsible for this increase. ^[9] When a torus oscillates horizontally in a
linearly stratified fluid, the wave rays form a double cone, one upward and one downward, with two focal points where the wave amplitude has a maximum due to wave focusing. ^[10] Gravity currents
produced from a full-depth lock release propagating at the base of a linearly stratified ambient are investigated by means of newly conducted three-dimensional high-resolution simulations in
conjunction with corresponding two-dimensional simulations and laboratory experiments. ^[1] Selective withdrawal is commonly implemented in nonlinearly stratified ambient, which typically has
stratified ambient conditions, for purposes of controlling quality. ^[2] The transient natural convection boundary-layer flow adjacent to a vertical plate heated with a time-dependent heat flux in an
initially linearly stratified ambient fluid with Prandtl number (Pr) smaller than one was studied. ^[3] | {"url":"https://academic-accelerator.com/Manuscript-Generator/zh/Linearly-Stratified","timestamp":"2024-11-14T14:33:45Z","content_type":"text/html","content_length":"483756","record_id":"<urn:uuid:cfae4709-fbd8-4d51-8fcc-cfae7c051d07>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00445.warc.gz"} |
Sign Changer Circuit with Op-Amp
Op-amps or operational amplifiers are widely used in electronic circuits for a variety of applications. One such application is the Sign Changer Circuit or phase shifter circuit, which is used to
vary the gain of a signal from 1 to -1 and thus change the amplitude and phase of the input signal. In this blog post, we will discuss the working of the Sign Changer Circuit with Op-Amp.
The working of the op-amp based sign changer circuit is shown in the animation below.
What is an op-amp sign changer circuit?
A op-amp based sign changer circuit is a circuit that changes the gain from 1 to -1 of an input signal. Following shows the circuit diagram of op-amp based sign changer.
Working of Sign Changer Circuit with Op-Amp
The Sign Changer Circuit with Op-Amp uses the inverting amplifier configuration of the Op-Amp to change the sign(phase) and amplitude of the input signal. When the wiper of the potentiometer is at
the extreme left then the output signal is same as input signal with no change in amplitude and phase. This is because due to this wiper position, the input signal is fed into both the inverting and
non-inverting terminal of the op-amp. Therefore amplification of signal from both the terminal happens and the overall gain is given by the following equation.
\(A_v = A_v(invert) +A_v(non-invert)\) -------->(1)
The voltage gain due to inverting channel is,
\(A_v(invert) = -\frac{R_2}{R_1}\)
\(A_v(invert) = -1\) since R1=R2
The voltage gain due to non-inverting channel is,
\(A_v(non-invert) = 1+\frac{R_2}{R_1}=1+1=2\) since R1=R2
And therefore the total voltage gain from equation(1) is,
\(A_v = -1 + 2=1\)
And so in this case the output signal is same as input signal in amplitude and phase.
When the wiper is shifted to right, the amplitude decreases along with phase change. Eventually when the wiper reaches the extreme right, the amplitude of the output signal is same as the input but
the sign changes to -ve, that is phase shift of 180 occurs between the output and input signal. This is because when the wiper is at extreme right side, the non-inverting terminal is connected to
ground then the op-amp operates as inverting amplifier configuration with the voltage gain given by the following equation.
\(A_v(invert) = -\frac{R_2}{R_1}\)
and since the resistors are equal we have,
\(A_v(invert) = -1\)
Video demonstration
See the following video to learn how the sign changer circuit with LM358N operational amplifier works.
Applications of Sign Changer Circuit with Op-Amp
The Sign Changer Circuit with Op-Amp has the following applications:
• It is used in signal processing circuits to invert or change the polarity of a signal.
• It is used in audio amplifier to change the phase of the audio signal.
• It is used in instrumentation circuits to measure the response of a system to a negative stimulus.
The Sign Changer Circuit with Op-Amp is a simple and efficient circuit for changing the sign of an input signal. It can also be called as phase shifter circuit. The circuit provides a high level of
accuracy and can be easily implemented using a few components. The circuit finds applications in various fields like signal processing, audio amplifiers, and instrumentation amplifier circuits. | {"url":"https://www.ee-diary.com/2023/03/sign-changer-circuit-with-op-amp.html","timestamp":"2024-11-12T15:25:06Z","content_type":"application/xhtml+xml","content_length":"201621","record_id":"<urn:uuid:119ea767-c01f-4ed4-8337-20f6fc3e76d0>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00562.warc.gz"} |
Some common Machine Learning, Statistics and Data Science terms starts with I
Word ### Description
Imputation Imputation is a technique used for handling missing values in the data. This is done either by statistical metrics like mean/mode imputation or by machine learning techniques like kNN
For example,
If the data is as below
Name Age
Akshay 23
Akshat NA
Viraj 40
The second row contains a missing value, so to impute it we use mean of all ages, i.e.
Name Age
Akshay 23
Akshat 31.5
Viraj 40
Inferential Statistics In inferential statistics, we try to hypothesize about the population by only looking at a sample of it. For example, before releasing a drug in the market, internal tests are
done to check if the drug is viable for release. But here we cannot check with the whole population for viability of the drug, so we do it on a sample which best represents the population.
IQR IQR (or interquartile range) is a measure of variability based on dividing the rank-ordered data set into four equal parts. It can be derived by Quartile3 – Quartile1.
Iteration Iteration refers to the number of times an algorithm’s parameters are updated while training a model on a dataset. For example, each iteration of training a neural network takes certain
number of training data and updates the weights by using gradient descent or some other weight update rule. | {"url":"https://discuss.boardinfinity.com/t/some-common-machine-learning-statistics-and-data-science-terms-starts-with-i/6230","timestamp":"2024-11-13T19:23:47Z","content_type":"text/html","content_length":"17166","record_id":"<urn:uuid:cb630280-df9f-4107-ada0-82fe7c1de7e3>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00873.warc.gz"} |
Erik D. Demaine
Paper by Erik D. Demaine
Erik D. Demaine, Sarah Eisenstat, Mashhood Ishaque, and Andrew Winslow, “One-Dimensional Staged Self-Assembly”, Natural Computing, volume 12, number 2, 2013, pages 247–258.
We introduce the problem of staged self-assembly of one-dimensional nanostructures, which becomes interesting when the elements are labeled (e.g., representing functional units that must be
placed at specific locations). In a restricted model in which each operation has a single terminal assembly, we prove that assembling a given string of labels with the fewest steps is equivalent,
up to constant factors, to compressing the string to be uniquely derived from the smallest possible context-free grammar (a well-studied O(log n)-approximable problem) and that the problem is
NP-hard. Without this restriction, we show that the optimal assembly can be substantially smaller than the optimal context-free grammar, by a factor of Ω(√n/log n) even for binary strings of
length n. Fortunately, we can bound this separation in model power by a quadratic function in the number of distinct glues or tiles allowed in the assembly, which is typically small in practice.
This paper is also available from SpringerLink.
The paper is 20 pages.
The paper is available in PostScript (6743k), gzipped PostScript (640k), and PDF (591k).
Related papers:
Staged1D_DNA2011 (One-Dimensional Staged Self-Assembly)
See also other papers by Erik Demaine. These pages are generated automagically from a BibTeX file.
Last updated July 23, 2024 by Erik Demaine. | {"url":"https://erikdemaine.org/papers/Staged1D_NACO/","timestamp":"2024-11-04T15:24:11Z","content_type":"text/html","content_length":"5689","record_id":"<urn:uuid:5837edba-19fd-4ccc-96b8-df46eeece104>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00861.warc.gz"} |
Revision guides
Maths revision guides
I’m “currently” writing up revision guides for A-level maths. At the moment I’m basing them on the Edexcel maths course, but I want these to be more general so will try to include the stuff in other
boards when I update them. For now I will assume GCSE maths knowledge (basic algebra and all that), but in this guides I will try to make everything as clear as possible, reducing assumptions about
your knowledge as much as possible, so if you pick things up quickly you might get annoyed at this approach.
Released:Complex Numbers – Edexcel FP1 (updated 2/9/12)Started, and expected to be released by [arbitrary optimistic date]:Complex Numbers
– Edexcel FP2
– Edexcel FP1, FP3
Differential Equations – Edexcel FP2
Sequences and Series – Edexcel C1, C2, FP1
These guides are based primarily on my own knowledge
with help from the relevant textbooks and the internet, but nothing should be directly copied. They are written with LaTeX, with the Ti
Z package especially used for diagrams. You’re free to do whatever you want with them provided it’s not commercial.
History notes
A friend of mine wrote up some notes (
) for Russian and British history (OCR A). They’re worth a look, even if you’re just interested in history without studying it academically. I have no control over this so please don’t ask me
anything about it. | {"url":"http://www.tomred.org/revision-guides.html","timestamp":"2024-11-11T23:27:07Z","content_type":"text/html","content_length":"36745","record_id":"<urn:uuid:99cb5914-39d0-41ac-95c0-5f212106efcd>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00050.warc.gz"} |
University of Alabama Repository :: Browsing Department of Information Systems, Statistics & Management Science by Author "Adams, Benjamin Michael"
Department of Information Systems, Statistics & Management Science
Permanent URI for this community
Browsing Department of Information Systems, Statistics & Management Science by Author "Adams, Benjamin Michael"
Now showing 1 - 8 of 8
Results Per Page
Sort Options
• Advances in mixture modeling and model based clustering
(University of Alabama Libraries, 2015) Michael, Semhar K.; Melnykov, Volodymyr; University of Alabama Tuscaloosa
Cluster analysis is part of unsupervised learning that deals with finding groups of similar observations in heterogeneous data. There are several clustering approaches with the goal of minimizing
the within cluster variance while maximizing the variance between clusters. K-means or hierarchical clustering with different linkages can be thought as distance-based approaches. Another
approach is model-based which relies on the idea of finite mixture models. This dissertations will propose new advances in clustering area mostly related to model-based clustering and its
extension to the K-means algorithm. This report has five chapters. The first chapter is a literature review on recent advances in the area of model-based clustering and finite mixture modeling.
Main advances and challenges are described in the methodology section. Then some interesting and diverse applications of model-based clustering are presented in the application section. The
second chapter deals with a simulation study conducted to analyze the factors that affect complexity of model-based clustering. In the third chapter we develop a methodology for model-based
clustering of regression time series data and show its application to annual tree rings. In the fourth chapter, we utilize the relationship between model-based clustering and the Kmeans algorithm
to develop a methodology for merging clusters formed by K-means to find meaningful grouping. The final chapter is dedicated to the problem of initialization in model-based clustering. It is well
known fact that the performance of model-based clustering is highly dependent on initialization of the EM algorithm. So far there is no method that comprehensively works in all situations. In
this project, we use the idea of model averaging and initialization using the emEM algorithm to solve this problem.
• Construction of estimation-equivalent second-order split-split-plot designs
(University of Alabama Libraries, 2011) Yuan, Fang; Perry, Marcus B.; University of Alabama Tuscaloosa
In many experimental settings, some experimental factors are very hard to change or very expensive to change, some factors are hard to change, and some factors are easy to change, which usually
leads to a split-split-plot design. In such a case, there are randomization restrictions in our experiments. If the data is analyzed as if it were a completely randomized design, the results
could be misleading. The analysis of split-split-plot designs is more complicated relative to the completely randomized design, as generalized least squares (GLS) is recommended for estimating
the factor effects, and restricted maximum likelihood (REML) is recommended for estimating the variance components. As an alternative, one can consider estimation-equivalent designs, wherein
ordinary least squares (OLS) and GLS estimates of the factor effects are equivalent. These designs provide practical benefits from the perspective of design selection and estimation and are
consistent with traditional response surface methods. Although much work has been done with respect to estimation-equivalent second-order split-plot designs, less emphasis has been placed on
split-split-plot (and higher strata) designs of this type. My research is to derive the general conditions for achieving OLS-GLS equivalence and use these conditions to construct balanced and
unbalanced estimation-equivalent second-order split-split-plot designs from the central composite design (CCD).
• Contributions to joint monitoring of location and scale parameters: some theory and applications
(University of Alabama Libraries, 2012) McCracken, Amanda Kaye; Chakraborti, Subhabrata; University of Alabama Tuscaloosa
Since their invention in the 1920s, control charts have been popular tools for use in monitoring processes in fields as varied as manufacturing and healthcare. Most of these charts are designed
to monitor a single process parameter, but recently, a number of charts and schemes for jointly monitoring the location and scale of processes which follow two-parameter distributions have been
developed. These joint monitoring charts are particularly relevant for processes in which special causes may result in a simultaneous shift in the location parameter and the scale parameter.
Among the available schemes for jointly monitoring location and scale parameters, the vast majority are designed for normally distributed processes for which the in-control mean and variance are
known rather than estimated from data. When the process data are non-normally distributed or the process parameters are unknown, alternative control charts are needed. This dissertation presents
and compares several control schemes for jointly monitoring data from Laplace and shifted exponential distributions with known parameters as well as a pair of charts for monitoring data from
normal distributions with unknown mean and variance. The normal theory charts are adaptations of two existing procedures for the known parameter case, Razmy's (2005) Distance chart and Chen and
Cheng's (1998) Max chart, while the Laplace and shifted exponential charts are designed using an appropriate statistic for each parameter, such as the maximum likelihood estimators.
• Contributions to outlier detection methods: some theory and applications
(University of Alabama Libraries, 2011) Dovoedo, Yinaze Herve; Chakraborti, Subhabrata; University of Alabama Tuscaloosa
Tukey's traditional boxplot (Tukey, 1977) is a widely used Exploratory Data Analysis (EDA) tools often used for outlier detection with univariate data. In this dissertation, a modification of
Tukey's boxplot is proposed in which the probability of at least one false alarm is controlled, as in Sim et al. 2005. The exact expression for that probability is derived and is used to find the
fence constants, for observations from any specified location-scale distribution. The proposed procedure is compared with that of Sim et al., 2005 in a simulation study. Outlier detection and
control charting are closely related. Using the preceding procedure, one- and two-sided boxplot-based Phase I control charts for individual observations are proposed for data from an exponential
distribution, while controlling the overall false alarm rate. The proposed charts are compared with the charts by Jones and Champ, 2002, in a simulation study. Sometimes, the practitioner is
unable or unwilling to make an assumption about the form of the underlying distribution but is confident that the distribution is skewed. In that case, it is well documented that the application
of Tukey's boxplot for outlier detection results in increased number of false alarms. To this end, in this dissertation, a modification of the so-called adjusted boxplot for skewed distributions
by Hubert and Vandervieren, 2008, is proposed. The proposed procedure is compared to the adjusted boxplot and Tukey's procedure in a simulation study. In practice, the data are often
multivariate. The concept of a (statistical) depth (or equivalently outlyingness) function provides a natural, nonparametric, "center-outward" ordering of a multivariate data point with respect
to data cloud. The deeper a point, the less outlying it is. It is then natural to use some outlyingness functions as outlier identifiers. A simulation study is performed to compare the outlier
detection capabilities of selected outlyingness functions available in the literature for multivariate skewed data. Recommendations are provided.
• On the detection and estimation of changes in a process mean based on kernel estimators
(University of Alabama Libraries, 2012) Mercado Velasco, Gary Ricardo; Perry, Marcus B.; University of Alabama Tuscaloosa
Parametric control charts are very attractive and have been used in the industry for a very long time. However, in many applications the underlying process distribution is not known sufficiently
to assume a specific distribution function. When the distributional assumptions underlying a parametric control chart are violated, the performance of the control chart could be potentially
affected. Since robustness to departures from normality is a desirable property for control charts, this dissertation reports three separate papers on the development and evaluation of robust
Shewhart-type control charts for both the univariate and multivariate cases. In addition, a statistical procedure is developed for detecting step changes in the mean of the underlying process
given that Shewhart-type control charts are not very sensitive to smaller changes in the process mean. The estimator is intended to be applied following a control chart signal to aid in
diagnosing root cause of change. Results indicate that methodologies proposed throughout this dissertation research provide robust in-control average run length, better detection performance than
that offered by the traditional Shewhart control chart and/or the Hotelling's control chart, and meaningful change point diagnostic statistics to aid in the search for the special cause.
• Reduced bias prediction regions and estimators of the original response when using data transformations
(University of Alabama Libraries, 2015) Walker, Michael; Perry, Marcus B.; University of Alabama Tuscaloosa
Initially motivated by electron microscopy experiments, we develop an approximate prediction interval on the univariate response variable Y, where it is assumed that a normal-theory linear model
is fit using a transformed version of Y, and the transformation type is contained in the Box-Cox family. Further motivated by A-10 single-engine climb experiments, we then develop an approximate
prediction interval on the univariate response Y, in which a linear model is fit using a transformed version of Y, contained in the Manly exponential family. For each case, we derive a
closed-form approximation to the kth moment of the original response variable Y, which is then used to estimate the mean and variance of Y, given parameter estimates obtained from fitting the
model in the transformed domain. Chebychev’s inequality is then used to construct a 100(1 − α)% prediction interval estimator on Y based on these mean and variance estimators. Extended data
obtained from the A-10 single-engine climb experiments motivates the development of prediction regions in the original domain of a q-variate response vector Y through the use of multivariate
extensions of both the Box-Cox power transformation and the Manly exponential transformation. For each transformation, we derive closed-form approximations to the kth moment of each original
response Y, as well as a closed-form approximation to E(Yi Yi'), which are used to estimate the mean and variance of each Y and the covariance between them, given parameter estimates obtained
from fitting the model in the transformed domain. Exploiting two multivariate analogs of Chebyshev’s inequality, we construct an approximate 100(1 − α)% prediction sphere and ellipsoid on the
original response vector Y.
• Three essays on improving ensemble models
(University of Alabama Libraries, 2013) Xu, Jie; Gray, J. Brian; University of Alabama Tuscaloosa
Ensemble models, such as bagging (Breiman, 1996), random forests (Breiman, 2001a), and boosting (Freund and Schapire, 1997), have better predictive accuracy than single classifiers. These
ensembles typically consist of hundreds of single classifiers, which makes future predictions and model interpretation much more difficult than for single classifiers. Breiman (2001b) gave random
forests a grade of A+ in predictive performance, but a grade of F in interpretability. Breiman (2001a) also mentioned that the performance of an ensemble model depends on the strengths of the
individual classifiers in the ensemble and the correlations among them. Reyzin and Schapire (2006) stated that "the margins explanation basically says that when all other factors are equal,
higher margins result in lower error," which is referred to as the "large margin theory." Shen and Li (2010) showed that the performance of an ensemble model is related to the mean and the
variance of the margins. In this research, we improve ensemble models from two perspectives, increasing the interpretability and/or decreasing the test error rate. We first propose a new method
based on quadratic programming that uses information on the strengths of the individual classifiers in the ensemble and their correlations, to improve or maintain the predictive accuracy of an
ensemble while significantly reducing its size. In the second essay, we improve the predictive accuracy of random forests by adding an AdaBoost-like improvement step to random forests. Finally,
we propose a method to improve the strength of the individual classifiers by using fully-grown trees fitted on weighted resampling training data and then combining the trees by using the AdaBoost
• Three essays on the use of margins to improve ensemble methods
(University of Alabama Libraries, 2012) Martinez Cid, Waldyn Gerardo; Gray, J. Brian; University of Alabama Tuscaloosa
Ensemble methods, such as bagging (Breiman, 1996), boosting (Freund and Schapire, 1997) and random forests (Breiman, 2001) combine a large number of classifiers through (weighted) voting to
produce strong classifiers. To explain the successful performance of ensembles and particularly of boosting, Schapire, Freund, Bartlett and Lee (1998) developed an upper bound on the
generalization error of an ensemble based on the margins, from which it was concluded that larger margins should lead to lower generalization error, everything else being equal (sometimes
referred to as the "large margins theory"). This result has led many researchers to consider direct optimization of functions of the margins (see, e.g., Grove and Schuurmans, 1998; Breiman, 1999
Mason, Bartlett and Baxter, 2000; and Shen and Li, 2010). In this research, we show that the large margins theory is not sufficient for explaining the performance of AdaBoost. Shen and Li (2010)
and Xu and Gray (2012) provide evidence suggesting that generalization error might be reduced by increasing the mean and decreasing the variance of the margins, which we refer to as "squeezing"
the margins. For that reason, we also propose several alternative techniques for squeezing the margins and evaluate their effectiveness through simulations with real and synthetic data sets. In
addition to the margins being a determinant of the performance of ensembles, we know that AdaBoost, the most common boosting algorithm, can be very sensitive to outliers and noisy data, since it
assigns observations that have been misclassified a higher weight in subsequent runs. Therefore, we propose several techniques to identify and potentially delete noisy samples in order to improve
its performance. | {"url":"https://ir.ua.edu/browse/author?scope=3a17eea7-5599-498d-97a5-a89192781f24&value=Adams,%20Benjamin%20Michael","timestamp":"2024-11-06T17:19:42Z","content_type":"text/html","content_length":"530810","record_id":"<urn:uuid:5626b751-4852-45d6-995c-eaf4475e2627>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00488.warc.gz"} |
Algebra Story Problem: Find the speed of the current
Algebra Story Problem: Find the speed of the current
Two miles upstream from his starting point, a canoeist passed a log floating in the river's current. After paddling upstream for one more hour, he paddled back and reached his starting point just as
the log arrived. Find the speed of the current.
STEP 1:
You are asked to find the speed of the current. Assume a variable for this quantity, say,
. We come across some other quantities in the given information which can also be denoted by
for ease in reference.
See how the parentheses get cleared when each term inside the parentheses is multiplied by the factor outside.
1. t – total time, right from when the canoeist sees the log till he reaches back at the starting point.
2. r – rate of the canoeist
3. x – distance paddled upstream by the canoeist after he sees the log.
STEP 2:
Recall that the distance traveled is the product of rate and time. Since the rate of the log is the same as that of the current, the log also has covered 2 miles.
This will help you to frame an equation,
2 = ct
. You can also rewrite it as
t = ^2/[c]
STEP 3:
It is to be noted that t is the total time taken by the canoeist to travel the upstream distance
, then downstream and finally 2 miles.
Since the direction of current is against that of the canoeist, the time taken to cover his upstream journey will be:Similarly, the time taken to cover downstream
distance is equal to:
Also, the time it takes for the canoeist to cover the final 2 miles downstream is:The sum of these time periods equal
STEP 4:
It takes 1 hour for the person to travel the upstream distance
. So, you can replace
with 1 and
with r – c. To
make the denominators of the fractions
same, you can also find an equivalent fraction for 1.Now, add the numerators of the fractions keeping the denominator.
STEP 5:
We have already obtained an expression for
at the start. Equate the two expressions.
Cross multiply to clear the fractions give:See how we can
remove the parentheses
from each side. Multiply the terms inside by the factor outside.
STEP 6:
The final step is to solve the equation for the variable
.The speed of the current is
1 mile per hour
Sep 29, 2019 Helpful
very nicely solved. Very helpful as well.
May 21, 2015 Reply to comment from Apr 23, 2013
Since it is known that x/(r-c) = 1, solving the equation would result in x = r-c.
Apr 23, 2013 Step 4
In step 4, why can you change x to r-c? | {"url":"https://www.mathexpression.com/algebra-story-problem-find-the-speed-of-the-current.html","timestamp":"2024-11-08T20:26:23Z","content_type":"text/html","content_length":"45405","record_id":"<urn:uuid:ccb982df-82ee-4243-ad34-fa3c3d33db36>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00736.warc.gz"} |
10.7 Software and Technology Exercises
Learning Objectives
1. Predict likely range of values in a normal distribution.
2. Recognize assignable and unassignable causes of statistical variation.
Predicting Value Ranges Using Standard Deviation
Real production processes are never perfect. In some cases, a few products that are too small or that do not work will just cause inconvenience, but in other cases they might be life threatening.
Samples of the production process will show how much variation occurs. If it appears that the variations are distributed equally above and below the mean (average), it might be assumed that the
statistics of a normal distribution can be used to predict the percentage of products that will be defective when many of them are produced even if none of the samples are defective.
Some projects are initiated to increase the quality by reducing the variation in production. To understand the language of statistics and how it is used to justify a project, it is useful to gain a
“feel” for how the distribution of samples is described by the standard deviation. A spreadsheet can be used to simulate samples of production runs where the mean and standard deviation can be chosen
to show their relationship in a normal distribution. By trying different values for the standard deviation and observing the effect on the distribution of estimated samples in a chart, you can
develop a sense of how the two are related.
Recall that a standard deviation is called a sigma and represented by the Greek letter σ and the 68-95-99.7 rule refers to the percentage of samples that will be within one, two, and three standard
deviations of the mean.
Examine a Normal Distribution
Complete the exercise by following these instructions:
1. Navigate to the directory location where the exercise files for this unit are located and open Ch10STD.xls in a spreadsheet program such as MS Excel.
2. In cell A2, replace StudentName with your name.
3. This worksheet is designed to simulate a set of sample values that vary from the mean for random reasons and form a normal distribution. The data and calculations in columns A through F are
hidden. They are used to calculate the values in column G on which the chart is based.
4. Notice the following features of the spreadsheet:
□ Column A has bins. In this example, the bins are .1 units wide. The size of the bin and the horizontal scale of the chart are determined by the value in cell L5.
□ This simulation uses forty-two bins that are distributed equally above and below the mean. The mean value can be specified in cell L4.
□ The usual method is to sample a sequence of products, count the number that fall into each bin, and then calculate the standard deviation. In this simulation, you can specify the standard
deviation in cell L3, and the percentage of samples that are likely to occur in each bin is calculated and displayed in column G. The display is rounded to a whole percent. The display of
decimal places can be increased using the spreadsheet’s controls.
□ The percentage of estimated samples that occur in each bin is charted using a column chart. The scale at the left side of the chart indicates the percent of the samples in each bin.
5. Compare the chart in the spreadsheet to the chart in Figure 10.13 “Normal Distribution of Gasoline Samples” that was used in the text. Observe that the standard deviation, σ, is .2 and that
almost all the sample values occur between 86.4 and 87.6—three σ on either side of the mean.
6. Open a word processing document and then save it as Ch10STDStudentName.doc. Switch back to the spreadsheet and capture the screen. Switch to the word processing document and paste the screen into
the document.
7. Switch back to the spreadsheet. To see the effect of a better production process that would have a σ of .1 instead of .2, click cell L3. Type .1 and then, on the Formula bar, click the Enter
button. The distribution narrows so that almost all the estimated samples are within .3 on either side of the mean (87.0), as shown in Figure 10.14 “Normal Distribution with Smaller Standard
8. Capture the screen showing in the narrow distribution and paste it into the word processing document.
9. In the spreadsheet, in cell L3, type .4 and then, on the Formula bar, click the Enter button. Notice that a larger standard deviation means the distribution is more spread out. Three standard
deviations is 1.2 (3 × .4), so almost all the samples will be within 1.2 on either side of the mean, as shown in Figure 10.15 “Normal Distribution with Larger Standard Deviation”.
10. Capture this screen and paste it into the word processing document.
11. Change the value in cell L3 to 1. Almost all the samples will be above 84 (87−3) and below 90 (87+3), but the horizontal scale is too small to show all the values.
12. Change the value in cell L5 to .3.
13. Capture the screen and paste it into the word processing document.
Use the Spreadsheet for a Different Example
The effects of a lower-than-expected octane rating in a passenger car might be engine knock during acceleration and less power climbing a hill, but the effect of lower-than-expected octane fuel in a
military aircraft might mean that the plane could not achieve the desired altitude or speed in a critical situation. Aviation gasoline is designed for use in high-performance engines that require 100
octane fuel. Use the spreadsheet to examine the estimated distribution of gasoline samples with a different mean and σ.
Examine a Normal Distribution
Complete the exercise by following these instructions:
1. Change the value in cell L4 to 100 and the standard deviation in cell L3 to .1. Notice that a standard deviation of .1 means that 99.7 percent of the gasoline samples will be between 99.7 and
100.3 octane.
2. Practice changing the mean and standard deviation values in the spreadsheet. Each time you do so, predict the high and low values that represent three σ above and below the mean and use the
spreadsheet to check your prediction. If the values extend beyond the sides of the chart, increase the increment value in cell L5.
3. Capture the screen that shows one of your estimates that is different from the examples shown in the previous steps and paste it into the word processing document.
4. In the word processing document, below the last screen, write between one hundred and two hundred words to describe what you learned about the relationship between the standard deviation and the
distribution of likely values. Specifically describe how you predict the upper and lower limits of the range.
5. Close the spreadsheet. Do not save the changes.
6. Save the word processing document as Ch10STDStudentName.doc.
7. Review your work and use the following rubric to determine its adequacy:
Element Best Adequate Poor
File name Ch10STDStudentName.doc Same or .docx Student name missing
file format
Predict likely range of values in a Five screen captures plus a reflective essay on what you learned about Same as Best Missing pictures; essay does not describe how the upper and
normal distribution predicting the upper and lower limits defined by 3 σ lower limits of 3 σ are calculated
8. Revise the document, if necessary. Save the document and submit it as directed by the instructor.
Recognizing Variations Due to Unassignable and Assignable Causes
W. Edwards Deming teaches that some variation is inevitable due to chance cause. A manager needs to recognize the difference between variations that are due to chance and those that indicate the
presence of an assignable cause or a trend. If it appears that there is an assignable cause for variation in quality, a project manager might be required to identify and fix the problem. To
communicate with process managers who are monitoring and sampling production, it is useful to understand the use of control charts.
A run chart is a type of chart that shows variations from the mean as a function of time. The value of each sample is plotted to show the day it was taken and how it differs from the mean. If the
variation is random, there will be roughly the same number of points above and below the mean.
A spreadsheet can be used to simulate random variations in production. In this exercise, the spreadsheet uses its random number function to pick two numbers that are positive and two that are
negative and adds them to the mean. Each number represents a variation that is between the control limits. Most of the time the positive and negative numbers cancel each other out and result in a sum
that is close to the mean, but occasionally the four random factors add up to values that are far from the mean.
In this part of the exercise, you observe variations in a run chart and frequency distribution chart that are due to random effects. You generate the random numbers several times to see what
production runs with random (unassignable) variations look like.
Examine a Run Chart with Random Effects
Complete the exercise by following these instructions:
1. Navigate to the directory location where the exercise files for this unit are located and open Ch10ControlChart.xls in a spreadsheet program such as MS Excel.
2. Observe that in cells B4 through B23 the RAND function is used to simulate the effects of four random influences on each day of production for a twenty-day period.
3. Scroll the screen or adjust the zoom so that you can see both charts. See Figure 10.16 “Screen Adjusted to Show Both Charts”.
4. On your keyboard, near the top, press the F9 key. The random functions pick new numbers. Observe how the samples on the run chart change and how the frequency distribution changes.
5. Press the F9 key several more times until you get a set of samples that are grouped close to the mean like the example shown in Figure 10.17 “Most Samples near the Mean”.
6. Open a word processing document and then save it as Ch10RunChartStudentName.doc. Switch back to the spreadsheet and capture the screen. Switch to the word processing document and paste the screen
into the document. Your values will differ from those in the figure.
7. According to Deming, it is not productive to hold employees to quality standards that they do not control. Consider the effect on employee moral if this set of samples was taken as the standard
by which the next run would be judged.
8. Press the F9 key again and stop at a set of samples that has a greater variation, such as the example shown in Figure 10.15 “Normal Distribution with Larger Standard Deviation”.
9. Capture this screen and paste it into Ch10RunChartStudentName.doc. Because of the chance-cause random factors, this set of data has more variation. If employee performance were punished or
rewarded based on this data, they would become discouraged because they do not control the quality. Leave both files open.
Examine a Run Chart with Assignable Cause
Complete the exercise by following these instructions:
An assignable cause can be mixed in with the chance-cause random effects. In this part, you introduce a factor that causes the samples to display a trend. You run the simulation several times to
learn how to recognize a set of data that is a mix of random (chance-cause) factors and a trend that is probably from an assignable cause.
1. In the spreadsheet, click cell B2. Type .03 and then, on the Formula bar, click the Enter button. The random functions are recalculated, but each value is increased by .03 over its predecessor.
2. Press the F9 key several times and observe how this trend appears within the samples such as the example in Figure 10.19 “Trend That Is Probably Due to an Assignable Cause”.
3. It is clear that action must be taken soon to prevent the next batch of samples from exceeding the control limit. The process manager might create a project to identify the assignable cause and
take the necessary action, such as replacing a worn-out piece of equipment. Choose an example where the upward trend is most apparent. Capture the screen and paste it into the word processing
4. Close the spreadsheet without saving the changes.
5. In the word processing document, below the last picture, write a reflective essay of between one hundred and two hundred words that describes how you would recognize the difference between run
charts that show assignable and unassignable causes. Discuss the effect on morale if one of the runs with random values that are close to the mean is chosen as the standard of performance by
which workers would be measured.
6. Leave the word processing document open.
7. Review your work and use the following rubric to determine its adequacy:
Element Best Adequate Poor
File name Ch10RunChartStudentName.doc Ch10RunChartStudentName.docx Did not include name in
file name
Recognize assignable and Three screen captures that show two random causes and one assignable cause; an essay that describes how Missing screen; essay does
unassignable causes of to recognize the difference and the effect on worker morale if a run with low random variation is chosen Same as Best not address both
statistical variation as a standard requirements
8. Save the file and submit it as directed by the instructor. | {"url":"https://open.lib.umn.edu/projectmanagement/chapter/10-7-software-and-technology-exercises/","timestamp":"2024-11-07T12:51:37Z","content_type":"text/html","content_length":"94761","record_id":"<urn:uuid:b24368db-683f-445b-81e4-5a7fb45359f8>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00254.warc.gz"} |
Zero-Sum Games
A[3]0, 0 −3, 3 1, −1
As in the above example, A has a dominating strategy: Strategy A[1] dominates A[2] and A[3]. However B does not have a dominating strategy: Strategy B[2] does best against A[1] and A[3], but strategy
B[1] does best against A[2]. However, since B knows that A will always select his dominant strategy A[1], he should always select B[2], with the result that A gets −1 and B gets 1.
The third scenario is where neither player has a dominating strategy, but there is a saddle point. The following example will illustrate what a saddle point is:
│ │ B[1] │ B[2] │ B[3] │
│A[1]│+5, −5 │−1, +1│+4, −4 │
│A[2]│−3, +3 │−2, +2│5, −5 │
│A[3]│+10, −10 │−3, +3│−20, +20 │
You can see here that neither player has a dominating strategy. Here, both players have to take into account the other's decision-making process. One way to go about it is to find the best of the
worst possible outcomes. For A, the safest choice would be to choose A[1], since the worst that could happen is that A scores −1 (if B chooses B[2]). If A chooses strategy A[2], he could score −3,
and with strategy A[3] he could score −20.
By choosing strategy A[1], A has minimized the maximum loss; such a strategy is called a minimax strategy. You could also look at it as maximizing the minimum gain, in which case you would refer to
it as a maximin strategy; in game theory, both terms refer to the same strategy.
Now, if B knew that A was going to select strategy A[1], then his best strategy would be strategy B[2].
Performing the same analysis for B, the strategy that minimizes the maximum loss is B[2].
Now, if A knew that B was going to select B[2], A would select strategy A[1].
If there is an outcome in which the payoffs to both players are the "best of the worst," this outcome is called a saddle point or a minimax. In this game, if A chooses A[1] and B chooses B[2], both
payoffs meet this criterion. Such a choice of strategies would be the best for both players. Neither player can do better unless the other player acts irrationally.
The last scenario to consider is where neither player has a dominating strategy and there is no saddle point. Let's consider a different example this time:
│ │ B[1] │ B[2] │
│A[1]│2, −2 │−1, 1 │
│A[2]│−5, 5 │3, −3 │
A's minimax is −1, in the upper right corner, while B's minimax is −2, in the upper left corner. Now, A's thought process might go like this: He might decide to select A[1], since the worst payoff is
−1. So, if B is able to figure out that A will select A[1], then B would select B[2]. But if B selects B[2], then it would be best for A to select A[2], to gain 3 points. But if A is going to select
A[2], then it would be best for B to select B[1] to get 5 points. In which case it would be best for A to select A[1]... Trying to find a single option results in going around in a circle.
How do you break the endless cycle? As it turns out, the way that both players can get the best possible results for themselves is through a mixed strategy, by randomizing their choices. In this
case, if A selects strategy A[1] 8⁄11 of the time, then he will get an average payoff of at least 1⁄11. B can ensure that A's average payoff is no more than 1⁄11 by selecting strategy B[1] 4⁄11 of
the time. | {"url":"http://mathlair.allfunandgames.ca/zerosum.php","timestamp":"2024-11-13T07:31:14Z","content_type":"text/html","content_length":"9731","record_id":"<urn:uuid:fa5527e8-9449-46ec-ab43-b3a4b1647d8f>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00807.warc.gz"} |
Stochastic Volatility Models for Exchange Rates and Their Estimation Using Quasi-Maximum-Likelihood Methods: an Application to the South African Rand
Kulikova, Maria; Taylor, D. R.
Journal of Applied Statistics, 40(3) (2013), 495-507
This paper is concerned with the volatility modeling of a set of South African Rand (ZAR) exchange rates. We investigate the quasi-maximum-likelihood (QML) estimator based on the Kalman filter and
explore how well a choice of stochastic volatility (SV) models fits the data. We note that a data set from a developing country is used. The main results are: (1) the SV model parameter estimates are
in line with those reported from the analysis of high-frequency data for developed countries; (2) the SV models we considered, along with their corresponding QML estimators, fit the data well; (3)
using the range return instead of the absolute return as a volatility proxy produces QML estimates that are both less biased and less variable; (4) although the log range of the ZAR exchange rates
has a distribution that is quite far from normal, the corresponding QML estimator has a superior performance when compared with the log absolute return. | {"url":"https://cemat.tecnico.ulisboa.pt/document.php?project_id=5&member_id=93&doc_id=1346","timestamp":"2024-11-14T01:02:31Z","content_type":"text/html","content_length":"9058","record_id":"<urn:uuid:2a50df94-4ac0-4188-8207-6210cd6bc0c2>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00278.warc.gz"} |
Top 15 Most Common Math Teacher Interview Questions & Answers
One can easily get an impression that Math is a subject of the past. People use calculators and computers for everything nowadays. To see a man doing their calculations with a pen of paper is slowly
becoming a rarity and a domain of old-school fellows… Nevertheless, some basic Math, together with Reading and Writing, is something every child should learn, regardless of the technological
advancement we observe, and have to live with. At least in my opinion… But what will happen in an interview for this interesting job? What questions will they ask you? And how you should answer them?
We will try to find it out it out on the following lines.
Before I proceed to the questions, I want to give you a few pointers. First of all, try to show some enthusiasm for the subject of Math, for teaching, and for your professional career. They shouldn’t
get an impression that you want to be a Math Teacher only because you studied the field. Secondly, avoid excessively short or excessively long answers. When something may not be clear or they may
misunderstand you, elaborate on your statements. But try to avoid lengthy answers that will only bore the hiring committee, and they will quickly forget them anyway. Let’s proceed to the questions.
Why do you want to be a Math Teacher?
Try to give them at least a couple of reasons. First one, the most obvious one–because you enjoy Math and you are good at it. It is important to teach subjects we enjoy teaching, because the children
can feel the difference.
Then you can refer to the importance of Math, how it forms the basis for so many other sciences (Physics, Chemistry, Trigonometry, etc), and that you see a meaningful purpose in teaching Math to
children on the given grade level. You can point out something else as well, just make sure that it sounds realistic, and they do not get an impression that you are making things up.
What strengths can you bring to the classroom as a Math Teacher?
The right answer here depends on your strengths and weaknesses, and on the way you try to present yourself in this interview. You can talk about communication skills, ability to explain complex
issues in a simple way (especially if you apply for a Math teacher job at high school or university level), passion for your field of teaching, some innovative teaching methods, and so on.
In any case, you should show confidence in your ability to teach Math. Sure enough, not everyone will get it and some children will end up with bad grades, but that’s always the case, and something
you cannot really impact from your position. But you believe to be able to help maximize the potential of each child, with your quality and engaging Math teaching. And that’s the most important thing
for the principal and other interviewers.
You are teaching a Math lesson and many children seem not to be getting it. What will you do?
The most important thing is to show your flexibility. Of course, what exactly you will do depends on many variables. Size of the classroom, whether you have any teaching assistants or aides, the
exact lesson you are teaching, and so on, and so forth. But you should say that you will always try to do something, because you are in the classroom for the children (and not the other way around,
children in the classroom for you), and want to make sure that at least the majority of them will get the lesson.
You can suggest changing the teaching method, simplifying the lesson, giving them more examples, splitting your lesson to smaller parts and trying to identify what exactly they aren’t getting (and
then explaining it in more detail), etc. The key is to show right attitude to this situation: if children are not getting it, you will do something, because you do not come to the classroom just to
lecture, not looking left or right, not caring how the students are doing…
Math isn’t a particularly popular subject, and some students may struggle with discipline. What will you do to in such a case?
The very best teachers do not point out the finger at the students. If students are disruptive, they point out the finger at themselves, asking what they could do better, to make the lesson more
engaging, and in that way ensure higher level of discipline in the classroom. You can suggest this course of action.
Another idea is saying that you want to clearly define the rules with your students, in the very first lesson, making sure that they understand eventual disciplinary measures, and how they should
behave. Then if they don’t you have many options, and sending them to the principal’s office should be the last resort really (but still an option you may use if anything else doesn’t work with some
What do you expect from the administrators, head of the department, fellow teachers and other colleagues?
You can say that you expect them to give you a chance, an opportunity to demonstrate your teaching skills, and to become a member of their collective. Pointing out an open communication, and
cooperation in certain cases (such as when you need a help of a counselor with one of the students) is also a good idea. Most schools look for teachers who love teamwork. Make sure to present
yourself in such a way in this interview.
Another alternative is saying that you prefer having minimal expectations on others, and focus rather on your own role in teaching, and in relationship building. You want to be attentive to the needs
and feelings of your colleagues, talk with them often, and anytime they need a helping hand, and you have time, you are ready to offer your help.
Can you tell us more about your Math teaching experience?
This one is easy–either you have experience or you don’t. If you have, explain briefly where you taught and what subjects, and point out any good results you achieved with your students. Doing so you
give the hiring committee members a clear indication that you do not go to the classes merely to teach. You want to progress with the students, and perhaps even succeed in some competitions with the
most talented ones. I also suggest you to avoid any negative remarks on the address of your former colleagues, even if you left your last job because you had bad relationship with them.
When you lack experience, you should simply show your confidence in your ability to teach Math. At the end of the day every teacher–even the best one–has to have their first teaching job one day. Now
it is your turn, but you sincerely believe that with your Math skills, and with your attitude to teaching, you will do well.
Why do you want to teach Math here, and not at some other school?
You have many options here. First one is praising their place for something. Perhaps the school has an excellent reputation, modern equipment in the classes, or a renowned leadership. Or you really
like the campus, the location, their mission statement. Simply point out anything that caught your eye and made you decide for their place instead of some other.
Another option is referring to the particular grade level, or the job of a Math teacher. Maybe you did not find any other vacancies for Math teachers (at elementary, secondary, high school level),
and so you decided to apply with them. At the end of the day, each place of work has some pluses and minuses, and for you the name of the school or reputation isn’t the most important thing. You
wanted to teach Math, they were looking for a new Math Teacher, and so you submitted your application.
Other questions you may face in your Math Teacher job interview
• What made you chose Math over other subjects?
• Where do you see yourself as a teacher in five years from now?
• What do you do to keep your knowledge of Math up to date?
• In your opinion, what are the latest trends in teaching Math?
• Do you consider yourself a leader?
• After everything we discussed here, do you want to add something or do you have any questions?
Final thoughts
Interview for a job of a Math Teacher belongs to easier teaching interviews. They won’t give you some Math riddles to solve, or anything similar in the interviews. And you typically won’t compete
with many other applicants for the job, because Math isn’t really one of the popular subjects among teachers.
As long as you show right attitude to your teaching mission, some passion for teaching Math, and do not remain silent when facing their questions, they should hire you. I wish you good luck!
May also interest you: | {"url":"https://teacher-interviewquestions.com/math-teacher-interview-questions/","timestamp":"2024-11-11T07:18:16Z","content_type":"text/html","content_length":"118223","record_id":"<urn:uuid:bb974545-d677-4451-a5cb-0618a6f9040e>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00581.warc.gz"} |
How to Calculate Standard Error? Formula & Importance
How to calculate standard error? The standard error is a statistic that measures the variability of the sample mean around the population mean. It is calculated by taking the standard deviation of
the sample means and dividing it by the square root of the sample size.
The standard error is an important measure because it tells us how likely it is that our sample mean is close to the population mean. The smaller the standard error, the more likely it is that our
sample mean is close to the population means.
Standard Error Formula
In statistics, the standard error (SE) is a measure of the variability of the sample mean around the population mean. It is computed as the standard deviation of the difference between the sample
means and the population means, divided by the square root of the number of observations. The standard error is also utilized to calculate confidence intervals.
When you calculate a statistic from a sample, such as a sample mean or percentage, it’s important to know how precise that statistic is. The accuracy of a statistic is measured by its standard error.
The smaller the standard error, the more precise the statistic.
Standard Error of the Mean (SEM)
The standard error of the mean (SEM) is a statistic that indicates the accuracy of the sample mean. The SEM is an estimate of the standard deviation of the sampling distribution of the sample mean.
The SEM is calculated by dividing the standard deviation of the sample by the square root of n, where n is the number of observations in the sample. The SEM can be used to determine whether the
difference between the two means is statistically significant.
Standard Error of Estimate (SEE)
The Standard Error of Estimate (SEE) is an important statistic that measures the accuracy of predictions made by a regression model. The SEE is computed as the standard deviation of the residuals,
which are the differences between the observed values and the predicted values. The smaller the SEE, the more accurate the predictions made by the model.
The SEE can be used to determine whether a regression model is adequate for predicting future events. If the SEE is too large, it indicates that the model is not accurately predicting future events
and should be revised. The SEE can also be used to compare different regression models; if one model has a smaller SEE than another model, it is likely that it is more accurate in its predictions.
How to calculate Standard Error
The calculation of the standard error is one of the most important steps in any statistical analysis. The standard error measures the variability of the sampling distribution, and it is used to
calculate confidence intervals. In order to calculate the standard error, you need to know the mean and the standard deviation of your sample.
The standard error can be calculated using the formula:
Where “n” is the sample size and “x” is the mean of that sample. The standard deviation can be computed using this formula:
where “x” is again the mean of the sample, and “s” is the standard deviation.
Standard Error Example
Standard Error is a statistic that is used in statistics to measure the variability of the data. It is also known as the standard deviation of the sampling distribution. Standard Error is important
because it measures how close the sample mean is to the population mean.
To calculate Standard Error, you need to know the population standard deviation and the sample size. The formula for Standard Error is:
The larger the sample size, the smaller the Standard Error will be. This is because there is more variability in a small sample size than in a large one.
Standard Error can be used to estimate confidence intervals. A confidence interval gives you an idea of how likely it is that the population mean falls within a certain range.
Here’s an example: Suppose you want to know what 95% confidence interval for the average weight of women in America is.
Importance of Standard Error
The standard error is an important part of any research project. It helps researchers determine the accuracy of their results. The standard error can be used to calculate confidence intervals, which
show the range of likely values for a given statistic.
This information can help researchers determine whether their results are statistically significant. The standard error is also used to calculate p-values, which indicate the probability that a given
result was achieved by chance. Researchers use this information to determine whether their findings are worth publishing.
Standard Error and Standard Deviation in Finance
When working with numbers, it is important to understand the difference between standard error and standard deviation. The standard error is a measure of the variability of a statistic, while
standard deviation is a measure of the variability of the data. In finance, it is important to understand both concepts in order to make sound investment decisions.
The standard error is used when calculating confidence intervals. A confidence interval gives you an idea of how likely it is that the true value of a population parameter lies within a given range.
The size of the confidence interval depends on the standard error of the statistic being used. The less the standard error, the narrower the confidence interval will be.
Standard deviation is used when measuring risk and return. Risk measures how much variation there is in returns from one investment to another.
Q: Is standard error the same as SEM?
A: The standard error (SE) and standard deviation (SD) are two measures of variability that are often confused. The standard error is a measure of the variability of the sample mean, while SD is a
measure of the variability of the individual data points.
They are not always equal, but they are always related. The standard error can be used to calculate confidence intervals for the mean, and it is also used in hypothesis testing.
Q: What is a good standard error?
A: A good standard error is a measure of how accurate a statistic is. It is the standard deviation of the sampling distribution of the statistic. This tells you how close the statistic is to the
population parameter it is estimating. A small standard error means that the statistic is very close to the population parameter and a large standard error means that the statistic is far from the
population parameter. | {"url":"http://higheducations.com/how-to-calculate-standard-error/","timestamp":"2024-11-04T17:15:35Z","content_type":"text/html","content_length":"93627","record_id":"<urn:uuid:bba8784f-3583-4d1b-b52e-c356c08a417c>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00397.warc.gz"} |
Frequency of Radiation Calculator - Calculator Wow
Frequency of Radiation Calculator
In the realm of physics and engineering, the study of electromagnetic radiation is fundamental. Understanding the frequency of radiation is crucial for various applications, from designing
communication systems to analyzing the energy levels of different types of radiation. The Frequency of Radiation Calculator is a tool designed to simplify this process by allowing users to compute
the frequency based on the total energy of the radiation. This calculator is indispensable for scientists, engineers, and students who need quick and accurate frequency measurements to further their
research and projects.
Understanding the frequency of radiation holds significant importance across multiple fields:
1. Electromagnetic Spectrum Analysis: Helps in identifying the type of radiation, whether it’s radio waves, microwaves, infrared, visible light, ultraviolet, X-rays, or gamma rays.
2. Communication Systems: Essential for designing and optimizing communication systems, including radio, television, and cellular networks, by specifying the frequency bands used for transmission.
3. Spectroscopy: In spectroscopy, knowing the frequency of radiation helps in analyzing the interaction between electromagnetic waves and matter, providing insights into molecular and atomic
4. Medical Imaging: In medical fields, such as MRI and X-rays, accurate frequency calculations are vital for imaging and diagnostic purposes.
5. Astronomy: Assists astronomers in studying celestial bodies and phenomena by analyzing the frequencies of the radiation they emit.
How to Use
Using the Frequency of Radiation Calculator is straightforward. Follow these steps:
1. Enter the Total Energy of Radiation: Input the total energy of the radiation in joules (J) into the provided field. This energy is typically obtained from measurements or theoretical
2. Enter Planck’s Constant: Input the value of Planck’s constant, which is approximately 6.626×10−346.626 \times 10^{-34}6.626×10−34 joule-seconds (J·s). The calculator uses this constant to
determine the frequency.
3. Calculate Frequency: Click the “Calculate Frequency” button to compute the frequency. The result will be displayed in hertz (Hz), showing the number of cycles per second.
The calculator simplifies the complex relationship between energy and frequency, making it accessible even for those with minimal technical background.
10 FAQs and Answers
1. What is the Frequency of Radiation Calculator?
The Frequency of Radiation Calculator computes the frequency of electromagnetic radiation based on the total energy input and Planck’s constant.
2. Why is frequency important?
Frequency determines the type of radiation and is crucial for various applications, including communication systems, medical imaging, and scientific research.
3. What values do I need to input?
You need to input the total energy of radiation and Planck’s constant.
4. What is Planck’s constant?
Planck’s constant is a fundamental physical constant used to relate energy and frequency, with a value of approximately 6.626×10−346.626 \times 10^{-34}6.626×10−34 J·s.
5. Can I use the calculator for different types of radiation?
Yes, the calculator can be used for any type of electromagnetic radiation as long as you have the energy value.
6. How precise is the frequency result?
The result is displayed in scientific notation to handle a wide range of values, providing a precise and readable output.
7. What happens if I enter invalid values?
The calculator will display “Invalid Input” if non-numeric or negative values are entered.
8. Can I calculate the frequency of radiation manually?
Yes, but the calculator automates the process and reduces the chance of errors.
9. Is there a limit to the input values?
The calculator can handle a wide range of positive values. Extremely large or small values may not be practical for certain applications.
10. Can I use this calculator for educational purposes?
Absolutely! It’s a useful tool for students and educators to understand the relationship between energy and frequency.
The Frequency of Radiation Calculator is an essential tool for anyone dealing with electromagnetic radiation. By providing a straightforward method to compute frequency from energy, it supports a
wide range of applications, from scientific research to practical engineering solutions. Its ease of use and accuracy make it a valuable resource for professionals and students alike, helping to
demystify complex concepts and facilitate a better understanding of the electromagnetic spectrum. Whether you’re involved in designing communication systems, analyzing spectroscopic data, or studying
celestial phenomena, this calculator can be a vital asset in your work. | {"url":"https://calculatorwow.com/frequency-of-radiation-calculator/","timestamp":"2024-11-01T22:44:09Z","content_type":"text/html","content_length":"66174","record_id":"<urn:uuid:0872d6dc-9654-4d1d-8335-48a424283ac6>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00059.warc.gz"} |
An Introduction to Quantum Field Theory by Michael E. Peskin
• Quantum
• Thread starter Greg Bernhardt
• Start date
In summary, "An Introduction to Quantum Field Theory" by Michael E. Peskin and Dan V. Schroeder is a comprehensive and detailed textbook that is widely used in universities. It covers QFT in the
context of high energy physics, but may not be as useful for those interested in condensed matter. While the book is thorough, it can be challenging to keep track of the bigger picture due to the
extensive formulas and calculations. Some topics are not covered well, such as the functional integral formalism and representation theory of groups. Despite these shortcomings, it is considered a
classic and a valuable resource for those interested in QFT.
This massive book on QFT is a standard text nowadays and used at many universities. The book is extensive and very detailed. If you manage to follow the text and keep up with all the nitty-gritty
details, then you are well underway into mastering QFT -- but this is quite a challenge. The book is great for QFT when applied to high energy physics, but less so from a condensed matter
perspective. The chapters on renormalization, symmetry breaking and gauge theories are very thorough.
It can be quite difficult to keep a bigger picture of what you are exactly doing (and why) at any given point throughout the book, as you can get easily lost in the sea of formulas and details of the
The book is lacking in some topics. For instance, the treatment of the functional integral formalism is somewhat poor. You also need to use other resources for the representation theory of groups
(Lie groups and the Lorentz group in particular), because it's not really treated well here.
Still, it's already a classic and a must-read for any QFT-enthusiast.
Science Advisor
Gold Member
2023 Award
It's a pretty good introduction to relativistic (vacuum) QFT. The strength is that it teaches how to calculate things, which is very important to get the idea of QFT. The drawback is the huge number
of typos and some glitches in the foundations. E.g., there are dimensionful arguments in logarithms in the chapter about the renormalization group, which is kind of ironic ;-).
FAQ: An Introduction to Quantum Field Theory by Michael E. Peskin
1. What is Quantum Field Theory (QFT)?
Quantum Field Theory is a theoretical framework used in physics to describe the behavior of particles and fields at a quantum level. It combines principles of quantum mechanics and special relativity
to explain the interactions between particles and their associated fields.
2. Who is Michael E. Peskin?
Michael E. Peskin is a theoretical physicist and professor at Stanford University. He is well-known for his contributions to quantum field theory and particle physics, and his textbook "An
Introduction to Quantum Field Theory" is widely used in graduate-level courses.
3. What topics does "An Introduction to Quantum Field Theory" cover?
This textbook covers a wide range of topics in quantum field theory, including the basics of quantum mechanics and special relativity, the quantization of scalar and spinor fields, Feynman diagrams
and perturbation theory, and more advanced topics such as renormalization and the Standard Model of particle physics.
4. Is this textbook suitable for beginners?
While "An Introduction to Quantum Field Theory" is a comprehensive and well-respected textbook, it is primarily aimed at graduate students with a strong background in physics and mathematics. It may
be challenging for beginners to understand without prior knowledge of quantum mechanics and special relativity.
5. Are there any prerequisites for studying this textbook?
As mentioned earlier, a strong background in physics and mathematics is necessary to fully understand the concepts presented in this textbook. Specifically, students should have a solid understanding
of classical mechanics, electromagnetism, and quantum mechanics, as well as mathematical tools such as calculus, linear algebra, and group theory. | {"url":"https://www.physicsforums.com/threads/an-introduction-to-quantum-field-theory-by-michael-e-peskin.665440/","timestamp":"2024-11-11T14:09:12Z","content_type":"text/html","content_length":"93092","record_id":"<urn:uuid:47e1c573-6f4b-4dcb-b954-aa71f9a4e962>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00056.warc.gz"} |
Advent of Code 2021, days 6 to 10
Continuing our Advent of Code adventure from last time. Let’s see what the next batch of puzzles has in store.
Day 6
Simulating the life cycles of lanternfish… My implementation for part 1 was naive: I just keep a list of fish, and update/extend the list at every step:
def simulate(days: int) -> int:
with open('input.txt', 'r') as f_input:
fish = list(map(lambda x: int(x), f_input.readline().split(',')))
for _ in range(days):
new_fish_count = 0
for i, f in enumerate(fish):
if f == 0:
fish[i] = 6
new_fish_count += 1
fish[i] -= 1
if new_fish_count > 0:
fish.extend([8 for _ in range(new_fish_count)])
return len(fish)
This does work, but after completing part 1 we are hit with AoC’s infamous “part 2 is just part 1, but more steps”. We don’t have to do anything different for part 2; we just have to simulate 256
days instead of 80.
The original naive implementation doesn’t scale to this size, as even assuming 1 byte per fish, the final list would require twice as much RAM as I have in this computer. Not to mention constantly
enumerating and updating this giant list becomes undoable long before that.
After some deep pondering, I realized that we don’t have to keep the list of all individual fish at all; we just have to keep track of how many fish there are of each age, and update these numbers at
each step. The new simulation function then becomes:
def simulate(days: int) -> int:
with open('input.txt', 'r') as f_input:
fish = list(map(lambda x: int(x), f_input.readline().split(',')))
# map of the number of fishes for each age (age can be 0..8)
fish_age_count = defaultdict(lambda: 0)
for f in fish:
fish_age_count[f] += 1
for _ in range(days):
fish_age_count_new = {}
# aging for fishes 1..8 -> simply age by 1 day
for i in range(1, 9):
fish_age_count_new[i - 1] = fish_age_count[i]
# aging for fishes 0 -> create new fishes with age 8, then add to population of age 6
fish_age_count_new[6] += fish_age_count[0]
fish_age_count_new[8] = fish_age_count[0]
for i in range(9):
fish_age_count[i] = fish_age_count_new[i]
return sum([fish_age_count[i] for i in range(9)])
This runs pretty much instantly for both parts.
Day 7
This one wasn’t very interesting to me. It’s more a math/statistics question as opposed to a programming puzzle.
For part 1, I guessed that we needed either the mean or the median. I tried both on the example, and it turned out we needed the median. I applied that to my input, and apparently got the right
answer. Great.
import statistics
with open('input.txt', 'r') as f_input:
positions = list(map(lambda x: int(x), f_input.readline().split(',')))
def solution_part_1():
target = int(statistics.median(positions))
print(sum([abs(p - target) for p in positions]))
Part 2 was more guesswork on my part.
First, the new movement cost. I suspected there would be some easy way of computing it, so I literally googled “1+2+3+4” and to my surprise found a Wikipedia article of the same name, which gave me
the movement cost formula.
def movement_cost(x: int) -> int:
return (x * (x + 1)) // 2
Bruteforcing a whole bunch of positions wouldn’t be very efficient, so I again made an assumption and guessed that the optimal position would be somewhere close to the mean. So I did a kind of guided
brute-force around that guess, and again got the correct answer.
def solution_part_2():
mean_pos = int(statistics.mean(positions))
[movement_cost(abs(p - t)) for p in positions]
) for t in range(mean_pos - 10, mean_pos + 10)] # guess that the best pos is somewhere close to mean
I leaned nothing today.
Day 8
Again, it wasn’t really about programming today. This one is more of a logic puzzle.
I spent a decent amount of time up front to figure out how to decode the numbers, and programmed that deduction logic into a class:
class InputLine:
def __init__(self, l: str):
split = l.split(' ')
self.patterns = split[:10]
self.output = split[-4:]
self.digit_map = {}
# 1 is the only 2-segment number
enc_1 = next(x for x in self.patterns if len(x) == 2)
self.digit_map[''.join(sorted(enc_1))] = 1
# 7 is the only 3-segment number
enc_7 = next(x for x in self.patterns if len(x) == 3)
self.digit_map[''.join(sorted(enc_7))] = 7
# 4 is the only 4-segment number
enc_4 = next(x for x in self.patterns if len(x) == 4)
self.digit_map[''.join(sorted(enc_4))] = 4
# 8 is the only 7-segment number
enc_8 = next(x for x in self.patterns if len(x) == 7)
self.digit_map[''.join(sorted(enc_8))] = 8
# 5-segment numbers: 2, 3, 5
# 6-segment numbers: 0, 6, 9
# 6 is the only 6-segment number that is not a superset of 7
enc_6 = next(x for x in self.patterns if len(x) == 6 and not set(x) > set(enc_7))
self.digit_map[''.join(sorted(enc_6))] = 6
# 5 is the only 5-segment number that is a subset of 6
enc_5 = next(x for x in self.patterns if len(x) == 5 and set(x) < set(enc_6))
self.digit_map[''.join(sorted(enc_5))] = 5
# 0 is the only 6-segment number that is not a superset of 5
enc_0 = next(x for x in self.patterns if len(x) == 6 and not set(x) > set(enc_5))
self.digit_map[''.join(sorted(enc_0))] = 0
# 3 is the only 5-segment number that is a superset of 7
enc_3 = next(x for x in self.patterns if len(x) == 5 and set(x) > set(enc_7))
self.digit_map[''.join(sorted(enc_3))] = 3
# 9 is the only 6-segment number that is a superset of 3
enc_9 = next(x for x in self.patterns if len(x) == 6 and set(x) > set(enc_3))
self.digit_map[''.join(sorted(enc_9))] = 9
# 2 is the only 5-segment number that is not a subset of 9
enc_2 = next(x for x in self.patterns if len(x) == 5 and not set(x) < set(enc_9))
self.digit_map[''.join(sorted(enc_2))] = 2
def value(self) -> int:
v = 0
v += self.digit_map[''.join(sorted(self.output[0]))] * 1000
v += self.digit_map[''.join(sorted(self.output[1]))] * 100
v += self.digit_map[''.join(sorted(self.output[2]))] * 10
v += self.digit_map[''.join(sorted(self.output[3]))]
return v
input_lines = []
with open('input.txt', 'r') as f_input:
for line in f_input:
And that’s basically it.
For part 1, we just count the number of times the unique-length digits occur:
def solution_part_1():
print(sum([len([o for o in l.output if len(o) in [2, 3, 4, 7]]) for l in input_lines]))
Then for part 2, we sum the actual values:
def solution_part_2():
print(sum([l.value() for l in input_lines]))
gg, ez.
Day 9
This one is a bit more interesting. We have a hightmap (basically a grid), and we have to find out some of its properties.
First, parse the input:
# (0,0) is top-left
heightmap = []
with open('input.txt', 'r') as f_input:
for line in f_input:
heightmap.append(list(map(lambda x: int(x), line.rstrip())))
height = len(heightmap)
width = len(heightmap[0])
To make things slightly less confusing, I also add a helper function to get the value for a given x-y position:
def value(x: int, y: int) -> int:
return heightmap[y][x]
For the first part, we need to find all local minima, which are points that are lower than all of their neighbors. We’ll first create a helper function to fetch the values of a point’s neighbors:
def neighbor_values(x: int, y: int) -> List[int]:
n_list = []
# top
if y > 0:
n_list.append(value(x, y - 1))
# bottom
if y < height - 1:
n_list.append(value(x, y + 1))
# left
if x > 0:
n_list.append(value(x - 1, y))
# right
if x < width - 1:
n_list.append(value(x + 1, y))
return n_list
Then, the local-minimum function is pretty simple:
def is_local_minimum(x: int, y: int) -> bool:
point_value = value(x, y)
return all([p > point_value for p in (neighbor_values(x, y))])
For part 1, we then just sum the values of all local minima:
def solution_part_1():
risk = 0
for i_x in range(width):
for i_y in range(height):
if is_local_minimum(i_x, i_y):
risk += 1 + value(i_x, i_y)
That was pretty easy, but part 2 gets a bit more tricky. Now we have to find the three largest basins, which are areas on the grid enclosed by points with value 9.
For this I just loop through all points in the grid, and from each point that I haven’t seen yet, I start a BFS along its neighbors, stopping when I hit the borders or nodes with value 9. I first
need a second neighbor function that returns coordinates instead of values:
def neighbor_coords(x: int, y: int) -> List[Tuple[int, int]]:
n_list = []
# top
if y > 0:
n_list.append((x, y - 1))
# bottom
if y < height - 1:
n_list.append((x, y + 1))
# left
if x > 0:
n_list.append((x - 1, y))
# right
if x < width - 1:
n_list.append((x + 1, y))
return n_list
I then go through the whole grid, add all of the basins I find to a list, and then sort by length to get the top 3.
def solution_part_2():
seen = set()
basins = []
for i_x in range(width):
for i_y in range(height):
n_v = value(i_x, i_y)
if n_v == 9 or (i_x, i_y) in seen:
seen.add((i_x, i_y))
basin = [(i_x, i_y)]
to_check = Queue()
for n in neighbor_coords(i_x, i_y):
if n not in seen:
while to_check.qsize() > 0:
n = to_check.get()
n_x, n_y = n
n_v = value(n_x, n_y)
if n_v == 9:
for n in neighbor_coords(n_x, n_y):
if n not in seen:
print(len(basins[-1]) * len(basins[-2]) * len(basins[-3]))
The key to this solution, besides the BFS, was using a set instead of a list to keep track of seen nodes, which drastically improved runtime.
Day 10
This one was fun. After all, dealing with mismatched brackets is part of a programmer’s daily routine. As such, it was immediately obvious to me that all we needed was a stack to create a simple
scoring function for part 1:
from collections import deque
bracket_pairs = {('(', ')'), ('[', ']'), ('{', '}'), ('<', '>')}
def score_illegal(line: str) -> int:
brackets = deque()
for c in line:
if c in ('(', '[', '{', '<'):
c_open = brackets.pop()
c_pair = (c_open, c)
if c_pair not in bracket_pairs:
match c:
case ')':
return 3
case ']':
return 57
case '}':
return 1197
case '>':
return 25137
return 0
Here we just find the first non-matching pair, and return the appropriate score.
We then just sum all the scores for part 1:
with open('input.txt', 'r') as f_input:
nav_prgm = [x.rstrip() for x in f_input.readlines()]
def solution_part_1():
print(sum([score_illegal(x) for x in nav_prgm]))
Part 2 is very similar, only this time we have to score the missing brackets. We read through the line and push/pop all brackets, and then end by scoring the remaining open brackets:
def score_incomplete(line: str) -> int:
brackets = deque()
for c in line:
if c in ('(', '[', '{', '<'):
brackets.pop() # assumes that this matches, and it's not an illegal line
score = 0
while len(brackets) > 0:
c = brackets.pop()
score *= 5
match c:
case '(':
score += 1
case '[':
score += 2
case '{':
score += 3
case '<':
score += 4
return score
We score all non-illegal lines this way, and then select the middle score:
def solution_part_2():
scores = []
for line in nav_prgm:
if score_illegal(line) == 0:
print(scores[len(scores) // 2])
And that’s another 5 days done! | {"url":"https://www.bartwolff.com/blog/2021/12/10/advent-of-code-part-2/","timestamp":"2024-11-06T21:11:55Z","content_type":"text/html","content_length":"60839","record_id":"<urn:uuid:fbb3bb6d-c02c-4184-9d57-03943321ff16>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00523.warc.gz"} |
Problem F: Cycle
As you know, a cell cycle is start from one end phase to next end phase of Mitosis (or Amitosis). Similarly, let’s define a string cycle. Let T be a substring of S, and S can be generated by T. That
means if we append T to T for several times, S will become a substring of T. The string cycle of string S is the smallest length of such T. Now give you a string, please find the string cycle of S. | {"url":"https://acm.sustech.edu.cn/onlinejudge/problem.php?cid=1024&pid=5","timestamp":"2024-11-05T04:09:42Z","content_type":"text/html","content_length":"9350","record_id":"<urn:uuid:a8688b58-e9b7-4d21-b04e-8f4711c265a6>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00784.warc.gz"} |
9.1: 2-4 Trees
Last updated
Page ID
\( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \)
\( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}} \)
\( \newcommand{\id}{\mathrm{id}}\) \( \newcommand{\Span}{\mathrm{span}}\)
( \newcommand{\kernel}{\mathrm{null}\,}\) \( \newcommand{\range}{\mathrm{range}\,}\)
\( \newcommand{\RealPart}{\mathrm{Re}}\) \( \newcommand{\ImaginaryPart}{\mathrm{Im}}\)
\( \newcommand{\Argument}{\mathrm{Arg}}\) \( \newcommand{\norm}[1]{\| #1 \|}\)
\( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\)
\( \newcommand{\Span}{\mathrm{span}}\)
\( \newcommand{\id}{\mathrm{id}}\)
\( \newcommand{\Span}{\mathrm{span}}\)
\( \newcommand{\kernel}{\mathrm{null}\,}\)
\( \newcommand{\range}{\mathrm{range}\,}\)
\( \newcommand{\RealPart}{\mathrm{Re}}\)
\( \newcommand{\ImaginaryPart}{\mathrm{Im}}\)
\( \newcommand{\Argument}{\mathrm{Arg}}\)
\( \newcommand{\norm}[1]{\| #1 \|}\)
\( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\)
\( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\AA}{\unicode[.8,0]{x212B}}\)
\( \newcommand{\vectorA}[1]{\vec{#1}} % arrow\)
\( \newcommand{\vectorAt}[1]{\vec{\text{#1}}} % arrow\)
\( \newcommand{\vectorB}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \)
\( \newcommand{\vectorC}[1]{\textbf{#1}} \)
\( \newcommand{\vectorD}[1]{\overrightarrow{#1}} \)
\( \newcommand{\vectorDt}[1]{\overrightarrow{\text{#1}}} \)
\( \newcommand{\vectE}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash{\mathbf {#1}}}} \)
\( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \)
\( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}} \)
\(\newcommand{\avec}{\mathbf a}\) \(\newcommand{\bvec}{\mathbf b}\) \(\newcommand{\cvec}{\mathbf c}\) \(\newcommand{\dvec}{\mathbf d}\) \(\newcommand{\dtil}{\widetilde{\mathbf d}}\) \(\newcommand{\
evec}{\mathbf e}\) \(\newcommand{\fvec}{\mathbf f}\) \(\newcommand{\nvec}{\mathbf n}\) \(\newcommand{\pvec}{\mathbf p}\) \(\newcommand{\qvec}{\mathbf q}\) \(\newcommand{\svec}{\mathbf s}\) \(\
newcommand{\tvec}{\mathbf t}\) \(\newcommand{\uvec}{\mathbf u}\) \(\newcommand{\vvec}{\mathbf v}\) \(\newcommand{\wvec}{\mathbf w}\) \(\newcommand{\xvec}{\mathbf x}\) \(\newcommand{\yvec}{\mathbf y}
\) \(\newcommand{\zvec}{\mathbf z}\) \(\newcommand{\rvec}{\mathbf r}\) \(\newcommand{\mvec}{\mathbf m}\) \(\newcommand{\zerovec}{\mathbf 0}\) \(\newcommand{\onevec}{\mathbf 1}\) \(\newcommand{\real}
{\mathbb R}\) \(\newcommand{\twovec}[2]{\left[\begin{array}{r}#1 \\ #2 \end{array}\right]}\) \(\newcommand{\ctwovec}[2]{\left[\begin{array}{c}#1 \\ #2 \end{array}\right]}\) \(\newcommand{\threevec}
[3]{\left[\begin{array}{r}#1 \\ #2 \\ #3 \end{array}\right]}\) \(\newcommand{\cthreevec}[3]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \end{array}\right]}\) \(\newcommand{\fourvec}[4]{\left[\begin{array}
{r}#1 \\ #2 \\ #3 \\ #4 \end{array}\right]}\) \(\newcommand{\cfourvec}[4]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \\ #4 \end{array}\right]}\) \(\newcommand{\fivevec}[5]{\left[\begin{array}{r}#1 \\ #2 \\
#3 \\ #4 \\ #5 \\ \end{array}\right]}\) \(\newcommand{\cfivevec}[5]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \\ #4 \\ #5 \\ \end{array}\right]}\) \(\newcommand{\mattwo}[4]{\left[\begin{array}{rr}#1 \amp
#2 \\ #3 \amp #4 \\ \end{array}\right]}\) \(\newcommand{\laspan}[1]{\text{Span}\{#1\}}\) \(\newcommand{\bcal}{\cal B}\) \(\newcommand{\ccal}{\cal C}\) \(\newcommand{\scal}{\cal S}\) \(\newcommand{\
wcal}{\cal W}\) \(\newcommand{\ecal}{\cal E}\) \(\newcommand{\coords}[2]{\left\{#1\right\}_{#2}}\) \(\newcommand{\gray}[1]{\color{gray}{#1}}\) \(\newcommand{\lgray}[1]{\color{lightgray}{#1}}\) \(\
newcommand{\rank}{\operatorname{rank}}\) \(\newcommand{\row}{\text{Row}}\) \(\newcommand{\col}{\text{Col}}\) \(\renewcommand{\row}{\text{Row}}\) \(\newcommand{\nul}{\text{Nul}}\) \(\newcommand{\var}
{\text{Var}}\) \(\newcommand{\corr}{\text{corr}}\) \(\newcommand{\len}[1]{\left|#1\right|}\) \(\newcommand{\bbar}{\overline{\bvec}}\) \(\newcommand{\bhat}{\widehat{\bvec}}\) \(\newcommand{\bperp}{\
bvec^\perp}\) \(\newcommand{\xhat}{\widehat{\xvec}}\) \(\newcommand{\vhat}{\widehat{\vvec}}\) \(\newcommand{\uhat}{\widehat{\uvec}}\) \(\newcommand{\what}{\widehat{\wvec}}\) \(\newcommand{\Sighat}{\
widehat{\Sigma}}\) \(\newcommand{\lt}{<}\) \(\newcommand{\gt}{>}\) \(\newcommand{\amp}{&}\) \(\definecolor{fillinmathshade}{gray}{0.9}\)
A 2-4 tree is a rooted tree with the following properties:
Property \(\PageIndex{1}\) (height).
All leaves have the same depth.
Property \(\PageIndex{2}\) (degree).
Every internal node has 2, 3, or 4 children.
An example of a 2-4 tree is shown in Figure \(\PageIndex{1}\).
Figure \(\PageIndex{1}\): A 2-4 tree of height 3.
The properties of 2-4 trees imply that their height is logarithmic in the number of leaves:
A 2-4 tree with \(\mathtt{n}\) leaves has height at most \(\log \mathtt{n}\).
Proof. The lower-bound of 2 on the number of children of an internal node implies that, if the height of a 2-4 tree is \(h\), then it has at least \(2^h\) leaves. In other words,
\[ \mathtt{n} \ge 2^h \enspace . \nonumber\]
Taking logarithms on both sides of this inequality gives \(h \le \log \mathtt{n}\).
Adding a leaf to a 2-4 tree is easy (see Figure \(\PageIndex{2}\)). If we want to add a leaf \(\mathtt{u}\) as the child of some node \(\mathtt{w}\) on the second-last level, then we simply make \(\
mathtt{u}\) a child of \(\mathtt{w}\). This certainly maintains the height property, but could violate the degree property; if \(\mathtt{w}\) had four children prior to adding \(\mathtt{u}\), then \
(\mathtt{w}\) now has five children. In this case, we split \(\mathtt{w}\) into two nodes, \(\mathtt{w}\) and \(\mathtt{w}\)', having two and three children, respectively. But now \(\mathtt{w}\)' has
no parent, so we recursively make \(\mathtt{w}\)' a child of \(\mathtt{w}\)'s parent. Again, this may cause \(\mathtt{w}\)'s parent to have too many children in which case we split it. This process
goes on until we reach a node that has fewer than four children, or until we split the root, \(\mathtt{r}\), into two nodes \(\mathtt{r}\) and \(\mathtt{r'}\). In the latter case, we make a new root
that has \(\mathtt{r}\) and \(\mathtt{r'}\) as children. This simultaneously increases the depth of all leaves and so maintains the height property.
Figure \(\PageIndex{2}\): Adding a leaf to a 2-4 Tree. This process stops after one split because \(\texttt{w.parent}\) has a degree of less than 4 before the addition.
Since the height of the 2-4 tree is never more than \(\log \mathtt{n}\), the process of adding a leaf finishes after at most \(\log \mathtt{n}\) steps.
Removing a leaf from a 2-4 tree is a little more tricky (see Figure \(\PageIndex{3}\)). To remove a leaf \(\mathtt{u}\) from its parent \(\mathtt{w}\), we just remove it. If \(\mathtt{w}\) had only
two children prior to the removal of \(\mathtt{u}\), then \(\mathtt{w}\) is left with only one child and violates the degree property.
Figure \(\PageIndex{3}\): Removing a leaf from a 2-4 Tree. This process goes all the way to the root because each of \(\mathtt{u}\)'s ancestors and their siblings have only two children.
To correct this, we look at \(\mathtt{w}\)'s sibling, \(\mathtt{w'}\). The node \(\mathtt{w'}\) is sure to exist since \(\mathtt{w}\)'s parent had at least two children. If \(\mathtt{w'}\) has three
or four children, then we take one of these children from \(\mathtt{w'}\) and give it to \(\mathtt{w}\). Now \(\mathtt{w}\) has two children and \(\mathtt{w'}\) has two or three children and we are
On the other hand, if \(\mathtt{w'}\) has only two children, then we merge \(\mathtt{w}\) and \(\mathtt{w'}\) into a single node, \(\mathtt{w}\), that has three children. Next we recursively remove \
(\mathtt{w'}\) from the parent of \(\mathtt{w'}\). This process ends when we reach a node, \(\mathtt{u}\), where \(\mathtt{u}\) or its sibling has more than two children, or when we reach the root.
In the latter case, if the root is left with only one child, then we delete the root and make its child the new root. Again, this simultaneously decreases the height of every leaf and therefore
maintains the height property.
Again, since the height of the tree is never more than \(\log \mathtt{n}\), the process of removing a leaf finishes after at most \(\log \mathtt{n}\) steps. | {"url":"https://eng.libretexts.org/Bookshelves/Computer_Science/Databases_and_Data_Structures/Book%3A_Open_Data_Structures_-_An_Introduction_(Morin)/09%3A_Red-Black_Trees/9.01%3A_2-4_Trees","timestamp":"2024-11-15T02:39:02Z","content_type":"text/html","content_length":"130435","record_id":"<urn:uuid:c17fcd3d-469d-4d7d-be69-16ff4c36fe84>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00845.warc.gz"} |
how to find unweighted gpa How to calculate your weighted gpa: 7 steps (with pictures) - CoreyBarba
If you are searching about New features: Course status and GPA summary – Transcript Maker you’ve came to the right web. We have 35 Pictures about New features: Course status and GPA summary –
Transcript Maker like Is It Better to Report Weighted or Unweighted Gpa – Davon-has-Wood, How to Find Unweighted GPA and Weighted GPA (& Convert Them) – Get and also Weighted vs. Unweighted GPA: What
You Need to Know – The Scholarship System. Here you go:
New Features: Course Status And GPA Summary – Transcript Maker
gpa transcript summary cumulative unweighted weighted course example do current shows semester status features enable transcripts subject subjects which also
26+ Georgetown Gpa Calculator – MahfujKiiara
What Is A GPA Scale? (+Weighted Vs. Unweighted GPA)
unweighted weighted gpa grades example
How To Calculate GPA (In Just 60 Seconds) | Difference Between Weighted
gpa calculate unweighted weighted difference
How To Calculate Your GPA
gpa calculate unweighted scale calculation perform following then
Is It Better To Report Weighted Or Unweighted Gpa – Davon-has-Wood
Gpa Weighted Scale | Blog Dandk
gpa weighted scale calculate school high calculating student
What Is Unweighted GPA? How To Calculate It. – Student-Tutor Education Blog
gpa unweighted tutor calculated
The Ultimate Guide To Weighted Vs Unweighted GPA
How To Calculate And Improve High School GPA
gpa calculate unweighted weighted
How To Find Unweighted GPA And Weighted GPA (& Convert Them) – Get
How Do You Calculate Your Unweighted GPA? The Ultimate Guide
gpa unweighted calculate do albert ultimate guide updated june team last
Understanding Your GPA – CollegeCalm
How To Calculate Your Weighted GPA: 7 Steps (with Pictures)
gpa calculate weighted
Unweighted And Weighted GPA: How To Calculate Them – June29.com
PPT – Paxon School For Advanced Studies PowerPoint Presentation, Free
gpa weighted unweighted vs school paxon studies advanced grade final given year semester graded courses basis end long
What Is The Difference Between Weighted GPA And Unweighted GPA
gpa unweighted weighted between difference student
How To Calculate Your GPA: Use This Calculator And Guide | CollegeVine Blog
gpa calculator calculate collegevine screenshot grades semester worth example look may
Weighted Vs Unweighted GPA | Prep Expert
gpa unweighted weighted vs courses take gpas prep sat besides act various learning second check now
GPA Weighted E Unweighted: Qual é A Diferença?
How Do You Calculate Your GPA? Step-by-Step Instructions
gpa calculate do school high grades find transcript step university hs decimals into instructions fs same
How Is GPA Calculated On RaiseMe? – RaiseMe
gpa raise calculate calculated unweighted
Weighted Vs. Unweighted GPA: What You Need To Know – The Scholarship System
gpa weighted unweighted
How To Calculate Gpa Video – Haiper
Weighted Vs. Unweighted GPA: Know The Difference From AP Guru
PPT – Class Of 2015 And 2016 PowerPoint Presentation, Free Download
gpa weighted unweighted
How To Calculate Gpa Of Semester – Haiper
High School GPA Calculator
gpa school high calculator grade cumulative format calculate step average middle calculating current select grades figure percentage point schools public
Weighted Vs. Unweighted GPA: What You Need To Know – The Scholarship System
gpa weighted unweighted secure thescholarshipsystem
How To Calculate Unweighted GPA.
gpa unweighted calculate score therefore
And The Weighted As Well As Unweighted GPA #click #courses #document #
school high transcript homeschool gpa unweighted weighted example transcripts template homeschooling courses document du kaynak well click higher education stuff
What Is Unweighted GPA? How To Calculate It. – Student-Tutor Education Blog
gpa unweighted tutor
Weighted Vs. Unweighted GPA: Know The Difference From AP Guru
Gpa Calculator High School 4 3 Scale Weighted | Blog Dandk
gpa school high calculating calculator scale weighted guide calculate numbers half if earned note
High School GPA Calculator
How to find unweighted gpa and weighted gpa (& convert them). How to calculate unweighted gpa.. Gpa weighted e unweighted: qual é a diferença? | {"url":"https://coreybarba.com/how-to-find-unweighted-gpa/","timestamp":"2024-11-08T17:59:25Z","content_type":"text/html","content_length":"73394","record_id":"<urn:uuid:d503b836-683a-4162-87c8-137a0ec495e6>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00393.warc.gz"} |
An Etymological Dictionary of Astronomy and Astrophysics
Fr.: variation
1) General: An instance of changing, or something that changes.
2) Astro.: The periodic inequality in the Moon's motion that results from the combined gravitational attraction of the Earth and the Sun. Its period is half the synodic month, that is 14.77 days, and
the maximum longitude displacement is 39'29''.9.
See also: → calculus of variations, → annual variation, → secular variation.
M.E., from O.Fr. variation, from L. variationem (nominative variatio) "difference, change," from variatus, p.p. of variare "to change," → vary.
Varteš, verbal noun from vartidan, → vary.
Fr.: variationnel
Of or describing a → variation.
→ variation; → -al.
variational principle
پروز ِورتشی
parvaz-e varteši
Fr.: principe variationnel
Any of the physical principles that indicate in what way the actual motion of a state of a mechanical system differs from all of its kinematically possible motions or states. Variational principles
that express this difference for the motion or state of a system in each given instant of time are called → differential. These principles are equally applicable to both → holonomic and →
nonholonomic systems. Variational principles that establish the difference between the actual motion of a system during a finite time interval and all of its kinematically possible motions are said
to be → integral. Integral variational principles are valid only for holonomic systems. The main differential variational principles are: the → virtual work principle and → d'Alembert's principle.
→ variational; → principle.
Fr.: variété
1) The quality or state of having different forms or types.
2) A number or collection of different things especially of a particular class.
3) Something differing from others of the same general kind.
4) Any of various groups of plants or animals ranking below a species (Merriam-Webster.com).
M.Fr. variété, from L. varietatem (nominative varietas) "difference, diversity; a kind, variety, species, sort," from varius, → various.
Vartiné, from vartin, → various, + noun/nuance suffix -é.
Varignon's theorem
فربین ِورینیون
farbin-e Varignon
Fr.: théorème de Varignon
The → moment of the resultant of a → coplanar system of → concurrent forces about any center is equal to the algebraic sum of the moments of the component forces about that center.
Named after Pierre Varignon (1654-1722), a French mathematician, who outlined the fundamentals of statics in his book Projet d'une nouvelle mécanique (1687).
Fr.: varié
1) Of different kinds, as two or more things; differing one from another.
2) Marked by or exhibiting variety or diversity (Dictionary.com).
M.Fr. varieux and directly from L. varius "changing, different, diverse," → vary.
Vartin, from vart "change," present stem of vartidan, → vary, + adj. suffix -in.
۱) ورتیدن؛ ۲) ورتاندن
Fr.: changer, varier
1) To undergo change in form, substance, appearance, etc.
2) To change or alter. → covariance.
M.E. varien, from O.Fr. varier, from L. variare "change, alter, make different," from varius "variegated, different, spotted."
Vartidan "to change," from Mid.Pers. vartitan "to change, turn" (Mod.Pers. gardidan "to turn, to change"); Av. varət- "to turn, revolve;" cf. Skt. vrt- "to turn, roll," vartate "it turns round,
rolls;" L. vertere "to turn;" O.H.G. werden "to become;" PIE base *wer- "to turn, bend."
bordâr (#)
Fr.: vecteur
Any physical quantity which requires a direction to be stated in order to define it completely, for example velocity. Compare with → scalar.
From L. vector "one who carries or conveys, carrier," from p.p. stem of vehere "carry, convey;" cognate with Pers. vâz (in parvâz "flight"); Av. vaz- "to draw, guide; bring; possess; fly; float,"
vazaiti "guides, leads" (cf. Skt. vah- "to carry, drive, convey," vahati "carries," pravaha- "bearing along, carrying," pravāha- "running water, stream, river;" O.E. wegan "to carry;" O.N. vegr;
O.H.G. weg "way," wegan "to move," wagan "cart;" M.Du. wagen "wagon;" PIE base *wegh- "to drive").
Bordâr "carrier," agent noun from bordan "to carry, transport" (Mid.Pers. burdan; O.Pers./Av. bar- "to bear, carry," barəθre "to bear (infinitive);" Skt. bharati "he carries;" Gk. pherein "to carry;"
L. ferre "to carry;" PIE base *bher- "to carry").
vector analysis
آنالس ِبرداری
ânâlas-e bordâri
Fr.: analyse vectorielle
The study of → vectors and → vector spaces.
→ vector; → analysis.
vector angular velocity
بردار ِتندای ِزاویهای
bordâr-e tondâ-ye zâviye-yi
Fr.: vecteur de vitesse angulaire
Of a rotating body, a vector of magnitude ω (→ angular velocity) pointing in the direction of advance of a right-hand screw which is turned in the direction of rotation.
→ vector; → angular; → velocity.
vector boson
بوسون ِبرداری
boson-e bordâri
Fr.: boson vectoriel
In nuclear physics, a → boson with the spin quantum number equal to 1.
→ vector; → boson.
vector calculus
افماریک ِبرداری
afmârik-e bordâri
Fr.: calcul vectoriel
The study of vector functions between vector spaces by means of → differential and integral calculus.
→ vector; → calculus.
vector density
چگالی ِبردار
cagâli-ye bordâr
Fr.: densité de vecteur
A → tensor density of → order 1.
→ vector; → density.
vector field
میدان ِبرداری
meydân-e bordâri (#)
Fr.: champ vectoriel
A vector each of whose → components is a → scalar field. For example, the → gradient of the scalar field F, expressed by: ∇F = (∂F/∂x)i + (∂F/∂y)j + (∂F/∂z)k.
→ vector; → field.
vector function
کریای ِبرداری
karyâ-ye bordâri
Fr.: fonction vectorielle
A function whose value at each point is n-dimensional, as compared to a scalar function, whose value is one-dimensional.
→ vector; → function.
vector meson
مسون ِبرداری
mesoon-e bordâri
Fr.: meson vectoriel
Any particle of unit spin, such as the W boson, the photon, or the rho meson.
→ vector; → meson.
vector perturbation
پرتورش ِبرداری
partureš-e bordâri
Fr.: perturbation vectorielle
The perturbation in the → primordial Universe plasma caused by → vorticity. These perturbations cause → Doppler shifts that result in → quadrupole anisotropy.
→ vector; → perturbation.
vector product
فرآورد ِبرداری
farâvard-e bordâri
Fr.: produit vectoriel
Of two vectors, a vector whose direction is perpendicular to the plane containing the two initial vectors and whose magnitude is the product of the magnitudes of these vectors and the sine of the
angle between them: A x B = C, C = |AB sin α|. The direction of C is given by the → right-hand screw rule. Same as → cross product. See also → scalar product.
→ vector; → product.
vector space
فضای ِبرداری
fazâ-ye bordâri (#)
Fr.: espace vectoriel
A system of mathematical objects consisting of a set of (muultidimensional) vectors associated with a set of (one-dimensional) scalars, such that vectors can be added together and vectors can be
multiplied by scalars while preserving the ordinary arithmetic properties (associativity, commutativity, distributivity, and so forth).
→ vector; → space.
Vega (α Lyr)
واقع، نسر ِواقع
Vâqe', Nasr-e Vaqe' (#)
Fr.: Véga
The brightest star in the constellation → Lyra and the 5th brightest star in the sky. It is an A type → main sequence star of visual magnitude 0.03. Vega is also one of the closer stars to the Earth,
lying just 25.0 light-years away. Vega's axis of rotation is nearly pointing at the Earth, therefore it is viewed pole-on. Fast rotation has flattened Vega at its poles, turning it from a sphere into
an oblate spheroid. The polar diameter of Vega is 2.26 times that of the Sun, and its equatorial diameter 2.75 solar. The poles are therefore hotter (10,150 K) than the equator (7,950 K).
Vega, from Ar. al-Waqi' contraction of an-Nasr al-Waqi' (النسرالواقع) "swooping eagle," from an-Nasr "eagle, vulture" + al-Waqi' "falling, swooping." | {"url":"https://dictionary.obspm.fr/index.php?showAll=1&&search=V&&formSearchTextfield=&&page=2","timestamp":"2024-11-14T20:57:05Z","content_type":"text/html","content_length":"33032","record_id":"<urn:uuid:c1ca7972-7577-434b-acc4-19319d94b37b>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00692.warc.gz"} |
What is angular frequency and frequency?
Ans: As we know, angular frequency is defined as the angular displacement per unit time. So, the formula for angular frequency will be, ω =2π/T, where ω is the angular frequency, 2π is the angular
displacement, and T is the time period. In the case of frequency, it is the number of repeated events per unit time.
What is angular frequency in physics?
What is Angular Frequency? For a sinusoidal wave, the angular frequency refers to the angular displacement of any element of the wave per unit of time or the rate of change of the phase of the
waveform. It is represented by ω.
Why we use angular frequency in simple harmonic motion?
The number of oscillations carried by the bob in SHM is called its angular frequency – which measures how many times the bob oscillates in a specific time. When the body’s motion repeats itself at
frequent periods, it is said to be in periodic motion.
What is angular frequency called?
Angular frequency (ω), also known as radial or circular frequency, measures angular displacement per unit time. Its units are therefore degrees (or radians) per second.
What is the SI unit of angular frequency?
In SI units, angular frequency is normally presented in radians per second, even when it does not express a rotational value. From the perspective of dimensional analysis, the unit Hertz (Hz) is also
correct, but in practice it is only used for ordinary frequency f, and almost never for ω.
What is angular frequency in SHM Class 11?
Solution : Angular frequency is the number of oscillations per second of a particle executing simple harmonic motion.
Is angular frequency and oscillation frequency the same?
No, the frequency and angular frequency are not the same things. Angular frequency is the change in the angle of the oscillating particle in unit time, whereas the frequency is the oscillation made
in one second. They both are different terms used for a different concept of physics.
What is the difference between resonant frequency and angular frequency?
The angular frequency has a formula of ω = 2πf. Resonant angular frequency refers to a condition in which both XL and Xc become equal in amplitude at a particular frequency. Inductive reactance and
capacitive reactance are 180° apart in-phase and cancel out each other at resonant angular frequency.
What is the difference between angular frequency and frequency in SHM?
Angular frequency can have the units radians per second. Frequency on the other hand might refer the simple harmonic motion or any object with a repeating motion. Frequency is generally in the units
Hertz and can be rpm (which would be converted from angular frequency).
Is angular frequency constant in SHM?
Importantly, angular velocity of SHM is not constant – whereas angular frequency is constant. The angular velocity in angular SHM is obtained either as the solution of equation of motion or by
differentiating expression of angular displacement with respect to time.
What is the angular frequency of a simple pendulum?
The frequency at which the pendulum oscillates, in cycles per second, is ν = ω/2π, and the period, T, equals 2π√(l/g).
What is the significance of angular frequency?
The angular frequency is important in determining whether an object can stay above the ground against gravity, or whether a spinning top can stay standing. It also is important in creating the
frequency of mains electricity supplies and reducing the heat due to friction in engines.
What is equation and unit of angular frequency?
f = frac1365 The formula for angular frequency is the oscillation frequency ‘f’ measured in oscillations per second, multiplied by the angle through which the body moves. The angular frequency
formula for an object which completes a full oscillation or rotation is computed as: omega = 2pi f.
Is angular frequency the period?
What is the dimensional formula of angular frequency?
Angular Frequency: ω=ν×2π where ν is the frequency.
What is the relationship between ω ω and FF?
In general, ω is the angular speed – the rate change of angle (as in a circular motion). Frequency (f) is 1/T or the number of periodic oscillations or revolutions during a given time period.
How do you convert angular frequency?
Since the wave speed is equal to the wavelength times the frequency, the wave speed will also be equal to the angular frequency divided by the wave number, ergo v = ω / k.
What unit is angular velocity?
The units for angular velocity are radians per second (rad/s).
What do you mean by angular velocity?
angular velocity, time rate at which an object rotates, or revolves, about an axis, or at which the angular displacement between two bodies changes. In the figure, this displacement is represented by
the angle θ between a line on one body and a line on the other.
What is the expression of angular SHM?
Comparison of SHM with Angular SHM In linear shm, the displacement of the particle is measured in terms of linear displacement r. The restoring force is F = -kr, where k is a spring constant or force
constant, force per unit displacement. In this case, the inertia factor is the mass of the body executing shm.
Why is it called simple harmonic motion?
The motion is called harmonic because musical instruments make such vibrations that in turn cause corresponding sound waves in air.
What defines simple harmonic motion?
Simple harmonic motion is defined as a periodic motion of a point along a straight line, such that its acceleration is always towards a fixed point in that line and is proportional to its distance
from that point. From: Newnes Engineering and Physical Science Pocket Book, 1993.
What is the relation between angular momentum and frequency?
There exists an important relationship between angular velocity and frequency and it is given by the following formula: angular velocity is equal to the product of the frequency and the constant 2pi.
The constant 2pi comes from the fact that one revolution per second is equivalent to 2pi radians per second.
What is the frequency of an oscillation?
It is the number of oscillations in the one-time unit, says in a second. A pendulum that takes 0.5 seconds to make one full oscillation has a frequency of 1 oscillation per 0.5 seconds or 2
oscillations per second.
How is amplitude related to angular frequency?
Because the sine function oscillates between –1 and +1, the maximum velocity is the amplitude times the angular frequency, vmax=Aω v max = A ω . The maximum velocity occurs at the equilibrium
position (x=0) when the mass is moving toward x=+A . | {"url":"https://physics-network.org/what-is-angular-frequency-and-frequency/","timestamp":"2024-11-05T20:15:35Z","content_type":"text/html","content_length":"310493","record_id":"<urn:uuid:8ff2684a-8547-4a77-b594-2c4da0bc4fdb>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00036.warc.gz"} |
Gregor Kovacic
Gregor Kovačič received batchelor's degrees in Physics and Mathematics from the University of Ljubljana, Slovenia, and a Ph.D. in Applied Mathematics from California Institute of Technology. He was a
Postdoctoral Fellow at the Los Alamos National Laboratory before joining the Mathematical Sciences Faculty at Rensselaer. Gregor is the recipient of a Prešeren's Student Prize in Slovenia, a
Director's Funded Postdoctoral Fellowship at Los Alamos, an NSF Career Award, and a Sloan Research Fellowship.
Gregor's research began in low-dimensional dynamical systems, in particular, in singular perturbation theory of systems with internal resonances. His current research interests include studies of
nonlinear evolution equations and their scientific applications, particularly in dispersive waves, optics, and neuroscience. Recently, he has been exploring dynamics and statistics of dispersive
wave-like and completely integrable partial differential equations and their applications to nonlinear resonant optics, light propagation through “metamaterials” with exotic properties of the
refractive index, and the modeling of and dynamics in neuronal networks.
Other Focus Areas
Matematical neuroscience, wave dynamics, integrable systems, nonlinear resonant optics, optics of metamaterials
The following is a selection of recent publications in Scopus. Gregor Kovacic has 68 indexed publications in the subjects of Physics and Astronomy, Mathematics, Neuroscience. | {"url":"https://faculty.rpi.edu/gregor-kovacic","timestamp":"2024-11-11T16:59:53Z","content_type":"text/html","content_length":"28647","record_id":"<urn:uuid:e75decd4-f69f-48e2-b50b-a84efeedb033>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00811.warc.gz"} |
Determining mass-accretion and jet mass-loss rates in post-asymptotic giant branch binary systems
Issue A&A
Volume 641, September 2020
Article Number A175
Number of page(s) 16
Section Stellar structure and evolution
DOI https://doi.org/10.1051/0004-6361/202038414
Published online 28 September 2020
A&A 641, A175 (2020)
Determining mass-accretion and jet mass-loss rates in post-asymptotic giant branch binary systems^⋆
^1 Department of Physics and Astronomy, Macquarie University, Sydney, NSW 2109, Australia
e-mail: dylan.bollen@kuleuven.be
^2 Astronomy, Astrophysics and Astrophotonics Research Centre, Macquarie University, Sydney, NSW 2109, Australia
^3 Instituut voor Sterrenkunde (IvS), Celestijnenlaan 200D, KU Leuven 3001, Belgium
Received: 13 May 2020
Accepted: 27 July 2020
Aims. In this study we determine the morphology and mass-loss rate of jets emanating from the companion in post-asymptotic giant branch (post-AGB) binary stars with a circumbinary disc. In doing so
we also determine the mass-accretion rates onto the companion, and investigate the source feeding the circum-companion accretion disc.
Methods. We perform a spatio-kinematic modelling of the jet of two well-sampled post-AGB binaries, BD+46°442 and IRAS 19135+3937, by fitting the orbital phased time series of Hα spectra. Once the jet
geometry, velocity, and scaled density structure are computed, we carry out radiative transfer modelling of the jet for the first four Balmer lines to determine the jet densities, thus allowing us to
compute the jet mass-loss rates and mass-accretion rates. We distinguish the origin of the accretion by comparing the computed mass-accretion rates with theoretically estimated mass-loss rates, both
from the post-AGB star and from the circumbinary disc.
Results. The spatio-kinematic model of the jet reproduces the observed absorption feature in the Hα lines. The jets have an inner region with extremely low density in both objects. The jet model for
BD+46°442 is tilted by 15° with respect to the orbital axis of the binary system. IRAS 19135+3937 has a smaller tilt of 6°. Using our radiative transfer model, we find the full 3D density structure
of both jets. By combining these results, we can compute the mass-loss rates of the jets, which are of the order of 10^−7−10^−5M[⊙]yr^−1. From this we estimate mass-accretion rates onto the
companion of 10^−6−10^−4M[⊙]yr^−1.
Conclusions. Based on the mass-accretion rates found for these two objects, we conclude that the circumbinary disc is most likely the source feeding the circum-companion accretion disc. This is in
agreement with the observed depletion patterns in post-AGB binaries, which is caused by re-accretion of gas from the circumbinary disc that is under-abundant in refractory elements. The high
accretion rates from the circumbinary disc imply that the lifetime of the disc will be short. Mass transfer from the post-AGB star cannot be excluded in these systems, but it is unlikely to provide a
sufficient mass-transfer rate to sustain the observed jet mass-loss rates.
Key words: stars: AGB and post-AGB / binaries: spectroscopic / circumstellar matter / stars: mass-loss / ISM: jets and outflows / accretion, accretion disks
© ESO 2020
1. Introduction
Binarity can have a significant impact on the evolution of stars from low to intermediate mass. The binary interactions in these systems can alter their mass-loss history, orbital parameters, and
lifetimes and can lead to other phenomena such as excretion and accretion discs, jets, and bipolar nebulae (Hilditch 2001). Post-AGB stars in binary systems are no exception. They are stars of low to
intermediate mass in a final transition phase after the AGB (Van Winckel 2003). The luminous post-AGB star in these binary systems is in orbit with a main-sequence (MS) companion of low mass
(0.1−2.5M[⊙], Oomen et al. 2018). Due to their binary interaction history, post-AGB binary systems end up with periods and eccentricities that are currently unexplained by theory (Van Winckel 2018).
During the previous AGB phase, the star endures a period of mass loss as high as 10^−4−10^−3M[⊙]yr^−1 (Ramstedt et al. 2008). When in a binary system, the mass loss of the AGB star can be
concentrated on the orbital plane of the system, with the bulk of the mass being ejected via the L2 Lagrangian point (Hubová & Pejcha 2019; Bermúdez-Bustamante et al. 2020). The focused mass loss of
the star can then become a circumbinary disc (Shu et al. 1979; Pejcha et al. 2016; MacLeod et al. 2018). Observational studies have confirmed the presence of such discs in post-AGB binary systems.
Many post-AGB stars have a near-IR dust excess in their spectral energy distribution (SED) that can be explained by dust in the proximity of the central binary system. The observed dust excess is a
clear signature of dust residing in a circumbinary disc, close to the system (De Ruyter et al. 2006; Deroo et al. 2006, 2007; Kamath et al. 2014, 2015). The compact nature of the infrared dust excess
has also been confirmed through interferometric studies (Bujarrabal et al. 2013; Hillen et al. 2013, 2016; Kluska et al. 2018). Additionally, Hillen et al. (2016) and Kluska et al. (2018) identified
a flux excess at the location of the companion in the reconstructed interferometric image of post-AGB binary IRAS 08544−4431. This flux excess is too large to originate from the companion, and most
likely stems from an accretion disc around the companion.
Another commonly observed phenomenon in post-AGB binaries is a high-velocity outflow or jet (Gorlova et al. 2012). Optical spectra of these objects show a blue-shifted absorption feature in the
Balmer lines during superior conjunction when the companion star is located between the post-AGB star and the observer (Gorlova et al. 2012, 2015), as can be seen in Fig. A.1. The absorption feature
in the Balmer lines is interpreted in terms of a jet launched from the vicinity of the companion that scatters the continuum light from the post-AGB star travelling towards the observer during this
phase in the binary orbit (Gorlova et al. 2012). Due to the orbital motion of the binary, the photospheric light of the post-AGB star shines through various parts of the jet, providing a tomography
of the jet. Hence, the orbital-phase dependent variations in the Balmer lines of these jet-creating post-AGB binaries contain an abundance of information about the jet and the binary system (Bollen
et al. 2017, 2019).
The jets are likely launched by an accretion disc around the companion. An unknown component in these jet-creating post-AGB binaries is the source feeding the circum-companion accretion disc that
launches the jet. Direct observations of the mass-transfer to the circum-companion disc do not exist. The two plausible sources are the post-AGB star, which could transfer mass via the first
Lagrangian point (L1) to the companion, and the re-accretion from the circumbinary disc around the system. While this mass transfer has not yet been observed directly, we observe refractory element
depletion in the atmosphere of post-AGB stars in binary systems (Waters et al. 1992; Van Winckel et al. 1995; Gezer et al. 2015; Kamath & Van Winckel 2019). It has been suggested that this depletion
pattern is caused by re-accretion of circumbinary gas, which is depleted of refractory elements by the formation of dust in the disc. Oomen et al. (2019) modelled this depletion pattern by
implementing the re-accretion of metal-poor gas in their evolutionary models obtained using the Modules for Experiments in Stellar Astrophysics (MESA) code. They compared these models with 58
observed post-AGB stars and found that initial mass-accretion rates must be greater than 3×10^−7M[⊙]yr^−1 in order to obtain the observed depletion patterns.
In our previous study (Bollen et al. 2017) we used the time series of Hα line profiles to show that jets in post-AGB binaries are wide and can reach velocities of ∼700km s^−1. These velocities are
of the order of the escape velocity of a MS star, pointing to the nature of the companion. In our recent study (Bollen et al. 2019) we created a more sophisticated spatio-kinematic model for the
jets, from which we determined their geometry, velocity, and scaled density structure.
In this paper, we fully exploit the potential of the tomography of the jet from the first four Balmer lines: Hα, Hβ, Hγ, and Hδ. We compute a radiative transfer model of the jet, with the aim of
estimating the mass-loss rate of the jet. We do this in two main parts: (1) the spatio-kinematic modelling, as described by Bollen et al. (2019), and (2) the radiative transfer modelling of the jet.
Here, we focus on part 2 and the mass-ejection and -accretion rates. We use two well-sampled, jet-launching post-AGB binaries for our analysis: BD+46°442 (Gorlova et al. 2012; Bollen et al. 2017) and
IRAS 19135+3937 (Gorlova et al. 2015; Bollen et al. 2019). Both objects have been observed for the past ten years with the HERMES spectrograph mounted on the Mercator telescope, La Palma, Spain (
Raskin et al. 2011), providing a good amount of data covering the orbital phase of the binary.
The paper is organised as follows. We describe the methods of our spatio-kinematic modelling and radiative transfer modelling in Sect. 2. We present the results for BD+46°442 and IRAS 19135+3937 in
Sects. 3 an 4, respectively. We discuss these results in Sect. 5 and give a conclusion and summary in Sect. 6.
2. Methods
In this study, we expand on the spatio-kinematic model carried out in our previous work (Bollen et al. 2019) by adding new components in the jet structure and we include a new radiative transfer
model. Splitting the calculations into two parts, the spatio-kinematic modelling and the radiative transfer modelling, allows us to fit the jet structure and obtain the jet mass-loss rates. In the
following subsections, we give a short description of the spatio-kinematic modelling part of the fitting, including improvements of the technique pioneered by Bollen et al. (2019), followed by a
description of the new radiative transfer modelling.
2.1. Spatio-kinematic modelling of the jet
To obtain the geometry and kinematics of the jet, we follow the model-fitting routine used by Bollen et al. (2019). In brief, we create a spatio-kinematic model of the jet, from which we reproduce
the absorption features in the Hα line. The modelled lines are then fitted to the observations. To fit our model to the data, we use the emcee-package, which applies an MCMC algorithm (Foreman-Mackey
et al. 2013). This gives us the best-fitting parameters for the jet.
The model consists of three main components: the post-AGB star, the MS companion, and the jet. The location of the post-AGB star and the companion are determined for each orbital phase by the orbital
parameters listed in Table B.1. The jet in the model is a double cone, centred on the companion. The post-AGB star is approximated as a uniform flat disc facing the observer. We trace the light
travelling from the post-AGB star, along the line of sight towards the observer. When a ray from the post-AGB star goes through the jet, the amount of absorption by the jet is calculated. The
absorption is determined by the optical depth
$Δ τ ν ( s ) = c τ ρ sc ( s ) Δ s ,$(1)
with c[τ] the scaling parameter and Δs the length of the line element at position s(θ,z). The scaled density ρ[sc] in this model is dimensionless and is a function of the polar angle θ and height z
in the jet:
$ρ sc ( θ , z ) = ( θ θ out ) p ( z 1 AU ) − 2 .$(2)
Here p is the exponent, which is a free parameter in the model; θ[out] is the outer jet angle; and z is the height of the jet above the centre of the jet cone. Hence, we calculate the relative
density structure, which can then be scaled by the scaling parameter c[τ], in order to fit the synthetic spectra to the observations. By doing so the computations of optical depth are fast. The
absolute density of the jet is estimated in Sect. 2.2.
In this model we implement the same three jet configurations as in Bollen et al. (2019): a stellar jet, an X-wind, and a disc wind. The velocity profile used for the stellar jet and X-wind models is
defined as
$v ( θ ) = v out + ( v in − v out ) · f 1 ( θ ) ,$(3)
where v[out] and v[in] are the outer and inner velocities, and with
$f 1 ( θ ) = e − p v · f 2 ( θ ) − e − p v 1 − e − p v .$(4)
Here p[v] is a free parameter, and f[2] is defined as
$f 2 ( θ ) = | θ − θ cav θ out − θ cav | ,$(5)
where θ[out] is the outer jet angle and θ[cav] the cavity angle of the jet.
The velocity profile of the disc wind is dependent on the Keplerian velocity at the location in the disc from where the material is ejected. For the inner jet region between the jet cavity and the
inner jet angle (θ[cav]< θ< θ[in]), we have
$v ( θ ) = v in , cav + ( v in , sc − v in , cav ) · ( θ − θ cav θ in − θ cav ) p v ,$(6)
with v[in,cav] the jet velocity at the cavity angle (θ[cav]) and v[in,sc] the jet velocity at the inner boundary angle (v[in]). For the outer jet region (θ[in]< θ< θ[out]) the velocity is
defined as
$v M = v out , sc tan θ out .$(8)
We define the scaled inner velocity as v[in,sc]=v[M]⋅(tanθ[in])^−1/2 and the scaled outer velocity as v[out,sc]=c[v]⋅v[out]. The parameter v[out] is the outer jet velocity, which is equal
to the Keplerian velocity at the launching point in the disc. The scaling factor c[v] can have values between 0 and 1. Hence, the disc wind velocity is lower than or equal to the Keplerian velocity
from its launching point in the disc.
In the three jet configurations, we included two important updates. The first update is the ability to model a jet whose axis is tilted with respect to the direction perpendicular to the orbital
plane. As can be seen in Fig. 1, the absorption feature is not completely centred on the phase of superior conjunction, i.e. when the MS companion is between the post-AGB primary and the observer.
This can be explained by a tilt in the jet, causing the absorption feature to be observed later in the orbital phase. A tilted jet in the binary system would lead to a precessing motion of the jet.
This is not uncommon and has been previously observed in pre-planetary nebulae (Sahai et al. 2017; Yung et al. 2011). We implemented this jet tilt as an extra free parameter in our fitting routine.
Fig. 1.
Dynamic spectra for the Balmer lines of BD+46°442. Upper left: Hα, upper right: Hβ, lower left: Hγ, lower right: Hδ. The black dashed line indicates the phase of superior conjunction. The white
line indicates the radial velocity of the post-AGB star. The colour gradient represents the strength of the line at each phase.
The second update to our model presented in Bollen et al. (2019) is the introduction of a jet cavity for the X-wind and the disc wind configurations. In Bollen et al. (2019) we showed that the
density in the innermost region of the outflow is extremely low, thus barely contributing to the absorption. This is in agreement with the disc wind theory of Blandford & Payne (1982) and the X-wind
theory of Shu et al. (1994). According to these theories the disc material is launched at angles of 30° with respect to the jet axis, although farther from the launch point the angle can decrease
substantially due to magnetic collimation. In our model, we allow some flexibility for the cavity angle parameter by giving it a lower limit of 20°. We compare the new version of the spatio-kinematic
model with the older version, which does not include the cavity and tilt, during our analysis in Sects. 3.1 and 4.1.
2.2. Radiative transfer model of the jet
The spatio-kinematic model is used as input for the radiative transfer model. Hence, the geometry, velocity, and scaled density structure are fixed with the values estimated from the spatio-kinematic
model. By calculating the radiative transfer through the jet, the absolute jet densities can be determined, from which we can then calculate the jets’ mass-ejection rate. Here, we use the equivalent
width (EW) of the Balmer lines to fit the model to the observations. The fitting parameters are the absolute jet densities and temperatures instead of relative density differences throughout the jet.
Hence, the optical depth calculations become more CPU intensive.
2.2.1. Radiative transfer
In our radiative transfer code we assume thermodynamic equilibrium, and that the jet medium is isothermal. Hence, each line of sight through the jet to the disc of the star will have the same
temperature. Here we use the formal solution of the 1D radiative transfer equation, where the source function S[ν] is described by the Planck function B[ν] (see chapter 1 Rybicki & Lightman 1979).
For the incident intensity of the post-AGB star in the model we use a synthetic stellar spectrum from Coelho (2014), which is chosen based on the parameters of the post-AGB star.
Using the Boltzmann equation, and by expressing the Einstein coefficients in terms of the oscillator strength f[12], the absorption coefficient α[ν] can be written as
$α ν ( s ) = π e 2 m e c ϕ ν n l ( s ) f lu [ 1 − e − Δ E / k T ] ,$(9)
with n[l] and n[u] the densities in the lower and upper energy level, f[lu] the oscillator strength, and ΔE the energy difference between the upper and lower energy levels. Hence, the computation of
the intensity is dependent on the number density n[l], the temperature T, and the normalised line profile ϕ[ν]. This normalised line profile ϕ[ν] is described as a Doppler profile for Hβ, Hγ, and Hδ.
For Hα, we follow the description in Muzerolle et al. (2001), Kurosawa et al. (2006), and Kurosawa et al. (2011) instead. As Muzerolle et al. (2001) showed, the Stark broadening effect can become
significant in the optically thick Hα line. Hence, we describe the line profile of Hα with the Voigt profile:
$ϕ ( ν ) = 1 π 1 / 2 Δ ν D a π ∫ − ∞ ∞ e − y 2 ( Δ ν Δ ν D − y ) 2 + a 2 d y .$(10)
Here Δν[D] is the Doppler width of the line, a=Γ/4πΔν[D], and y=(ν−ν[0])/Δν[D] with ν[0] the line centre. The Doppler line width Δν[D] is a function of the thermal velocity v[D]:
$Δ ν D = ν 0 v D c = ν 0 c 2 k T m p .$(11)
We use the damping constant Γ described by Vernazza et al. (1973), which is given by the sum of the natural broadening, Van der Waals broadening, and the linear Stark broadening effects:
$Γ = C Rad + C VdW ( n HI 10 22 m − 3 ) ( T 5000 K ) 0.3 + C Stark ( n e 10 18 m − 3 ) 2 / 3 .$(12)
Here C[Rad], C[VdW], and C[Stark] are the broadening constants of the natural broadening, Van der Waals, and Stark broadening effects, respectively; n[HI] is the neutral hydrogen number density; and
n[e] is the electron number density. For the broadening constants, we use the values from Luttermoser & Johnson (1992): C[rad] = 8.2 × 10^−3 Å, C[VdW] = 5.5 × 10^−3 Å, and C[Stark] = 1.47 × 10^−2 Å.
2.2.2. Numerical integration of the radiative transfer equation
In our model we divide the light from the post-AGB star into N[r] rays. To compute the radiative transfer through the jet, we solve the 1D radiative transfer equation numerically. Hence, we iterate
over each grid point along each ray, as shown in Fig. 2. This ray is split into N[j] grid points between the point of entry and exit in the jet. The intensity at a grid point i along the ray is
computed as follows:
$I ν ( τ i ) = I ν ( τ i − 1 ) e − ( τ i − 1 − τ i ) + B ν ( τ i ) [ 1 − e − ( τ i − 1 − τ i ) ] .$(13)
Fig. 2.
Schematic representation of the radiative transfer calculations in the jet. The ray travelling from the star to the observer is split into N[j] grid points where it passes through the jet. Each
grid point has a density ρ and a velocity v. We iterate over each grid point in order to determine the resulting intensity along this line for each wavelength.
Hence, if we want to calculate the intensity through the whole line of sight through the jet, the observed intensity I[ν](τ[n]) will be
$I ν ( τ n ) = I ν ( τ 0 ) e − ( τ 0 − τ n ) + B ν ( τ n ) [ 1 − e − ( τ n − 1 − τ n ) ] + ∑ i = 1 n − 1 B ν ( τ i ) [ 1 − e − ( τ i − 1 − τ i ) ] e − ( τ i − τ n ) .$(14)
Since the ray has been divided into discrete intervals, the optical depth τ[i] will be calculated as
$τ i + 1 − τ i = ρ i κ ν , i Δ s = α ν , i Δ s ,$(15)
with κ[ν,i] the opacity. This procedure is iterated for each ray from the post-AGB star and each frequency ν. Hence, in general, the model consists of N[r] rays leaving the post-AGB surface, which
are divided in N[j] grid points and for which the intensity is computed for a total of N[ν] frequencies. As we are assuming that the jet is isothermal and the jet velocities significantly smaller
than the speed of light (v/c≪1), we do not need to compute the Planck function B[ν] for each grid point. However, this more general formulation will allow non-isothermal jet models to be computed
in the future.
2.2.3. Equivalent width as tracer of absorption
The photospheric light from the post-AGB star that travels through the jet will be scattered by the hydrogen atoms in the jet, causing the absorption features in the Balmer lines. To quantify this
scattering in our model and the observations we use the EW of the Balmer lines as fitting parameter. We do this for two main reasons. The first is that the EW of a line will be higher for stronger
extinction. Hence, the EW quantifies the amount of scattering by the jet. The EW is highly dependent on the level populations of hydrogen at the location where the line of sight passes through the
jet. In our model these level populations are determined by the local density and by the temperature of the jet at those locations.
Second, the ratio of EW between the four Balmer lines (Hα, Hβ, Hγ, and Hδ) is also dependent on the chosen jet temperature and density. This ratio can change dramatically when these two parameters
are changed. This makes the EW an ideal quantity in our fitting to find the absolute jet densities and temperatures.
3. Jet model for BD+46°442.
BD+46°442 is a jet-launching post-AGB binary system for which we obtained 36 spectra during one-and-a-half orbital cycles of the binary orbit of 140.82days (Van Winckel et al. 2009; Oomen et al.
2018). In this study we adopt the orbital parameters listed in Oomen et al. (2018, see Appendix B). The scattering by the jet is observable in the first four Balmer lines (Hα, Hβ, Hγ, and Hδ), hence
we focus on these line for our analysis. The Balmer lines are shown in Appendix C. The signal-to-noise ratio of the spectra lies between S/N=22 and S/N=60 in the Hα line, and drops to values
between S/N=12 and S/N=37 in the Hδ line. In Fig. 1 we present the dynamic Balmer line spectra for BD+46°442. In the dynamic spectra we plot the continuum-normalised spectra as a function of
orbital phase and interpolate between each of the spectra. In this way the orbital phase-dependent variations in the line become apparent.
3.1. Spatio-kinematic model of BD+46°442
We compare the quality of the fit for the three jet configurations through their reduced chi-square and Bayesian Information Criteria (BIC) values^1. The best-fitting model is the X-wind with a
reduced chi-square of $χ ν 2 =0.23$. A chi-square lower than unity indicates that the model is over-fitting the data. In our case, this is caused by overestimating the uncertainty on the data, which
is determined from the signal-to-noise ratio of the spectra (σ=(S/N)^−1) and the uncertainty in the emission feature of the synthetic spectra that is provided as input for the modelling. We impose
a χ^2 of unity for the best-fitting model and scale the χ^2 of the other models appropriately, in order to compare their relative difference. The scaled chi-square values are $χ stellar 2 =1.12$, $χ
X0wind 2 =1$, and $χ discwind 2 =1.17$. Hence, the X-wind configuration gives a slightly better fitting result compared to the other two configurations. This is also confirmed by the BIC values of
the three models. The X-wind has the lowest BIC and therefore fits the data best: BIC[stellar]−BIC[X-wind]=1007 and BIC[discwind]−BIC[X-wind]=1452. For this reason we use the best-fitting
parameters from the spatio-kinematic modelling of the X-wind for further calculations. We note, however, that the relative difference in χ^2 between the three model configurations is not significant,
and thus we conclude that the three model configurations fit the data equally well.
The best-fitting parameters of the model are tabulated in Table 1, and its model spectra are shown in the upper right panel of Fig. 3. The binary inclination for this model is about 50°. The jet has
a half-opening angle of 35°. The inner boundary angle θ[in]=29° is the polar angle in the jet along which the bulk of the mass will be ejected. The geometry of the binary system and the jet are
represented in Fig. 4. The material that is ejected in the inner regions of the jet reaches velocities up to 490km s^−1. These velocities are of the order of the escape velocity from the surface of
a MS star, confirming the nature of the companion. The velocities at the outer edges are lower at 41km s^−1. The radius of the post-AGB star in the best-fitting model is 27.2R[⊙] (0.127AU).
Fig. 3.
Interpolated observed and modelled dynamic spectra of the Hα line. The upper spectra are the observations (left) and model spectra (right) of BD+46°442. The lower spectra are the observations and
model spectra of IRAS 19135+3937. The colours represent the normalised flux.
Fig. 4.
Geometry of the binary system and the jet of BD+46°442 at superior conjunction, when the post-AGB star is directly behind the jet, as viewed by the observer. In all three plots the full orange
circle denotes the post-AGB star. The orange star indicates the location of the companion. The red cross is the location of the centre of mass of the binary system. The radius of the post-AGB star
is to scale. The jet is represented in blue, and the colour indicates the relative density of the jet (see colour scale). The dashed black line is the jet axis and the dotted white lines are the
inner jet edges. The jet cavity is the inner region of the jet. Upper left panel: system viewed along the orbital plane from a direction perpendicular to the line of sight to the observer. Right
panel: jet viewed from an angle perpendicular to the X-axis. The post-AGB star is located behind the companion and its jet in this image. The jet tilt is noticeable from this angle. Lower left
panel: binary system viewed from above, perpendicular to the orbital plane. The grey dashed lines represent the Roche radii of the two binary components and the full black line shows the Roche
Table 1.
Best-fitting jet configuration and parameters for the spatio-kinematic model of BD+46°442 and IRAS 19135+3937.
Additionally, we implemented a jet tilt and jet cavity in this model (see Sect. 2.1). The resulting model has a cavity angle of 20°. The jet tilt for BD+46°442 is relatively large with ϕ[tilt]=15°.
The effect of this tilt is noticeable in the resulting model spectra. The jet absorption feature is not centred at orbital phase 0.5, but at a later phase between 0.55 and 0.6. To evaluate the
performance of the new modified spatio-kinematic model that includes a jet cavity and tilt, we do an additional model fitting with the old version that does not include these features and compare the
two model fitting results. The χ^2 of the new model ($χ new 2 =1$) is lower than the old version ($χ old 2 =1.35$). If we account for the two extra parameters in the new model by comparing the BIC
instead, we get a difference in BIC between the two results of ΔBIC=2980, with the lower BIC for the new model, implying a better fit for this model. This demonstrates that the implemented jet
cavity and jet tilt improve the spatio-kinematic model for this object.
3.2. Radiative transfer model of BD+46°442
We apply the radiative transfer model for BD+46°442 to compute the amount of absorption caused by the jet that blocks the light from the post-AGB star. The setup for the radiative transfer model is
similar to that described in Sect. 2.1. For each ray of light the background intensity I[ν,0] is the background spectrum given in Sect. 3.1. For each orbital phase we calculate the amount of
absorption by the jet for each ray. Additionally, the output from the MCMC fitting routine in Sect. 3.1 (i.e. the spatio-kinematic model of BD+46°442) is used as input in our the radiative transfer
model. Hence, there are only two fitting parameters: jet number density n[j] and jet temperature T[j]. We assume the jet temperature to be uniform for the segment of the jet through which the rays
travel. The jet number density n[j] is defined as the number density at the inner edge of the jet θ[in] at a height of 1AU. In the case of BD+46°442 the best-fitting jet configuration is an X-wind.
Hence, the density in the jet at each grid point can be determined from the density profile of the jet that was used for the X-wind in the spatio-kinematic model. This density profile is defined as
$n ( θ , z ) = n j ( θ θ in ) p z − 2 ,$(16)
with p either the exponent for the inner-jet region p[in] or the outer-jet region p[out], which was determined in Sect. 3.1.
We use a grid of jet temperatures between 4400 K and 6000 K in steps of 100 K and the logarithm of the jet densities $log 10 ( n j m − 3 )$ between 14 and 18 in logarithmic steps of 0.1. This makes a
total of 697 grid calculations.
As described in Sect. 2.2.3, the EW of the Balmer lines represents the amount of absorption by the jet. In order to find the best-fitting model for our grid of temperatures and densities, we fit the
EW of the Balmer lines in the model to those of the observed Balmer lines for each spectra.
We fit the model to the data with a χ^2–goodness-of-fit test. Hence, the reduced $χ ν 2$ value for a model will be
$χ ν 2 = 1 ν ( ∑ i N o [ ( EW i o , H α − EW i m , H α ) 2 ( σ i H α ) 2 ] + ∑ i N o [ ( EW i o , H β − EW i m , H β ) 2 ( σ i H β ) 2 ] + ∑ i N o [ ( EW i o , H γ − EW i m , H γ ) 2 ( σ i H γ ) 2 ]
+ ∑ i N o [ ( EW i o , H δ − EW i m , H δ ) 2 ( σ i H δ ) 2 ] ) ,$(17)
with ν the degrees of freedom, N[o] the number of spectra (36 for BD+46°442, and 22 for IRAS 19135+3937), EW^o and EW^m the equivalent width of the observed and modelled line, and σ the standard
deviation determined by the signal-to-noise ratio of the spectra.
The resulting 2D $χ ν 2$ distribution for jet densities and temperatures is shown in Fig. 5 and the corresponding EW of the model in Fig. 7. The best-fitting model has a jet density of n[j]=
2.5×10^16m^−3 and jet temperature of T[j]=5600K. In order to determine the uncertainties on the fitting parameters, we convert the 2D chi-squared distribution into a probability
$P 2 D ( n j , T j ) ∝ exp ( − χ 2 / 2 ) .$(18)
Fig. 5.
Two-dimensional reduced chi-squared distribution for the grid of jet densities n[j] and temperatures T[j] for the fitting of BD+46°442. The white dot gives the location of the minimum reduced
chi-square value $χ ν,min 2 =43.9$. The contours represent the 1σ, 2σ, and 3σ intervals.
Fig. 7.
Equivalent width of the absorption by the jet for BD+46°442 as a function of orbital phase. The four panels show the EW in Hα (top left), Hβ (top right), Hγ (bottom left), and Hδ (bottom right).
The circles are the measured EWs of the absorption feature by the jet in the observations with their respective errors. The full line is the EW of the absorption feature for the best-fitting model.
The marginalised probability distribution can be found for each parameter via
$P 1 D ( n j ) = ∑ T j P 2 D ( n j , T j ) ,$(19)
$P 1 D ( T j ) = ∑ n j P 2 D ( n j , T j ) .$(20)
From these distributions we can determine a mean and standard deviation. This gives us a jet density and temperature of $n j = 2 . 5 − 0.7 + 0.9 × 10 16 m − 3$ and T[j]=5600±80K.^2
4. Jet model for IRAS 19135+3937
The second post-AGB binary system that we model is IRAS 19135+3937. For this object, we obtained 22 spectra during a full cycle with an orbital period of 126.97days (Van Winckel et al. 2009; Oomen
et al. 2018). As for BD+46°442, we adopt the orbital parameters of this system found by Oomen et al. (2018). The individual and dynamic spectra are shown in Appendix C and Fig. 6, respectively. The
signal-to-noise ratio for these spectra lie between S/N=26 and S/N=49 in Hα. In Hδ the signal-to-noise ratio ranges between S/N=5 and S/N=20.
Fig. 6.
Dynamic spectra for the Balmer lines of IRAS 19135+3937. Upper left: Hα, upper right: Hβ, lower left: Hγ, lower right: Hδ. The black dashed line indicated the phase of superior conjunction. The
white line indicates the radial velocity of the post-AGB star. The colour gradient represents the strength of the line at each phase.
4.1. Spatio-kinematic model of IRAS 19135+3937
The spatio-kinematic structure of IRAS 19135+3937 was modelled by Bollen et al. (2019). Here, we update it with the addition of the jet tilt and the jet cavity. The best-fitting jet configuration is
a disc wind, but all three models produce similar fits, as was the case in the fitting of Bollen et al. (2019) (BIC[stellar]−BIC[discwind]=87 and BIC[X-wind]−BIC[discwind]=116). The
best-fitting model parameters are tabulated in Table 1. This model has an inclination angle of i=72° for the binary system and a jet angle of θ[out]=67°. These angles are about 7° lower than
those found in the model fitting of Bollen et al. (2019). The jet reaches velocities up to 640km s^−1. At its edges the jet velocity is v[out]⋅c[v]=3km s^−1. The post-AGB star in our model has
a radius of 22.5 R[⊙] (0.105 AU), which is about 30% smaller than that found by Bollen et al. (2019). The geometry of the binary system and the jet are shown in Fig. 8. We compare the quality of the
fit for the best-fitting model of Bollen et al. (2019) with the best-fitting model in this work. The BIC for the model fitting in our work is significantly lower than the BIC found by Bollen et al.
(2019) (ΔBIC[old]−BIC[new]=4190). This shows that the jet tilt and jet cavity significantly improve the model fitting. The jet tilt for this object is relatively small (ϕ[tilt]=5.7°). This is
expected since there is no noticeable lag in the absorption feature in the spectra. The jet for this object has a significant jet cavity of θ[cav]=24°.
Fig. 8.
Similar to Fig. 4, but for IRAS 19135+3937.
4.2. Radiative transfer model of IRAS 19135+3937
We apply the radiative transfer model for IRAS 19135+3937. The best-fitting spatio-kinematic model found in Sect. 4.1 for IRAS 19135+3937 is a disc wind. We use this spatio-kinematic model and its
model parameters as input to calculate the radiative transfer in the jet for a grid of jet densities and temperatures. The density profile for the disc wind is similar to the X-wind (see Eq. (16)).
The grid of temperatures and densities is the same as for BD+46°442.
The 2D χ^2 distribution for the fitting is shown in Fig. 9 and the associated EW of the model is shown in Fig. 10. We calculate the marginalised probability distributions for n[j] and T[j], given by
Eqs. (19) and (20), from which we can determine the mean and standard deviations. This gives a jet density of $n j = 1 . 0 − 0.4 + 0.5 × 10 16 m − 3$ and jet temperature of T[j]=5330±180K.
Fig. 9.
Two-dimensional reduced chi-square distribution for the grid of jet densities n[j] and temperatures T[j] for the fitting of IRAS 19135+3937. The white dot gives the location of the minimum reduced
chi-square value $χ ν,min 2 =60.8$. The contours represent the 1σ, 2σ, and 3σ intervals.
Fig. 10.
As for Fig. 7, but for IRAS 19135+3937.
5. Discussion
By fitting the spatio-kinematic structure of the jet and estimating its density structure, we obtained crucial information about the jet. We can now estimate how much mass is being ejected by the
jet, which is essential for understanding the mass accretion onto the companion and determining the source feeding this accretion.
5.1. Jet mass-loss rate
The velocity and density structure of the jets, calculated by fitting the models, is used to estimate the mass-ejection rate. The mass-ejection rate of the jet for both systems is estimated by
calculating how much mass passes through the jet at a height of 1AU from the launch point.
In the case of BD+46°442, the velocity and density profiles are determined by the X-wind configuration (see Sect. 4.1). We calculate the density at a height of z=1AU using Eq. (3). The
mass-ejection rate can be found by the integral
$M ˙ jet = ∫ 0 R ρ ( r ) · v ( r ) · 2 π r d r ,$(21)
with r=1AU⋅tanθ. The velocity at each location in the jet is defined by Eq. (3). In this way we find a mass-ejection rate of $M ˙ jet = 7 − 2 + 3 × 10 − 7 M ⊙ yr − 1$ for BD+46°442.
For IRAS 19135+3937, the data is best fit by a disc wind model (see Sect. 3.1), whose velocity profile is described by Eqs. (6) and (7) for the inner and outer jet regions, respectively. From Eq. (21
), we find a mass-ejection rate of $M ˙ jet = 2 . 0 − 0.7 + 2 × 10 − 6 M ⊙ yr − 1$. We list these values as lower and upper limits in Table 2.
Table 2.
Derived mass-accretion and mass-loss rates in the two binary systems.
5.2. Ejection efficiency
By assuming an ejection efficiency Ṁ[jet]/Ṁ[acc], we can link the jet mass-loss rate (Ṁ[jet]) to the mass-accretion rate (Ṁ[acc]), and hence obtain a range of possible accretion rates onto the
circum-companion disc. By doing so, we can assess if the mass transfer from either the post-AGB or the circumbinary disc, or both can contribute enough mass to the circum-companion disc in order to
sustain the observed jet mass-loss rates.
Ejection efficiency has not been determined for post-AGB binary systems, but the same theory (i.e., magneto centrifugal driving) applies to YSOs, which have been studied extensively (Ferreira et al.
2007, and references therein). Moreover, discs in YSOs are comparable in size to those in our post-AGB binary systems (Hillen et al. 2017) and their mass-ejection rates are similar to the rates
estimated for jets in post-AGB systems (10^−8−10^−4M[⊙]yr^−1; Calvet et al. 1998; Ferreira et al. 2007). Current estimates of ejectioEWn efficiencies for T Tauri stars are in the range Ṁ[jet]/Ṁ
[acc] ∼ 0.01 − 0.1 (Cabrit et al. 2007; Cabrit 2009; Nisini et al. 2018). This said, the spread in these values is large and some studies have even found ratios higher than 0.3 (Calvet et al. 1998;
Ferreira et al. 2006; Nisini et al. 2018).
In this work we adopt a wide range of ejection efficiencies for both of our post-AGB binary objects, according to the typical ranges found through observations of YSOs: 0.01 < Ṁ[jet]/Ṁ[accr] < 0.3.
Under these assumptions, and by using the jet mass-loss rates from Sect. 5.1, the accretion rates onto the two companions are (see Table 2)
${ 1.7 × 10 − 6 M ⊙ yr − 1 < M ˙ acc , BD < 1 × 10 − 4 M ⊙ yr − 1 5 × 10 − 6 M ⊙ yr − 1 < M ˙ acc , IRAS < 4 × 10 − 4 M ⊙ yr − 1 .$(22)
Next, we use these results to look at possible sources feeding the accretion.
5.3. Sources of accretion onto the companion
There are two possible sources of mass transfer onto the companion. The first is the post-AGB primary itself, moving mass via the first Lagrange point L1 and creating a circum-companion accretion
disc. The second possibility is re-accretion of gas from the circumbinary discs (Van Winckel 2003; De Ruyter et al. 2006; Dermine et al. 2013).
5.3.1. Scenario 1: Mass transfer from the post-AGB star to the companion
We first assume that the accretion onto the companion is due to mass transfer from the primary via L1. To estimate the mass-transfer rate by the post-AGB star, we follow the prescription in Ritter
$M ˙ 1 = 2 π e ( k B m H μ 1 , ph T 1 ) 3 / 2 R 1 3 G M 1 ρ 1 , ph F ( q ) .$(23)
Here e is Euler’s number; k[B] is the Boltzmann constant; G is the gravitational constant; m[H] is the hydrogen mass; and μ[1,ph], T[1], R[1], M[1], and ρ[1,ph] are the mean molecular weight, the
temperature, the radius, and the mass of the primary star, respectively. The parameter F(q) is defined as
$F ( q ) = [ ( g ( q ) − ( 1 + q ) ) g ( q ) ] − 1 / 2 ( R 1 , RL a ) − 3 ,$(24)
with q=M[2]/M[1] the mass ratio, R[1,RL] the Roche lobe radius of the post-AGB star, and a the binary separation. In Eq. (24) g(q) is defined as
$g ( q ) = q x 3 + ( 1 − x ) − 3 ,$(25)
with x the distance between the mass centre of the post-AGB star and L1 in terms of the binary separation (a).
We assume a neutral cosmic mixture which implies a mean molecular weight of μ[1,ph]=0.8. We note that this prescription is based on Roche lobe overflow for a star filling its Roche lobe,
transferring mass to the companion. In this case the radius of the star R[1] is equal to the Roche radius R[1,RL]. However, from our results in in the spatio-kinematic modelling, the post-AGB stars
in these two systems do not fill their Roche lobes, as is shown in the geometrical representation of the systems in Figs. 4 and 8. The observations also support this result since a star that could
fill at least 80% of its Roche lobe would show ellipsoidal variations in its light curve (Wilson & Sofia 1976). The light curves of BD+46°442 and IRAS 19135+3937 do not show these variations (Bollen
et al. 2019). Hence, we extrapolate Eq. (23) by using the radius of the primary R[1] instead of the Roche radius R[1,RL].
Additionally, since we find that the post-AGB star does not fill its Roche lobe, the mass transfer would occur via a mechanism that is less efficient and weaker than RLOF. A few other possibilities
being wind-RLOF and Bondi-Hoyle-Lyttleton (BHL) accretion. In the case of wind-RLOF, the stellar wind will be focused to the orbital plane and most of the mass will be lost through the L1 point,
towards the secondary (Mohamed et al. 2007). The mass-transfer efficiency of wind-RLOF would vary between a few percent and 50% (de Val-Borro et al. 2009; Abate et al. 2013). In the case of BHL
accretion, the accretion efficiency would be significantly lower at about 1−10% (Abate et al. 2013; Mohamed & Podsiadlowski 2012). Hence, the upper limit for mass transfer through wind-RLOF would be
lower than that of RLOF. The upper limit for BHL accretion would be several orders of magnitude lower. Hence, by equating the mass-transfer from the post-AGB star using Eq. (23), we can get a good
estimation for an upper limit of mass-transfer from the post-AGB star to the companion.
To determine the photospheric density, we use the MESA stellar evolution code (MESA Paxton et al. 2011, 2013, 2015, 2018, 2019) to calculate the evolution of a post-AGB star with the correct mass. We
subsequently use the photospheric density from the MESA output at the time-step when the post-AGB star is the same size as our star. This gives us a value of 10^−10gcm^−2 for the photospheric
density of the star (ρ[1,ph]). The mass of the post-AGB star M[1] is set to 0.6M[⊙], which is a typical value for these objects. The mass of the companion M[2] is determined from the mass function
f(M[1]) and the inclination of the binary system found in the spatio-kinematic model fitting:
$f ( M 1 ) = M 2 3 sin 3 i ( M 1 + M 2 ) 2 .$(26)
This gives a mass of 1.07M[⊙] for the companion star of BD+46°442, resulting in a mass ratio of q=1.79. We find the Roche radius of the post-AGB star using the formula by Eggleton (1983):
$R RL , 1 = a · 0.49 q − 2 / 3 0.6 q − 2 / 3 + ln ( 1 + q − 1 / 3 ) .$(27)
Using these values for Eq. (23), we find the mass-transfer rate from the post-AGB star to the companion in BD+46°442 to be Ṁ[1] = 3.5 × 10^−7 M[⊙] yr^−1. This value is less than the lower limit of
the jet mass-loss rate in BD+46°442 (see Table 2). Moreover, in Sect. 5.1, we found the mass-accretion rate to be in the range of 1.7 × 10^−6 M[⊙] yr^−1 < Ṁ[accr] < 1 × 10^−4 M[⊙] yr^−1, which is
about five times the theoretical value for the upper-limit of mass-transfer from the post-AGB star to the companion. Hence, it is unlikely that the post-AGB star can contribute enough mass to the
circum-companion disc to sustain the observed jet outflow.
We conduct a similar analysis for IRAS 19135+3937 and we come to a similar conclusion. The mass of the companion is M[2]=0.46M[⊙] and the mass ratio would be q=0.77. We use the same values for μ
[1,ph] and ρ[1,ph], giving us a mass-transfer rate of Ṁ[RLOF] = 1.7 × 10^−7 M[⊙] yr^−1. Hence, this upper limit for mass-transfer from the post-AGB star for IRAS 19135+3937 is also lower than the
lower limit for the jet mass-loss rate of IRAS 19135+3937 and thus too low to match a measured mass-accretion rate in the range 4 × 10^−6 M[⊙] yr^−1 < Ṁ[accr] < 4 × 10^−4 M[⊙] yr^−1.
We conclude that it is unlikely that the mass transfer from the post-AGB star alone is responsible for feeding the accretion disc around the companion.
5.3.2. Scenario 2: Re-accretion from the circumbinary disc
Here, we consider the possibility of mass accretion from the circumbinary disc onto the central binary system. In order to give an estimate of re-accretion by the circumbinary disc of BD+46°442 and
IRAS 19135+3937, we use the mass-loss equation by Rafikov (2016), that defines the mass loss from the disc to the central binary as a function of time,
$M ˙ disc ( t ) = M 0 , disc t 0 ( 1 + t 2 t 0 ) − 3 / 2 ,$(28)
where M[0,disc] is the initial disc mass. These circumbinary discs have average disc masses of M[0,disc]=10^−2M[⊙] (Gielen et al. 2007; Bujarrabal et al. 2013, 2018; Hillen et al. 2017; Kluska
et al. 2018). Bujarrabal et al. (2013, 2018) derived disc masses of circumbinary discs of post-AGB binary systems ranging from 6×10^−4 to 5×10^−2M[⊙]. We use this range to estimate the mass-loss
rate from the disc.
The initial viscous time of the disc t[0] is defined by Rafikov (2016) as
$t 0 = 4 3 μ k B a b α [ 4 π σ ( G M b ) 2 ζ L 1 ] 1 / 4 ( η I L , ) 2 ,$(29)
where μ is the mean molecular weight, a[b] is the binary separation, α is the viscosity parameter, σ is Stefan-Boltzmann constant, L[1] is the luminosity of the post-AGB star, ζ is a constant factor
that accounts for the starlight that is intercepted by the disc surface at a grazing incidence angle, η is the ratio of angular momentum of the disc compared to that of the central binary, and I[L]
characterises the spatial distribution of the angular momentum in the disc. We fix several values at the same values as Rafikov (2016) and Oomen et al. (2019): μ=2m[p], α=0.01, ζ=0.1, and I[L]
=1, with m[p] the mass of a proton. The luminosity L[1] of BD+46°442 and IRAS 19135+3937 are $2100 − 800 + 1500 L ⊙$ and $2100 − 400 + 500 L ⊙$, respectively (Oomen et al. 2019). The angular
momentum of the circumbinary disc is typically of the order of the angular momentum of the central binary system (Bujarrabal et al. 2018; Izzard & Jermyn 2018). We set a range of η between 1.4 and 2,
where a value of 1.4 is appropriate for a disc with the bulk of its mass located at the inner disc rim.
Using Eq. (28) and assuming t=0, we can calculate a range of possible mass-loss rates by the disc. We find a range of 3 × 10^−8 M[⊙] yr^−1 < Ṁ[disc] < 6 × 10^−6 M[⊙] yr^−1 for BD+46°442, while in
the case of IRAS 19135+3937, we find that the re-accretion rate from the circumbinary disc is in the range of 5 × 10^−8 M[⊙] yr^−1 < Ṁ[disc] < 9 × 10^−6 M[⊙] yr^−1 (Table 2). When the disc matter
falls onto the central binary, it will be accreted by both the post-AGB star and the companion. Hence, the mass lost by the circumbinary disc should be twice the mass accreted by the circum-companion
accretion disc. If we compare the mass-accretion rate for BD+46°442 and IRAS 19135+3937 with the estimated mass-loss rate by the circumbinary disc, it shows that re-accretion from the circumbinary
disc is a plausible mechanism for the formation of the jet.
We note that only the higher estimates for mass-accretion rates from the circumbinary disc can explain our observationally derived rates. Hence, this would imply that for these two systems the disc
masses are at the high end of the range (M[0,disc]> 10^−2M[⊙]) and that we are observing the early stages of the re-accretion by the circumbinary disc. Nevertheless, our findings are in good
agreement with Oomen et al. (2019), who estimated that accretion rates should be higher than 3×10^−7M[⊙]yr^−1 and that disc masses should be higher than ∼10^−2M[⊙].
6. Summary and conclusion
In this paper our aim was to determine mass-transfer rates of jet-creating post-AGB binaries. We fully exploited the time series of high-resolution optical spectra from these binary systems. We
presented a new radiative transfer model for these jets and applied this model to reproduce the Balmer lines of two well-sampled post-AGB binary systems: BD+46°442 and IRAS 19135+3937. With this
model we were able to study the mass-loss rate of the jet and mass-accretion rate onto the companion, and to constrain the source of the accretion in these systems: the post-AGB star or the
circumbinary disc. Additionally, we expanded the spatio-kinematic model from Bollen et al. (2019). Our main conclusions can be summarised as follows:
1. We successfully reproduced the observed absorption feature in the Hα line profiles of our test sources with our improved spatio-kinematic model of the jet. By doing so, we obtained the kinematics
and 3D morphology of the jet. The implementation of the jet tilt in the model reproduced the observed lag of the absorption feature in the Balmer lines. This tilt is significant for both objects,
with values of 15° and 6° for BD+46°442 and IRAS 19135+3937, respectively. Likewise, the new jet cavity in the model improves the jet representation, as was suggested by Bollen et al. (2019).
2. We showed that we can acquire a 3D jet morphology by modelling the amount of absorption in the Hα lines from our spatio-kinematic model of the jet. By combining the results of the
spatio-kinematic and radiative transfer modelling, we found the crucial parameters to calculate jet mass-loss rates: jet velocity and geometry from the spatio-kinematic model and jet density
structure from the radiative transfer model.
3. We computed the mass-loss rate of the jet by combining the results of our spatio-kinematic model and radiative transfer model. The computed mass-loss rates for the jets in BD+46°442 and IRAS
19135+3937 are in the ranges (5−10)×10^−7M[⊙]yr^−1 and (1.3−4)×10^−6M[⊙]yr^−1, respectively, as shown in Table 2. These mass-ejection rates are comparable to the mass-ejection rates for
the jets in planetary nebulae and pre-planetary nebulae (Tocknell et al. 2014; Tafoya et al. 2019). Tocknell et al. (2014) found the mass-ejection rates to be 1−3×10^−7M[⊙]yr^−1 and 8.8×10^
−7M[⊙]yr^−1 for the Necklace and NGC 6778, respectively. These mass-ejection rates imply correspondingly high mass-accretion rates onto the companion that range between 1.7×10^−6M[⊙]yr^−1
and 1×10^−4M[⊙]yr^−1 for BD+46°442 and 4×10^−6M[⊙]yr^−1 and 4×10^−4M[⊙]yr^−1 for IRAS 19135+3937.
4. By determining the jet mass-loss rate we added an additional constraint on the nature of the accretion onto these systems. While the uncertainties are high, the circumbinary disc is the preferred
source of accretion feeding the jet rather than the post-AGB star: the accretion rates from the post-AGB stars are too low to justify the observed jet mass-loss rates. We note, however, that the
simultaneous accretion from the circumbinary disc and from the post-AGB star cannot be ruled out. Re-accretion from the circumbinary disc also naturally explains the abundance pattern of the
post-AGB star and is in agreement with the study by Oomen et al. (2019), who showed that high re-accretion rates (> 3×10^−7M[⊙]yr^−1) are needed in order to reproduce the observed depletion
patterns of post-AGB stars. These high re-accretion rates from the circumbinary disc can prolong the lifetime of the post-AGB star, and can thus have an important impact on the evolution of these
objects, provided that the disc can sustain the mass loss.
In our future studies, we will perform a comprehensive analysis of the whole diverse sample of jet-creating post-AGB binary systems by using both the spatio-kinematic and radiative transfer models.
The observational properties of these binaries and their jets are non-homogeneous. Hence, by analysing the whole sample, we aim to obtain strong constraints on the source of the accretion and
identify correlations between mass accretion, depletion patterns, and the orbital properties of post-AGB binaries.
This work was performed on the OzSTAR national facility at Swinburne University of Technology. OzSTAR is funded by Swinburne University of Technology and the National Collaborative Research
Infrastructure Strategy (NCRIS). DK acknowledges the support of the Australian Research Council (ARC) Discovery Early Career Research Award (DECRA) grant (95213534). HVW acknowledges support from the
Research Council of the KU Leuven under grant number C14/17/082. The observations presented in this study are obtained with the HERMES spectrograph on the Mercator Telescope, which is supported by
the Research Foundation - Flanders (FWO), Belgium, the Research Council of KU Leuven, Belgium, the Fonds National de la Recherche Scientifique (F.R.S.-FNRS), Belgium, the Royal Observatory of
Belgium, the Observatoire de Genève, Switzerland and the Thüringer Landessternwarte Tautenburg, Germany.
Appendix A: The absorption feature in the Hα line
Fig. A.1.
Hα line of BD+46°442 at two different phases in the orbital period. The Hα line displays a double-peaked emission feature with a central absorption feature during inferior conjunction (black solid
line), when the post-AGB star is between the jet and the observer. During superior conjunction, when the jet is between the post-AGB star and the observer, we observe a blue-shifted absorption
feature in the Hα line (blue dotted line).
Appendix B: Orbital parameters of BD+46°442 and IRAS 19135+3937
Appendix C: Balmer lines of BD+46°442 and IRAS 19135+3937
Fig. C.1.
Balmer lines of BD+46°442 as a function of wavelength. The spectra are given in arbitrary units and offset according to their orbital phase. Numbers on the right vertical axis indicate the orbital
phase of the spectra (from 0 to 100). The dashed vertical lines represent the centre of each Balmer line.
Fig. C.2.
Similar to Fig. C.1, but for IRAS 19135+3937.
All Tables
Table 1.
Best-fitting jet configuration and parameters for the spatio-kinematic model of BD+46°442 and IRAS 19135+3937.
Table 2.
Derived mass-accretion and mass-loss rates in the two binary systems.
All Figures
Fig. 1.
Dynamic spectra for the Balmer lines of BD+46°442. Upper left: Hα, upper right: Hβ, lower left: Hγ, lower right: Hδ. The black dashed line indicates the phase of superior conjunction. The white
line indicates the radial velocity of the post-AGB star. The colour gradient represents the strength of the line at each phase.
In the text
Fig. 2.
Schematic representation of the radiative transfer calculations in the jet. The ray travelling from the star to the observer is split into N[j] grid points where it passes through the jet. Each
grid point has a density ρ and a velocity v. We iterate over each grid point in order to determine the resulting intensity along this line for each wavelength.
In the text
Fig. 3.
Interpolated observed and modelled dynamic spectra of the Hα line. The upper spectra are the observations (left) and model spectra (right) of BD+46°442. The lower spectra are the observations and
model spectra of IRAS 19135+3937. The colours represent the normalised flux.
In the text
Fig. 4.
Geometry of the binary system and the jet of BD+46°442 at superior conjunction, when the post-AGB star is directly behind the jet, as viewed by the observer. In all three plots the full orange
circle denotes the post-AGB star. The orange star indicates the location of the companion. The red cross is the location of the centre of mass of the binary system. The radius of the post-AGB star
is to scale. The jet is represented in blue, and the colour indicates the relative density of the jet (see colour scale). The dashed black line is the jet axis and the dotted white lines are the
inner jet edges. The jet cavity is the inner region of the jet. Upper left panel: system viewed along the orbital plane from a direction perpendicular to the line of sight to the observer. Right
panel: jet viewed from an angle perpendicular to the X-axis. The post-AGB star is located behind the companion and its jet in this image. The jet tilt is noticeable from this angle. Lower left
panel: binary system viewed from above, perpendicular to the orbital plane. The grey dashed lines represent the Roche radii of the two binary components and the full black line shows the Roche
In the text
Fig. 5.
Two-dimensional reduced chi-squared distribution for the grid of jet densities n[j] and temperatures T[j] for the fitting of BD+46°442. The white dot gives the location of the minimum reduced
chi-square value $χ ν,min 2 =43.9$. The contours represent the 1σ, 2σ, and 3σ intervals.
In the text
Fig. 7.
Equivalent width of the absorption by the jet for BD+46°442 as a function of orbital phase. The four panels show the EW in Hα (top left), Hβ (top right), Hγ (bottom left), and Hδ (bottom right).
The circles are the measured EWs of the absorption feature by the jet in the observations with their respective errors. The full line is the EW of the absorption feature for the best-fitting model.
In the text
Fig. 6.
Dynamic spectra for the Balmer lines of IRAS 19135+3937. Upper left: Hα, upper right: Hβ, lower left: Hγ, lower right: Hδ. The black dashed line indicated the phase of superior conjunction. The
white line indicates the radial velocity of the post-AGB star. The colour gradient represents the strength of the line at each phase.
In the text
Fig. 8.
Similar to Fig. 4, but for IRAS 19135+3937.
In the text
Fig. 9.
Two-dimensional reduced chi-square distribution for the grid of jet densities n[j] and temperatures T[j] for the fitting of IRAS 19135+3937. The white dot gives the location of the minimum reduced
chi-square value $χ ν,min 2 =60.8$. The contours represent the 1σ, 2σ, and 3σ intervals.
In the text
Fig. 10.
As for Fig. 7, but for IRAS 19135+3937.
In the text
Fig. A.1.
Hα line of BD+46°442 at two different phases in the orbital period. The Hα line displays a double-peaked emission feature with a central absorption feature during inferior conjunction (black solid
line), when the post-AGB star is between the jet and the observer. During superior conjunction, when the jet is between the post-AGB star and the observer, we observe a blue-shifted absorption
feature in the Hα line (blue dotted line).
In the text
Fig. C.1.
Balmer lines of BD+46°442 as a function of wavelength. The spectra are given in arbitrary units and offset according to their orbital phase. Numbers on the right vertical axis indicate the orbital
phase of the spectra (from 0 to 100). The dashed vertical lines represent the centre of each Balmer line.
In the text
Fig. C.2.
Similar to Fig. C.1, but for IRAS 19135+3937.
In the text
Current usage metrics show cumulative count of Article Views (full-text article views including HTML views, PDF and ePub downloads, according to the available data) and Abstracts Views on
Vision4Press platform.
Data correspond to usage on the plateform after 2015. The current usage metrics is available 48-96 hours after online publication and is updated daily on week days.
Initial download of the metrics may take a while. | {"url":"https://www.aanda.org/articles/aa/full_html/2020/09/aa38414-20/aa38414-20.html","timestamp":"2024-11-11T04:53:00Z","content_type":"text/html","content_length":"313979","record_id":"<urn:uuid:502321f7-81b3-4dba-83f9-7ea90d818ed0>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00443.warc.gz"} |
Restricted Boltzmann Machine Tutorial | Deep Learning Concepts | Edureka
In the era of Machine Learning and Deep Learning, Restricted Boltzmann Machine algorithm plays an important role in dimensionality reduction, classification, regression and many more which is used
for feature selection and feature extraction. This Restricted Boltzmann Machine Tutorial will provide you with a complete insight into RBMs in the following sequence:
Restricted Boltzmann Machine | Edureka
This Edureka video on “Restricted Boltzmann Machine” will provide you with a detailed and comprehensive knowledge of Restricted Boltzmann Machines, also known as RBM.
Let’s begin our Restricted Boltzmann Machine Tutorial with the most basic and fundamental question,
What are Restricted Boltzmann Machines?
Restricted Boltzmann Machine is an undirected graphical model that plays a major role in Deep Learning Framework in recent times. It was initially introduced as Harmonium by Paul Smolensky in 1986
and it gained big popularity in recent years in the context of the Netflix Prize where Restricted Boltzmann Machines achieved state of the art performance in collaborative filtering and have beaten
most of the competition.
It is an algorithm which is useful for dimensionality reduction, classification, regression, collaborative filtering, feature learning, and topic modeling.
Now let’s see how Restricted Boltzmann machine differs from other Autoencoder.
Subscribe to our YouTube channel to get new updates..!
Difference between Autoencoders & RBMs
Autoencoder is a simple 3-layer neural network where output units are directly connected back to input units. Typically, the number of hidden units is much less than the number of visible ones. The
task of training is to minimize an error or reconstruction, i.e. find the most efficient compact representation for input data.
RBM shares a similar idea, but it uses stochastic units with particular distribution instead of deterministic distribution. The task of training is to find out how these two sets of variables are
actually connected to each other.
One aspect that distinguishes RBM from other autoencoders is that it has two biases.
Now that we know what is Restricted Boltzmann Machine and what are the differences between RBM and Autoencoders, let’s continue with our Restricted Boltzmann Machine Tutorial and have a look at their
architecture and working.
Layers in Restricted Boltzmann Machine
Restricted Boltzmann Machines are shallow, two-layer neural nets that constitute the building blocks of deep-belief networks. The first layer of the RBM is called the visible, or input layer, and the
second is the hidden layer. Each circle represents a neuron-like unit called a node. The nodes are connected to each other across layers, but no two nodes of the same layer are linked.
The restriction in a Restricted Boltzmann Machine is that there is no intra-layer communication. Each node is a locus of computation that processes input and begins by making stochastic decisions
about whether to transmit that input or not.
Working of Restricted Boltzmann Machine
Each visible node takes a low-level feature from an item in the dataset to be learned. At node 1 of the hidden layer, x is multiplied by a weight and added to a bias. The result of those two
operations is fed into an activation function, which produces the node’s output, or the strength of the signal passing through it, given input x.
Next, let’s look at how several inputs would combine at one hidden node. Each x is multiplied by a separate weight, the products are summed, added to a bias, and again the result is passed through an
activation function to produce the node’s output.
At each hidden node, each input x is multiplied by its respective weight w. That is, a single input x would have three weights here, making 12 weights altogether (4 input nodes x 3 hidden nodes). The
weights between the two layers will always form a matrix where the rows are equal to the input nodes, and the columns are equal to the output nodes.
Each hidden node receives the four inputs multiplied by their respective weights. The sum of those products is again added to a bias (which forces at least some activations to happen), and the result
is passed through the activation algorithm producing one output for each hidden node.
Now that you have an idea about how Restricted Boltzmann Machine works, let’s continue our Restricted Boltzmann Machine Tutorial and have a look at the steps involved in the training of RBM.
Check out this Artificial Intelligence Course with Python training by Edureka to upgrade your AI skills to the next level
Training of Restricted Boltzmann Machine
The training of the Restricted Boltzmann Machine differs from the training of regular neural networks via stochastic gradient descent.
The Two main Training steps are:
The first part of the training is called Gibbs Sampling. Given an input vector v we use p(h|v)for prediction of the hidden values h. Knowing the hidden values we use p(v|h) :
v. This process is repeated k times. After k iterations, we obtain another input vector v_k which was recreated from original input values v_0.
• Contrastive Divergence step
The update of the weight matrix happens during the Contrastive Divergence step. Vectors v_0 and v_k are used to calculate the activation probabilities for hidden values h_0 and h_k :
The difference between the outer products of those probabilities with input vectors v_0 and v_k results in the updated matrix :
Using the update matrix the new weights can be calculated with gradient ascent, given by:
Restricted Boltzmann Machine Tutorial: Collaborative Filtering
Recognizing Latent factors in the Data
Let us assume that some people were asked to rate a set of movies in the scale of 1-5 and each movie could be explained in terms of a set of latent factors such as drama, fantasy, action and many
more. Restricted Boltzmann Machines are used to analyze and find out these underlying factors.
The analysis of hidden factors is performed in a binary way, i.e, the user only tells if they liked (rating 1) a specific movie or not (rating 0) and it represents the inputs for the input/visible
layer. Given the inputs, the RMB then tries to discover latent factors in the data that can explain the movie choices and each hidden neuron represents one of the latent factors.
Let us consider the following example where a user likes Lord of the Rings and Harry Potter but does not like The Matrix, Fight Club and Titanic. The Hobbit has not been seen yet so it gets a -1
rating. Given these inputs, the Boltzmann Machine may identify three hidden factors Drama, Fantasy and Science Fiction which correspond to the movie genres.
Using Latent Factors for Prediction
After the training phase, the goal is to predict a binary rating for the movies that had not been seen yet. Given the training data of a specific user, the network is able to identify the latent
factors based on the user’s preference and sample from Bernoulli distribution can be used to find out which of the visible neurons now become active.
The image shows the new ratings after using the hidden neuron values for the inference. The network identified Fantasy as the preferred movie genre and rated The Hobbit as a movie the user would
The process from training to the prediction phase goes as follows:
• Train the network on the data of all users
• During inference-time, take the training data of a specific user
• Use this data to obtain the activations of hidden neurons
• Use the hidden neuron values to get the activations of input neurons
• The new values of input neurons show the rating the user would give yet unseen movies
Now with this, we come to an end to this Restricted Boltzmann Machine Tutorial. I Hope you guys enjoyed this article and understood the working of RBMs, and how it is used to decompress images. So,
if you have read this, you are no longer a newbie to Restricted Boltzmann Machine.
Got a question for us? Please mention it in the comments section of “Restricted Boltzmann Machine Tutorial” and we will get back to you.
Comments 0 Comments
Join the discussion | {"url":"https://www.edureka.co/blog/restricted-boltzmann-machine-tutorial/","timestamp":"2024-11-08T18:19:39Z","content_type":"text/html","content_length":"229038","record_id":"<urn:uuid:019f4a19-7d77-42e1-9c48-2874ad5b6120>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00489.warc.gz"} |
What are higher-order networks?
Networks - as mathematical representations of interconnected units -
are relevant in a wide range of real-wold systems, ranging from
interconnected neural cells in the brain to relationship networks
between authors of scientific articles. Traditionally, networks have
been equated with the mathematical concept of a graph that specifies
relations between pairs of individual units. While this approach has
produced for example powerful tools to analyze data, there has been a
shift to recognize the importance of relations and interactions beyond
pairs, namely relations between more than two individual units.
In the paper What are higher-order networks? published recently in
SIAM Review, VU mathematician C Bick and
coauthors take account of these recent developments from a mathematical
perspective. They provide a unified perspective on recent research where
nonpairwise interactions play a role. These range from topological data
analysis to network dynamical systems, thereby connecting different
research directions of interest to the Department of Mathematics at the VU. | {"url":"https://www.amsterdam-dynamics.nl/what-are-higher-order-networks/","timestamp":"2024-11-06T18:41:42Z","content_type":"text/html","content_length":"17594","record_id":"<urn:uuid:485e36ae-bb22-4d9a-a8bc-e6dfcf39f12b>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00166.warc.gz"} |
Quantum Computing basics with Q# – The superposition of a qubit
There are some problems so difficult, so incredibly vast, that even if every supercomputer in the world worked on the problem, it would still take longer than the lifetime of the universe to solve!
The phrase above, found in Microsoft Docs, captures today’s problems with our regular computers and as a result the limitations we face in our every day lives. Numerous problems in environment,
health, agriculture and many other fields have a solution lost in a vast ocean of possible answers that need to be tested one by one for correctness.
Enters Quantum Computing!
What is Quantum Computing
Quantum computing is the use of quantum-mechanical phenomena such as superposition and entanglement to perform computations. These quantum-mechanical phenomena are described by quantum physics as the
behavior of atoms and fundamental particles, like electrons and photons. A quantum computer operates by controlling the behavior of these particles, but in a way that is completely different from
regular computers. Quantum computers are vastly more capable of solving problems like integer factorization (which underlies RSA encryption), but since they are notoriously difficult to build and
program they are crippled by errors in such a degree, that their operation falls apart before any nontrivial program has a chance to complete.
What is Quantum Superposition
Imagine two colliding waves in the sea; sometimes they perfectly add to make a bigger wave, sometimes they completely cancel each other out, but often it’s just a combination of both somewhere in
between. This constructive or destructive interference of waves is known as superposition in classical physics.
In Quantum mechanics though, superposition can be a bit weirder!
A cat, with a Geiger counter, and a bit of poison in a sealed box. Quantum mechanics says that after a while, the cat is both alive and dead. A person looking into the box will either find the cat
alive or dead, however it is assumed to be both alive and dead before you look into the box.
Particles that exist in different states (for example in different positions or moving at different speeds, etc) are thought of as existing across all the possible states at the same time! Yes, a
particle in superposition can exist in two places at the same time and/or move at different speeds simultaneously. This “unnatural” state of matter, practically impossible for a human brain to
comprehend, it’s one of the weirdest realities of quantum physics.
The famous thought experiment, Schrödinger’s cat, was an attempt to illustrate the problems of Copenhagen interpretation of quantum mechanics by applying quantum mechanics to everyday objects.
Copenhagen interpretation, devised somewhere around 1926, was stating that physical systems, in general, do not have a state until measured.
However, this weird behavior becomes human-friendly once a measurement of a particle is made; when for example we check its position, no matter how we checked it, the superposition is lost and the
particle exists in one known state.
What is Quantum entanglement
Quantum entanglement that Einstein described as “spooky action over distance”, is a relationship between fundamental particle properties that persist over distance without requiring transmission of
Imagine for example, that we put each member of a pair of gloves in boxes (somehow without seeing them) and mail one of them to the opposite side of the earth. Whoever is the recipient should be
able, by looking inside her/his box, to determine the properties of the glove that we still have with us in our box. If she/he states “it’s blue” and we open our box our glove will be blue. Simple
and not that spooky at all, right? But what about, if the recipient throws the box in green paint affecting the glove’s color. Will that make my glove change color too? In the particle world, yes. My
glove will instantly be green…!
In other words, measurements of physical properties such as position, momentum, spin, and polarization performed on entangled particles are found to be perfectly correlated, even when the particles
are separated by a large distance. Measurements on one particle, affect the entangled system as a whole.
Quantum programming with Q#
Quantum programming is the process of assembling sequences of instructions, called quantum programs, that are capable of running on a quantum computer. Quantum programming languages help express
quantum algorithms using high-level constructs. High-end programming languages, like Q#, use Quantum software development kits, that provide the means to simulate a quantum environment.
Hello Quantum World!
Q# is built to play well with .NET languages such as C# and F#; we just need to install the Quantum Development Kit! In order to do so we need to download and install this Visual Studio Extension:
When the installation is completed, we can start Visual Studio (it has to be v16.3+) and perform the following steps to create a new Q# application:
• Start Visual Studio (v16.3+)
• Click on “Create a new project”
• Type “Q#” in the search box
• Select Q# Application
• Select Next
• Choose a name and location for your application
• Make sure that “place project and solution in same directory” is unchecked
• Select Create
And that’s it! We can now run our app and get the familiar greetings:
Hello quantum world!
Randomly select an atom in the Universe
That is, if all atoms in the universe had an index! What we really going to create is a random number generator with an upper limit bigger than the number of the atoms in the universe. This, not so
impressive feat, will perfectly demonstrate the very basics of working with Qubits. This is just a blog post after all, not the full documentation!
The Int in Q# is a 64-bit signed integer, where BigInt is a signed integer of arbitrary size and it’s based on BigInteger. Read more about the The Type Model.
The most interesting type in Q#, the Qubit, upgrades the bit we know to mythical proportions! Where classical bits hold a single binary value such as 0 or 1, the state of a qubit can be in a
superposition of 0 and 1! Conceptually, a qubit can be thought of as a direction in space (also known as a vector), where a qubit can be in any of the possible directions. The two classical states
are the two directions on one axes (e.g. x’x); representing 100% chance of measuring 0 and 100% chance of measuring 1. This representation is also more formally visualized by the bloch sphere.
As a first step, let’s try to allocate a qubit, set it to superposition and then measure the result. This will give us, after the measurement, either 100% of the times 1 or 100% of the times 0; while
in superposition though it has exactly 50% chances of being either 0 or 1:
operation GetRandomResult() : Result {
using (q = Qubit()) { // Allocate a qubit.
H(q); // Hadamard operation; put the qubit to superposition. It now has a 50% chance of being 0 or 1.
return MResetZ(q); // Measure the qubit value in the Z basis, and reset it to the standard basis state |0〉 after;
// MResetX(q) and MResetY(q) do the same for X and Y basis.
An operation is for quantum operations in quantum data, where a function modified classical data. Learn how to work with them in the Q# Operations and Functions section of the documentation.
MResetZ measures a single qubit in the Z basis, and resets it to the standard basis state |0〉 following the measurement. Similar operations are the MResetX and MResetY. All the measurement
operations are contained in the Microsoft.Quantum.Measurement Q# library.
Since we are talking about 0s and 1s, we need a way to find how many bits represent the max limit requested (for us the number of atoms in the universe). Thankfully, the library
Microsoft.Quantum.Math contains a function named BitSizeL which does exactly that for signed integers of arbitrary size (BitSizeI does it for Int). We then need a loop to add to an array of Results
the Result returned by the GetRandomResult() method:
mutable bits = new Result[0]; //This is how to declare a mutable variable
let bitSize = BitSizeL(max); //This is how to declare an immutable variable
for (_ in 1..bitSize) { //Loop
let result = GetRandomResult(); //Get the result of GetRandomResult() into the the result variable
set bits += [result]; //Update the array bits, adding at the end the current result
Almost there! We just need to convert this array of Result to a BigInt. We can do that by first converting the array of Result to an array of bool, and then the array of bool to BigInt:
let randomNumber = BoolArrayAsBigInt(ResultArrayAsBoolArray(bits));
All supported conversions are in the Microsoft.Quantum.Convert Q# Library.
There is a problem though! The number of bits that can represent the number requested as max can potentially represent an even bigger number! To compensate for this possible error we can
conditionally recurse until we find the correct one:
return randomNumber > max
? SampleRandomNumberInRange(max)
| randomNumber;
And finally done! Follows the entire Q# program that returns a really big random number:
namespace Quantum.RandomNumber {
open Microsoft.Quantum.Convert;
open Microsoft.Quantum.Math;
open Microsoft.Quantum.Measurement;
open Microsoft.Quantum.Canon;
open Microsoft.Quantum.Intrinsic;
operation Main () : BigInt {
let max = PowL(10L, 80);
Message($"The lucky atom is number ");
return Generate(max);
operation Generate(max : BigInt ) : BigInt {
mutable bits = new Result[0]; //This is how to declare a mutable variable
let bitSize = BitSizeL(max); //This is how to declare an immutable variable
for (_ in 1..bitSize) { //Loop
let result = GetRandomResult(); //Get the result of GetRandomResult() into the the result variable
set bits += [result]; //Update the array bits, adding at the end the current result
let sample = BoolArrayAsBigInt(ResultArrayAsBoolArray(bits)); // Convert to BigInt
return sample > max //Check and return if sample is less or equal to max
? Generate(max)
| sample;
operation GetRandomResult() : Result {
using (q = Qubit()) { // Allocate a qubit.
H(q); // Hadamard operation; put the qubit to superposition. It now has a 50% chance of being 0 or 1.
return MResetZ(q); // Measure the qubit value in the Z basis, and reset it to the standard basis state |0〉 after;
// MResetX(q) and MResetY(q) do the same for X and Y basis.
Happy coding!
Q# works very well with C#, so it is easy to create a Q# Library and use that library from C# (or F#). An example of this interoperability is the Quantum.RandomNumber solution in my GitHub
Further reading
There are many resources already available that can help you start your journey in the Quantum World. Quantum Katas is a good start to learn by doing, so is the Quantum Teleportation Sample for
example can help you understand the quantum entanglement, a way of moving a quantum state from one location to another without having to move physical particle(s) along with it. The entry point for
all possible resources is the Microsoft Quantum Documentation.
Physicists have been talking about the power of quantum computing for over 30 years, but the question have always been: will it ever do something useful? Google answered that question just a few
months ago, in late 2019, with the Quantum Supremacy experiment that successfully performed a computation in 200 seconds that would otherwise need 10000 years in the faster supercomputer available
today! Quantum computing will massively change our lives in so many fields the next years, that it is easily comparable with the invention of the first transistor that led to the 3rd Industrial
Revolution, the Digital Revolution. When the first computers appeared, nobody believed that there is going to be a smart phone that will hold more computational power than all the computers of the
era combined and we could have that power in our pockets…! | {"url":"https://blog.georgekosmidis.net/quantum-computing-basics-with-q-the-superposition-of-a-qubit.html","timestamp":"2024-11-12T23:38:13Z","content_type":"text/html","content_length":"47728","record_id":"<urn:uuid:d1452bf6-20cd-4b56-ad64-0a52fd533d69>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00585.warc.gz"} |
Concatenate Cells in Excel
In this tutorial, we will learn how to concatenate two cells in Excel using the CONCATENATE function. Concatenation is the process of combining two or more text strings into a single string. This can
be useful when you want to join the contents of multiple cells together. The CONCATENATE function in Excel allows you to easily achieve this. Let's dive into the details of how to use this formula
and some examples to illustrate its usage.
The CONCATENATE function in Excel takes multiple arguments, each representing a text string or a cell reference. In our case, we will provide the cell references of the two cells we want to
concatenate. The function will then combine the values from these cells into a single string.
For example, if we have the following data in cells A1 and B1:
To concatenate the values from these cells, we can use the formula =CONCATENATE(A1, B1). This will return the value "JohnDoe", which is the concatenation of the values from cell A1 and B1.
By using the CONCATENATE function, you can easily join the contents of multiple cells in Excel and manipulate strings as needed. This can be particularly useful when working with large datasets or
when performing data analysis tasks. Now that you understand how to concatenate cells in Excel, let's explore some more examples to solidify your understanding of this formula.
An Excel formula
Formula Explanation
The CONCATENATE function is used to concatenate or join two or more text strings together. In this case, we are concatenating the values from cell A1 and B1 into one string.
Step-by-step explanation
1. The CONCATENATE function takes multiple arguments, each representing a text string or a cell reference.
2. In this formula, we provide the cell references A1 and B1 as the arguments to the CONCATENATE function.
3. The values in cell A1 and B1 are combined into a single string.
4. The result is the concatenated string of the values from cell A1 and B1.
For example, if we have the following data in cells A1 and B1:
| A | B |
| | |
| John | Doe |
The formula =CONCATENATE(A1, B1) would return the value "JohnDoe", which is the concatenation of the values from cell A1 and B1. | {"url":"https://codepal.ai/excel-formula-generator/query/LsEUkWvG/excel-formula-concatenate-cells","timestamp":"2024-11-13T19:38:37Z","content_type":"text/html","content_length":"89934","record_id":"<urn:uuid:d40db822-3a3c-4b09-989d-21027da480b1>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00384.warc.gz"} |
Biographical Sketch
Scott Weinstein
Biographical Sketch
Background: Scott Weinstein received his A.B. from Princeton University in 1969 and his Ph.D. from the Rockefeller University in 1975. He joined the faculty of the University of Pennsylvania in 1975
where he is currently a Professor in the Department of Philosophy with secondary appointments in the Departments of Computer and Information Science and Mathematics.
Research Interests: computational learning theory, descriptive complexity theory, finite model theory, mathematical logic, philosophy of mathematics.
Selected publications:
"Some Applications of Kripke Models to Formal Systems of Intuitionistic Analysis," Annals of Mathematical Logic, v. 16, 1979, pp. 1-32.
"The Intended Interpretation of Intuitionistic Logic," Journal of Philosophic Logic, v. 12, 1983, pp.261-270.
"Learning Theory and Natural Language" (with D. Osherson and M. Stob), Cognition, v. 17, 1984, pp. 1-28; reprinted in R. Matthews and W. Demopoulos (eds.), Learnability and Linguistic Theory, Kluwer
Academic Publishers, 1989, pp. 19-50.
Systems that Learn: An Introduction for Cognitive and Computer Scientists, (with D. Osherson and M. Stob), MIT Press, 1986.
"Social Learning and Collective Choice," (with D. Osherson and M. Stob), Synthese, v. 30, 1987, pp. 319-347.
"Mechanical Learners Pay a Price for Bayesianism," (with D. Osherson and M. Stob), The Journal of Symbolic Logic, v. 53, 1988, pp. 1245-1251.
"Synthesizing Inductive Expertise," (with D. Osherson and M. Stob), Information and Computation, v. 77, 1988, pp. 138-161.
"Minimal Consequence in Sentential Logic," (with M. A. Papalaskari), The Journal of Logic Programming, v. 7, 1990, pp. 1-13.
"A Reason for Theoretical Terms," (with H. Gaifman and D. Osherson), Erkenntnis, v. 32, 1990, pp. 149-159.
"A Universal Inductive Inference Machine," (with D. Osherson and M. Stob), Journal of Symbolic Logic, v. 56, 1991, pp. 661-672.
"Current Research Trends in Computational Learning Theory in the United States" (in Japanese), (with N. Abe and D. Osherson), Monthly Magazine of the Information Processing Society of Japan, v. 32,
1991, pp. 272-281.
"Universal Methods of Scientific Inquiry," (with D. Osherson and M. Stob), Machine Learning, v. 9, 1992, pp. 261-271.
"Infinitary Logic and Inductive Definability over Finite Structures," (with A. Dawar and S. Lindell), Information and Computation, v. 119, 1995, pp. 160-175.
"On the Study of First Language Acquisition," (with D. Osherson), Journal of Mathematical Psychology, v. 39, 1995, pp. 129-145.
"Preservation Theorems in Finite Model Theory," (with E. Rosen), in Daniel Leivant (ed.), Logic and Computational Complexity, Springer, 1995, pp. 480-502.
"Centering: A Framework for Modelling the Local Coherence of Discourse," (with B. Grosz and A. Joshi) Computational Linguistics v. 21(1995), pp. 203-225.
"k-Universal Finite Graphs," (with E. Rosen and S. Shelah) in Boppana, R. and Lynch, J. (eds.), Logic and Random Structures, American Mathematical Society, 1997, pp. 65-77.
"Formal Learning Theory," (with D. de Jongh, D. Osherson, and E. Martin) in J. van Bentham and A. ter Meulen (eds.), Handbook of Logic and Linguistics, Amsterdam: North-Holland Publishing Company,
1997, pp. 737-775.
"Elementary Properties of the Finite Ranks," (with A. Dawar, K. Doets, and S. Lindell) Mathematical Logic Quarterly, v. 44 (1998), pp. 349-353.
"Path Constraints in Semistructured Databases," (with P. Buneman and W. Fan) Journal of Computer and System Sciences, v. 61 (2000), pp. 146-193.
"Logic in Finite Structures: Definability, Complexity, and Randomness," in D. Jacquette (ed.), Companion to Philosophical Logic, Oxford: Blackwell Publishers, 2002, pp. 332-348.
"Interaction between Path and Type Constraints," (with P. Buneman and W. Fan) ACM Transactions on Computational Logic, v. 4 (2003), pp. 530-577. | {"url":"https://www.cis.upenn.edu/~weinstei/briefbio2.html","timestamp":"2024-11-08T17:50:06Z","content_type":"text/html","content_length":"4569","record_id":"<urn:uuid:ca3485a0-a8db-438f-9272-b5eea64fa37e>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00007.warc.gz"} |
So far as I known, there are three methods of calculation, the first one is fixed angle-calculating power, the second one is fixed power-calculating angle, and the last one is both-calculating angle
and power.
The formulas of Amour
Amour usually uses the first methods of calculation (fixed angle-calculating power and fixed power-calculating angle, they are quite easier than the last method of calculation), so amour is quite
simple to learn how to use, compared with Trico and Boomer ( both need the third method of calculation, e.g. Trico’s 3in1, XieShang’s (Boomer player also from knat) Tail wind XiaoPao), but if u want
to be a Amour pro, which needs u calculate accurately, then it sure not so easy to get to.
The formulas on Amour I knew so far are the Full power-Pao, BanPao (I cannot find a equivalent word of it, nvm, just a name), XiaoPao, angle20, 30, 40, 50, 60, 70, and 80. Now I’d suggest u all
remember one point by heart, that is when u check the distance of the mobile u gotta attack, the just start of it is the start of the red arrow, not all the starts are its bot, especially when u use
low angle, u cannot see the bot’s body at all! And the end is the mobile u gotta get. Then some guys may ask me how to check the distance, now listen, just press the mouse’s right button to catch the
screen, don’t loose the button, move the mouse until the left side (take u attack right direction for example) of your screen (visual square of your monitor) nearly next to the read arrow. And one
screen is the visual screen of your monitor, usually is the length of the screen. This point is important to your precision of your calculation. The followings are their introductions.
l Formula one.
Wind 0 full power Pao (90-distance=fire angle)
Half screen=angle85 (divide first half screen into 5 parts, 1=89, 2=88, 3=87, 4=86, 5=85)
One screen=10.8angle(90-10.8) ( that means, angle80 can get 9.8distance if one screen=10, then angle79 can get the 11 distance, it is one screen+one bot distance. Within of it is divided into 5
parts, 1=84, 2=83, 3=82, 4=81, 5=80
One and half screen, u can use angle 74 to get it, the third half screen = 6 parts ( 1=79, 2=78, 3=77, 4=76, 5=75, 6=74)
Two screen, u can use angle 67 to get it , the fourth half screen=7 parts,(1=73, 2=72, 3=71, 4=70, 5=79, 6=68, 7=67)
If with wind, then u need wind chart, 10distace u use this wind chart
photo.viafriend.com/photo...278531.JPG (chart1)
within one screen, pls adjust the wind factor properly, remember, the distance is shorter, then the wind factor is smaller.
Two screen distance , pls use the following wind chart:
between one and two screen, pls adjust the wind factor properly depends on the distance.
Note: there are some advantages of full power pao, the first of all is there is not over drag or less drag, but still existing some disadvantages, for example, the distance of 5 and 6 > one bot, that
means it will happen that if u less one angle, then u over shoot, if u add one angle, then u less shoot. What the hell it is if u meet this situation. LOL.
l Formula Two. BanPao ( the Author is CHNShenPu, from knat)
Banpao is more accurate than Fullpao , 90-distance (one screen = 20 parts) – wind X factor of against wind (+ wind X factor of tail wind) = fire angle.
Fixed power is: 1/2screen=2.8, 3/4screen=2.85, 1screen=2.95, 3/2screen=3.05.
1/2screen distance use this wind chart1, 1screen use wind chart2, 3/2screen use the following wind chart: photo.viafriend.com/photo...285975.JPG (chart3)
Note: within them , u need to adjust the power and wind factor properly depends on the wind and distance. Now following the 1screen distance’s wind and its corresponding angle
0---0 (a=0)
1---0 (a=0)
2---0.5 ( a=0.5)
3---1 (a=0.33)
4---2 (a=0.5)
10----6 (a=0.6)
20---13 (a=0.65)
26----19 (a=0.75) (horrible L)
a=wind factor. This wind is water level wind, that is the wind’s angle is zero. If against wind , then less the corresponding angle, if the tail wind, then add it.
l Formula Three. XiaoPao, the most important formulas, SS shot has enough time to become from a normal bomb into a missile, if the missile get the force, can get a wonderful damage, more accurate
than Full power Pao and BanPao.
40partsPao.(I’d suggest u use this formula within 3/4S(screen), usually, I use this formula within 1/2S)
Formula: 90-distance+tail wind x its wind factor (- against wind x its wind factor)= fire angle.
Fixed the power: 1/4(A (angle) 80)) S=2.0, 1/2(A70) S=2.05
Use this wind chart: img19.photo.163.com/jc.y2...269034.jpg (Chart4)
30partsPao. (I’d suggest u use this formula within 1S, usually, I use this formula within 3/4S)
1S=30parts, 1/2S=A75,
Fixed power points are: 1/2S=2.35, 3/4S=2.4, 1S=2.45
Formula: 90-distance+tail wind x its wind factor (- against wind x its wind factor)= fire angle
Why I suggest u use this formula within 3/4S, because when the against wind is over 20scale, the power u use cannot get over 3/4S. But if the tail wind is strong enough, u can get 3/2S distance!
10D(distance) use Chart1, 20D use Chart2, 30D(1S) pls use Chart3. 10D and 20D can use Chart4 too. I think you can find the correct wind factors.
l Formula Four. Fixed angle-calculating power, now I gotta explain a theory to u, if X + Y = 90A ( X and Y present an angle ) then , if wind zero, then u use same angle and same power, can get the
same distance in water level line. If u want to ask me why, then I’d suggest u ask the softnxy Cor.
20A and 70A formula.
1/4A=1.45, 1/2S=2.05, 3/4S=2.55, 1S=2.95, 5/4S=3.3, 6/4S=3.65.
these are the statistic I checked when the wind was zero, if with wind, 20A and 70A are quite different, I have my own methods to calculate, but not yet formed a formula ( have no time now ), I am
sorry for it, but I want to remind u, 20A and 30A, 60A and 70A have some connection, after u read the 30A and 60A formulas, I am sure u will have got some ideas of them.
30A and 60A formula
The power chart is : img19.photo.163.com/jc.y2...281796.jpg
The wind factor chart is: img19.photo.163.com/jc.y2...281871.jpg
Because the 60A formula, other play has posted, so here I not need to post it again, to cut my formulas shorter.
40A and 50A formula.
Power chart is: img19.photo.163.com/jc.y2...288107.jpg
40A and 50A are two important angles, cos u can easily get the dark green angle ( real angle, can get more damage than light green angle ), now I gotta give u make your own formulas, guys, could you
please make me your 40A and 50A wind factors? Though I have my own formulas of them. I think though this making your own formulas action, your skills sure will achieve to a new higher standard.
10A and 80A formulas.
In fact I few use 80A formula, because I have a better choice. Now following the powers and its distances.
1/4S=2.0(can get this result from the 40partsPao), 1/2S=2.8 (can get this result from BanPao ),3/4S=3.65. 10A formula, I have an only experience, that is try, I think god always beside on u, jaja, in
fact it is called as one of shotgun.
l Formulas Five. About Tornado and Height Difference.
About height difference. If the opponent are much higher than u , I suggest u not use low angle, or only can cause lol from other players. I advise u to use BanPao, XiaoPao as much as u can. For
Banpao, wind zero, within 1S, the 1/2S( length of your screen)=2A, that is if the opponent is 1/2s higher than u, than u less 2A. 1/4S=1A. with the wind, the calculation will be more complicated,
cos, sometimes the against wind so strong, let the shot be boomer shot, which counteracts the height difference’s influence, so in some occasion, I just ignore it if the against wind so strong. With
more practice, I think u will get your own height difference experiences. For low angle’s height differences, e.g, 30A, within 1S, one bot height difference u just adjust your power from 0.1-0.3
according to the distance properly. Low height difference 1S distance 4/10S=0.35 power(wind0), etc.
About Tornado, please read the formula of knat( 60A, wind0), in fact, I think other players should notice that, tornado has its own scale like wind, and some times occur two tornados, which is more
complicated, further more, there is two kinds of tornados, one I called catalyzing tornado, with its influence, u should less power and higher angle, the other is delaying tornado, with its
influence, u should add power or lower angle. I have my own experiences of it, but not formed a system yet, so I am afraid I would mislead u, so here, I dun post it, I think u can make one too, only
need more practice. Remember, just practice, practice, you will become stronger and stronger.
Now comes the end of the full introduction of the Amour Mobile. Turtle is almost the same as Amour, u can choose the formulas up to u to use, but I’d suggest u that the power should be over 2.0, so
the shot two of the Turtle has enough time to 2in1(needs 2 seconds), get more damage, and cooler. If the angle of Turtle add 5 degree, and the movement ability improve by 20%, I sure will use turtle,
though it still not the strongest mobile. When I use Turtle, I usually use full power Pao, BanPao, XiaoPao and 70A, jaja.
l Formulas Six, Turtls high angle SS 6in1.
Wind0, fixed power points are:3.65-4.0 according to the distances,
1S=12parts, within 1/2S, I’d suggest u fixed the power at 3.65. 1/2S=6 parts, 1=89, 2=88, 3=87, 4=86,5=85, 4=84(1/2S)….10=80, 11=19, 12(1S)=78, I suggest 1S distance u use power at 3.75-3.8. 4.0
power can get 1.45S. remember, over one screen, with the distance longer, the power should be larger.
With wind influence.
First of all, calculate the 6in1power.
The upright and downright wind give the 6in1 power point more influence, the methods of calculation is like Trico 3in1, so I’d suggest u go to find a good Trico formula ( knat’s trico 6in1 power
chart is not bad) as a reference, but still your own experiences is the most important, e.g, the upright wind scale is 20, I use 2.8power ( come from 3.65-20 X some wind factor ), can let the ss shot
6in1, then after u have read the BanPao formula, u should notice BanPao just fix power, calculating angle, so 2.8power is good to use BanPao, if have angle, u can count it out, then let the Turtle’s
SS bombs 6in1! This shot sure can kill many girls, they will wow even if they get damn damage hurt, is it cool? Whenever I use turtle’s SS high angle 6in1, my darling always happy like a monkey,
seeing she so happy, my heart was content.
Second, calculating the angle.
After calculating the power, then u right angle is the key of your cool 6in1 SS shot. Trico pros should know some fixed power points, then use Bao( some formulas like BanPao, XiaoPao, etc.) formulas
to finish the 3in1 shot, though turtle high angle 6in1 shot is harder than trico 3in1, but they get a lot of commons, so if u want to be turtle high angle 6in1 pro, u should learn how to play trico
well, after that, I think it is easier for u to use Turtle high angle 6in1. | {"url":"http://creedo.gbgl-hq.com/china_u_armor_turtle.php","timestamp":"2024-11-11T13:48:10Z","content_type":"text/html","content_length":"13185","record_id":"<urn:uuid:dd3e842e-4825-4c83-8120-02ca5ac17162>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00753.warc.gz"} |
Quantum valuing
Quantum mechanics is decent, as pure mechanics goes. Quantum physics is a mistake, a mistake of not philosophizing. To take it to that level of physics one must philosophize or get the fuck out. But
quantum valuing ...yes. VO with some nice numerontological math could really crack this wide open.
Numbers indicate a value structure; pure logic. Quantum just means smallest perceived, and unable to penetrate those depths as predictable causality beyond statistics. We do not need a statistic to
predict t where and how fast a rock will fall if we drop it, nor to land someone on the moon. So where does QP get off? Stupid arrogance of these false mystics.
They need to learn from Spinoza, Newton, Einstein, Descartes, Pierce, they need to learn some philosophy. Which is to say they must learn how to value themselves. If you let the quantum world value
your work then your work quickly becomes nothing.
Stranger: we live in 10 dimensions
You: How do you know that?
Stranger: math proves it
You: What math and how? Please show me.
Stranger: ok
Stranger: quantum physics shows how electrons cross barriers with probability
You: Yes I know that principle, it is used in electronics as a kind of junction.
Stranger: that's a dimensional jump
You: But that statistical probability only describes something we don't yet understand about how the electron interacts with the barrier
You: No, what..? How do you go from the junction to "dimensions jump"?
Stranger: that's what the math says it is
Believe it or not but we had a decent conversation before he started into QP.
If he is talking about String Theory, is math "proves" that there are "rolled up dimensions" that together with the perceptible ones amount to 10 of them, but it aso proves that it can never
empirically be verified because the cosmos woud have to be heavier than it is.
Your number-philosophy of 2015 or 2014 (I forget) was terribly ominous, you were on the verge of deciphering the quantum in more solid terms than its probabilities, namely deriving it from geometries
somehow expertly using that matrix I created / discovered
246813579, etc)
You were behaving not entirely like a human inside that math, truly it was almost scary.
Yes that was some truly badass shit that even the gods were like "dude slow down".
Yes. Not in the least, I think, on account of who would get their hands on it. | {"url":"https://beforethelight.forumotion.com/t1123-quantum-valuing","timestamp":"2024-11-11T11:19:31Z","content_type":"text/html","content_length":"76587","record_id":"<urn:uuid:09d9a76b-ff73-4479-8636-ba1eb40bcfeb>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00156.warc.gz"} |