https://www.youtube.com/watch?v=WxtfEv7Pm4I
WildLinAlg10: Equations of lines and planes in 3D UNSW eLearning 69000 subscribers 66 likes Description 11405 views Posted: 1 Nov 2010 This video shows how we work with lines in the plane and planes in 3D space in Linear Algebra. A line in 2D is represented by a linear equation in x and y, a plane in 3D by a linear equation in x, y and z. Both can also be described in parametric form. It is important to be able to change from a Cartesian to a parametric form. The space of all lines in the plane has a curious connection with the Mobius band. Lines in 3D are somewhat trickier to describe, since they require two linear equations. This is the tenth video in a first course of Linear Algebra given by N J Wildberger of the School of Mathematics and Statistics at UNSW. N J Wildberger is also the developer of Rational Trigonometry, a new and better way of learning and using trigonometry; see his WildTrig YouTube series under the user 'njwildberger'. There you can also find his series on Algebraic Topology, History of Mathematics and Universal Hyperbolic Geometry.
Transcript:

[Introduction] Hello, I'm Norman Wildberger, and today we're going to talk about lines and planes in three dimensions and how we represent them with equations. Before we move to three dimensions, it's important to understand the two-dimensional situation very well, so let's start by talking about lines in two dimensions. Here is the usual affine grid plane with x-axis and y-axis, not necessarily perpendicular.

[Equations of lines] Let's start with an example of a line, say this green line here, the line L. In ordinary high school mathematics we learn that lines are represented by the equation y = mx + b, which emphasizes that y is a function of x. In that notation this line would be y = (1/2)x - 1, because the y-intercept is -1 and the slope, Delta y over Delta x, is 1/2. That's very appropriate for a calculus orientation towards lines, but in linear algebra we want to think of lines not as functions but as primary geometrical objects. For that it's much preferable to treat x and y symmetrically, with equal weighting, so we prefer to write the equation of a line in the form ax + by = c. We can rewrite this equation: multiply by 2 and bring the x and y terms to one side and the constant to the other, and we get x - 2y = 2. That's also an equation for that line. Let's observe that there are two important points on most lines: the places where the line intersects the coordinate axes. To find the y-intercept we set x = 0: starting with the general equation ax + by = c and setting x = 0, we can solve for y, getting y = c/b, so the y-intercept is the point (0, c/b), in this case (0, -1). If instead we set y = 0, we're finding the point on the x-axis: x = c/a, so the x-intercept is the point (c/a, 0), in this case the point (2, 0).
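As a quick sanity check on the intercept formulas, here is a minimal sketch (not from the lecture; plain Python with exact fractions), guarding the zero-denominator cases just discussed:

```python
from fractions import Fraction

def intercepts(a, b, c):
    """Axis intercepts of the line ax + by = c.

    Returns (x_intercept, y_intercept) as points; an intercept is None
    when the corresponding coefficient is zero, since then the formula
    c/a or c/b would divide by zero.
    """
    x_int = (Fraction(c, a), Fraction(0)) if a != 0 else None
    y_int = (Fraction(0), Fraction(c, b)) if b != 0 else None
    return x_int, y_int

# The lecture's line x - 2y = 2: x-intercept (2, 0), y-intercept (0, -1).
xi, yi = intercepts(1, -2, 2)
assert xi == (2, 0) and yi == (0, -1)
```

The guard matters for the special lines discussed later in the lecture: for y = 0 (the x-axis), `a == 0` and there is no single x-intercept to report.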
It's important to remember that whenever we divide by something in mathematics, we have to make sure that something is not zero, so these formulas only work when the denominators are nonzero. This form is usually called the Cartesian equation of the line, and it's going to play an important role for us in linear algebra. But there are also other ways of representing a line.

[Parametric equations of lines] There is also a parametric equation for a line, and that involves a parameter, say lambda. If we have two points A and B on a line, then the line can be described by the affine, or barycentric, combination (1 - lambda)A + lambda B. That represents an arbitrary point on the line; depending on what lambda is, you get different points on the line. That's called a parametric equation of a line. Let's have a look at what it looks like for the line from the previous example. We might take A to be the point (2, 0) and B to be the point (6, 2). In that case the expression is (1 - lambda)A + lambda B, and we can put things together: the x-coordinate is (1 - lambda) times 2 plus 6 lambda, and the y-coordinate is (1 - lambda) times 0 plus 2 lambda. Expanding and collecting the lambdas and the constants, we get altogether (4 lambda + 2, 2 lambda). We can leave it in that form, or, which is instructive, take the part involving lambda out: the point (2, 0) plus lambda times (4, 2). I'm writing this (4, 2) as a column vector, because it represents the vector joining the point (2, 0) to the point (6, 2). The geometrical meaning is that we start at the point (2, 0) and then go some multiple of this vector. So there's a point on the line, and there's a direction vector for the line.
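The affine-combination description can be sketched directly in code (a small illustration, not from the lecture), using the lecture's points A = (2, 0) and B = (6, 2):

```python
from fractions import Fraction

def affine_point(A, B, lam):
    """The point (1 - lam)*A + lam*B on the line through A and B:
    lam = 0 gives A, lam = 1 gives B, other values give other points."""
    return tuple((1 - lam) * a + lam * b for a, b in zip(A, B))

A, B = (2, 0), (6, 2)
v = tuple(b - a for a, b in zip(A, B))   # direction vector AB = (4, 2)

# The same point written two ways: affine combination, and A + lam*AB.
lam = Fraction(1, 2)
P = affine_point(A, B, lam)                       # the midpoint (4, 1)
Q = tuple(a + lam * c for a, c in zip(A, v))
assert P == Q == (4, 1)
```

The equality of `P` and `Q` is exactly the rewriting done in the lecture: (1 - lambda)A + lambda B = A + lambda (B - A).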
Neither of these is unique: instead of (2, 0) we could have chosen any other point on the line, and instead of this direction vector we could have chosen any multiple of it, positive or negative. That's a parametric equation for a line. So there are two different ways of thinking about what a line is, or how to represent one: the Cartesian equation and the parametric equation.

[Example] Here is an example of a standard kind of problem that we need to be able to solve: finding the intersection, or meet, of two lines. Let's take the line x - 2y = 2, which we've already been considering, and another line, 5x + 4y = 20. We want to find where these two lines meet, in other words the point (x, y) which satisfies both equations. In terms of what we've already done, let's rewrite this using our matrix language. This pair of equations can be written as one single matrix equation, where the matrix is the matrix of coefficients, [[1, -2], [5, 4]]. The left-hand side becomes this matrix times the vector (x, y), and that equals the vector (2, 20) on the other side. How do we solve this matrix equation? By finding the inverse of the matrix: essentially we multiply both sides by the inverse, isolating (x, y). How do we find the inverse? We first calculate the determinant, which is 1 times 4 minus (-2) times 5, that's 14. So we take 1/14, and the matrix alongside it is obtained either by remembering the formula or by taking the minors of the matrix and arranging them in a certain way. I remind you that the minors are the things you get by eliminating a row and a column: through the entry 1 you just get the minor 4, and its determinant is 4, so we put that 4 in the corresponding position.
The determinant of the minor corresponding to the next position is just 1, so it goes there. The determinant of the minor corresponding to the -2 is 5, but the 5 doesn't go in that same position; it goes in the transposed one. And the minor corresponding to the 5 is -2, which likewise goes not there but in the transposed position. Then we multiply by the sign pattern (plus, minus; minus, plus) to get [[4, 2], [-5, 1]]. I've stated it this way to match how we talked about the three-dimensional case, just reminding you that this is the same prescription as how we found the inverse of a 3x3 matrix. Of course, in the 2x2 case it's simpler just to remember that we swap the two diagonal entries and multiply the off-diagonal entries by minus one. So when we multiply both sides by the inverse of this matrix, we get (1/14) times this matrix times the vector (2, 20). The first entry is 8 plus 40, that's 48, divided by 14, that's 24/7; the next entry is -10 plus 20, that's 10, divided by 14, that's 5/7. So those are the coordinates x and y of the intersection; the point itself is, in square brackets, [24/7, 5/7]. That's just what we obtained by solving the matrix equation, so we found that point right there. We also want to be able to solve this problem when the two lines are given to us in parametric form. The same two lines can be described parametrically this way: the line L is (2, 0) + lambda (4, 2), with the point (2, 0) on it and the vector (4, 2) giving its direction; the line M has the point (0, 5) on it, and its direction vector, joining that point to this one here, is 4 in the x-direction and -5 in the y-direction, so (4, -5). These are two parametric forms, one in terms of a parameter lambda, the other in terms of another parameter mu. What we want to do in this context is find where the two lines meet, that is, find the parameter lambda and the parameter mu so that the two positions agree.
Writing the first point out in coordinates gives (2 + 4 lambda, 2 lambda); writing the second out gives (4 mu, 5 - 5 mu). We want lambda and mu so that this point equals that point. That's really two equations, because we want the first coordinates to agree and the second coordinates to agree. These two equations in lambda and mu can be solved in exactly the same way as before. We first write them in matrix form: the coefficients give the matrix [[4, -4], [2, 5]], the variables are lambda and mu in that order, and the right-hand side is (-2, 5). We multiply by the inverse of this matrix. The inverse is obtained by taking the determinant, which is 20 - (-8) = 28, so 1/28 goes in front; the rest is obtained by interchanging the diagonal entries 5 and 4 and multiplying the off-diagonal entries by -1, giving (1/28)[[5, 4], [-2, 4]]. When we multiply both sides by that inverse matrix, we get (lambda, mu) by itself on the left-hand side, and on the right this inverse times (-2, 5). Then we do the calculation: -10 + 20 = 10, and 10/28 is the same as 5/14; then 4 + 20 = 24, and 24/28 is the same as 6/7. So lambda = 5/14, which is what we were solving for, and mu = 6/7. To find the actual point of intersection we use either of these two values, plugging 5/14 back in as lambda or 6/7 as mu, and in either case we get the same point, (24/7, 5/7), that we found in the previous example. So we see in particular that a matrix equation like this can have different interpretations: in both of these techniques for finding the intersection of two lines, one in the Cartesian framework and one in the parametric framework, we got equations with 2x2 matrices and we had to invert them, but the interpretation of the variables was quite different.
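Both solves above come down to the same 2x2 inversion recipe; here is a sketch of the two computations side by side (exact fractions, not the lecture's own code):

```python
from fractions import Fraction

def inv2(M):
    """Inverse of a 2x2 matrix [[a, b], [c, d]] via the lecture's rule:
    swap the diagonal entries, negate the off-diagonal ones, divide by det."""
    (a, b), (c, d) = M
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is not invertible")
    return [[Fraction(d, det), Fraction(-b, det)],
            [Fraction(-c, det), Fraction(a, det)]]

def mul2(M, v):
    """2x2 matrix times a 2-vector."""
    return (M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1])

# Cartesian framework: x - 2y = 2 and 5x + 4y = 20.
x, y = mul2(inv2([[1, -2], [5, 4]]), (2, 20))

# Parametric framework: 4*lam - 4*mu = -2 and 2*lam + 5*mu = 5.
lam, mu = mul2(inv2([[4, -4], [2, 5]]), (-2, 5))

# Different variables, same meeting point (24/7, 5/7):
assert (2 + 4*lam, 2*lam) == (x, y) == (Fraction(24, 7), Fraction(5, 7))
```

The final assertion is the lecture's punchline: the two matrix equations have different unknowns, but plugging lambda back into the parametric form lands on the same point as the Cartesian solve.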
Here the variables correspond to the parameters involved in the lines; in the previous case the variables were x and y, the actual coordinates of the point we were asked to find.

[Special lines] Before we leave the two-dimensional case, let's remark that there are some special lines that we should pay a little attention to, because they are a little different from the generic line. First of all, the x- and y-axes themselves are special. The x-axis has the equation y = 0, and that simple equation is still of the form ax + by = c: the coefficient of x is 0, the coefficient of y is 1, and the constant is 0. The y-axis, on the other hand, has the equation x = 0. Other lines closely related to the coordinate axes are the lines parallel to them, and they also have very simple forms: lines parallel to the axes are either of the form y = k for some constant k, or x = m for some constant m. For example, this line here, parallel to the y-axis, is the line with equation x = 2: every point on it has x-coordinate 2, and the y-coordinate can be anything; there's no constraint on the y-coordinate. The line parallel to it is x = 3 and 1/2, or 7/2, and a line parallel to the x-axis is similarly y equals some constant. I just point this out because these lines, even though they're simpler than the usual lines, sometimes cause students a little trouble, because they look a little different; there are a few zeros in there.

[Pencils and stacks] When we're talking about lines, it's often convenient to have some special notation to denote certain families of lines that are related in a particular way. There are two such concepts: pencils and stacks. A pencil of lines refers to all the lines which go through a particular fixed point. So there's a point P, and if we look at all the lines going through that point P, that's like a bouquet of lines: we call that a pencil of lines.
I'm not sure where the terminology comes from, but it's very established terminology. All the lines in this pencil are concurrent, and that's just one pencil, corresponding to the point P; if we chose some other point Q, we would have a pencil of lines around the point Q, and so on. Now, sort of opposite to that is what I call a stack of lines. This is perhaps my own notation, but I think it's quite good terminology: a stack of lines refers to a bunch of lines which are all parallel. All these green lines here are parallel; taken together, that whole family of parallel lines we could call a stack of lines, because they're all stacked up one beside the other. There are different stacks of lines: if we choose a different direction, say that one there, then the stack it determines is all the lines parallel to it. So a pencil is determined by a point P, and a stack is determined essentially by a direction, or a vector v up to a multiple. This stack here, the one I've shown you, is determined essentially by that direction vector v, or any multiple of it; once you know the vector, you know the stack.

[All lines] In terms of this terminology we can ask, and answer, a very interesting and somewhat unfamiliar question, but one that I think many students will really appreciate seeing. The question is: what do all the lines in the plane look like? This is a kind of guiding principle when you're studying mathematics: if you've identified an interesting family of objects, for us say lines in the plane, then of course we study individual lines, as we've been doing with Cartesian equations or parametric equations, but it's also useful and interesting to ask about all the lines. Is there something we can say about the entire space of all lines together? So that's the question we're going to ask: what does the space of all lines look like? We might ask, for example, how we could parametrize, or label, all of the lines.
One way of thinking about that is to choose a point P and think about the pencil of lines through P. So there's a point P, and we're thinking about all the lines through P; that's a pencil of lines. Now, for every line in this pencil, say that particular one there, call it L, there's a stack of lines: the stack containing L, that is, all the lines parallel to that L. We see that we get every line this way, because every line, no matter where it is, is parallel to a line through P, and it's parallel to exactly one line through P. So by varying the lines in the pencil at P, and then for each of those lines considering its corresponding stack, we run through all the lines in the plane. Topologically, or geometrically, what does the pencil of lines through P look like? Well, the lines through P naturally form a circle of lines. Why is that? It's interesting to think about this circle here that I've drawn through P. If I draw that circle through P, then every line in the pencil at P intersects that circle, certainly at the point P, but also at one other point, and every point on the circle determines such a line. That's true with the exception of the tangent line itself, which only intersects the circle at P; in fact it sort of does a double intersection there, because it's tangent to the circle. That means we can associate to every point on this circle a line in the pencil, and every line in the pencil is associated to exactly one point. So we say that topologically, the pencil of lines through P is a circle. Now what about a stack, say the stack parallel to a line L? If we take that line L, what does the stack of lines parallel to it look like? The answer is a line: there's a kind of line of lines contained in the stack through L. So to describe all the lines in 2-space, we can start by thinking about this circle representing the pencil of lines through P, and then to every point on the circle we associate a line: the line representing the stack of lines parallel to the line through that point.
So topologically, or geometrically, we need to glue a line to every point on a circle. If we think of this circle and glue a line to each point on it, then we'll have a parameter space for the set of all lines in the plane. What makes it interesting is that there's more than one way of gluing a line to every point on a circle; in fact there are two ways of doing it.

[Two ways of gluing] So here are the two ways of gluing a line to every point on a circle. The simple, sort of naive, way is to take your circle, say it's the equator there, and simply glue a line on at every point, giving us a cylinder. Each line here is actually an infinite line, extending in both directions, so it's actually an infinite cylinder. That's one possibility for the space of all lines in the plane. The other possibility, a little more subtle, is to glue the lines around with a twist: we get something called a Mobius band, and here's a picture of it. This is the central circle, the red one here, and there's a line, or part of a line (I can't represent the whole thing very easily, so just imagine that line there), glued to the circle, and we move around like this; but as we get closer to this point here, the lines come together and twist before they connect again. If you think of the first case, the cylinder, we had a circle of lines standing straight like this; in the second case we have a circle of lines which, as we come around, are rotating. Here I've made a model out of a piece of paper, a little strip. The cylinder is obtained just by gluing the ends together in the usual way: a sort of base circle with line fibres. For the Mobius band we take the same strip, but before we glue it together we make a little twist.
So now it looks a little different: it's a twisted object, but you should still think of it as having a central circle and lots of lines associated to each point on that central circle, just with a twist in it. So a natural question is: which of these two actually represents the lines in the plane? It turns out that it's the Mobius band, and to see that there's a kind of cute argument. There's something topologically different about the cylinder and the Mobius band. There are several differences, but one way of saying what a difference is, is this: if you take any loop on the cylinder, just draw a little closed circle, and you cut along that loop, then the cylinder breaks up into two pieces. If you cut along a small loop here, you would get the inside versus the remainder of the cylinder, two different pieces; and even if you took this loop here, which goes all the way around, and cut along it, the cylinder would still break into two disjoint pieces. That's a property the Mobius band does not have. It's true enough that if you took some small loop like this and cut it out, there would be two pieces; but if you chose the central circle to cut along, as in the model I made (you can do this at home: make a Mobius band and cut along the middle), you'll find that when you finish the cut, the thing falls apart, but not into two pieces, because the points that were above the circle over here are connected to the points below it by the overlapping twist. So that's a difference between the Mobius band and the cylinder: if you remove a circle from the cylinder, you always get two different pieces; in other words, you cannot connect a piece inside to a piece outside. So how can we see that the lines in the plane really form a Mobius band and not a cylinder? We can use this property. The analogue of cutting out a circle is to consider a pencil of lines through a point P, which naturally forms a circle in the space of all lines.
So suppose we cut out that particular circle: in other words, we exclude the pencil of lines through the point P, we remove them from consideration, and we ask whether the stuff that remains is connected or not. Can we go from any one line to any other line in a continuous way? For example, here's an arbitrary line L which is not in the pencil, and here's another line M which is also not in the pencil. Can we go continuously from this line to that line, avoiding the pencil? If you think about it naively, you might say no: if you try to slide from here to here, somewhere in there you would pass through P, and then you'd be touching the pencil. But that's not the only way to get from L to M. You could also get from L to M by going the other way, taking this line and rotating it around like this to M. In this way I can move from any line in the plane to any other line in the plane while avoiding the pencil at P, that is, by never passing through P. So what I've hopefully convinced you of is that if you remove the pencil at P, you can still connect any two points in what's left over. That means we're not in the cylinder case; we must be in the Mobius band case: the space of all lines in the plane is a Mobius band. It's, in my view, the most natural and most pleasant manifestation of the Mobius band in mathematics, and it should be in every linear algebra text.

[Planes in 3D] All right, so now let's go up a level to three dimensions, and let's talk about lines and planes in 3D. When we move to three dimensions, the first thing to realize is that the analogue of a line is a plane: although we have points, lines and planes in three dimensions, the planes are the ones that act most like the lines in two dimensions. Lines in three dimensions are in fact more complicated, as we'll see. So let's talk first about planes in 3D. A plane is given by a single equation in x, y and z, and it can be described in terms of three points that lie on the plane.
So here we have a plane through the points (3, 0, 0), (0, 1, 0) and (0, 0, 2); these are points on the coordinate axes. There's (3, 0, 0), there's (0, 1, 0), and there's (0, 0, 2), and I've drawn the plane by joining the points with lines, so those lines lie in the plane. You should think of this plane as being sort of at an angle to the coordinate axes; it's actually cutting them at those three points. The equation of this plane is 2x + 6y + 3z = 6, because each of these points satisfies that equation. This is called the Cartesian equation of the plane, and it's a direct analogue of the equation for a line in two dimensions; we're just adding one more variable. Now, some special cases involving this plane. If we set x = 0, that term becomes 0 and we get an equation involving only two variables: 6y + 3z = 6, or, dividing by 3, 2y + z = 2. What's the meaning of that? When we set x = 0, we're talking about all those points of three-dimensional space whose x-coordinate is zero, in other words the points that lie in the plane spanned by the y- and z-axes. That's a coordinate plane: in the picture, if you think of x as coming out at you and y and z as being in the plane of the wall, then we're just talking about the wall. So when we set x = 0, we're restricting our plane to that coordinate plane, and what we get is the line joining these two points in that plane, and that line, in the y and z coordinates, has this as its equation. Similarly, if we set y = 0, we're talking about the coordinate plane through the x- and z-axes; that's this plane there, and it intersects our given plane pi in this green line. That's a line in the xz-plane, and its equation in that plane is determined by setting y = 0 here: 2x + 3z = 6, the equation of that line in the plane of x and z. And if we set z = 0, we're talking about the horizontal plane, the xy-plane. Setting z = 0 gives 2x + 6y = 6, or, dividing by 2, x + 3y = 3.
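A quick check of the plane and its traces (a sketch, not from the lecture; the equation 2x + 6y + 3z = 6 is the one read off from the three worked traces):

```python
def on_plane(p):
    """True when p = (x, y, z) satisfies the plane 2x + 6y + 3z = 6."""
    x, y, z = p
    return 2*x + 6*y + 3*z == 6

# The three axis points used to draw the plane all satisfy its equation:
assert all(on_plane(p) for p in [(3, 0, 0), (0, 1, 0), (0, 0, 2)])

# The trace in the yz-plane (set x = 0) is 2y + z = 2; its two axis
# points are exactly the plane's y- and z-intercepts:
assert all(2*y + z == 2 for (y, z) in [(1, 0), (0, 2)])
```

The same substitution check recovers the other two traces: y = 0 gives 2x + 3z = 6, and z = 0 gives x + 3y = 3.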
That's the equation of that ordinary line in the xy-plane. Two planes generally meet in a line, and if we have three planes, two of them meet in a line and then the third plane meets that line in a point, so three planes generally meet in a single point. However, there are exceptions: if among the three planes there are two parallel ones, then there's no common meet. Here's an example of three planes: x + 2y - z = -3, 3x + 7y + 2z = 1, and 4x - 2y + z = -2, each equation representing a plane in our three-dimensional space. Suppose we want to find a common point of those three planes. That means we have to solve this system of equations, and we know how to do that: we rewrite it in matrix form. There is the coefficient matrix for the left-hand side, here are the variables x, y and z, and here is the vector of numbers on the right-hand side. To solve this we have to multiply both sides by the inverse of the matrix. The determinant of the matrix is 1 times the 2x2 determinant 7 - (-4) = 11, minus 3 times the 2x2 determinant 2 - 2 = 0, plus 4 times the 2x2 determinant 4 - (-7) = 11, for a total of 55. That's nonzero, so the matrix is invertible. How do we find its inverse? Let's do it in a few steps. The first step, and the one that involves the most work, is to find a lot of little 2x2 determinants. To do that efficiently, let me introduce a new notation: let's say that the "D min" of a matrix is its matrix of determinants of minors, meaning that for every position I calculate the determinant of the corresponding minor. In the first position it would be 7 - (-4) = 11. It's just a notation to let us say explicitly how we go from here to here. We've already calculated three of these determinants of minors; the determinant of the minor corresponding to the next position is 3 - 8 = -5, and this one is 1 - (-4) = 5.
This one here has determinant 2 - (-3) = 5; over here, -6 - 28 = -34; right here, -2 - 8 = -10; and last but not least, 7 - 6 = 1. That's the majority of the work in calculating the inverse. Let's call that the D min of the original matrix, the matrix of determinants of minors. So to calculate the inverse, what do we do? We take 1 over the determinant, that's 1/55; then we take this D min matrix and transpose it, so that instead of going across we go down; and after we've taken the transpose, we multiply all of the entries in the odd positions by -1, following the checkerboard sign pattern: where there was a minus sign I've changed it to a plus, where there was nothing I've put a minus sign, and so on. What we get is the inverse of the original matrix, and I've multiplied both sides by that inverse, so there's just (x, y, z) on the left-hand side, and now we have the inverse times our vector (-3, 1, -2). What do we get? In the first entry, -3 times 11 is -33, plus 0, minus 22, that's -55, divided by 55 is -1; over here, -15 plus 5 plus 10, that's 0; and over here, 3 times 34 is 102, plus 10 is 112, minus 2 is 110, which divided by 55 gives 2. So there's the solution (x, y, z): the common point of intersection is the point [-1, 0, 2], written as a row vector in square brackets because it's a point. Now we can check that this point satisfies the equations by plugging things in: -1 plus 0 minus 2 is indeed -3, -3 plus 0 plus 4 is indeed 1, and -4 plus 0 plus 2 is indeed -2. So this really is a common point of those three planes.

[Lines in 3D] Now let's talk a little bit about lines in three dimensions. A line can be described in a number of different ways. Perhaps the simplest way is to think of it as determined by two points: if you have two points, there's a unique line through them.
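Before going on, the whole "D min" recipe used on the three-plane example above can be sketched in a few lines of Python (exact fractions; the coefficient matrix is the one reconstructed from the arithmetic in the lecture):

```python
from fractions import Fraction

def inv3(M):
    """Inverse of a 3x3 matrix by the lecture's recipe: form the matrix
    of determinants of minors ("D min"), transpose it, apply the
    checkerboard signs, and divide by the determinant."""
    def minor_det(i, j):
        rows = [r for k, r in enumerate(M) if k != i]
        sub = [[r[k] for k in range(3) if k != j] for r in rows]
        return sub[0][0]*sub[1][1] - sub[0][1]*sub[1][0]

    dmin = [[minor_det(i, j) for j in range(3)] for i in range(3)]
    det = M[0][0]*dmin[0][0] - M[0][1]*dmin[0][1] + M[0][2]*dmin[0][2]
    if det == 0:
        raise ValueError("matrix is not invertible")
    # transpose of D min, with (-1)^(i+j) signs, over the determinant
    return [[Fraction((-1)**(i + j) * dmin[j][i], det) for j in range(3)]
            for i in range(3)]

# The three planes x + 2y - z = -3, 3x + 7y + 2z = 1, 4x - 2y + z = -2:
A = [[1, 2, -1], [3, 7, 2], [4, -2, 1]]
b = (-3, 1, -2)
sol = tuple(sum(row[k] * b[k] for k in range(3)) for row in inv3(A))
assert sol == (-1, 0, 2)   # the common point of the three planes
```

The determinant of `A` is 55 and the D min matrix is [[11, -5, -34], [0, 5, -10], [11, 5, 1]], matching the numbers worked out by hand above.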
Another way is to think about it in terms of a point and a vector: if we have a point and a direction vector, there's exactly one line through the point with that direction. Yet another way to think about a line is as the intersection of two planes: given two planes, their meet is a line, so we can describe a line by giving two planes which intersect along it. Let's start with the two-points idea. Say we start with the point A = (3, 1, 1), using part of the box to locate it, and the point B = (2, 2, -1): 2 in this direction, 2 in this direction, and -1 in the z-direction, so it's down here. Those two points determine a line; let's call it L. In terms of the affine combinations we talked about earlier, we can think of a point on this line AB (that's our notation for the line) as a combination of the points A and B determined by a parameter lambda: (1 - lambda)A + lambda B. We can rewrite that by bringing the A out, thinking of it as the point A plus lambda times the vector AB, because the vector AB is the difference between the coordinates of B and A. So over here, we can think of this line as described by, first of all, the point A = (3, 1, 1), plus lambda times the vector joining A to B. What is that vector? We get it by taking the differences of the coordinates: 2 - 3 = -1, 2 - 1 = 1, and -1 - 1 = -2. So the vector AB, from here to here, is (-1, 1, -2), and I've written it as a column to remind you that it's actually a vector. So there's a point, plus lambda times that vector: in other words, you can start at this point and go any multiple of this vector, either in this direction, or in the opposite direction if the multiple is negative. That's a parametric equation for this line, and it has two ingredients: a point on the line and a direction vector for the line.
These two things are not unique: for the point on the line we could have chosen any point, and for the direction vector we could have chosen any multiple of it, times 5 or times -3/2 say, and it would still be a direction vector. A line has not only a parametric equation; it also has a Cartesian equation, meaning an equation involving x, y and z, our basic coordinates in the Cartesian framework. So let's have a look at that same line, (3, 1, 1) + lambda (-1, 1, -2), and now let's think of the point on the line as (x, y, z). We want to find a relation, or relations, between x, y and z that determine the line. The first thing to realize is that there are really three equations here: one for the x-coordinate, 3 - lambda = x; one for the y-coordinate, 1 + lambda = y; and 1 - 2 lambda = z. So the single equation involving vectors and points is equivalent to these three equations involving just numbers. Now we can solve each for lambda: bring the 3 to the other side and divide by -1, so lambda = (x - 3)/(-1); here lambda = (y - 1)/1 (we don't really need the division by 1, but I'll put it there anyway); and here, bringing the 1 to the other side and dividing by the coefficient, lambda = (z - 1)/(-2). Having solved for lambda, we can now forget about lambda: we know that this has to equal this has to equal this, a relation between x, y and z. And there are really two equations here, this equals this, and this equals this; I've written it as one chain, but they're really two separate equations. This is the Cartesian equation of the line, or rather a Cartesian equation of the line, because it's highly non-unique. How is it related to the original parametric equation? You can see the relation, I hope: the point (3, 1, 1) ended up as the numbers subtracted from x, y and z in the numerators, and the direction vector (-1, 1, -2) ended up in the denominators.
vector minus 1 1 minus 2 ended up here in the denominators minus 1 1 minus 2 so the point on L occurs in the numerator and the direction vector for the line is in the denominator so we could actually once you've done this a few times you can go immediately from here to there without going through that intermediate calculation and I pointed out once again that this is highly non unique because we could replace the point 3 1 1 with any other point on the line and instead of the direction vector and we can multiply that by any scalar and would still be the same and I've already pointed out that it really is two linear equations and what is a linear equation a linear equation we've said is a plane so this description of a line in Cartesian form is really telling us two planes that meet in a line so there's our line and we found one plane that's that plane there the plane X minus three over minus one equals y minus one over one that's a plane and there's another plane this equals this determines another plane and those two planes meet in our line and again those planes are highly non unique you could change them as long as they all go through that line there's an infinite number of possibilities so there is the parametric equation for a line there's a Cartesian equation for a line let's do another example like that Another example that it has just a little twist in it to be careful about so here's another parametric equation for a line line through the point 4 minus 1/5 with direction vector 2 0 minus 3 and I've written down the three equations 4 plus 2 lambda equals the x-coordinate minus 1 plus 0 equals the y-coordinate and 5 plus -3 lambda equals the set coordinate now if we do what we did last time we solve for lambda for each of the equations but notice what happens in the first equation we get X minus 4 over 2 fine but then the second equation there isn't a lambda in there so we can't really solve that second equation for lambda all right so let's carry on with 
the third equation: we can solve that for lambda, getting lambda = (z - 5)/(-3). So we can set this equal to this: (x - 4)/2 = (z - 5)/(-3). What about that second equation? Well, that's just a separate equation, and we simply repeat it over here: y = -1. Now if we ignore lambda, we still have two equations, but they're a little different in relation to each other than previously. This is one equation, that's one plane; and y = -1 is another plane, a plane parallel to the x-z coordinate plane. That's still a plane, and those two planes together, those two equations, determine the line. So this is a Cartesian equation for that line. All right, here's a little example of a problem. Suppose we're asked to find the meet of the line L, with this parametric equation, and the plane pi with Cartesian equation x + 3y - z = 5. So we have a given line and a given plane, and we want to find the point where they meet. Well, it's not too hard: we're going to substitute. Writing the equation of the line out in coordinates, combining the x's, the y's and the z's, the x is 1 - lambda, the y is 2 + 4 lambda, the z is -3 + lambda; that's another way of writing the line. Now I plug this into the equation of the plane: there's x, there's y, there's z, so we get (1 - lambda) + 3(2 + 4 lambda) - (-3 + lambda), and that should equal 5. That's a single equation involving lambdas. How many lambdas are there all together? There's -1 there, +12, and -1, for a total of 10 lambdas on the left-hand side. Then we bring all the constants to the other side: there's 5 to begin with, and over here we have 1 + 6 + 3, that's 10; bringing that 10 to the other side to subtract, we get 5 - 10 = -5, and so lambda = -1/2. Then to find the actual point we plug lambda = -1/2 in: here that gives 3/2, here it gives 0, and here it gives -7/2. So (3/2, 0, -7/2) is a point which lies both on the line and on the plane. You can check: put 3/2 in here, 0 there, -7/2 there; we get 3/2 + 7/2, and that is indeed 5, that's 10 halves. So we've found the place where the line and the plane meet. All right, now let's have a go at a slightly more challenging problem. Suppose we have two planes and we want to describe their line of intersection, the line in which they meet; so we're going to talk about the meet of two planes. Here is one plane, given by a single linear equation, and here's another plane, given by another equation, and here's the picture we have in our minds: one plane somewhere like this, another plane somewhere like this, and the two planes meeting in a line. We want to describe that line. Now, a line is determined by a number of points on it, say two points. So one way of finding this line is to find two points on it, and one way of finding a point on the line is to intersect the two planes with a third plane: we take an arbitrary third plane, which meets that common line in a point. The advantage of using a third plane is that we know how to find the meet of three planes; that's what we've already done before. But we want to do this not just once but twice, so that we can determine the line: we don't want just one point on the line, we need two points to determine it. So we're going to do a little trick here. We're going to kill two birds with one stone by introducing a third plane which has a variable in it, and that variable is going to allow us to find two different values at once. The third plane we introduce is going to be of the simplest possible kind, because we don't want to make life harder for ourselves: the plane with equation z = lambda, where lambda is a parameter, like 1 or 2 or 0. So that's a plane
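As a quick numerical check of the line and plane meet just worked out, here is a minimal Python sketch (my own illustration, not part of the lecture); the numbers are exactly those of the example.

```python
# Check the meet of the line (x, y, z) = (1 - lam, 2 + 4*lam, -3 + lam)
# with the plane x + 3y - z = 5, as in the lecture.
def meet_line_plane():
    # Substituting the parametric form into the plane equation gives
    # (1 - lam) + 3*(2 + 4*lam) - (-3 + lam) = 5, i.e. 10*lam + 10 = 5.
    lam = (5 - 10) / 10                      # solve for lambda
    point = (1 - lam, 2 + 4*lam, -3 + lam)   # plug lambda back into the line
    return lam, point

lam, p = meet_line_plane()
print(lam, p)   # -0.5 (1.5, 0.0, -3.5)
```

This reproduces the lecture's answer lambda = -1/2 and the meeting point (3/2, 0, -7/2).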
that plane is, in this case, parallel to the x-y plane, a horizontal plane, and we're going to throw it in with the two equations we already have. Now we have three equations, so we can write down the system: there are the coefficients of the first plane, there are the coefficients of the second plane, and the third plane just has a single z in it; on the right-hand side we have -3, 1 and lambda. This lambda doesn't bother us: we don't need numbers there, since we know how to invert the matrix even if there are variables on the right. So we do exactly what we did before. We calculate the inverse of the matrix, multiply both sides by that inverse, and then we find x, y, z in terms of these numbers, including the lambda. How do we find the inverse of the matrix? The main tool here is the matrix of minors, the matrix formed by the determinants of the minors of the original matrix; I'll let you check that these are the determinants of the minors of that matrix. The determinant of the matrix itself is relatively simple to find if we expand along the bottom row, with the sign pattern plus, minus, plus: it's plus this, minus this, plus 1 times this two-by-two determinant, which is 7 minus 6, or 1. So the determinant is 1; it's a very simple calculation, because that row is essentially mostly zeros. The inverse matrix is then obtained from the matrix of minors: first multiply by 1 over the determinant (but 1 over 1 is 1, so we don't bother), then transpose, then put minus signs in the odd places. And when we multiply by (-3, 1, lambda), we get an expression involving not just numbers but also some lambdas. What is this telling us? It's telling us the point of intersection of these three planes, where the third plane has this variable parameter in it. So as that lambda varies, we're getting all the intersections between these three planes; in other words, we're running along the line of intersection of the original two planes. And what is that point? Here it is: I've peeled off the constant term, the part not involving lambda, which is (-23, 10, 0), and I've taken the part involving lambda and written it as plus lambda times (11, -5, 1). That's a parametric equation of a line: the line through this point with this as direction vector, and that is the line of intersection, or the meet, of the two planes. We can check this in the following way. First of all, this point really does satisfy both equations: -23 + 20 - 0, that's -3, yes; and -23 times 3 is -69, plus 70 is 1, plus 0, that is indeed 1. So this point really does lie on both of those planes. What about the vector? What should its relationship be with the two planes? Well, it's the direction of the line, a common direction that both planes have, so it should be a direction vector for both planes. If we plug it into the left-hand side of the first equation we get 11 - 10 - 1, that's 0. This is telling us that if we add a multiple of this vector to a solution, the solution doesn't change; in other words, we're still staying on the plane. That's the meaning of it. And if we plug the vector into the left-hand side of the second equation, we get 33 - 35 + 2, also 0, telling us that if we change x, y and z by adding this amount, we're still staying on that plane too. So it really is a direction vector for both planes, and we've verified, or checked, our solution in terms of our original data. This is a very interesting idea. It's a method found in almost no linear algebra texts, because most linear algebra texts do not emphasize the formula for the inverse of a 3x3 matrix like we have been doing. We've so far almost completely avoided row reduction in our development of the theory. This is a big conceptual advantage for you, and it is our way of introducing parameters into situations where we have fewer equations than variables; so keep that in mind for later. A natural question you may have about the previous example is: why did we introduce the third plane in the form z = lambda, and not x = lambda or y = lambda? In fact we could have done either of those and we would still have gotten a correct answer. So usually it doesn't matter which we choose, but sometimes it does matter, and let's have a look at an example like that. Here is another pair of planes: the plane x + 3y - 2z = 2 and the plane 2x + 6y - 5z = 3. Suppose we want to find where those two planes meet. If we introduce the third plane z = lambda, just like in the previous example, then the matrix we get has rows (1, 3, -2), (2, 6, -5), (0, 0, 1), and if we calculate its determinant to see whether it's invertible, we calculate from here: it's this times this two-by-two determinant, and that two-by-two determinant is 0. So the determinant of the 3x3 matrix is 0; it's not invertible, you cannot solve the system, and our method fails. What we've done wrong is that we've introduced the wrong third plane, because this little two-by-two determinant was 0. In this case we should use y = lambda or x = lambda instead. Let's use y = lambda. Then our third plane is y = lambda and the system looks like this. You can check, going through the same story: there's the matrix of minors, the determinant is 1, and the inverse of our original matrix is right there, obtained from the matrix of minors by taking the transpose and multiplying in the minus signs. Then we multiply through and get (4 - 3 lambda, lambda, 1); separating that out, it's (4, 0, 1) plus lambda times (-3, 1, 0). So we do get a line, the line of intersection, or the meet, of those two planes.

Parametric Equations

It's also useful to have a parametric equation for a plane. So if we have a plane pi,
and if we know three points on the plane, and those three points are not all on a line (they're not collinear), then we can say the plane is determined by those three points; we'll just write pi = ABC. Then we can introduce direction vectors for the plane in terms of those three points. We can look at the vector joining a to b, call it v, and the vector joining a to c, call it u. Those are direction vectors for the plane: vectors which lie in the plane. For example, if the plane is the plane of the wall, then this is a direction vector for the wall; so is this; and so is this, even though the vector is not actually in the wall, because I can translate it so that it lies in the wall. However, this one, pointing towards you, is not a direction vector for the plane, because even if we translate it, it doesn't lie entirely in the plane. Of course a plane has many possible different direction vectors, and we're just choosing two of them, so they are highly non-unique. But then we can describe the plane, or a general point on the plane, in terms of the original point a and these two directions: we can get to any point on the plane by starting at a and going some multiple of v plus some multiple of u. We can write it this way: an arbitrary point p on the plane can be written as, start with the point a, move in the direction v by a multiple lambda, say, and then move in the direction u by a multiple beta. For example, suppose a is the point (1, 0, -1), b is the point (2, 1, 3) and c is the point (-4, 2, 5). Then we can express the plane they determine parametrically: start at the point a and go a multiple of the vector from a to b plus a multiple of the vector from a to c. How do we write down the vector from a to b? We take the difference of coordinates: 2 - 1, 1 - 0, 3 - (-1), so that's (1, 1, 4); and the vector from a to c is (-4 - 1, 2 - 0, 5 - (-1)), that's (-5, 2, 6). So every point in this plane can be expressed as this point plus some multiple of one vector plus some multiple of the other; it's a parametric equation for the plane. Now a natural question is how to go from the Cartesian equation, involving x's, y's and z's, to the parametric equation and vice versa. So first let me show you how to go from the parametric equation of a plane to the Cartesian equation of the plane. The parametric equation of a plane looks like this: it has a reference point on the plane and then two direction vectors for the plane; we start at the point and go any multiple lambda of the first vector v plus any multiple beta of the vector u, and that gives an arbitrary point (x, y, z) on the plane. Now we want to go to an equation which is Cartesian, in other words simply a linear equation in x, y and z representing the same plane. Okay, so here's a picture. There's our point a; there's our vector v, which lies in the plane somewhere; there's the vector u, another vector lying in the plane; and here's an arbitrary point p in the plane. What relation can we use? The relation is that the vector joining a to p is in the same plane as the vectors v and u: all three of these vectors lie in a plane. Now how do we capture that algebraically? Well, we remember the determinant. We originally thought of the determinant in terms of a trivector: given three vectors, the determinant represents the volume, or the signed volume, of the parallelepiped formed by those three vectors, and if those three vectors are coplanar, then that determinant is zero. So the determinant being zero tells us that the vectors are coplanar, and that's the criterion we're going to use. Because the determinant is just the determinant of a matrix, we can write the vector AP in one column, the vector v in another column and the vector u in another column, and calculate the determinant of that. So how do we calculate AP, the vector joining this point to this point? It's just the vector whose coordinates are x
minus 1, y minus 0, and z minus (-1). So there's the vector AP; I've copied the vector v and the vector u, and now I'm going to take the determinant of the whole thing and set it equal to 0. That's the condition of coplanarity, and it is the condition that tells us that p lies in this plane. So what's the determinant? It's (x - 1) times this two-by-two determinant, 6 minus 8, that's -2; minus (y - 0) times this two-by-two determinant, 6 plus 20, that's 26; and then plus (z + 1) times this two-by-two determinant, 2 plus 5, that's 7. If we set that equal to 0 and collect all the x's, y's and z's on one side and all the constants on the other side, what do we get? We get -2x - 26y + 7z on the left-hand side, and on the right-hand side: there's a 2 from over here, a 0, and a 7, so altogether a 9, and when we bring it to the other side we have -9. So -2x - 26y + 7z = -9 is a Cartesian equation for the same plane that the parametric form describes. That's how to go from the parametric equation to a Cartesian equation: through a determinant. In standard linear algebra courses the inner product and the cross product are used here, but we have not introduced the inner product and the cross product. Those are in fact metrical objects, which depend on notions of perpendicularity and distance. This is a purely affine derivation of essentially the same thing, and it's in some sense more fundamental than the treatment you find in most linear algebra textbooks. Now, going from a Cartesian equation of a plane to a parametric equation is pretty simple: all we need is three points on the plane. So if our plane is 2x - 3y + z = 5, we just have to cook up three points which lie on the plane, and it's easy to do that, because we can let any two of the variables be anything we want and solve for the third one; use simple values. So we would say: let x be 1 and y be 0, then z has to be 3. If we let x and y both be 0, then z is 5. If we let x be 2 and y be 1, for example, then that would be 4 minus 3, so z has to be 4. Okay, so there are three points, out of billions of points you could have chosen. Now once you have three points, you just write down the parametric equation of the plane in terms of one of the points, say the point a, plus the two vectors formed by, say, a to b and a to c. You could also use b to c; it doesn't matter which vectors you use, as long as they are two vectors in different directions lying in the plane. The vector from a to b, this minus this, is (-1, 0, 2); then the vector from a to c is this minus this, that's (1, 1, 1). So there's an example of a parametric equation, which is of course much less canonical than the Cartesian one. The Cartesian form is unique up to scaling by a constant: we can multiply the equation by 5 and still have the same plane, but other than that it is a unique form. So the Cartesian form is really, in some sense, preferable to the parametric form, but the parametric form is quite useful in many concrete situations. Now let's have a look at some exercises; we've done a lot of stuff this lecture, so it's time for you to practice. Only two exercises today, but the first one is a little bit long. In our first exercise, here are three planes, call them pi 1, pi 2 and pi 3, and I want you to practice visualizing things. First of all, sketch each plane in the usual x, y, z coordinate system. Secondly, find parametric equations for the lines obtained by meeting any two of the planes; any two of these planes meet in a line, we expect, and I want you to find parametric equations for those lines. Let's write the meet of pi 1 and pi 2 as pi 1 wedge pi 2, with a down wedge; there are three such meets. Also find Cartesian equations for these lines as well, so that you understand the three lines formed by these three planes: parametric and Cartesian equations for those lines, please. And find the common meet of the three planes; that will be the place where these three lines also meet. And the second exercise: find the Cartesian equation for this plane, given here parametrically, and find where that plane meets this line, the line given in terms of Cartesian equations. Okay, so we've done lines and planes and their equations: a very important basic understanding for three-dimensional geometry. Now we're in a position to think about transformations of three-dimensional geometry: reflections, dilations, rotations, projections, all kinds of very practical applications of 3x3 matrices, essentially. So I hope you'll join me for that. I'm Norman Wildberger; thanks for listening.
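The parametric-to-Cartesian conversion for a plane worked in this lecture can be checked in code. Here is a minimal Python sketch (my own illustration, not from the lecture) that expands the coplanarity determinant by cofactors of its first column, for the plane through a = (1, 0, -1), b = (2, 1, 3), c = (-4, 2, 5):

```python
# Cartesian equation of the plane through three points, via the lecture's
# coplanarity condition det[AP | v | u] = 0 (a purely affine calculation:
# the coefficients below are just the 2x2 cofactors of the AP column).
def plane_through(a, b, c):
    v = [b[i] - a[i] for i in range(3)]   # direction vector from a to b
    u = [c[i] - a[i] for i in range(3)]   # direction vector from a to c
    # Expanding det[[x-a0, v0, u0], [y-a1, v1, u1], [z-a2, v2, u2]] = 0
    # along the first column gives the coefficients A, B, C:
    A = v[1]*u[2] - v[2]*u[1]
    B = -(v[0]*u[2] - v[2]*u[0])
    C = v[0]*u[1] - v[1]*u[0]
    D = A*a[0] + B*a[1] + C*a[2]          # constant term: substitute the point a
    return A, B, C, D                     # the plane A*x + B*y + C*z = D

print(plane_through((1, 0, -1), (2, 1, 3), (-4, 2, 5)))   # (-2, -26, 7, -9)
```

The output matches the equation found in the lecture, -2x - 26y + 7z = -9, up to an overall scalar.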
https://www.ck12.org/flexi/math-grade-6/simplify-variable-expressions-involving-addition-and-subtraction/how-do-you-combine-like-terms/
Flexi answers - How do you combine like terms? | CK-12 Foundation

Question: How do you combine like terms?

Flexi Says: Like terms are expressions that have identical variable parts; by this definition, you can only combine terms whose variable parts are identical. Combining like terms applies only to addition and subtraction, not to multiplication and division. The numerical part of an algebraic term is called the coefficient. To combine like terms, you add (or subtract) the coefficients of the identical variable parts.
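The rule can be sketched in a few lines of code. This plain-Python illustration (the function and the term representation are my own, not CK-12's) adds the coefficients of identical variable parts:

```python
# Combine like terms by adding the coefficients of identical variable parts.
# Each term is represented as a (coefficient, variable) pair.
def combine_like_terms(terms):
    totals = {}
    for coeff, var in terms:
        totals[var] = totals.get(var, 0) + coeff   # like terms share a variable part
    return totals

# 3x + 2y + 5x - y  combines to  8x + y
result = combine_like_terms([(3, "x"), (2, "y"), (5, "x"), (-1, "y")])
print(result)   # {'x': 8, 'y': 1}
```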
https://www.journalofaccountancy.com/issues/2005/dec/howtoexcelatoptionsvaluation/
FINANCIAL REPORTING

How to Excel at Options Valuation
Build a flexible, spreadsheet-based lattice model for better calculations.

BY LUIS BETANCOURT, CHARLES P. BARIL AND JOHN W. BRIGGS
December 1, 2005

Please note: This item is from our archives and was published in 2005. It is provided for historical reference. The content may be out of date and links may no longer function.

Editor's note: This article uses a simplified example to illustrate how a lattice model works. In the exhibits, the option term is only four years—much shorter than the 10-year life of a typical employee stock option. So in practice the calculations will be more extensive than in these exhibits and companies may have to divide the time period into additional intervals.

The guidance from FASB is clear: Companies must determine and report the fair value of stock options they use to compensate employees. But because employee stock options can't be traded publicly, their fair value is not readily available and must be estimated using option-pricing models. FASB Statement no.
123(R), Share-Based Payment (www.fasb.org/pdf/fas123r.pdf), allows entities to use any valuation model that is based on established principles of financial economic theory and reflects all substantive characteristics of the options. Both the Black-Scholes-Merton and lattice models meet these criteria. The former's relative simplicity makes it popular with smaller companies—but it may not be adequate for public companies whose employees often exercise their options early. That calls for calculations a lattice model can better accommodate. (For more information, see "Compare and Contrast.")

Neil J. Beaton, CPA/ABV, partner in charge of valuation services at Grant Thornton LLP in Seattle, said his firm has performed numerous engagements related to FASB Statement no. 123(R) and found a lattice model to be substantially more flexible than a Black-Scholes model, especially with respect to restricted employee stock option nuances such as vesting, early exercise and blackout periods. "Once we built our initial lattice model," he said, "conforming it to the widely varying requirements of our diverse client base was fairly easy and has produced results more accurate than would have been possible with a Black-Scholes model alone."

Compare and Contrast

Black-Scholes-Merton model
- Was developed for the valuation of exchange-traded options.
- Is the most commonly used closed-form valuation model.
- Is adequate for companies that do not grant many stock options.
- Makes it easier to compare the financial results of different companies using it.
- Is simpler to apply than a lattice model because it is a defined equation.
- Cannot accommodate data describing unique employee stock option plans.
- Does not allow you to vary assumptions over time.
- Assumes options are exercised at maturity.
- Uses estimated weighted averages for expected volatility, dividend rate and risk-free rate, which it assumes are constant over the term of the option. (These weights, calculated outside of the model, are based on the company's past experience. If no such data exist, the company follows the guidance in SEC Staff Accounting Bulletin no. 107 (www.sec.gov/interps/account/sab107.pdf).)
- Uses an option's estimated weighted average life—rather than its term—to consider the possibility of early exercise when computing the option's fair value.

Lattice model
- Is more complex to apply than the Black-Scholes model.
- Provides more flexibility to companies that grant many stock options.
- Requires staff with considerable technical expertise.
- Can accommodate assumptions related to the unique characteristics of employee stock options.
- Can accommodate assumptions that vary over time.
- May lead to more accurate estimates of option compensation expense.
- Is flexible enough to calculate the effects of changes in volatility factors, risk-free interest rates, dividends and estimates of expected early exercise over the option's term.
- Requires data analysis to develop its assumptions.
- Requires in-house programming or third-party software.
- May be the only appropriate model in some circumstances—for example, when an option's exercise is triggered by a specified increase in the underlying stock price.

But even if employers know which valuation model works better for them, they still may have doubts about how to build it. An earlier JofA article (see "No Longer an 'Option,'" JofA, Apr. 05, page 63) explained the workings of the Black-Scholes-Merton model. This month's article provides detailed instructions for building a lattice model by making the necessary calculations in Excel. One company that chose to implement such a model is the Marysville, Ohio-based Scotts Co., a manufacturer of horticultural products. Its CFO, Chris Nagel, CPA, told the JofA in the April article on Black-Scholes that he preferred the lattice model because of its exceptional ability to capture assumptions about options' term and volatility.
"We had adopted Black-Scholes but now believe a lattice model is appropriate for valuing options," Nagel said. "To value options, you have to make assumptions about the likely term and volatility, and I think a lattice model captures those variables better." Because the lattice model makes it easy to vary assumptions and inputs over time, entities that grant a great many stock options to their employees will prefer its flexibility to the relatively rigid restrictions of the Black-Scholes-Merton model, which is more suitable for companies whose employee compensation includes few stock options. A lattice model can be complex for a company to implement, though. "Luckily, I'm not the one who has to grind through the numbers," Nagel said.

But what if, in your company, you are the CPA who performs that function? If that's the case, follow the examples below that illustrate the structure and functions of a lattice model.

THE BASICS

A lattice model assumes the price of stock underlying an option follows a binomial distribution, a type of probability distribution in which the underlying event has only one of two possible outcomes. For example, with respect to a share of stock, the price can go up or down. Starting at a point we'll call time period zero, the assumption of either upward or downward movements over a number of successive periods creates a distribution of possible stock prices. This distribution of prices is referred to as a lattice, or tree, because of the pattern of lines used to graphically illustrate it. The lattice model uses this distribution of prices to compute the fair value of the option.

Exhibit 1 (below) illustrates an Excel stock-price tree based on the following assumptions:
- Current stock price of $30.
- Risk-free interest rate of 3%.
- Expected dividend yield of 0%.
- Stock-price volatility of 30%.
- Option exercise price of $30.
- Option term of four years.

Exhibit 1

At the grant date, year 0, the stock price is $30 (cell B7).
The model assumes that stock prices will increase at the risk-free interest rate (B15) minus the expected dividend yield (B16), then plus or minus the price volatility (B12) assumed for the stock. Thus, during year 1, the stock price increases by the risk-free rate, 3%; is unaffected by the assumed 0% expected dividend yield; and then either increases or decreases by 30% due to the expected volatility. The formula for cell E12, the year 1 upward path, is =D21*(1+B15-B16)*(1+B12). For the downward path, the formula for E29 is =D21*(1+B15-B16)*(1-B12). The resulting two possible outcomes for the stock price at the end of year 1 are an increase to $40.17 (E12) or a decrease to $21.63 (E29). In lattice terminology these two possibilities are referred to as nodes. Two similar possibilities for the end of year 2 emanate from each of the year 1 nodes. With the number of nodes doubling in each successive time period, the tree grows to 16 nodes after four years. Exhibit 1 also contains the probabilities for each node on the tree. For example, at the end of year 2 the stock price of $53.79 (F8) has a probability of 0.25 (F9): with a probability of 0.50 that the price will move up in any year, the probability of two successive upward movements is 0.50 times 0.50, or 0.25. In fact, two nodes reflect a stock price of $28.96 at the end of year 2 (F16 and F25). F16 represents the result of an upward movement in price in year 1 followed by a downward movement in year 2; F25 reflects a downward price movement in year 1 followed by an upward movement in year 2. Similar to the probability of two successive periods of upward price movement, the probabilities for F17 and F26 are 0.25. The probability for each terminal node (column H) is 0.0625, corresponding to one particular sequence of four successive movements in the stock price.
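The spreadsheet step described above can be mirrored outside Excel. Here is a minimal Python sketch of one time step of the tree (the function name and parameter defaults are mine; the values come from Exhibit 1's assumptions):

```python
# One step of the binomial stock-price tree: grow the price at the risk-free
# rate minus the dividend yield, then branch up or down by the volatility.
def next_prices(price, riskfree=0.03, dividend=0.0, vol=0.30):
    drifted = price * (1 + riskfree - dividend)
    return drifted * (1 + vol), drifted * (1 - vol)   # (up node, down node)

up, down = next_prices(30.0)                 # year 1 from the $30 grant price
print(round(up, 2), round(down, 2))          # 40.17 21.63, matching E12 and E29
```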
Know the Options
Unlike stock options that are traded on an exchange, employee stock options:

- Can be exercised, but not sold or transferred.
- Cannot be exercised during “blackout” periods, which companies typically declare just before releasing their earnings or at other times to prohibit employee purchases or sales of company stock or options.
- Typically have terms of 10 years, in contrast to most traded options’ terms of less than one year.
- Are subject to vesting periods of up to four years, during which the options cannot be exercised, and are forfeited by those who leave the company before becoming vested.
- Often are exercised early for reasons such as divorce, separation from service or financial need.

CARRY ON CRUNCHING
After developing a stock-price tree, the next step is to calculate the intrinsic value of the option at each terminal node by subtracting the option’s exercise price (B8) from the stock price at that node. If the stock price at the option’s expiration date exceeds the exercise price, the option is said to have intrinsic value and the options are assumed to be exercised. Otherwise, the option has no intrinsic value. Exhibit 2, below, presents an Excel template that calculates the option’s fair value. Columns J through M are added to exhibit 1’s stock-price tree (hidden here for simplicity). This example presumes that option holders will not exercise their options early. Rows 5 through 20 represent the 16 terminal nodes from column H in exhibit 1.

Exhibit 2

In column K the intrinsic values of the option at the corresponding nodes are computed using Excel IF statements to determine whether the stock prices at those nodes exceed the option’s exercise price. For example, cell K5’s formula is =IF(H5>B8,H5-B8,0). That formula calculates and displays the option’s intrinsic value, $66.44, the amount by which the terminal stock price exceeds the exercise price for the path reflecting four successive upward price movements.
Column K shows the option is “in the money,” or has intrinsic value, at five of the 16 terminal nodes (K5, K6, K7, K9 and K13). In column M the intrinsic values of the option are multiplied by their respective probabilities (column L). Then the present value of each is determined using the risk-free interest rate (B15). The formula in cell M5, =-PV(B15,J5,,K5*L5), computes the present value of the probability-weighted intrinsic value for the topmost terminal node (H5) in exhibit 1. (Editor’s note: Normally Excel’s PV function returns a negative value because Excel considers present value to be the outflow required to pay for future inflows. To prevent any confusion, cell M5’s PV statement begins with a negative sign and therefore expresses the present value as a positive.) Thus, the $3.69 present value represents the $66.44 intrinsic value weighted by its 0.0625 probability and discounted at a 3% rate for four years. Corresponding formulas in cells M6 through M20 calculate the present value for each of the other 15 terminal nodes in column H of exhibit 1. The summation (M22) of column M, $8.56, is the option’s fair value and the amount of expense to be recognized. A fuller application of the lattice model would allow CPAs to consider changes in stock price and other factors on at least a weekly basis.

BEYOND THE BASICS
The lattice model has a key advantage over its Black-Scholes-Merton counterpart: it offers CPAs several ways to incorporate assumptions about the projected early exercise of options. One approach, demonstrated in FASB Statement no. 123(R), assumes the options will be exercised if the stock price reaches a selected multiple of the exercise price. Exhibit 3, below, illustrates this approach using a 2.0 early exercise factor (cell B9) that assumes all options will be exercised at pretermination nodes in years 3 or earlier if the stock price reaches $60, double the $30 exercise price.
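Before layering on early-exercise assumptions, the basic computation described above (terminal intrinsic values, probability weights, discounting at the risk-free rate) can be checked in a few lines of Python. This is a sketch under the article's stated assumptions, not the article's own spreadsheet; names are illustrative:

```python
# Fair value of the option: probability-weighted, discounted intrinsic
# values across all 16 terminal paths (no early exercise).
from itertools import product

S0, K, r, vol, years = 30.0, 30.0, 0.03, 0.30, 4
path_prob = 0.5 ** years                 # 0.0625 per terminal path

fair_value = 0.0
for path in product((1, -1), repeat=years):
    price = S0
    for move in path:                    # rebuild the path's stock price
        price *= (1 + r) * (1 + move * vol)
    intrinsic = max(price - K, 0.0)      # column K's IF statement
    # column M: probability-weight, then discount at the risk-free rate
    fair_value += path_prob * intrinsic / (1 + r) ** years
```

Rounding `fair_value` to two decimals gives 8.56, matching the $8.56 fair value summed in cell M22.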
With all other assumptions held constant, the stock-price tree presented in exhibit 1 remains valid for exhibit 3. Note that the stock’s price reaches $60 prior to expiration only on the path (cell G6 in exhibit 1) that represents three successive years of upward price movements. In exhibit 3 the options are assumed exercised with a gain to the employee of $42.02 (K13), the difference between the year 3 stock price of $72.02 (G6 in exhibit 1) and the $30 exercise price (B8).

Exhibit 3

When early exercise is considered, each node on the stock-price tree must be examined to determine whether the options will be exercised early. Thus, exhibit 3 contains 30 rows, one for each node in the exhibit 1 tree. The formula in cell L13, =IF(AND(G6>=(B8*B9),L5=0,L8=0),0.5^J13,0), examines whether the cell G6 stock price in the exhibit 1 tree equals or exceeds the early exercise multiple. If the stock price meets this criterion and early exercise has not occurred in prior periods, the probability (G7) of this exhibit 1 node is multiplied by the option’s intrinsic value (K13) and discounted by the risk-free interest rate (B15) to determine the path’s present value (M13). Because the exercise-price multiple is not met at any other pretermination node, a probability of zero is specified in cells L5 to L6, L8 to L11 and L14 to L20. Of the 16 potential termination nodes in exhibit 1, the uppermost two (H5 and H7) are exercised early at the end of year 3. Since they are not outstanding in year 4, their corresponding cells in exhibit 3 (L22 and L23) have a probability of zero. In year 4 the intrinsic values for the 14 paths not previously truncated are probability-weighted and discounted to determine their present values (that is, each probability is multiplied by the option’s intrinsic value and discounted at the risk-free interest rate).
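The early-exercise rule can be added to the same kind of sketch: walk each path year by year and stop at the first pretermination node whose price reaches the exercise-price multiple. Again, this is a Python illustration under the article's assumptions, not the article's spreadsheet:

```python
# Fair value with a 2.0 early-exercise factor: exercise at the first
# pretermination node (years 1-3) where the price reaches 2 x the strike.
from itertools import product

S0, K, r, vol, years = 30.0, 30.0, 0.03, 0.30, 4
ee_factor = 2.0                          # exercise early at 2 x the $30 strike

fair_value = 0.0
for path in product((1, -1), repeat=years):
    price = S0
    for year, move in enumerate(path, start=1):
        price *= (1 + r) * (1 + move * vol)
        exercised_early = year < years and price >= ee_factor * K
        if exercised_early or year == years:
            intrinsic = max(price - K, 0.0)
            # weight by the full path's probability; the paths sharing a
            # truncated node then sum to that node's probability
            fair_value += 0.5 ** years * intrinsic / (1 + r) ** year
            break                        # this path is now settled
```

With the 2.0 factor, only the three-successive-increase node ($72.02, a $42.02 gain) triggers early exercise, and `fair_value` rounds to 8.46, matching the $8.46 total reported for exhibit 3.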
The total of the present values of all the individual potential paths (M13 and M24 through M37) is the option’s fair value, $8.46. A lattice model also can accommodate additional expectations regarding when and the extent to which employees exercise their options. For example, rather than assuming that all the options are exercised if the stock price reaches a selected multiple of the exercise price, a lattice model also can permit the assumption that only a certain percentage of outstanding options are exercised.

MEASURING UNPREDICTABILITY
Another benefit of the lattice model is that it can accommodate assumptions that vary over time. Exhibit 4, below, presents a stock-price tree that assumes the stock’s volatility decreases from 30% to 24% over the option’s four-year life.

Exhibit 4

Exhibit 4 shows how to specify individual volatility factors for each year of the option’s term (cells B12 through B15). The impact of decreasing the stock’s volatility in later years is evident on the tree’s top branch. After four successive periods of stock-price increases, the stock’s price in cell H5 ($87.78) is less than it is ($96.44) in the corresponding cell of exhibit 1. The lower volatilities reduced the magnitude of the stock-price increases on the top branch. A similar tempering effect can be seen in cell H36 on the bottom branch, where the stock’s price in exhibit 4 ($9.57) is greater than it is in exhibit 1 ($8.11). The lower the volatility, the lower the option’s fair value.

CHARLES P. BARIL is a professor and LUIS BETANCOURT, CPA, and JOHN W. BRIGGS are assistant professors at James Madison University’s School of Accounting in Harrisonburg, Va. Their respective e-mail addresses are barilcp@jmu.edu, betanclx@jmu.edu and briggsjw@jmu.edu.

AICPA RESOURCES

CPE
Accounting for Stock Options and Other Stock-Based Compensation (textbook, # 732087JA).
Infobytes: Stock Options and Other Share-Based Compensation Accounting (online courses):
Audit Considerations.
Disclosures.
Measuring the Share-Based Payment.
Nonpublic Company Considerations.
History and Summary of FASB 123(R).
For information about Infobytes, see product no. BYTXX12JA at www.cpa2biz.com/infobytes.

Publication
Investment Valuation: Tools and Techniques for Determining the Value of Any Asset, 2nd edition (hardcover, # WI414883P0200DJA).

For more information about these resources or to place an order, go to www.cpa2biz.com or call the Institute at 888-777-7077.
12203
https://www.teacherspayteachers.com/Product/Classifying-Rational-and-Irrational-Numbers-Real-Number-System-Activity-5704269
Classifying Rational and Irrational Numbers - Real Number System Activity

Rated 5.0 out of 5, based on 16 ratings. Price: $2.00.

What others say:

“My students really liked it. A good project that was engaging. They did such a nice job that I put up their finished products.” (Jann S.)

“Great resource for my tutoring students. I always like to see where they are and how much they are learning.” (Kathleen d.)
Description

Students will complete this Coloring Sort worksheet to classify rational numbers and irrational numbers. Students will identify the type of number by first creating a key using two different colors of colored pencils, crayons, markers, or highlighters; then they will color 28 circles that contain various rational and irrational numbers. One optional way to scaffold this activity is to have students first use colored transparent counters to indicate the two types of numbers before they color the final product. This activity is a great way to get students working cooperatively in pairs or small groups.

A variety of 28 real numbers are included: fractions, decimals, integers, square roots, cube roots, and expressions with pi.

Students are provided with a worksheet with space to record their answers and evidence to prove how they sorted three of the rational numbers and three of the irrational numbers.

Do you have a question about this resource? Please email me: cassie@missmathlady.com

Specs: Grades 7th-9th (mostly used with 7th and 8th). Subject: Algebra, Math. Tags: Activities, Centers.

Standards

CCSS 8.NS.A.1: Know that numbers that are not rational are called irrational. Understand informally that every number has a decimal expansion; for rational numbers show that the decimal expansion repeats eventually, and convert a decimal expansion which repeats eventually into a rational number.
Meet the Teacher-Author: Miss Math Lady
★ Math Classroom Decor - Bulletin Boards and Posters ★ Printable & Digital Math Activities - Color by Number, Classifying, Self-Checking Task Cards, and more ★ Back to School - All About Me Activities, Teacher Planning Calendars, and more
Massachusetts, United States. 4.84 store rating after 1.2k reviews. 1.4k followers.
12204
https://zh.wikipedia.org/zh-hans/Topic:Uiwnlsp31c8jc4yp
Wikipedia: Reference Desk archive (Structured Discussions)

At how many points does a tangent line to a circle actually meet the circle?

13 comments • last comment 21 August 2018, 2:41 a.m.

克勞棣 (talk | contribs): Junior high school mathematics says that infinitely many lines pass through a given point in the plane, so at least two distinct points are needed to determine a line. But junior high school mathematics also says that a line meeting a circle at two points is a secant, while a line meeting a circle at one point is a tangent. At first glance these two statements contradict each other! Naturally, using the concept of a limit, we can say: a secant meets the circle at two points, and its distance from the center is less than the radius; as that distance gradually grows, the two intersection points move closer and closer together, and when the distance equals the radius, the line becomes a tangent. The question is whether, at that moment, the two intersection points merely "approach coincidence" or actually "coincide." (16 August 2018, 3:51 a.m.)

克勞棣 (talk | contribs): 吉太小唯: I completely agree with what you said, so at how many points does a tangent meet the circle? 彭鹏: the tension is between "at least two distinct points are needed to determine a line" and "a line meeting the circle at one point is a tangent." (16 August 2018, 6:32 a.m.)

彭鹏 (talk | contribs): I don't think this is a contradiction. What determines a tangent is not just one point but a circle together with a point: besides passing through a point on the circle, a tangent must not pass through any other point of the circle. ~~"Passing through two distinct points" is not a necessary condition for "determining a line"~~ (that sentence seems wrong; strike it). (edited 16 August 2018, 10:34 a.m.)

吉太小唯 (talk | contribs): Passing through two distinct points should indeed be a necessary condition for a line, but the two points need not both lie on the circle; they can be one point outside the circle and one point on the circle. (16 August 2018, 10:08 a.m.)

克勞棣 (talk | contribs): But a secant does pass through two points on the circle. (16 August 2018, 12:50 p.m.)

吉太小唯 (talk | contribs): But the secant those two points determine is not the line farthest from the center, that is, it is not the tangent; only the segment joining the point of tangency to the center is a radius. So two distinct points determine a line: two points on the circle produce a secant; one point on the circle and one outside it produce a tangent. (edited 16 August 2018, 1:17 p.m.)

克勞棣 (talk | contribs): 1. But when the secant's distance from the center grows until it equals the radius, doesn't it become a tangent? Shouldn't a tangent be viewed as a degenerate secant? 2. Incidentally, the line farthest from the center is not a secant, but it is not a tangent either; it is a line that does not meet the circle at all, and its distance from the center can be arbitrarily large (name any distance and such a line exists). (16 August 2018, 1:57 p.m.)

吉太小唯 (talk | contribs): A tangent can perhaps be called a degenerate secant, but then the condition is that both points sit at the same coordinates. One thing needs clarifying: can two points at the same coordinates be called two points? A single coordinate is not two distinct points. (17 August 2018, 2:04 a.m.)

克勞棣 (talk | contribs): That is exactly why I asked whether the two points "approach coincidence" or actually "coincide," given that as the secant's distance from the center grows, the distance between the two points shrinks. (17 August 2018, 2:33 a.m.)

吉太小唯 (talk | contribs): In my view, only when the points coincide is the line called a tangent; no matter how closely they approach coincidence, that is still not coincidence. And once they coincide, they can no longer be called two distinct points. (17 August 2018, 2:36 a.m.)

吉太小唯 (talk | contribs): As I recall, a limit approaches but never equals; yet when you take the limit's value, it does equal some value. (16 August 2018, 4:01 a.m.)

彭鹏 (talk | contribs): May I ask where the contradiction is? (16 August 2018, 5:34 a.m.)

Brror (talk | contribs): A tangent is determined by passing through the point of tangency while being perpendicular to the radius through that point, that is, by point-slope form. (21 August 2018, 2:41 a.m.)
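The "coincide or merely approach" question has a standard algebraic answer, sketched here for illustration: substituting a non-vertical line y = mx + c into a circle of radius r centered at the origin gives a quadratic in x, and tangency is exactly the case of a repeated (double) root.

```latex
x^2 + (mx + c)^2 = r^2
\quad\Longrightarrow\quad
(1 + m^2)\,x^2 + 2mc\,x + (c^2 - r^2) = 0,
\qquad
\Delta = 4m^2c^2 - 4(1 + m^2)(c^2 - r^2).
```

If \(\Delta > 0\) the quadratic has two distinct roots (a secant); if \(\Delta = 0\) it has one double root (a tangent, equivalent to \(c^2 = r^2(1 + m^2)\)); if \(\Delta < 0\) there is no real intersection. On this account the tangent really is the degenerate secant whose two intersection points have merged into a single root of multiplicity two.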
12205
https://discussions.unity.com/t/gpu-gems-3-procedural-terrain-in-unity-a-naive-exploration/545964
GPU Gems 3: Procedural Terrain in Unity - A Naive Exploration

Hey all, I’m interested in recreating the procedural terrain technique developed by Ryan Geiss and detailed here: However, I’m new to the idea of procedurally-generated content and could use some help deciphering how to implement a voxel system like the one described in this chapter. If anyone could point me in the right direction I’d really appreciate it: most voxel terrain tutorials I’ve seen deal with the blockier Minecraft style, and I’m still a bit too new to all this to try downloading someone else’s code and working my way through it alone.

Why don’t you use some of the premade ones in the asset store?

"Why don’t you use some of the premade ones in the asset store?" Well, because I’m currently in school studying game development and I’d rather fight my way through this and learn something along the way than give up and use somebody else’s code. And as I said above, I’ve tried analyzing other people’s code but I don’t quite have the necessary understanding to do so.

I’d recommend checking out this simple Voxel terrain project for a bare-bones implementation using either Marching Cubes or Marching Tetrahedra, and reading this simple explanation & implementation of the Marching Cubes algorithm. The Scrawk blog Voxel project is all implemented on the CPU, rather than in shaders, so it’s not really a GPU-based solution as far as I can tell, but understanding how voxel terrain works when written for the CPU would be a good foundation for figuring out how to translate it into a GPU solution.

I think I may still be getting ahead of myself. First of all I need to understand how to get a script to procedurally build an infinite mesh. I’d like to accomplish this by checking if the far plane of the view frustum has run out of land, and if so then building onto the existing mesh. My best guess on how to do this is to use voxels and chunks to build the mesh.
So if my chunks are 32x32x32 voxels I could test if the center of the far plane is inside an existing chunk. If so, do nothing; otherwise generate a new chunk, polygonize it (for the time being just a flat plane extending 32 units in the x and z directions and centered at the chunk’s y position), and append it to the existing mesh. That said, does anyone know of a tutorial that uses a similar technique? Or would anyone be able to quickly come up with some pseudocode to accomplish this?

The Scrawk voxel project doesn’t implement infinite terrain, but it does implement WxLxH-sized chunks (with voxels of fixed size 1x1x1, and chunk sizes measured in the number of voxels along each dimension of each chunk). The Unity Minecraft thread should have lots of people who you could ask about their experience implementing infinite terrain; the principle should be similar for both smooth and block voxels. The thread itself is overwhelmingly large, but you could probably give implementing infinite terrain a shot in either the Scrawk smooth-voxels project or in a blocky Minecraft tutorial (this Unity Minecraft tutorial is especially good and easy to follow, but also does not implement infinite terrain), and then post your code/questions to the Minecraft thread if you’re having trouble. Although I’ve not implemented infinite terrain myself, in principle it seems pretty straightforward: you just need to store the surface data (for Marching Cubes, smooth voxels) or individual block data in text files and, in Update(), loop through all nearby chunks to see if any currently visible chunks have left or entered your critical viewing radius, then load or disable those chunks as appropriate.
edit: If I remember correctly from reading through the Minecraft thread, at least one person has had frame-rate issues when implementing infinite terrain chunk loading/unloading on the CPU, so that might be an issue that requires ShaderLab/Cg/HLSL so that vertex/fragment shaders can use the GPU to carry the workload.

Alright, I’ve started to put together my basic code. At the moment I have a single 33x33 plane (since a chunk is going to contain 32x32 voxels, i.e. 33x33 vertices) which I generate from a GameObject that contains a mesh generation script. I have a simple controller that allows me to look around and fly along all three axes. (I know this all might seem obvious to a lot of people who look at this, but I’m being very deliberate about detailing each step since other noobs like me might find this helpful, plus I’m saving this to write a blog post about eventually. So for me more detail is better.)

So for my newest problems: my idea on how to store chunks is to create a 2D array, but obviously the issue is that the chunk you start in would be the center of the array (which makes sense since it’s generated at (0,0,0)). It would therefore be impossible to have a map that extends infinitely in all directions, since arrays have finite bounds. The other issue is how I generate new chunks. Naively I assume that you’d just want to generate chunks within some distance x of the player in all directions, but is this a smart solution?

Alright, got some work done on this and I now have a triangulated plane (screenshot, 1200×720, omitted). The script takes 2 parameters so far, voxel_size (which determines the length and width of a square) and dimensions (the overall chunk length and width). The code I came up with to build a triangulated mesh is very case-heavy so I will definitely need to modify it, but it’s relatively fast considering it generated 2.7 tris in about .005 ms.
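One common answer to the bounded-2D-array problem described above is to key chunks by their integer grid coordinates in a hash map instead of an array, so the world can grow in any direction and negative coordinates work naturally. Here is a minimal language-agnostic sketch in Python (all names are illustrative; in Unity/C# the equivalent container would be a Dictionary keyed by a Vector2Int of chunk coordinates):

```python
# Hypothetical sketch: store chunks in a dict keyed by (chunk_x, chunk_z),
# generating any chunk within `radius` chunks of the player on demand.
CHUNK_SIZE = 32  # voxels per side, matching the thread

chunks = {}  # (cx, cz) -> chunk data; unbounded in every direction

def chunk_coord(world_x, world_z):
    """World position -> integer chunk coordinates (floor division
    handles negative positions correctly)."""
    return (int(world_x // CHUNK_SIZE), int(world_z // CHUNK_SIZE))

def generate_chunk(cx, cz):
    # placeholder: a real implementation would fill a 32x32x32 density
    # field and polygonize it here
    return {"origin": (cx * CHUNK_SIZE, cz * CHUNK_SIZE)}

def ensure_chunks_around(player_x, player_z, radius=2):
    """Generate (or reuse) every chunk within `radius` of the player."""
    pcx, pcz = chunk_coord(player_x, player_z)
    for cx in range(pcx - radius, pcx + radius + 1):
        for cz in range(pcz - radius, pcz + radius + 1):
            if (cx, cz) not in chunks:
                chunks[(cx, cz)] = generate_chunk(cx, cz)

ensure_chunks_around(-40.0, 10.0)   # negative coordinates are fine
```

The dict doubles as the "have I already generated this chunk?" test, which also answers the second question: each frame, compute the player's chunk coordinates and generate only the missing chunks inside the radius.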
I’ll be posting my code in a bit, but now I’m off to go watch Guardians of the Galaxy… a second time :)

Alright, here’s my code for the plane generator:

```csharp
using UnityEngine;
using System.Collections;
using System.Collections.Generic; // provides access to List data structure

public class PlaneGenerator : MonoBehaviour {

    private List<Vector3> verts = new List<Vector3>();
    private List<int> tris = new List<int>();
    private List<Vector2> uvs = new List<Vector2>();

    public float voxel_size = 1;  // dimensions of a single voxel
    public int dimensions = 32;   // the number of voxels in each direction

    private Mesh mesh;

    // Use this for initialization
    void Start () {
        float chunk_size = voxel_size * dimensions;
        mesh = GetComponent<MeshFilter>().mesh;

        float x = transform.position.x;
        float y = transform.position.y;
        float z = transform.position.z;

        for( int j = 0 ; j < dimensions ; j++ ) {
            for( int i = 0 ; i < dimensions ; i++ ) {
                // case where j = 0
                if (j == 0) {
                    if (i == 0) {
                        verts.Add( new Vector3( x , y , z ) );
                        verts.Add( new Vector3( x , y , z + voxel_size ) );
                        verts.Add( new Vector3( x + voxel_size , y , z + voxel_size ) );
                        verts.Add( new Vector3( x + voxel_size , y , z ) );

                        tris.Add(0); tris.Add(1); tris.Add(3);
                        tris.Add(1); tris.Add(2); tris.Add(3);
                    } else {
                        verts.Add( new Vector3( x + voxel_size + i * voxel_size , y , z + voxel_size ) );
                        verts.Add( new Vector3( x + voxel_size + i * voxel_size , y , z ) );

                        tris.Add(2*i + 1); tris.Add(2*i); tris.Add(2*i + 3);
                        tris.Add(2*i); tris.Add(2*i + 2); tris.Add(2*i + 3);
                    }
                // case where j = 1
                } else if (j == 1) {
                    if (i == 0) {
                        verts.Add( new Vector3( x , y , z + 2 * voxel_size ) ); // 2 * voxel_size only valid for j == 1
                        verts.Add( new Vector3( x + voxel_size , y , z + 2 * voxel_size ) );

                        // rules for adding tri indices are hard coded for the j=1, i=0 case
                        tris.Add(1); tris.Add(2 * (dimensions + 1)); tris.Add(2);
                        tris.Add(2 * (dimensions + 1)); tris.Add(2 * (dimensions + 1) + 1); tris.Add(2);
                    } else {
                        verts.Add( new Vector3( x + voxel_size + i * voxel_size , y , z + 2 * voxel_size ) );

                        tris.Add(2*i); tris.Add((2*i) + 2*dimensions + 2 - i); tris.Add(2*i + 2);
                        tris.Add((2*i) + 2*dimensions + 2 - i); tris.Add((2*i) + 2*dimensions + 2 - i + 1); tris.Add(2*i + 2);
                    }
                // case where j > 1
                } else {
                    if (i == 0) {
                        // when i = 0 we have to add 2 points, a point on the left and a point on the right
                        // the latter is added in both cases
                        verts.Add( new Vector3( x , y , z + voxel_size * j + voxel_size ) );
                    }
                    // we add the right point in both cases
                    verts.Add( new Vector3( x + voxel_size + i * voxel_size , y , z + voxel_size * j + voxel_size ) );

                    // triangle index code is the same in both cases
                    tris.Add(j*dimensions + j + i); tris.Add((j*dimensions + j) + (dimensions + 1) + i); tris.Add((j*dimensions + j) + 1 + i);
                    tris.Add((j*dimensions + j) + (dimensions + 1) + i); tris.Add((j*dimensions + j) + (dimensions + 1) + 1 + i); tris.Add(j*dimensions + j + 1 + i);
                }
            }
        }

        mesh.Clear();
        mesh.vertices = verts.ToArray();
        mesh.triangles = tris.ToArray();
        mesh.uv = uvs.ToArray();
        mesh.Optimize();
        mesh.RecalculateNormals();
    }

    // Update is called once per frame
    void Update () {
    }
}
```

So here’s where I stand. I like the fact that the GPU Gems terrain can be written into a shader, but I’m not a fan of the fact that A) it can result in floating islands (since I’m looking for more realistic terrain) and B) I’ve read that Minecraft uses 2D Perlin noise to generate a height map followed by 3D Perlin noise to warp the terrain and allow the formation of overhangs and caves. Does anyone who knows more about this than I do have any suggestions? Are there any other advantages/disadvantages to these methods that I should know about? And is the code I’ve provided actually useful or is there a better way of generating a plane (if that’s even what I should be doing)?

You should be generating triangles rather than planes, and doing so in a way consistent with the Marching Cubes algorithm, unless you’re now trying to do Minecraft infinite terrain rather than MarchingCubes.
If you want to do Minecraft terrain, I already linked an excellent tutorial on the topic; generating planes procedurally like you’re doing is at the basis of it, to get the 6 sides of each cube, but it seems odd to reinvent the wheel when you could just follow along on a guided tour.

The GPU Gems terrain generator should be modifiable to exclude floating islands, since it’s MarchingCubes-based, just like Scrawk’s blog code. All you need is some manner of generating a suitable density function, which is reasonable to do once you understand how Marching Cubes works. I modified the Scrawk blog code to generate a system of rectangular caves, for example: Claustrophobic Caves Webplayer (may take as long as a minute to load; the cave system is large and all computation is handled on the CPU)

Scrawk implemented warping to get overhangs, like the GPU Gems code, but I set my chunk sizes to 8x8 and carved out the interior of each room by following a simple X-Y-Z random walk (with some directional biases to generate long tunnels without turns, and multiple passes to get interweaving cave systems). A similar system that allows for the random walk to turn in any direction instead of just along the 3 main axes could give truly smooth, non-rectangular caverns.

You may be able to perform similar, intuitive adjustments to eliminate floating islands from the GPU Gems code; after the original density function is created, you might search through the density function for isolated chunks of terrain (which you could do by selecting an arbitrary, ‘interior’ piece of the density function, then walking in all directions around it to nearby filled-in terrain, to find out how large the current, connected terrain piece is) and add pillars vis-a-vis the density function, below each island, whenever you find one, for example.
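The axis-aligned random walk described above can be sketched as follows (a hedged illustration against a generic density grid, not the actual Claustrophobic Caves code; the array name, carve value, and turn probability are all assumptions):

```csharp
using UnityEngine;

public static class CaveCarver {
    // Carve one tunnel through a voxel density grid with an X-Y-Z random walk.
    // Values below the isolevel count as air, so we write -1 to open space up.
    public static void CarveTunnel(float[,,] density, int x, int y, int z,
                                   int steps, System.Random rng) {
        int dx = 1, dy = 0, dz = 0;                // start heading along +X
        for (int s = 0; s < steps; s++) {
            density[x, y, z] = -1f;                // carve this voxel out

            // Directional bias: only occasionally turn onto another axis,
            // which produces long straight tunnels.
            if (rng.NextDouble() < 0.1) {
                dx = dy = dz = 0;
                int sign = rng.Next(2) == 0 ? 1 : -1;
                switch (rng.Next(3)) {
                    case 0: dx = sign; break;
                    case 1: dy = sign; break;
                    default: dz = sign; break;
                }
            }

            // Step, staying inside the grid bounds.
            x = Mathf.Clamp(x + dx, 0, density.GetLength(0) - 1);
            y = Mathf.Clamp(y + dy, 0, density.GetLength(1) - 1);
            z = Mathf.Clamp(z + dz, 0, density.GetLength(2) - 1);
        }
    }
}
```

Multiple passes from different start points give the interweaving systems mentioned above, and carving a small neighbourhood around (x, y, z) instead of a single voxel widens tunnels into rooms.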
To deal with the finite length of arrays you could load density-function data from .csv files or something similar as I said above, and re-center your chunks in your array every so often.

Part of why I was messing around with planes was to experiment with a Minecraft-style implementation. My thought was to generate a flat plane, then modify the y position using 2D Perlin noise before doing some kind of warp in the x and z directions. And while I was looking over the Minecraft tutorial you linked to, I just thought I should mess around on my own at the same time to see what sort of implementation I could come up with.

If I do decide to go with marching cubes, I have downloaded the Scrawk implementation and begun looking it over. However, I’m a bit confused by the edge lookup table he uses. I understand that each entry of the form 0x### is a reference to a specific location in memory, but I don’t understand how that translates to an edge/list of edges. If anyone could explain that aspect of the algorithm I’d really appreciate it.

Look up Quadtrees! Very handy. Based on distance to a part within your quadtree you do LOD. Worth the read!

Thanks for the info. I had assumed quadtrees might be useful for an LOD implementation. However, that’s an optimization I won’t worry about until later. For now it seems like understanding the marching cubes algorithm and generating 3D Perlin noise will be difficult enough concepts.

Another question I had recently: the GPU Gems implementation uses .vol files to contain 3D noise (which it does not specifically identify as Perlin noise, but is something I’d assume) which is sampled using a voxel vertex’s xyz coordinates. However, I can’t seem to open .vol files, as doing so results in a mess of random characters (not just restricted to alphanumerics: &, @, and other odd symbols showed up as well).
Does anyone know what exactly a .vol file is, how it’s created, and if I perhaps need to generate my own? If it helps I’ve downloaded the files associated with Ryan Geiss’s original demo and could make the .zip available. There is some commentary in the code, but as I’ve mentioned before a lot of it just goes over my head.

If I do decide to go with marching cubes, I have downloaded the Scrawk implementation and begun looking it over. However, I’m a bit confused by the edge lookup table he uses. I understand that each entry of the form 0x### is a reference to a specific location in memory, but I don’t understand how that translates to an edge/list of edges. If anyone could explain that aspect of the algorithm I’d really appreciate it.

The short answer is that the hexadecimal numbers you see in edgeTable need to be converted to binary numbers to be understandable; once converted, you can see that each hex number gives the appropriate binary number indicating which edges are cut by the density-function defined surface.

In more detail: If you check out the Paul Bourke Marching Cubes tutorial, he does the same sort of thing (my understanding is that most Marching Cubes implementations pretty much copy-paste Bourke’s tabular code into their own, with conversions as appropriate for their chosen language), and walks through a few examples. As Bourke says, each cube consists of 12 edges, and the hex in edgeTable is just an alternative way of storing 12-bit binary numbers, where a 0 in the i-th position indicates that the underlying surface (given by the density function) does not appear to cross the i-th edge, and a 1 indicates that it does. The hex in edgeTable is comprehensible if you convert it back to binary. Bourke gives this example:

If for example the value at vertex 3 is below the isosurface value and all the values at all the other vertices were above the isosurface value then we would create a triangular facet which cuts through edges 2, 3, and 11.
The exact position of the vertices of the triangular facet depends on the relationship of the isosurface value to the values at the vertices 3-2, 3-0, 3-7 respectively. What makes the algorithm “difficult” are the large number (256) of possible combinations and the need to derive a consistent facet combination for each solution so that facets from adjacent grid cells connect together correctly.

The first part of the algorithm uses a table (edgeTable) which maps the vertices under the isosurface to the intersecting edges. An 8 bit index is formed where each bit corresponds to a vertex.

```
cubeindex = 0;
if (grid.val[0] < isolevel) cubeindex |= 1;
if (grid.val[1] < isolevel) cubeindex |= 2;
if (grid.val[2] < isolevel) cubeindex |= 4;
if (grid.val[3] < isolevel) cubeindex |= 8;
if (grid.val[4] < isolevel) cubeindex |= 16;
if (grid.val[5] < isolevel) cubeindex |= 32;
if (grid.val[6] < isolevel) cubeindex |= 64;
if (grid.val[7] < isolevel) cubeindex |= 128;
```

Using the example earlier where only vertex 3 was below the isosurface, cubeindex would equal 0000 1000 or 8. edgeTable[8] = 1000 0000 1100. This means that edges 2, 3, and 11 are intersected by the isosurface.

If you look at his picture, edges 2, 3 and 11 are ‘crossed’ by the surface. His 8-bit variable cubeindex (one bit per vertex of the cube) is 0000 1000 in this example, because vertex # 3 (corresponding to the 4th position from the right in the binary number) is the unique vertex below the surface in his example. The purpose of the hex in edgeTable is to use the decimal equivalent of this number ( 0000 1000 = 8 ) as an index, and map this index to the hex-equivalent of the 12-bit number 1000 0000 1100, which gives the relevant “cut” edges, as he says at the bottom of the quote. Navigating down to Bourke’s “int edgeTable[256]=” declaration and initialization and counting from the top left (remembering that the first index is 0, as usual), we find that edgeTable[8] is 0x80c.
If you then use a hexadecimal to binary converter, you can see that 0x80c = 1000 0000 1100, which is the binary number that we need, because it indicates that exactly edges 2, 3, and 11 are cut by the surface.

Wow I feel dumb, I can’t believe I didn’t realize that form indicated hex format. Okay, so it makes sense now: if, for example, vertices 2 & 4 were the only ones inside/outside the surface then the 8-bit index would be 0001 0100, the decimal equivalent would be 20, meaning that we look up edgeTable[20] to get 0x596. 0x596 translates to 0000 0101 1001 0110, which means that edges 1, 2, 4, 7, 8 and 10 are cut by the surface. Thanks for clearing this up, there’s so much for me to wrap my head around with this that I’m spacing on the simple things.

No problem. It took me some staring at it to recognize what he was doing as well. It’s a bit weird that he doesn’t specifically point out that he’s converting from binary to hex and back again.

Quick C# question: for my 3d array of density values is it better to use a multidimensional array [,,] or a jagged array [ ][ ][ ]? I know one is perfectly rectangular and the other is an array of arrays of arrays, potentially with differing lengths, but is there any efficiency or memory reason to use one over the other?

Alright, I’m working my way through Paul Bourke’s original implementation and Scrawk’s C# code, but I’ve come across this line in the marchCube function (which only runs the algorithm on a single cube):

for(i = 0; i < 8; i++) if(cube[i] <= target) flagIndex |= 1<<i;

I know that target is defined by the user and represents the value on which the mesh lies (something between -1 and 1). flagIndex starts at 0, and cube, I believe, is an array representing the vertices of a single cube. Although I’ve never seen this syntax before, I’ve learned by looking online that |= is a bitwise OR assignment operator, and << is the left shift binary operator.
But I’m not sure what this line is actually doing, and if anyone could explain I’d really appreciate it.

[EDIT]: I might get it actually. Since the goal is to write a 1 for any vertex that is inside the mesh, we just want to use a bitwise left shift. So if vertex3 was the only vertex below the isosurface value then cube[3] <= target, so flagIndex |= 1<<i would result in 0000 1000. If vertex2 and vertex6 were below the isovalue then at i=2 you’d get 0000 0100, and at i=6 you’d get 0100 0100. Is this correct?

Alright, I’ve recreated the Marching Cubes algorithm. Next step: figuring out how to generate density values for each vertex similar to the process in the GPU Gems 3 implementation.

for(i = 0; i < 8; i++) if(cube[i] <= target) flagIndex |= 1<<i;

…

[EDIT]: I might get it actually. Since the goal is to write a 1 for any vertex that is inside the mesh, we just want to use a bitwise left shift. So if vertex3 was the only vertex below the isosurface value then cube[3] <= target, so flagIndex |= 1<<i would result in 0000 1000. If vertex2 and vertex6 were below the isovalue then at i=2 you’d get 0000 0100, and at i=6 you’d get 0100 0100. Is this correct?

This is how I understand that bit of code as well, yeah. The bitwise OR assignment operator |= ensures that you retain prior changes to flagIndex, and the bitwise left shift 1<<i produces the flag bit for vertex i.

Curious to see how your implementation of the GPU Gems code goes. I’m still playing in the shallow end with ShaderLab/Cg/HLSL coding, and haven’t had a chance to try to convert Scrawk’s code into a GPU implementation.
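To make the index-and-lookup step concrete, here's a minimal self-contained sketch (not the Scrawk or Bourke code itself; `BuildFlagIndex` and `CutEdges` are hypothetical names, and only the table entries used in the examples above are mentioned):

```csharp
using System.Collections.Generic;

public static class MarchingCubesIndexDemo {
    // Set bit i of the index whenever vertex i is at or below the isolevel.
    public static int BuildFlagIndex(float[] cube, float target) {
        int flagIndex = 0;
        for (int i = 0; i < 8; i++)
            if (cube[i] <= target)
                flagIndex |= 1 << i;
        return flagIndex;
    }

    // Decode a 12-bit edgeTable entry into the list of cut edge numbers.
    public static List<int> CutEdges(int edgeFlags) {
        var edges = new List<int>();
        for (int e = 0; e < 12; e++)
            if ((edgeFlags & (1 << e)) != 0)
                edges.Add(e);
        return edges;
    }
}

// With only vertex 3 below the surface, BuildFlagIndex returns 8 (0000 1000);
// looking up edgeTable[8] = 0x80C, CutEdges(0x80C) yields edges 2, 3 and 11,
// matching Bourke's worked example.
```

The same decoding applies to any entry: 0x596 from the vertices-2-and-4 example decodes to edges 1, 2, 4, 7, 8 and 10.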
12206
https://openstax.org/books/biology-2e/pages/40-1-overview-of-the-circulatory-system
Biology 2e: 40.1 Overview of the Circulatory System

Learning Objectives

By the end of this section, you will be able to do the following:
- Describe an open and closed circulatory system
- Describe interstitial fluid and hemolymph
- Compare and contrast the organization and evolution of the vertebrate circulatory system

In all animals, except a few simple types, the circulatory system is used to transport nutrients and gases through the body. Simple diffusion allows some water, nutrient, waste, and gas exchange into primitive animals that are only a few cell layers thick; however, bulk flow is the only method by which the entire body of larger, more complex organisms is accessed.

Circulatory System Architecture

The circulatory system is effectively a network of cylindrical vessels: the arteries, veins, and capillaries that emanate from a pump, the heart. In all vertebrate organisms, as well as some invertebrates, this is a closed-loop system, in which the blood is not free in a cavity. In a closed circulatory system, blood is contained inside blood vessels and circulates unidirectionally from the heart around the systemic circulatory route, then returns to the heart again, as illustrated in Figure 40.2a. In contrast to a closed system, arthropods (including insects and crustaceans) and most mollusks have an open circulatory system, as illustrated in Figure 40.2b. In an open circulatory system, the blood is not enclosed in the blood vessels but is pumped into a cavity called a hemocoel and is called hemolymph because the blood mixes with the interstitial fluid. As the heart beats and the animal moves, the hemolymph circulates around the organs within the body cavity and then reenters the hearts through openings called ostia. This movement allows for gas and nutrient exchange.
An open circulatory system does not use as much energy as a closed system to operate or to maintain; however, there is a trade-off with the amount of blood that can be moved to metabolically active organs and tissues that require high levels of oxygen. In fact, one reason that insects with wing spans of up to two feet wide (70 cm) are not around today is probably because they were outcompeted by the arrival of birds 150 million years ago. Birds, having a closed circulatory system, are thought to have moved more agilely, allowing them to get food faster and possibly to prey on the insects. Figure 40.2 In (a) closed circulatory systems, the heart pumps blood through vessels that are separate from the interstitial fluid of the body. Most vertebrates and some invertebrates, like this annelid earthworm, have a closed circulatory system. In (b) open circulatory systems, a fluid called hemolymph is pumped through a blood vessel that empties into the body cavity. Hemolymph returns to the blood vessel through openings called ostia. Arthropods like this bee and most mollusks have open circulatory systems. Circulatory System Variation in Animals The circulatory system varies from simple systems in invertebrates to more complex systems in vertebrates. The simplest animals, such as the sponges (Porifera) and rotifers (Rotifera), do not need a circulatory system because diffusion allows adequate exchange of water, nutrients, and waste, as well as dissolved gases, as shown in Figure 40.3a. Organisms that are more complex but still only have two layers of cells in their body plan, such as jellies (Cnidaria) and comb jellies (Ctenophora) also use diffusion through their epidermis and internally through the gastrovascular compartment. Both their internal and external tissues are bathed in an aqueous environment and exchange fluids by diffusion on both sides, as illustrated in Figure 40.3b. Exchange of fluids is assisted by the pulsing of the jellyfish body. 
Figure 40.3 Simple animals consisting of a single cell layer such as the (a) sponge or only a few cell layers such as the (b) jellyfish do not have a circulatory system. Instead, gases, nutrients, and wastes are exchanged by diffusion. For more complex organisms, diffusion is not efficient for cycling gases, nutrients, and waste effectively through the body; therefore, more complex circulatory systems evolved. Most arthropods and many mollusks have open circulatory systems. In an open system, an elongated beating heart pushes the hemolymph through the body and muscle contractions help to move fluids. The larger more complex crustaceans, including lobsters, have developed arterial-like vessels to push blood through their bodies, and the most active mollusks, such as squids, have evolved a closed circulatory system and are able to move rapidly to catch prey. Closed circulatory systems are a characteristic of vertebrates; however, there are significant differences in the structure of the heart and the circulation of blood between the different vertebrate groups due to adaptation during evolution and associated differences in anatomy. Figure 40.4 illustrates the basic circulatory systems of some vertebrates: fish, amphibians, reptiles, and mammals. Figure 40.4 (a) Fish have the simplest circulatory systems of the vertebrates: blood flows unidirectionally from the two-chambered heart through the gills and then the rest of the body. (b) Amphibians have two circulatory routes: one for oxygenation of the blood through the lungs and skin, and the other to take oxygen to the rest of the body. The blood is pumped from a three-chambered heart with two atria and a single ventricle. (c) Reptiles also have two circulatory routes; however, blood is only oxygenated through the lungs. The heart is three chambered, but the ventricles are partially separated so some mixing of oxygenated and deoxygenated blood occurs except in crocodilians and birds. 
(d) Mammals and birds have the most efficient heart with four chambers that completely separate the oxygenated and deoxygenated blood; it pumps only oxygenated blood through the body and deoxygenated blood to the lungs. As illustrated in Figure 40.4a, fish have a single circuit for blood flow and a two-chambered heart that has only a single atrium and a single ventricle. The atrium collects blood that has returned from the body and the ventricle pumps the blood to the gills where gas exchange occurs and the blood is re-oxygenated; this is called gill circulation. The blood then continues through the rest of the body before arriving back at the atrium; this is called systemic circulation. This unidirectional flow of blood produces a gradient of oxygenated to deoxygenated blood around the fish’s systemic circuit. The result is a limit in the amount of oxygen that can reach some of the organs and tissues of the body, reducing the overall metabolic capacity of fish. In amphibians, reptiles, birds, and mammals, blood flow is directed in two circuits: one through the lungs and back to the heart, which is called pulmonary circulation, and the other throughout the rest of the body and its organs including the brain (systemic circulation). In amphibians, gas exchange also occurs through the skin during pulmonary circulation and is referred to as pulmocutaneous circulation. As shown in Figure 40.4b, amphibians have a three-chambered heart that has two atria and one ventricle rather than the two-chambered heart of fish. The two atria (superior heart chambers) receive blood from the two different circuits (the lungs and the systems), and then there is some mixing of the blood in the heart’s ventricle (inferior heart chamber), which reduces the efficiency of oxygenation. The advantage to this arrangement is that high pressure in the vessels pushes blood to the lungs and body.
The mixing is mitigated by a ridge within the ventricle that diverts oxygen-rich blood through the systemic circulatory system and deoxygenated blood to the pulmocutaneous circuit. For this reason, amphibians are often described as having double circulation. Most reptiles also have a three-chambered heart similar to the amphibian heart that directs blood to the pulmonary and systemic circuits, as shown in Figure 40.4c. The ventricle is divided more effectively by a partial septum, which results in less mixing of oxygenated and deoxygenated blood. Some reptiles (alligators and crocodiles) are the most primitive animals to exhibit a four-chambered heart. Crocodilians have a unique circulatory mechanism where the heart shunts blood from the lungs toward the stomach and other organs during long periods of submergence, for instance, while the animal waits for prey or stays underwater waiting for prey to rot. One adaptation includes two main arteries that leave the same part of the heart: one takes blood to the lungs and the other provides an alternate route to the stomach and other parts of the body. Two other adaptations include a hole in the heart between the two ventricles, called the foramen of Panizza, which allows blood to move from one side of the heart to the other, and specialized connective tissue that slows the blood flow to the lungs. Together these adaptations have made crocodiles and alligators one of the most evolutionarily successful animal groups on earth. In mammals and birds, the heart is also divided into four chambers: two atria and two ventricles, as illustrated in Figure 40.4d. The oxygenated blood is separated from the deoxygenated blood, which improves the efficiency of double circulation and is probably required for the warm-blooded lifestyle of mammals and birds. The four-chambered heart of birds and mammals evolved independently from a three-chambered heart. 
The independent evolution of the same or a similar biological trait is referred to as convergent evolution.

Citation: Clark, Mary Ann, Matthew Douglas, and Jung Choi. Biology 2e. Houston, Texas: OpenStax, Mar 28, 2018. Textbook content produced by OpenStax is licensed under a Creative Commons Attribution License.
12207
https://ditki.com/course/usmle-comlex-step-1/genetic-developmental-disorders/infectious/1610/congenital-infections-torches/notes
Congenital Infections (TORCHeS) | ditki medical and biological sciences

Sections: Toxoplasma gondii; Rubella virus; Herpes Simplex Viruses 1 & 2; Others; Cytomegalovirus; Syphilis; References

Be aware that there are a few different versions of the TORCHeS acronym and the infectious diseases included; we'll use a relatively inclusive version for completeness. This tutorial includes key infections that are transmitted from the parent to fetus or neonate during pregnancy or birth. These infections are a significant source of fetal and neonatal mortality and childhood morbidity.

Common manifestations: Slow growth, congenital heart disease, enlarged liver and/or spleen, jaundice, microcephaly or hydrocephaly, ocular lesions, and skin rashes.

The severity of infection often depends on its timing: Outcomes are often more severe when infection occurs early in pregnancy; for example, early infections are more likely to lead to fetal loss. Where available, treatment should be administered as soon as possible to minimize long-term complications.

TORCHeS acronym:
- Toxoplasma gondii
- Other (including Varicella Zoster Virus, Parvovirus B19, Listeriosis)
- Rubella
- Cytomegalovirus
- Herpes Simplex Virus
- HIV (some include HIV in the "Others" category)
- Syphilis (also sometimes included in the "Others" category)

Be aware that the "Others" category also sometimes includes additional infectious agents, such as Zika Virus.

Toxoplasma gondii

Protozoan parasite. Infected parents are usually asymptomatic.
Infections occurring early in a pregnancy are less likely to be passed to the offspring; however, in the cases when infection is passed to the offspring early in pregnancy, the outcomes are worse. In addition to the non-specific signs we listed in our table, congenital toxoplasmosis is often associated with the "classic triad": diffuse intracranial calcifications, chorioretinitis, and hydrocephaly. Additionally, some neonates have a characteristic "blueberry muffin rash" wherein the skin is marked by raised, bluish spots. Long-term CNS complications can develop, including intellectual disabilities, seizures, spasticity/palsies, and vision impairments. Treatment includes anti-parasitic drugs, including pyrimethamine, sulfadiazine, and leucovorin.

Rubella virus

Infection can cause congenital rubella syndrome. Infection is often subclinical at birth, but characteristic manifestations may develop: Deafness, cataracts, and heart disease. Some infants have a blueberry muffin rash. No specific treatment for congenital rubella syndrome. The best prevention is to ensure that pregnant people are vaccinated against the virus.

Herpes Simplex Viruses 1 & 2

Congenital infections are acquired in utero, and are rare. Neonatal infections are acquired during birth. 3 primary patterns of HSV 1 & 2 neonatal infections, all of which can include rash:
— Mucocutaneous lesions are localized to the skin, eyes (conjunctivitis), and mouth ("SEM").
— In other cases, infection is localized in the CNS, and can manifest, for example, as meningitis.
— In the most severe cases, infection is disseminated and can lead to multi-organ failure and death. Some authors report that the liver and lungs are especially compromised in disseminated infections.

Acyclovir is used to treat neonatal HSV. Because we cover HIV in depth elsewhere, we're omitting it here.
— Be aware that recurring and/or opportunistic infections during childhood may be a warning sign of HIV infection.
Others

Varicella-Zoster Virus (aka, Human Herpes Virus 3, chickenpox): Congenital varicella is the result of primary infection in early pregnancy; mortality is high, and offspring that survive are likely to have skin and ocular lesions, hypoplastic limbs, and CNS abnormalities. Neonatal varicella is the result of primary infection in late pregnancy; infants often develop a vesiculopapular rash. In more severe cases, disseminated infection can lead to pneumonia, hepatitis, and encephalitis. Treatment includes VZV immune globulin, and acyclovir is used to treat disseminated infections.

Congenital listeriosis: Occurs when pregnant people consume the bacterium Listeria monocytogenes in contaminated foods. Infection early in pregnancy often leads to fetal loss. Infection later in pregnancy can produce neonatal infections that are categorized as early or late onset. Early onset is associated with sepsis, pneumonia, and, in very severe cases, granulomatosis infantiseptica, which is associated with a high mortality rate. Granulomatosis infantiseptica is characterized by disseminated granulomas. Late onset listeriosis is associated with meningitis. Antibiotics can be used to treat congenital listeriosis.

Parvovirus B19: Infection during pregnancy is associated with anemia, and severe anemia can lead to fetal hydrops. Fetal hydrops is characterized by fluid accumulation in multiple fetal compartments, which can lead to respiratory distress and swelling of the abdomen. Intrauterine red blood cell transfusions have been used to treat severely anemic fetuses.

Cytomegalovirus

Congenital infection is common in the US; approximately 1 in 200 infants born in the US is affected. Most of these infants are asymptomatic, and only 1 in 5 with the infection will develop complications of congenital cytomegalovirus syndrome, which may not be apparent at birth. Congenital CMV is the leading cause of birth and developmental abnormalities in the US.
Key manifestations of CMV syndrome include: deafness, blueberry muffin rash, and periventricular calcifications (as well as other CNS abnormalities). Treatment includes ganciclovir or valganciclovir.
Syphilis
Neonates with congenital syphilis are often asymptomatic at birth. Early onset illness is defined as the arrival of symptoms before two years of age. Common manifestations include: — Rash — "Snuffles" (syphilitic rhinitis) — Hepatomegaly with jaundice — Lymphadenopathy — Long bone abnormalities such as Wimberger's sign (which is characterized by bilateral destruction of the medial tibial metaphysis) — CNS abnormalities
Late onset illness, defined as after 2 years of age, is characterized by a variety of abnormalities, including: — Hearing impairment — Facial features: Frontal bossing (forehead protrusion), interstitial keratitis, saddle nose (sunken nasal bridge), short maxilla (and possibly perforated hard palate), and protruding mandible. — Dental features: Hutchinson incisors have a serrated appearance, and mulberry molars are pitted on the surface. — Some patients develop skeletal abnormalities such as "saber shins," characterized by tibias that are bowed anteriorly like saber blades, and Clutton's joints, which are characterized by symmetrical synovitis and joint swelling (especially of the knees).
Treated with various formulations of penicillin.
All materials © Draw It to Know It, Creations, LLC
12208
https://www.varsitytutors.com/practice/subjects/sat-math/help/solving-word-problems-with-one-unit-conversion
Solving Word Problems with One Unit Conversion: Help Questions (SAT Math), Questions 1-10

1 Julie is a zookeeper responsible for feeding baby giraffes. Each giraffe should drink 12 quarts of milk per day, but Julie's milk containers measure in pints. How many pints should she feed each giraffe each day? (1 quart = 2 pints) 12 24 36
Explanation
Whenever you're facing a unit conversion problem it is a good idea to use dimensional analysis to help you structure the math - whether you should multiply or divide by the provided conversion ratio - properly. That means that you'll set up the math such that the units you don't want in your answer cancel, leaving you with the units you do want. Here you're given quarts but asked to convert to pints, so you'll set up the math so that quarts are in the denominator and cancel, leaving you with pints: 12 quarts × (2 pints / 1 quart) = 24 pints. This means that quarts cancel leaving you with pints, and tells you that you have to multiply 12 by 2. The correct answer is 24.

2 To prepare for a bicycle race, Celeste wants to ride her bicycle for 20 miles. The app she is using to track her distance only cites distance in kilometers. Using the conversion that 1 kilometer = 0.62 miles, which of the following is closest to the exact number of kilometers she should ride? 15 21 32 39
Explanation
When you're working with conversion problems, it is helpful to use dimensional analysis to make sure that you're making the proper "do I multiply or divide by this conversion" decision. Here you're given a number of miles (she wants to ride 20 miles) and you need to get to kilometers. So you can structure your math with miles in the denominator of the fraction, and that means that the units "miles" will cancel leaving you with just "kilometers": 20 miles × (1 kilometer / 0.62 miles) ≈ 32.3 kilometers. As you can see, this tells you to divide by 0.62, and when you've cancelled the unit "miles" you're left with just "kilometers."
Note, also, that the question asks for an estimate ("which of the following is closest") so you can safely divide by just 0.6, or 3/5, to get to 33.33, and the only answer that is close is 32.

3 A carpenter is making a model house and he buys of crown moulding to use as accent pieces. He needs of the moulding for the house. How many feet of the material does he need to finish the model?
Explanation
We can solve this problem using ratios. There are in . We can write this relationship as the following ratio: We know that the carpenter needs of material to finish the house. We can write this as a ratio using the variable to substitute the amount of feet. Now, we can solve for by creating a proportion using our two ratios. Cross multiply and solve for . Simplify. Divide both sides by . Solve. Reduce. The carpenter needs of material.

4 Brionna uses a fitness app to track the number of steps she takes each day. For the month of February, she took a total of 320,000 steps. She estimates that 2,500 of her steps equates to one mile. Approximately how many miles did she walk in February? 144 128 115 100
Explanation
When you're working with conversion problems, it is helpful to use dimensional analysis to ensure that you are properly applying the conversion (i.e. "should I multiply or divide?"). Here you're given a number of steps and you need to convert to miles using the conversion 2,500 steps = 1 mile. You can then set up your math so that the unit "steps" cancels and you're left with just "miles." That would mean: 320,000 steps × (1 mile / 2,500 steps) = 128 miles. As you can see, steps will cancel due to division, meaning that you'll divide 320,000 by 2,500 and be left with the proper unit miles. To do this math by hand, you should also see that you can factor out 100 from both 320,000 and 2,500, leaving you with the problem 3200 divided by 25. This leads you to the correct answer, 128.
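The "set it up so the unwanted unit cancels" bookkeeping described in these explanations can be sketched in code. This is a minimal illustration of the idea, not part of the SAT material; the quantity representation and the `multiply` helper are my own. Each quantity is a value plus a dict of unit exponents, and multiplying by a conversion ratio cancels any unit that appears with opposite exponents:

```python
def multiply(q1, q2):
    """Multiply two quantities, each written as (value, {unit: exponent}).

    Units whose exponents sum to 0 have cancelled and are dropped.
    """
    value = q1[0] * q2[0]
    units = dict(q1[1])
    for unit, exp in q2[1].items():
        units[unit] = units.get(unit, 0) + exp
    return value, {u: e for u, e in units.items() if e != 0}

# Question 1: 12 quarts times (2 pints / 1 quart) -> quarts cancel
print(multiply((12, {"quart": 1}), (2, {"pint": 1, "quart": -1})))
# (24, {'pint': 1})

# Question 4: 320,000 steps times (1 mile / 2,500 steps) -> steps cancel
print(multiply((320_000, {"step": 1}), (1 / 2_500, {"mile": 1, "step": -1})))
# (128.0, {'mile': 1})
```

If the ratio were written the wrong way round, the units would not cancel (the result would carry exponents like quart² per pint), which is exactly the mistake dimensional analysis is designed to catch.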
5 Caitlyn's goal for the upcoming track and field season is to extend her long jump personal best by 10 inches to break the school record. By approximately how many centimeters will she need to extend her personal best long jump? (1 inch = 2.54 centimeters) 4 15 25 54
Explanation
Whenever you're working with unit conversions, it is a good idea to use dimensional analysis to structure your math to help you choose the right operation (multiply vs. divide). This means that you'll set up your equation to cancel the units that you don't want in your answer, and therefore you'll be left with the proper units. Here you're given inches and asked to convert to centimeters, so you'll set up the math with inches in the denominator so that the units cancel: 10 inches × (2.54 centimeters / 1 inch) = 25.4 centimeters. This means that you'll multiply 10 by 2.54, and the inch units will cancel ensuring that you're properly converting to centimeters. 10 times 2.54 is 25.4, which rounds to 25.

6 Rashid worked on his science project for a total of 450 minutes. How many hours did he spend working on his science project? 6.5 7.5 8 8.5
Explanation
To convert between minutes and hours, you can use dimensional analysis to ensure that you are performing the right calculations. You know that there are 60 minutes for every 1 hour, and you start with minutes and need to get your answer in terms of hours. That means that you want to set up the equation so that the units "minutes" cancels and leaves you with just "hours": 450 minutes × (1 hour / 60 minutes) = 7.5 hours. You can see here that "minutes" is in both numerator and denominator, allowing you to cancel those units. And it dictates that to get your answer you'll divide 450 by 60. That reduces as a fraction to 45 divided by 6, which comes out to 7.5.

7 Ajay plans to adopt a rescue dog from Europe, where the dogs' weights are measured in kilograms. He chooses a dog that weighs 25 kilograms; approximately how many pounds does that dog weigh?
(1 kilogram = 2.2 pounds) 55 42 25 11
Explanation
Whenever you're working with unit conversions, it is a good idea to use dimensional analysis to ensure that you are setting up your calculation - specifically the choice of whether you multiply or divide by the conversion ratio - properly. This means that you'll set up the math so that the units you're given cancel, leaving you with the units you need to arrive at in the end. Here you're given kilograms and want to get to pounds, so you'll multiply by a conversion with kilograms in the denominator so that kilograms cancel, leaving you with pounds: 25 kilograms × (2.2 pounds / 1 kilogram) = 55 pounds. This means that you multiply 25 by 2.2, and that kilograms cancel and the only unit left is the one you want, pounds. 25 times 2.2 is 55, giving you the correct answer.

8 A carpenter is making a model house and he buys of crown moulding to use as accent pieces. He needs of the moulding for the house. How many feet of the material does he need to finish the model?
Explanation
We can solve this problem using ratios. There are in . We can write this relationship as the following ratio: We know that the carpenter needs of material to finish the house. We can write this as a ratio using the variable to substitute the amount of feet. Now, we can solve for by creating a proportion using our two ratios. Cross multiply and solve for . Simplify. Divide both sides by . Solve. Reduce. The carpenter needs of material.

9 A carpenter is making a model house and he buys of crown moulding to use as accent pieces. He needs of the moulding for the house. How many feet of the material does he need to finish the model?
Explanation
We can solve this problem using ratios. There are in . We can write this relationship as the following ratio: We know that the carpenter needs of material to finish the house. We can write this as a ratio using the variable to substitute the amount of feet. Now, we can solve for by creating a proportion using our two ratios. Cross multiply and solve for . Simplify.
Divide both sides by . Solve. The carpenter needs of material.

10 Tiffany wants to purchase an 18-foot length of rope to hang a tire swing. When she arrives at the store to buy the rope, all of the lengths are quoted in inches. How many inches should she purchase to have exactly 18 feet of rope? 144
Explanation
When converting between units, it can be helpful to use dimensional analysis to set up your equation. You know that there are 12 inches in 1 foot, and that you need to convert 18 feet to a number of inches. If you then structure your math so that the units cancel via division, you can ensure that you're making the right "do I multiply or do I divide by 12" decision: 18 feet × (12 inches / 1 foot). This helps you determine that you need to multiply, because then the "feet" units cancel leaving you with the units you want, which is inches. This means you multiply 18 by 12, which gives you 216 inches.
12209
https://mrpinelli.files.wordpress.com/2018/08/worksheet-6-doulbling-half-life.pdf
UNIT 4 – DOUBLING & HALF-LIFE MAP 4C – Worksheet
• You learned that quantities that grow or decay exponentially increase or decrease at a constant percent rate.
• Some quantities have a constant doubling time or half-life.
• When the doubling time, d, or half-life, h, is known, the relationship between the initial amount, A0, and the amount A after time t can be modelled by these equations:
Exponential Growth - Doubling time is the time it takes for a population to double in size. A = A0(2)^(t/d), where: A = final quantity, A0 = initial amount, 2 = indicates doubling, t = time, d = doubling time.
Exponential Decay - Half-life is the time it takes for a quantity to decay to half the original amount. A = A0(0.5)^(t/h), where: A = final quantity, A0 = initial amount, 0.5 = indicates half-life, t = time, h = half-life.
PRACTICE:
1. Caffeine has a half-life of approximately 5 h. Suppose you drink a cup of coffee that contains 200 mg of caffeine. How long will it take until there is less than 10 mg of caffeine left in your bloodstream? Give your answer to 1 decimal place.
2. Tritium, a radioactive gas that builds up in CANDU nuclear reactors, is collected, stored in pressurized gas cylinders, and sold to research laboratories. Tritium decays into helium over time. Its half-life is about 12.3 years. a) Write an equation that gives the mass of tritium remaining in a cylinder that originally contained 500 g of tritium. b) Estimate the time it takes until less than 5 g of tritium is present.
3. A colony of bacteria doubles in size every 20 min. How long will it take for a colony of 20 bacteria to grow to a population of 10 000?
4. An archaeologist uses radiocarbon dating to determine the age of a Viking ship. Suppose that a sample that originally contained 100 mg of Carbon-14 now contains 85 mg of Carbon-14. What is the age of the ship to the nearest hundred years?
5. A bacteria culture starts with 6 500 bacteria.
After 2.5 hours, there are 208 000 bacteria present. What is the length of the doubling period?
6. A bacteria culture doubles every 0.25 hours. At time 1.25 hours, there are 40 000 bacteria present. How many bacteria were present initially?
7. A sodium isotope, Na-24, has a half-life of 15 hours. Determine the amount of sodium that remains from a 4 g sample after: a) 45 hours b) 100 hours c) 5 days
8. The 50 cent Bluenose is one of Canada’s most famous postage stamps. In 1930 it could be bought at the post office for $0.50. In 2000, a stamp in excellent condition was sold at an auction for $512. Determine the doubling time for the stamp’s value.
9. Strontium-90 has a half-life of 25 years. How long would it take for 40 mg of it to decay to: a) 20 mg b) 1.25 mg
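Both worksheet models can be checked numerically. The sketch below is mine, not part of the worksheet: the function names are my own, and the log-based inversion for t is one valid method (the course may instead expect systematic trial):

```python
import math

def amount(a0, t, period, factor):
    """A = A0 * factor**(t/period); factor=2 for doubling, factor=0.5 for half-life."""
    return a0 * factor ** (t / period)

def solve_time(a0, a, period, factor):
    """Invert the model for t: t = period * log(A/A0) / log(factor)."""
    return period * math.log(a / a0) / math.log(factor)

# Question 1: 200 mg of caffeine, half-life 5 h, down to 10 mg
print(round(solve_time(200, 10, 5, 0.5), 1))    # 21.6 hours

# Question 2a: mass remaining from 500 g of tritium after one half-life
print(amount(500, 12.3, 12.3, 0.5))             # 250.0 g

# Question 3: 20 bacteria doubling every 20 min, reaching 10 000
print(round(solve_time(20, 10_000, 20, 2), 1))  # 179.3 minutes
```

Note that `solve_time` works for both growth and decay because the sign of log(A/A0) matches the sign of log(factor) in each case.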
12210
https://openstax.org/books/introductory-statistics-2e/pages/5-2-the-uniform-distribution
Introductory Statistics 2e, 5.2 The Uniform Distribution

The uniform distribution is a continuous probability distribution and is concerned with events that are equally likely to occur. When working out problems that have a uniform distribution, be careful to note if the data is inclusive or exclusive of endpoints.

Example 5.2
The data in Table 5.1 are 55 smiling times, in seconds, of an eight-week-old baby.

| 10.4 | 19.6 | 18.8 | 13.9 | 17.8 | 16.8 | 21.6 | 17.9 | 12.5 | 11.1 | 4.9 |
| 12.8 | 14.8 | 22.8 | 20.0 | 15.9 | 16.3 | 13.4 | 17.1 | 14.5 | 19.0 | 22.8 |
| 1.3 | 0.7 | 8.9 | 11.9 | 10.9 | 7.3 | 5.9 | 3.7 | 17.9 | 19.2 | 9.8 |
| 5.8 | 6.9 | 2.6 | 5.8 | 21.7 | 11.8 | 3.4 | 2.1 | 4.5 | 6.3 | 10.7 |
| 8.9 | 9.4 | 9.4 | 7.6 | 10.0 | 3.3 | 6.7 | 7.8 | 11.6 | 13.8 | 18.6 |

Table 5.1

The sample mean = 11.65 and the sample standard deviation = 6.08. We will assume that the smiling times, in seconds, follow a uniform distribution between zero and 23 seconds, inclusive. This means that any smiling time from zero to and including 23 seconds is equally likely. The histogram that could be constructed from the sample is an empirical distribution that closely matches the theoretical uniform distribution.

Let X = length, in seconds, of an eight-week-old baby's smile.

The notation for the uniform distribution is X ~ U(a, b) where a = the lowest value of x and b = the highest value of x. The probability density function is f(x) = 1/(b − a) for a ≤ x ≤ b. For this example, X ~ U(0, 23) and f(x) = 1/23 for 0 ≤ x ≤ 23.

Formulas for the theoretical mean and standard deviation are μ = (a + b)/2 and σ = √((b − a)²/12). For this problem, the theoretical mean and standard deviation are μ = (0 + 23)/2 = 11.50 seconds and σ = √((23 − 0)²/12) = 6.64 seconds.
Notice that the theoretical mean and standard deviation are close to the sample mean and standard deviation in this example.

Try It 5.2
The data that follow record the total weight, to the nearest pound, of fish caught by passengers on 35 different charter fishing boats on one summer day. The sample mean = 7.9 and the sample standard deviation = 4.33. The data follow a uniform distribution where all values between and including zero and 14 are equally likely. State the values of a and b. Write the distribution in proper notation, and calculate the theoretical mean and standard deviation.

| 1 | 12 | 4 | 10 | 4 | 14 | 11 |
| 7 | 11 | 4 | 13 | 2 | 4 | 6 |
| 3 | 10 | 0 | 12 | 6 | 9 | 10 |
| 5 | 13 | 4 | 10 | 14 | 12 | 11 |
| 6 | 10 | 11 | 0 | 11 | 13 | 2 |

Table 5.2

Example 5.3
Problem a. Refer to Example 5.2. What is the probability that a randomly chosen eight-week-old baby smiles between two and 18 seconds?
Solution a. P(2 < x < 18) = (base)(height) = (18 − 2)(1/23) = 16/23. Figure 5.11
Problem b. Find the 90th percentile for an eight-week-old baby's smiling time.
Solution b. Ninety percent of the smiling times fall below the 90th percentile, k, so P(x < k) = 0.90. Then (base)(height) = 0.90, so (k − 0)(1/23) = 0.90 and k = 20.7. Figure 5.12
Problem c. Find the probability that a random eight-week-old baby smiles more than 12 seconds KNOWING that the baby smiles MORE THAN EIGHT SECONDS.
Solution c. This probability question is a conditional. You are asked to find the probability that an eight-week-old baby smiles more than 12 seconds when you already know the baby has smiled for more than eight seconds. Find P(x > 12|x > 8). There are two ways to do the problem. For the first way, use the fact that this is a conditional and changes the sample space. The graph illustrates the new sample space. You already know the baby smiled more than eight seconds.
Write a new f(x): f(x) = 1/(23 − 8) = 1/15 for 8 < x < 23. P(x > 12|x > 8) = (1/15)(23 − 12) = 11/15. Figure 5.13
For the second way, use the conditional formula from Probability Topics with the original distribution X ~ U(0, 23): P(A|B) = P(A AND B)/P(B). For this problem, A is (x > 12) and B is (x > 8). So, P(x > 12|x > 8) = P(x > 12)/P(x > 8) = (11/23)/(15/23) = 11/15. Figure 5.14 Darker shaded area represents P(x > 12). Entire shaded area shows P(x > 8).

Try It 5.3
A distribution is given as X ~ U(0, 20). What is P(2 < x < 18)? Find the 90th percentile.

Example 5.4
The amount of time, in minutes, that a person must wait for a bus is uniformly distributed between zero and 15 minutes, inclusive.
Problem
a. What is the probability that a person waits fewer than 12.5 minutes?
b. On the average, how long must a person wait? Find the mean, μ, and the standard deviation, σ.
c. Ninety percent of the time, the time a person must wait falls below what value? This asks for the 90th percentile.
Solution
a. Let X = the number of minutes a person must wait for a bus. a = 0 and b = 15. X ~ U(0, 15). Write the probability density function: f(x) = 1/(15 − 0) = 1/15 for 0 ≤ x ≤ 15. Find P(x < 12.5). Draw a graph. P(x < 12.5) = (base)(height) = (12.5)(1/15) = 0.8333. The probability a person waits less than 12.5 minutes is 0.8333. Figure 5.15
b. μ = (a + b)/2 = (0 + 15)/2 = 7.5. On the average, a person must wait 7.5 minutes. σ = √((15 − 0)²/12) = 4.3. The standard deviation is 4.3 minutes.
c. Find the 90th percentile. Draw a graph. Let k = the 90th percentile. k is sometimes called a critical value. P(x < k) = (k)(1/15) = 0.90, so k = 13.5. The 90th percentile is 13.5 minutes. Ninety percent of the time, a person must wait at most 13.5 minutes. Figure 5.16

Try It 5.4
The total duration of baseball games in the major league in a typical season is uniformly distributed between 447 hours and 521 hours inclusive. Find a and b and describe what they represent. Write the distribution. Find the mean and the standard deviation. What is the probability that the duration of games for a team for a single season is between 480 and 500 hours?
What is the 65th percentile for the duration of games for a team in a single season?

Example 5.5
Suppose the time it takes a nine-year old to eat a donut is between 0.5 and 4 minutes, inclusive. Let X = the time, in minutes, it takes a nine-year old child to eat a donut. Then X ~ U(0.5, 4).
Problem
a. The probability that a randomly selected nine-year old child eats a donut in at least two minutes is _______.
b. Find the probability that a different nine-year old child eats a donut in more than two minutes given that the child has already been eating the donut for more than 1.5 minutes.
The second question has a conditional probability. You are asked to find the probability that a nine-year old child eats a donut in more than two minutes given that the child has already been eating the donut for more than 1.5 minutes. Solve the problem two different ways (see Example 5.3). You must reduce the sample space.
First way: Since you know the child has already been eating the donut for more than 1.5 minutes, you are no longer starting at a = 0.5 minutes. Your starting point is 1.5 minutes. Write a new f(x): f(x) = 1/(4 − 1.5) = 1/2.5 for 1.5 ≤ x ≤ 4. Find P(x > 2|x > 1.5). Draw a graph. Figure 5.17
P(x > 2|x > 1.5) = (base)(new height) = (4 − 2)(1/2.5) = 0.8
Solution
a. 0.5714
b. The probability that a nine-year old child eats a donut in more than two minutes given that the child has already been eating the donut for more than 1.5 minutes is 4/5, or 0.8.
Second way: Draw the original graph for X ~ U(0.5, 4). Use the conditional formula: P(x > 2|x > 1.5) = P(x > 2)/P(x > 1.5) = (2/3.5)/(2.5/3.5) = 0.8.

Try It 5.5
Suppose the time it takes a student to finish a quiz is uniformly distributed between six and 15 minutes, inclusive. Let X = the time, in minutes, it takes a student to finish a quiz. Then X ~ U(6, 15). Find the probability that a randomly selected student needs at least eight minutes to complete the quiz.
Then find the probability that a different student needs at least eight minutes to finish the quiz given that she has already taken more than seven minutes.

Example 5.6
Ace Heating and Air Conditioning Service finds that the amount of time a repair tech needs to fix a furnace is uniformly distributed between 1.5 and four hours. Let x = the time needed to fix a furnace. Then x ~ U(1.5, 4).
Problem
a. Find the probability that a randomly selected furnace repair requires more than two hours.
b. Find the probability that a randomly selected furnace repair requires less than three hours.
c. Find the 30th percentile of furnace repair times.
d. The longest 25% of furnace repair times take at least how long? (In other words: find the minimum time for the longest 25% of repair times.) What percentile does this represent?
e. Find the mean and standard deviation.
Solution
a. To find f(x): f(x) = 1/(4 − 1.5) = 1/2.5, so f(x) = 0.4. P(x > 2) = (base)(height) = (4 − 2)(0.4) = 0.8 Figure 5.18 Uniform Distribution between 1.5 and four with shaded area between two and four representing the probability that the repair time x is greater than two
b. P(x < 3) = (base)(height) = (3 − 1.5)(0.4) = 0.6 The graph of the rectangle showing the entire distribution would remain the same. However the graph should be shaded between x = 1.5 and x = 3. Note that the shaded area starts at x = 1.5 rather than at x = 0; since X ~ U(1.5, 4), x can not be less than 1.5. Figure 5.19 Uniform Distribution between 1.5 and four with shaded area between 1.5 and three representing the probability that the repair time x is less than three
c. Figure 5.20 Uniform Distribution between 1.5 and 4 with an area of 0.30 shaded to the left, representing the shortest 30% of repair times. P(x < k) = 0.30. P(x < k) = (base)(height) = (k − 1.5)(0.4). 0.3 = (k − 1.5)(0.4); solve to find k: 0.75 = k − 1.5, obtained by dividing both sides by 0.4; k = 2.25, obtained by adding 1.5 to both sides. The 30th percentile of repair times is 2.25 hours.
30% of repair times are 2.25 hours or less.
d. Figure 5.21 Uniform Distribution between 1.5 and 4 with an area of 0.25 shaded to the right representing the longest 25% of repair times. P(x > k) = 0.25. P(x > k) = (base)(height) = (4 − k)(0.4). 0.25 = (4 − k)(0.4); solve for k: 0.625 = 4 − k, obtained by dividing both sides by 0.4; −3.375 = −k, obtained by subtracting four from both sides; k = 3.375. The longest 25% of furnace repairs take at least 3.375 hours (3.375 hours or longer). Note: Since 25% of repair times are 3.375 hours or longer, that means that 75% of repair times are 3.375 hours or less. 3.375 hours is the 75th percentile of furnace repair times.
e. μ = (1.5 + 4)/2 = 2.75 hours and σ = √((4 − 1.5)²/12) = 0.72 hours.

Try It 5.6
The amount of time a service technician needs to change the oil in a car is uniformly distributed between 11 and 21 minutes. Let X = the time needed to change the oil on a car. Write the random variable X in words. X = __________________. Write the distribution. Graph the distribution. Find P(x > 19). Find the 50th percentile.
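The formulas used throughout this section (pdf height, mean, standard deviation, rectangle probabilities, percentiles) can be collected into one short sketch. The helper names below are mine, not OpenStax notation:

```python
import math

def uniform_stats(a, b):
    """Return (mean, standard deviation, pdf height) for X ~ U(a, b)."""
    return (a + b) / 2, math.sqrt((b - a) ** 2 / 12), 1 / (b - a)

def prob_between(a, b, lo, hi):
    """P(lo < X < hi) = (base)(height), clipping [lo, hi] to [a, b]."""
    lo, hi = max(lo, a), min(hi, b)
    return max(hi - lo, 0) / (b - a)

def percentile(a, b, p):
    """Value k with P(X < k) = p."""
    return a + p * (b - a)

# Example 5.6: furnace repair times, X ~ U(1.5, 4)
mean, sd, height = uniform_stats(1.5, 4)
print(mean, height)                  # 2.75 0.4
print(prob_between(1.5, 4, 2, 4))    # 0.8  -> P(x > 2)
print(percentile(1.5, 4, 0.30))      # 2.25 -> 30th percentile
```

The same three helpers reproduce Example 5.4: `uniform_stats(0, 15)` gives a mean of 7.5 and `percentile(0, 15, 0.90)` gives the 90th percentile, 13.5.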
Authors: Barbara Illowsky, Susan Dean. Publisher: OpenStax. Book title: Introductory Statistics 2e. Publication date: Dec 13, 2023. Location: Houston, Texas. Textbook content produced by OpenStax is licensed under a Creative Commons Attribution License.
12211
https://brainly.com/question/24265167
Physics, Textbook & Expert-Verified
Sometimes the units for an electric field are written as N/C, while other times the units are written as V/m. Using dimensional analysis, show that N/C is equal to V/m. A. True B. False
Asked by katejordan2184 • 07/20/2021

Community Answer
N/C = V/m.
Explanation: The SI unit of electric field is N/C. Sometimes the units are written as V/m. We know that 1 V = 1 J/C. Using dimensional analysis: The dimensional formula for joules is [M¹ L² T⁻²]. The dimensional formula for coulombs is [M⁰ L⁰ T¹ I¹]. So, V = J/C = [M¹ L² T⁻³ I⁻¹], and V/m = [M¹ L¹ T⁻³ I⁻¹] ...(1). The dimensional formula for newtons is [M¹ L¹ T⁻²], so N/C = [M¹ L¹ T⁻³ I⁻¹] ...(2). From (1) and (2) it is clear that N/C is equal to V/m.
Answered by Muscardinus

Textbook & Expert-Verified Answer
Using dimensional analysis, we can demonstrate that 1 N/C is equal to 1 V/m, confirming that both units of electric field strength are equivalent. This is derived from how voltage and force are defined in terms of energy per charge. Therefore, the answer is TRUE.
Explanation: The electric field is represented in two common units: newtons per coulomb (N/C) and volts per meter (V/m). To demonstrate that these two units are equivalent, we can use dimensional analysis.
Understanding Voltage: Voltage (V) is defined as the electrical potential energy per unit charge. Its units can be expressed as: 1 V = 1 J/C.
Breaking Down Joules: A joule (J) is a unit of energy defined as: 1 J = 1 kg·m²/s². Plugging this into the voltage equation gives: 1 V = 1 kg·m²/(C·s²).
Understanding Force and Electric Field: The electric field (E) is defined as the force (F) per unit charge (Q). The unit for force is the newton (N), expressed as: 1 N = 1 kg·m/s². Then, the units for the electric field can be summarized as: E = F/Q, so N/C = (kg·m/s²)/C = kg·m/(C·s²).
Relating N/C to V/m: Dividing the voltage unit by meters gives: 1 V/m = (kg·m²/(C·s²))/m = kg·m/(C·s²).
From these calculations, we see that: 1 N/C = 1 V/m. This confirms that both notations for electric field strength are equivalent. In conclusion, both N/C and V/m refer to the same measurement of electric field strength, making the answer to the question TRUE.
Examples & Evidence

For instance, if a charge experiences a force of 10 N in an electric field, the field strength can also be expressed in terms of voltage if you know the charge. If that charge is 2 C, the electric field would be 5 N/C, or equivalently 5 V/m. This conclusion rests on the definitions of voltage and electric field, where 1 V is defined as 1 J/C and 1 N as 1 kg·m/s², allowing us to equate the units through dimensional analysis.

Community Answer

Final answer: Through dimensional analysis, it's demonstrated that the units N/C and V/m used to express electric field strength are indeed equivalent. This equivalence is rooted in the basic definitions and relations involving force, charge, energy, and potential difference.

Explanation: Understanding the relationship between the units of electric field strength reveals a common ground between N/C and V/m, and dimensional analysis serves as the tool to bridge them. The electric field (E) is traditionally measured in newtons per coulomb (N/C), which directly corresponds to the force exerted per unit charge in an electric field. Alternatively, the unit volts per meter (V/m) is another expression for electric field intensity which might seem different but is inherently equivalent to N/C. To see why 1 V/m equals 1 N/C, consider the definition of a volt: a volt is the potential difference when one joule of work is done to move a charge of one coulomb, so voltage can be represented as joules per coulomb (J/C). Combining this with the fact that work or energy (in joules) is force (in newtons) times distance (in meters), we get 1 J = 1 N·m. Dividing both sides by the charge (C) and the distance (m), this simplifies to 1 J/(C·m) = 1 N/C, that is, 1 V/m = 1 N/C.
This dimensional analysis confirms that expressing electric field units as N/C is fundamentally equivalent to expressing them as V/m, validating the interchangeable use of these units depending on the context of the physical scenario being analyzed. Therefore, it is true.

Answered by LauraHorowitz
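The answers above all reduce to bookkeeping of SI base-unit exponents, which can be checked mechanically. A minimal sketch in Python, representing each unit as a plain dict of exponents over kg, m, s, A (not a real units library):

```python
def div(a, b):
    """Divide two units given as {base: exponent} dicts."""
    out = {k: a.get(k, 0) - b.get(k, 0) for k in set(a) | set(b)}
    return {k: v for k, v in out.items() if v != 0}  # drop zero exponents

m = {"m": 1}
N = {"kg": 1, "m": 1, "s": -2}   # newton = kg*m/s^2
J = {"kg": 1, "m": 2, "s": -2}   # joule  = N*m
C = {"s": 1, "A": 1}             # coulomb = A*s
V = div(J, C)                    # volt   = J/C

print(div(N, C) == div(V, m))    # True: N/C and V/m have identical dimensions
```

Both quotients reduce to kg·m·s⁻³·A⁻¹, which is exactly the comparison made symbolically in the answers.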
12212
https://www.mdpi.com/2073-4441/11/12/2576
Theoretical and Experimental Analysis of Operating Conditions of a Circular Flap Gate for an Automatic Upstream Water Level Control
Water, Volume 11, Issue 12, 10.3390/w11122576
Open Access Article

by Janusz Kubrak ¹, Elżbieta Kubrak ¹, Edmund Kaca ¹, Adam Kiczko ¹ and Michał Kubrak ²

¹ Faculty of Civil and Environmental Engineering, Warsaw University of Life Sciences-SGGW, 02-797 Warsaw, Poland
² Faculty of Building Services, Hydro and Environmental Engineering, Warsaw University of Technology, 00-653 Warsaw, Poland

Author to whom correspondence should be addressed.
Water 2019, 11(12), 2576; Submission received: 5 October 2019 / Revised: 2 November 2019 / Accepted: 4 December 2019 / Published: 6 December 2019 (This article belongs to the Section Hydraulics and Hydrodynamics)

Abstract

This article introduces a flow controller for an upstream water head designed for pipe culverts used in drainage ditches or wells. The regulator is applicable to water flow rates in the range Qmin < Q < Qmax and the water depth H0, exceeding which causes the gate to open. The Qmin flow denotes the minimum flow rate that allows water to accumulate upstream of the controller. Above the maximum flow rate Qmax, the gate remains in the open position. In the present study, the position of the regulator's gate axis was related to the water depth H0 in front of the device. The derived dependencies were verified in hydraulic experiments. The results confirmed the regulator's usefulness for controlling the water level.

Keywords: circular flap gate; automatic upstream water level control

1. Introduction

For the automatic control of the upstream water level in irrigation and drainage channels, various devices are used in different countries. The most convenient and reliable types of automatic gates are flap gates controlled directly by the differential hydrostatic force between the face side and the back of the gate. In the literature, one can find descriptions of models of flap gates pivoting around a horizontal axis [1,2,3,4,5,6]. Most of the flap-gate designs use a counterweight which counteracts the hydrostatic force exerted on the gate. When the water level exceeds a design value, the water pressure force opens the gate. With the lowering water level, when pressure decreases, the counterweight installed on the gate closes it.
The operation of flap gates described in the literature requires the gate-closing and gate-opening couples around the pivot axis to be balanced (i.e., the gate opens and closes at the same water level). If these two couples are properly balanced, the flap gate is able to automatically maintain the upstream water level within a few centimeters at all flow rates. This paper describes the operating conditions of a control device for the upstream water level, consisting of a circular flap gate rotating around a horizontal axis located below the geometric center of mass. The design concept of the proposed flap gate differs from those previously presented: it was decided that the opening and closing water heads would be different, allowing for cyclical emptying of the upstream reservoir and flushing of stored sediments. Presently used gate controllers maintain a constant water level, while the proposed design allows for cyclical fluctuations of water levels. The device has been adapted to operate in culverts in drainage ditches and was designed to maintain a certain upstream water level. It was also planned to use the device in the drainage wells of irrigation systems. Compared with other flap-gate flow controllers, the proposed design is very simple, easy to install, maintenance-free, does not block the transport of sediments, and is relatively inexpensive.

2. The Principle of Operation of the Circular Flap Gate Design

The upstream water level control device consists of a rotating, circular metal flap gate placed in a pipe and fixed on a horizontal axis of rotation (Figure 1).

Figure 1. The water level control mounted in the pipe: (a) view of upstream, flap gate closed; (b) view of upstream, flap gate open; (c) view of downstream, flap gate closed; (d) view of downstream, flap gate open; where 1—flap gate, 2—upper half-ring, 3—screw ended with a magnet, 4—additional metal plate, 5—flap stop, 6—horizontal axis of rotation, 7—lower half-ring.
In the absence of water on the downstream side of the regulator, the circular flap gate opens when the opening moment, caused by the hydrostatic force exerted on the flap gate surface above its axis of rotation, exceeds the closing moment, caused by the hydrostatic force exerted on the flap gate surface below its axis of rotation. By adjusting the position h of the circular flap gate's axis of rotation, the gate can open both during free surface flow, when the water level is lower than the pipe diameter, and under pressurized flow conditions. The angle of gate opening can be regulated with a flap stop mounted in the pipe (5 in Figure 1). Since the axis of rotation of the circular flap gate lies below its center of mass, in order to allow the flap gate to close automatically, it is necessary to increase the weight of the bottom part of the flap gate. This was achieved by attaching an additional mass in the form of metal plates with a height of l cut from a circular plate with a diameter equal to the diameter of the flap gate 2R (Figure 2). To reduce the water flow between the wall of the pipe and the flap gate, two half-rings were placed in the pipe on both sides of the flap gate.

Figure 2. Scheme of the circular flap gate mounted in the pipe, where Rr—internal radius of the half-rings, Rp—internal radius of the pipe, h—elevation of the axis of rotation above the bottom of the flap gate, l—height of the additional metal plate, magnet—magnet screw.

In order to smoothly adjust the closing moment, a screw ended with a magnet is installed in the upper half-ring (Figure 2). By changing the position of the magnet from "0" to "3.5", an increase in the upstream water level at which the gate opens can be achieved. The "0" position means that the magnet does not affect the closing moment. It should be noted that closing the flap gate does not stop the entire water flow.
When the flap gate is closed, the volumetric flow rate Qmin leaks at its contact with the half-rings and through the fixing of its axis of rotation. At flow rates greater than Qmin, water is retained and the upstream water level increases to the designed level H0. When the upstream water level exceeds H0, the flap gate opens automatically and water flows out through the pipe. Another parameter characterizing the device is the maximum flow rate Qmax, at which the flap gate remains open. When the water flow rate exceeds Qmax, the flap gate will not be able to close because the inflow to the device is greater than the outflow. The device operating range is determined by the flow variability Qmin < Q < Qmax and the upstream water level H0, beyond which the flap gate opens. After the upstream water level drops, the flap gate automatically closes due to its weight and the weight of the additional plates, and the process of water retention begins again. The aim of this work was to analyze the operating conditions of the designed device, experimentally determine the values of Qmin and Qmax, and verify the theoretically derived formulas.

3. Analysis of Circular Flap Gate Working Conditions

The conditions for opening the circular flap gate were analyzed for the case when the downstream water level is equal to 0. Due to the presence of the lower half-ring, the circular flap gate will not be able to open when the upstream water level H is lower than the elevation h of the axis of rotation above the lower edge of the flap (H < h in Figure 3). The flap gate opening is possible when the upstream water level is higher than the elevation of the flap gate axis of rotation above its bottom (H > h in Figure 3) and when the gate-opening moment caused by the hydrostatic force P1 is higher than the gate-closing moment caused by the hydrostatic force P2.

Figure 3.
Hydrostatic force acting on the flap gate in the case of (a), (b) free surface water flow in a pipe, and (c) pressure water flow at overpressure of pn.

The necessary condition for the opening of the flap gate may be written as follows:

P1 r1 > P2 r2 (1)

where P1 is the hydrostatic force acting on the surface of the circular flap gate above its axis of rotation, r1 is the moment arm of force P1, P2 is the hydrostatic force acting on the surface of the circular flap gate below its axis of rotation, and r2 is the moment arm of force P2. The differential hydrostatic force dP acting on a differential surface of the flap dA can be calculated from the fact that the pressure of a liquid with a specific weight of γ at the depth z is equal to p = γz. The scheme of the circular flap gate in a pipe is shown in Figure 4, which illustrates the notation used in the analysis.

Figure 4. The scheme for calculating the position of the axis of rotation of the circular flap gate in the case of pressure flow.

The differential hydrostatic force dP acting on a differential surface of the flap dA is equal to

dP = p dA = γ z dA (2)

The differential surface area can be written as follows:

dA = x dz (3)

The length x can be determined from the Pythagorean theorem:

(x/2)² = R² − (z0 − z)² (4)

Hence,

x = 2 √(R² − (z0 − z)²) (5)

The differential moment acting on the surface dA is equal to

dM1 = γ z (z_ax − z) x dz (6)

dM1 = 2 γ z (z_ax − z) √(R² − (z0 − z)²) dz (7)

On the surface of the circular flap gate above its axis of rotation, pressure exerts a gate-opening moment equal to

M1 = 2γ ∫_{z_g}^{z_ax} z (z_ax − z) √(R² − (z0 − z)²) dz (8)

The closing moment acting on the surface of the circular flap gate below its axis of rotation can be written as follows:

M2 = 2γ ∫_{z_ax}^{z_d} z (z − z_ax) √(R² − (z0 − z)²) dz (9)

From the boundary condition

M1 = M2 (10)

one can calculate the depth z_ax of the axis of rotation of the circular flap gate. Due to the complex form of the integrals in Equations (8) and (9), Simpson's rule was used to calculate them. Firstly, after assuming the value of the upstream water level, the position of the circular flap gate axis of rotation was calculated; exceeding the assumed upstream water level should cause the circular flap gate to open. The results of the calculations allowed us to establish a dimensionless relation between the elevation h of the axis of rotation of the flap gate above its bottom and the upstream water depth H0 at which the flap gate opens (Figure 5). This relation can be used to determine, for a given upstream water depth H0, the elevation of the axis of rotation of the flap gate above its bottom which causes it to open.

Figure 5. Relation between the position of the circular flap gate's axis of rotation h/(2R) and the upstream water depth at which the flap gate opens, for a device with and without an upper half-ring.

It should be noted that the upper half-ring affects the opening moment, as it reduces the surface area of the circular flap gate. The larger the radius of the upper half-ring, the smaller the surface area on which the hydrostatic force P1 acts. Since the radius of the upper half-ring affects the opening moment, the relation between the elevation h of the flap gate axis of rotation above its bottom and the upstream water depth H0 at which the flap gate opens depends on the radius Rr of the upper half-ring relative to the radius R of the flap gate. The use of a half-ring with a relative radius difference of (R − Rr)/R = 0.1082 for the circular flap gate of diameter 2R described later resulted in a decrease in the elevation h of the flap gate axis of rotation above its bottom and in the upstream water depth H0 at which the flap gate automatically opens (Figure 5).
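As a numerical cross-check of the relation in Figure 5, the boundary condition M1 = M2 in Equations (8)–(10) can be rearranged: M1 − M2 is proportional to ∫ z (z_ax − z) √(R² − (z0 − z)²) dz over the whole gate, so the solution z_ax is the depth of the gate's center of pressure, z_ax = ∫ z² x dz / ∫ z x dz. The sketch below is my reconstruction, assuming no upper half-ring and the water surface level with the top of the gate (H0/(2R) = 1), and evaluates the integrals with composite Simpson's rule, as in the paper:

```python
import math

def simpson(f, a, b, n=20000):
    """Composite Simpson's rule; n must be even."""
    step = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * step)
    return s * step / 3

R = 1.0                          # gate radius (normalized)
H0 = 2 * R                       # upstream depth: H0/(2R) = 1, gate just submerged
z_g, z_d = H0 - 2 * R, H0        # depths of the gate top and bottom
z0 = H0 - R                      # depth of the gate center

def half_chord(z):
    # x/2 from Eq. (5): half-width of the circular gate at depth z
    return math.sqrt(max(R**2 - (z0 - z)**2, 0.0))

# M1 = M2 (Eq. 10) rearranges to the center-of-pressure depth:
num = simpson(lambda z: z**2 * half_chord(z), z_g, z_d)
den = simpson(lambda z: z * half_chord(z), z_g, z_d)
z_ax = num / den
h_over_2R = (z_d - z_ax) / (2 * R)   # h = elevation of the axis above the bottom
print(f"h/(2R) = {h_over_2R:.4f}")   # ≈ 0.3750, the no-half-ring value in the text
```

For a fully submerged disk this reproduces the classical center-of-pressure result z_ax = z̄ + Ic/(A z̄) = 1.25 R, i.e., h/(2R) = 0.3750, matching the no-half-ring value quoted in the text for H0/(2R) = 1.0.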
The influence of a half-ring with the radius difference of R − Rr = 0.1082 R on the elevation of the flap gate axis of rotation above its bottom is shown in Figure 5. For example, if the circular flap gate has no upper half-ring and is designed assuming that it opens at H0/(2R) = 1.0, then the elevation of the flap gate axis of rotation above its bottom should be equal to h/(2R) = 0.3750, so h = 0.3750 (2R). For comparison, assuming that a circular flap gate with an upper half-ring with the radius difference of R − Rr = 0.1082 R has to maintain the same upstream water depth of H0/(2R) = 1.0, the elevation of the flap gate axis of rotation above its bottom should be equal to h/(2R) = 0.3600, so h = 0.3600 (2R). Since the axis of rotation of a circular flap gate is located below its center of mass, the gate will automatically close when (Figure 6)

G2 rC2 sin φ > G1 rC1 sin φ (11)

that is,

m2 g rC2 sin φ > m1 g rC1 sin φ (12)

where G1 and G2 are the weights of the flap gate parts above and below the axis of rotation, g is the gravitational acceleration, m1 and m2 are the masses of the flap gate parts above and below the axis of rotation, rC1 and rC2 are the distances between the center of mass and the flap gate axis of rotation for the flap gate parts above and below the axis of rotation, and φ is the angle between the gate and the vertical axis.

Figure 6. The distribution of forces used to calculate the mass of the additional metal plate attached to the bottom part of the flap gate (SC1–SC3 are the centers of mass of m1–m3).

The total mass of the circular flap gate is the sum of both parts:

m = m1 + m2 (13)

In order for the flap gate to return to its original position and meet condition (11), an additional weight G3 = m3 g, with a center of mass at a distance rC3 from the axis of rotation, must be attached to the surface below the axis of rotation (Figure 6).
When designing the flow regulator, it was assumed that the additional mass would be attached in the form of metal plates with a height of l cut from a circular plate with a diameter equal to the diameter of the flap gate 2R. The required mass of the plates was calculated using the static equilibrium equation for the horizontal position of the gate (φ = 90°, Figure 6):

m2 g rC2 + m3 g rC3 = m1 g rC1 (14)

The values of the required mass of the additional plates m3 in relation to the mass of the lower part of the flap gate m2 are presented in Figure 7 as a function of h/(2R) and H0/(2R). Since friction was neglected in the theoretical considerations, the required mass of the additional plates must be slightly greater than calculated. An increase of the additional mass m3 increases the water level necessary to close the gate and, in this way, reduces the variability of the water levels upstream of the gate.

Figure 7. The values of the required mass of the additional plates m3 in relation to the mass of the lower part of the flap gate m2, as a function of the h/(2R) and H0/(2R) ratios, for the upper half-ring with the radius difference of (R − Rr)/R = 0.1082.

The relation presented in Figure 7 shows that the required mass of additional plates for the automatic closing of the flap gate increases with the increase of the axis of rotation elevation h and the decrease of the upstream water level H0 at which the flap gate opens. For h/(2R) = 0.4, the mass of the additional plates is equal to the mass of the part of the flap gate below its axis of rotation (i.e., m2 = m3). Lowering the position of the flap gate axis of rotation leads to a decrease in the upstream water level H0 at which the flap gate opens but, on the other hand, makes it necessary to increase the mass of the attached plates m3 in order for the flap gate to close automatically. Maintaining the upstream water level by lowering and raising the position of the flap gate axis of rotation is not convenient.
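Equation (14) can be checked with the same numerical machinery. For a homogeneous circular gate with its axis a height h above the bottom, the net gravity moment about the axis has a compact closed form, m1 g rC1 − m2 g rC2 = ρ t g πR² (R − h): the whole plate's weight times the offset of its center (at height R) from the axis. This identity is my derivation, not stated in the paper; the sketch below verifies it by Simpson integration and then estimates how many plate layers of an assumed height l would balance it (all numbers illustrative):

```python
import math

def simpson(f, a, b, n=20000):
    """Composite Simpson's rule; n must be even."""
    step = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * step)
    return s * step / 3

R = 1.0            # gate radius (normalized units)
h = 0.8 * R        # axis height above the gate bottom, h/(2R) = 0.4

def w(y):          # full chord width of the gate at height y above its bottom
    return 2 * math.sqrt(max(R**2 - (y - R)**2, 0.0))

# first moments of area about the axis (per unit areal density rho*t)
m1_rC1 = simpson(lambda y: (y - h) * w(y), h, 2 * R)   # upper part, opening side
m2_rC2 = simpson(lambda y: (h - y) * w(y), 0.0, h)     # lower part, closing side
net = m1_rC1 - m2_rC2
closed_form = math.pi * R**2 * (R - h)
print(net, closed_form)        # the two agree

# Eq. (14): m3*rC3 must supply the net moment; plates of height l from the bottom
l = 0.3 * R                                              # assumed plate height
A_plate = simpson(w, 0.0, l)                             # area of one plate
y_plate = simpson(lambda y: y * w(y), 0.0, l) / A_plate  # its centroid height
layers = net / (A_plate * (h - y_plate))                 # plate layers required
print(f"plate layers needed: {layers:.2f}")
```

The same calculation, swept over h/(2R) and H0/(2R), is how a relation like Figure 7 can be tabulated.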
The increase of the upstream water level at a given position of the flap gate axis of rotation can be achieved by applying a force on the upper edge of the flap gate that counteracts its opening. For this reason, a magnet with an adjustable position was installed in the upper half-ring. The shorter the distance between the magnet and the flap gate, the larger the closing moment. Introduction of the magnet increases the gate-closing moment:

M2 = 2γ ∫_{z_ax}^{z_d} z (z − z_ax) √(R² − (z0 − z)²) dz + F rF (15)

where F is the magnet force, which maintains the gate in the closed position, and rF is the distance from the flap gate axis of rotation to the force F at the center of the magnet screw.

4. Experimental Verification of the Device's Operating Conditions

Experimental tests of the effectiveness of the circular flap gate for automatic upstream water level control were carried out at the Hydraulic Laboratory of the Faculty of Civil and Environmental Engineering at the Warsaw University of Life Sciences. In the presented tests, a single model of the controller was analyzed. The circular flap gate was made of a 1 mm thick metal plate. The device was installed in a drainage pipe with an internal diameter of 2Rp = 0.080 m. The circular flap gate had a diameter of 2R = 0.0739 m and a mass of m = m1 + m2 = 31.9 g (Figure 8). The axis of rotation of the flap gate was located at a height of h = 0.0258 m above its bottom. Based on the dependence shown in Figure 5, for R − Rr = 0.1082 R, the values of the ratios h/(2R) = 0.3490 and H0/(2R) = 0.9509 were calculated. This allowed us to calculate the value of the upstream water depth at which the flap gate opens: H0 = 0.9509 (2R) = 0.070 m. The calculated mass necessary to close the flap gate was equal to m3 = 22.0 g. When designing the device, it was assumed that the additional mass would be attached in the form of metal plates with a height of l = 0.020 m cut from a circular plate with a diameter equal to the diameter of the flap gate 2R.
Therefore, four plates were attached to the flap gate with two screws, with a total mass of 29.8 g. Since 29.8 g > m3 = 22.0 g, the circular flap gate should close automatically when the upstream water level drops. The upstream water level at which the flap gate opens was changed by moving the magnet screw from the "0" to the "3.5" position. The "0" position means that the magnet does not affect the flap gate. Changing the position of the magnet to "1" meant turning the screw with the magnet by 360° and bringing the magnet closer to the flap gate by 0.0015 m (Figure 2).

Figure 8. Cross section of the experimental setup.

The device was placed in a drainage pipe installed in a wall separating a rectangular channel of 200 mm width and 400 mm height. The volumetric flow rate was measured by an inductive flow meter. The water level was measured using a gauge with an accuracy of 0.0002 m. The scheme of the experimental setup is shown in Figure 8.
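Before looking at the measured hydrographs, the expected fill-and-flush cycle of such a setup can be sketched as a simple water-balance simulation with hysteresis between the opening depth H0 and a lower closing depth. All parameters below (storage area, leakage, discharge coefficient, closing depth) are illustrative assumptions, not values measured in the paper:

```python
import math

# Illustrative parameters (assumed, not from the paper's experiments)
A = 0.2            # plan area of the upstream storage, m^2
H0 = 0.070         # upstream depth at which the gate opens, m
H_close = 0.020    # depth at which the gate falls shut again, m (hysteresis)
Q_in = 0.25e-3     # inflow, m^3/s (must lie between Qmin and Qmax)
Q_leak = 0.05e-3   # leakage through the closed gate (the Qmin flow), m^3/s
C_gate = 0.01      # open-gate discharge coefficient: Q_out = C_gate*sqrt(H)

H, gate_open = 0.0, False
opens = closes = 0
dt = 0.1
for _ in range(int(3600 / dt)):                    # simulate one hour
    Q_out = C_gate * math.sqrt(H) if gate_open else Q_leak
    H = max(H + (Q_in - Q_out) * dt / A, 0.0)      # water balance
    if not gate_open and H >= H0:                  # gate opens at H0
        gate_open, opens = True, opens + 1
    elif gate_open and H <= H_close:               # closes after flushing
        gate_open, closes = False, closes + 1
print(f"cycles: {opens} openings, {closes} closings")
```

With these numbers the storage refills in roughly a minute and flushes in seconds, so the gate cycles repeatedly; the cycle length is governed almost entirely by the refill time, consistent with the observation in Section 5 that a small change of inflow significantly changes the work cycle.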
Changing the position of the magnet by bringing it closer to the flap caused an increase of the gate-closing moment, which resulted in an increase of the upstream water depth H0 at which the flap gate opens and of the minimum flow rate Qmin. On the other hand, exceeding the maximum flow rate Qmax made the circular flap gate unable to close. This was due to the fact that the inflow to the device was greater than the outflow. Values of the upstream water depth H0 causing the circular flap gate to open for different positions of the magnet are presented in Figure 10.

Figure 10. Values of the upstream water depth H0 at which the circular flap gate opens for different positions of the magnet screw.

Operating conditions of the device at various flow rates and positions of the magnet were also analyzed. Figure 11, Figure 12, Figure 13 and Figure 14 present hydrographs of upstream water levels in the channel for different configurations of flow rates at the inflow and magnet positions. Downstream water levels had no effect on the flow through the device. As can be seen, even a small change of the flow rate at the inflow to the controller (Q = 0.251 and 0.397 dm³/s) significantly influenced the length of the controller's work cycle, as it depends on the volume of water retained upstream of the controller.

Figure 11. Hydrograph of upstream water level at different flow rates at the inflow for the "1" position of the magnet screw.
Figure 12. Hydrograph of upstream water level at different flow rates at the inflow for the "2" position of the magnet screw.
Figure 13. Hydrograph of upstream water level at different flow rates at the inflow for the "3" position of the magnet screw.
Figure 14. Hydrograph of upstream water level at different flow rates at the inflow for the "3.5" position of the magnet screw.

6. Conclusions

The operating conditions of the upstream water level circular flap control gate were analyzed.
Dimensionless relations were given for determining the elevation of the circular flap gate's axis of rotation as a function of the upstream water depth H_0 at which the flap gate opens, the radius of the flap gate R, and the radius of the upper half-ring R_r. When the circular flap gate is closed, a volumetric flow rate of Q_min leaks through its contact with the half-rings and through the mounting of its axis of rotation. At flow rates greater than Q_min, water is retained and the upstream water depth increases to the design depth H_0. When the upstream water depth exceeds H_0, the circular flap gate opens automatically. Another parameter characterizing the device is the maximum flow rate Q_max at which the flap gate can still remain closed. When the flow rate exceeds Q_max, the flap gate is unable to close because the inflow to the device is greater than the outflow. The operating range of the device is thus determined by the flow interval Q_min < Q < Q_max and by the upstream water depth H_0, beyond which the flap gate opens. When the upstream water depth is lower than H_0, the flap gate closes automatically due to its own weight and the weight of the additional steel plates. By using a magnet-ended screw that "holds" the flap gate, smooth adjustment of the upstream water depth H_0 can be achieved. The upstream water depth at which the circular flap gate opens was the same at different flow rates, which indicates that the flow rate has no effect on H_0. Changing the position of the magnet by bringing it closer to the flap increases the gate-closing moment, which results in an increase of the upstream water level H_0 at which the flap gate opens and of the minimum flow rate Q_min.

Author Contributions: Conceptualization, J.K. and E.K. (Edmund Kaca); Methodology, J.K., E.K. (Edmund Kaca) and E.K. (Elżbieta Kubrak); Software, M.K.; Investigation (laboratory experiments), E.K. (Elżbieta Kubrak); Writing—Original Draft Preparation, J.K., E.K.
(Elżbieta Kubrak), M.K., and A.K.; Visualization, E.K. (Elżbieta Kubrak).

Funding: This research was funded by the Warsaw University of Life Sciences—SGGW according to agreement No. MNISW/2017/DIR/36/II+, 7 March 2017.

Conflicts of Interest: The authors declare no conflict of interest.

Figure 1.
The water level control mounted in the pipe: (a) view from upstream, flap gate closed; (b) view from upstream, flap gate open; (c) view from downstream, flap gate closed; (d) view from downstream, flap gate open; where 1—flap gate, 2—upper half-ring, 3—screw ended with a magnet, 4—additional metal plate, 5—flap stop, 6—horizontal axis of rotation, 7—lower half-ring.

Figure 2. Scheme of the circular flap gate mounted in the pipe, where R_r—internal radius of the half-rings, R_p—internal radius of the pipe, h—elevation of the axis of rotation above the bottom of the flap gate, l—height of the additional metal plate, magnet—magnet screw.

Figure 3. Hydrostatic force acting on the flap gate in the case of (a), (b) free-surface water flow in a pipe, and (c) pressurized water flow at an overpressure of p_n.

Figure 4. The scheme for calculating the position of the axis of rotation of the circular flap gate in the case of pressure flow.

Figure 5. Relation between the position of the circular flap gate's axis of rotation h/(2R) and the upstream water depth at which the flap gate opens, for a device with and without an upper half-ring.

Figure 6. The distribution of forces used to calculate the mass of the additional metal plate attached to the bottom part of the flap gate (S_C1–S_C3 are the centers of the masses m_1–m_3).

Figure 7. The values of the required mass of the additional plates m_3 in relation to the mass of the lower part of the flap gate m_2, as a function of the h/(2R) and H_0/(2R) ratios, for the upper half-ring with (R − R_r)/R = 0.1082.
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.

Share and Cite: Kubrak, J.; Kubrak, E.; Kaca, E.; Kiczko, A.; Kubrak, M. Theoretical and Experimental Analysis of Operating Conditions of a Circular Flap Gate for an Automatic Upstream Water Level Control. Water 2019, 11, 2576.
12213
https://www.badoni.altervista.org/lezioni/metodo_delle_componenti.pdf
SUM OF VECTORS (IN THE PLANE) BY THE COMPONENT METHOD

This method makes it possible to add 2 or more vectors analytically, using trigonometric functions. It can be divided into three steps:

1. Decompose the vectors into their Cartesian components X and Y.
2. Algebraically add the like components (all the x components together, all the y components together).
3. Apply the Pythagorean theorem to find the magnitude of the resultant, and apply the inverse tangent function (tan⁻¹) to find the angle the resultant forms with the x axis.

1. Decompose the vectors into their Cartesian components X and Y

Suppose we have two vectors V1 and V2 of which we know the magnitudes and the angles they form with the X axis.

1.1. Components X and Y of vector V1:
V_1x = V_1 cos(α),  V_1y = V_1 sin(α)

1.2. Components X and Y of vector V2:
V_2x = V_2 cos(β),  V_2y = V_2 sin(β)

2. Algebraically add the like components (all the x components together, all the y components together):
R_x = V_1x + V_2x = V_1 cos(α) + V_2 cos(β)
R_y = V_1y + V_2y = V_1 sin(α) + V_2 sin(β)

It is important to note that these components are obtained by an algebraic sum (therefore taking the sign into account).

3. Apply the Pythagorean theorem to find the magnitude of the resultant, and the inverse tangent function (tan⁻¹) to find the angle the resultant forms with the x axis:
R = √(R_x² + R_y²),  θ = tan⁻¹(R_y / R_x)
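The three steps above translate directly into code. A minimal sketch (the function name is ours; angles are measured from the x axis in degrees), using atan2 rather than a plain tan⁻¹ so the quadrant of the resultant is handled automatically:

```python
import math

def resultant(v1, alpha_deg, v2, beta_deg):
    """Sum two plane vectors given by magnitude and angle (degrees, measured
    from the x axis). Returns (magnitude, angle in degrees) of the resultant."""
    a = math.radians(alpha_deg)
    b = math.radians(beta_deg)
    # Step 1: decompose each vector into its x and y components.
    # Step 2: add the like components algebraically (signs included).
    rx = v1 * math.cos(a) + v2 * math.cos(b)
    ry = v1 * math.sin(a) + v2 * math.sin(b)
    # Step 3: Pythagorean theorem for the magnitude; atan2 instead of a
    # plain tan^-1 so the angle lands in the correct quadrant.
    return math.hypot(rx, ry), math.degrees(math.atan2(ry, rx))

mag, ang = resultant(3, 0, 4, 90)   # perpendicular 3 and 4: a 3-4-5 triangle
```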
12214
https://math.stackexchange.com/questions/4455275/what-is-the-difference-between-the-principal-square-root-of-x-and-x1-2
algebra precalculus - What is the difference between the principal square root of $x$ and $x^{1/2}$? - Mathematics Stack Exchange
What is the difference between the principal square root of x and x^{1/2}?

Asked 3 years, 4 months ago. Modified 3 years, 4 months ago. Viewed 608 times.

Today I learned that the square root symbol (√) represents only the "principal" square root of a number. What about exponents? If I write x^{1/2}, would that encompass the negative square root of x as well?

Edit: x is a non-negative real number.

Tags: algebra-precalculus, notation

Asked May 21, 2022 by unloadedmaterial; edited May 22, 2022.

Comments:

- No. x^{1/2} is a function, so it has only one value for a fixed argument. – Przemysław Scherwentke, May 21, 2022
- In Real Analysis, by arbitrary convention, for x ≥ 0, x^{1/2} is synonymous with the principal square root of x. This is because, absent some convention around the interpretation of x^{1/2}, the expression would be ambiguous for x > 0. – user2661923, May 21, 2022
- For nonnegative x, in real analysis, ⁿ√x and x^{1/n} have the same meaning: the principal n-th root of x. – ryang, May 21, 2022
- @GerryMyerson Yes; I've edited the question to reflect this. – unloadedmaterial, May 22, 2022
- @ryang Thank you. – unloadedmaterial, May 22, 2022

1 Answer
The idea of "square root" is that it inverts the process of squaring (more generally: n-th roots invert exponentiation). It is what subtraction is to addition and what division is to multiplication. Exponentiation has another inverse, the logarithm, because it is not a commutative operation, but that's another matter.

If we want to invert squaring, we run into trouble, because we cannot: the function is not injective. 2² = 4, but also (−2)² = 4, so there is no way of retrieving 2 or −2 from 4 without further information. This is unlike adding a number, easily reversed, or multiplying by a number, reversible if and only if that number is not 0 (hence, we can't divide by 0).

Square roots are well behaved: from either square root x, the other square root is −x. Thus, we define notation to give a "principal" value, the one that is useful in the most contexts, and it's not hard to find the other value if both are needed (or just the negative one). Both √x and x^{1/2} are notations to describe the principal square root of x. The best notation for the other square root is −√x. Example notations that encompass both roots: ±√x or {√x, −√x}.

As commenters have said, conventions may differ in a context such as complex analysis, where it may be important to distinguish between functions (one input, one output) and multifunctions (one input, multiple outputs). It is not universal that, for instance, x^{1/n} is the multifunction and ⁿ√x is the function, but some authors may specify this convention and then use it throughout their writing.

If you like, you can use √x to refer to both square roots, but this isn't standard, so you need to make it clear before you first use it. Notice that √x is then no longer a single-valued expression, so values like √4 + 1 also need explaining: does this mean both −1 and 3?
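The convention the answer describes is also what most programming languages implement: both a `sqrt` function and the `1/2` power return only the principal (non-negative) root, and the other root must be produced by negation. A small Python illustration of this point:

```python
import math

x = 4.0

# Both notations return only the principal (non-negative) square root:
principal = math.sqrt(x)
via_power = x ** 0.5
assert principal == via_power == 2.0

# The other square root is obtained by negation; the pair mirrors the
# ±√x notation from the answer:
both_roots = (principal, -principal)

# Squaring is not injective, which is why a convention is needed at all:
assert 2.0 ** 2 == (-2.0) ** 2 == x
```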
Answered May 23, 2022 at 16:36 by A.M.
https://math.mit.edu/research/highschool/rsi/documents/2018Hansen.pdf
The Modular Representation Theory of Cyclic Groups of Prime Power Order

Kayson Hansen, kayson@mit.edu
under the direction of Mr. Hood Chatham
Department of Mathematics, Massachusetts Institute of Technology

Research Science Institute
July 31, 2018

Abstract

It is well-known that representations of a group G over a field K are in bijection with modules over the group algebra K[G]; this is the basis for the field of modular representation theory. We study the modular representation theory of cyclic groups of prime power order C_{p^k} over finite fields F_p. There is a broad background in the literature on the representation theory of C_{p^k} over a finite field F_n when n ∤ p^k, but little is known when n | p^k, which is the case we study. We find a basis for the representation ring of F_p[C_{p^k}], which allows us to give a much simpler description of the structure of the representation ring. Our results help us better understand the representation theory of cyclic groups, which has applications in number theory and the Langlands Program.

Summary

Abstract algebra is a field of math that deals with studying abstract objects and their properties. However, solving problems and studying the properties of these abstract objects can be very difficult, and it is much easier to study concrete, linear objects. This is the motivation for the field of representation theory, which represents abstract objects as matrices, which are very easy to deal with. We study the representation theory of one object in particular, called a cyclic group, and we describe the underlying structure of how its representations interact with each other. Our results have applications in several fields of math, including the Langlands Program, which seeks to connect two fields of study: number theory and geometry.

1 Introduction

Representation theory is a method of transforming problems in abstract algebra into problems in linear algebra by representing elements of more complicated algebraic structures as matrices.
This allows us to use the many powerful tools of linear algebra that are traditionally applied to matrices to tell us about the properties of groups, rings, fields, and other algebraic objects. Additionally, representation theory has applications in a vast range of mathematical fields, as well as theoretical physics, making it an extremely useful field of study. One of the fundamental results in representation theory is the following theorem, given by Maschke in 1898.

Theorem 1.1. Let V be a representation of the finite group G over a field F in which |G| is invertible. Let W be an invariant subspace of V. Then there exists an invariant subspace W_1 of V such that V = W ⊕ W_1 as representations.

Maschke's Theorem implies that if the characteristic of the field F does not divide the order of the group G, all subrepresentations split. This implies that indecomposable representations are always irreducible. An object satisfying this property, that all indecomposable representations are irreducible, is called semisimple. Maschke's Theorem says that the representation theory of a group over a field with characteristic not dividing the order of the group is semisimple. Many techniques exist to study semisimple representation theory, and the majority of the work in the field focuses on this case. Character theory provides strong restrictions on the possible dimensions of irreducible representations. Once the irreducible representations are determined, character theory provides an efficient algorithm for decomposing representations.

When the characteristic of the field divides the order of the group, this is known as modular representation theory. Modular representation theory is not semisimple and far less is known. The most fundamental theorem in nonsemisimple representation theory is the Jordan form, which says that the indecomposable representations of Z over an algebraically closed field k correspond to pairs (λ, d) of an eigenvalue λ ∈ k and a dimension d.
If a group has representation theory at most as complicated as the representation theory of Z, the representation theory is said to be tame; otherwise it is said to be wild. An immediate consequence of the Jordan form is that every cyclic group has tame modular representation theory. The only noncyclic p-group with tame modular representation theory is Z/2 × Z/2. In the tame case, the collection of indecomposable representations is well understood, but the behavior of the standard functors on representations is poorly understood. However, when the characteristic of the field divides the order of the group, much less is known. Because finite fields must have prime power order, we study groups of prime power order, specifically cyclic groups of prime power order. We study these groups from the approach of modular representation theory, which studies modules instead of representations. We find a concise description of the representation ring of C_{p^k} over the field F_p as the quotient of a polynomial ring by some relations.

2 Background

2.1 Representations

Definition 2.1. A representation of a group G over a field K is a pair (V, ρ) of a K-vector space V and a homomorphism ρ : G → GL(V), where GL(V) is the group of linear automorphisms of V.

For a simple example, consider C_n = ⟨x | x^n = 1⟩, the cyclic group of order n. If G is some other group, a homomorphism f : C_n → G is determined by f(x), because f is a homomorphism, so f(x^k) = f(x)^k. Because f(x)^n = f(x^n) = f(1) = 1, the image of x must be an element of G whose order divides n. Thus homomorphisms C_n → G are in bijection with elements of G of order dividing n. In particular, a representation of C_n on a K-vector space of dimension d is specified by a d × d matrix M such that M^n = Id. For instance, the one-dimensional representations of C_n over a field K are given by nth roots of unity in K.

Definition 2.2. A group action of a group G on a set X is a map G × X → X : (g, x) ↦ gx satisfying

1. (gh)x = g(hx)
2. 1_G x = x

for all g, h ∈ G and x ∈ X.
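The correspondence just described (one-dimensional representations of C_n over a field correspond to nth roots of unity in that field) can be checked by brute force for small finite fields. A minimal sketch, with a function name of my own choosing, not from the paper:

```python
def one_dim_reps(n, p):
    """1-dimensional representations of C_n over F_p.

    Each corresponds to an n-th root of unity x in F_p*: the representation
    sends a generator g of C_n to the 1x1 matrix (x).
    """
    return [x for x in range(1, p) if pow(x, n, p) == 1]

# C_4 over F_5: every nonzero x satisfies x^4 = 1 (Fermat), so four 1-dim reps.
print(one_dim_reps(4, 5))  # [1, 2, 3, 4]
# C_3 over F_5: only the trivial representation, since 3 does not divide |F_5*| = 4.
print(one_dim_reps(3, 5))  # [1]
```

The second call illustrates the semisimple case discussed above: when the characteristic does not divide the group order, the count of one-dimensional representations is governed by gcd(n, p − 1).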
A group action is linear if X is a vector space and the elements of G act linearly on X. Because a representation maps from G to GL(V), each element of G is represented by a linear automorphism of V. Thus, we can completely define a representation by how its elements act on V, which implies that a representation is completely equivalent to a linear group action on a vector space.

2.2 Modules

Definition 2.3. A left R-module M over the ring R consists of an abelian group (M, +) and an operation R × M → M (called scalar multiplication) such that for all r, s ∈ R and for all x, y ∈ M, we have

1. r(x + y) = rx + ry
2. (r + s)x = rx + sx
3. (rs)x = r(sx)
4. 1_R x = x

where 1_R is the multiplicative identity in R. A right R-module is defined analogously.

Definition 2.3 is the standard definition of a module, but Definition 2.4 is an equivalent, more useful definition in the context of representation theory.

Definition 2.4. A module over a K-algebra R is a pair (V, ρ) of a K-vector space V and a map ρ : R → End(V) from R to the ring of linear endomorphisms of V.

If R is a field, then an R-module is a vector space over R; thus modules over a ring are a generalization of vector spaces over a field. For any ring R, an example of an R-module is the Cartesian product R^n, with scalar multiplication defined component-wise. The ring End(R^n) is given by n × n matrices of elements of R, and the map ρ : R → End(R^n) embeds R as the diagonal matrices. Any ring homomorphism R → S makes S into an R-module. Modules are useful in representation theory because they provide another way of looking at group representations.

Definition 2.5. Given a group G and a field K, the group algebra K[G] is defined as a vector space with basis G, so that a typical element r ∈ K[G] is given by ∑_{g∈G} a_g[g], along with a bilinear operation K[G] × K[G] → K[G]. This operation is given by setting [g][h] = [gh] and extending linearly.
We study the representation theory of F_p[C_{p^k}], which is an example of a group algebra. There is an isomorphism from F_p[C_{p^k}] to F_p[x]/(x^{p^k} − 1), given by g ↦ x, where g is a generator. A typical element of F_p[C_{p^k}] looks like a polynomial in one variable over a finite field, and takes the form

∑_{i=1}^{p^k} a_i g^{i−1}.

Lemma 2.1. Modules over K[G] are in bijection with representations of G over K.

Lemma 2.1 follows almost directly from Definition 2.1 and Definition 2.4, with a few rigorous steps involved, because modules and representations are almost definitionally equivalent. The bijection between modules over K[G] and representations of G over K can be used to study the representation theory of cyclic groups C_n. Representations of C_n over F_p are equivalent to modules over F_p[C_n], and it is simpler to study these modules instead of representations, because modules encompass all of the structure of representations in one object, rather than the ordered pair (V, ρ) from Definition 2.1.

2.3 Jordan Normal Form

Representation theory is motivated by a desire to apply the tools of linear algebra to abstract algebra, and in our case we use the Jordan normal form of a matrix, which is a powerful result from linear algebra. The Jordan normal form of a matrix A is a matrix J composed of Jordan blocks that satisfies J = P^{−1}AP, so J is similar to A. The Jordan blocks which compose the Jordan normal form have λ on the diagonal and 1 on the superdiagonal:

J_n =
[ λ 1 0 ... 0 ]
[ 0 λ 1 ... 0 ]
[ ... ... ... ]
[ 0 0 ... λ 1 ]
[ 0 0 ... 0 λ ]

The main focus of this paper is studying the cyclic group of order p^k, C_{p^k}, along with vector spaces over F_p. For all g ∈ C_{p^k}, we have g^{p^k} = 1_G, and because the map ρ is a homomorphism, the matrix representation M = ρ(g) must also satisfy M^{p^k} = Id. Therefore, when we consider Jordan blocks as matrix representations of elements of C_{p^k}, we must have J_n^{p^k} = Id. This implies λ^{p^k} = 1, so λ is a p^k-th root of unity.
However, because the entries of J_n lie in F_p, we have x^{p^k} − 1 = (x − 1)^{p^k}, and so all p^k-th roots of unity equal 1; hence λ = 1. So all Jordan blocks have the form

J_n =
[ 1 1 0 ... 0 ]
[ 0 1 1 ... 0 ]
[ ... ... ... ]
[ 0 0 ... 1 1 ]
[ 0 0 ... 0 1 ]

where J_n is n-dimensional. A matrix in Jordan normal form is composed of Jordan blocks along the diagonal, with zeros elsewhere:

J =
[ J_{i_1}                ]
[       J_{i_2}          ]
[            ...         ]
[               J_{i_n}  ]

2.4 Indecomposable Representations

The set of all representations of a group G over a field K forms a ring Rep_⊗(K[G]) when one considers representations as K[G]-modules. It is a common problem in representation theory to study the indecomposable representations in Rep_⊗(K[G]). Given some representation ρ, we call ρ indecomposable if ρ cannot be written as the direct sum of two non-zero representations. Understanding the direct sum of representations is simplest when considering modules: the direct sum of two modules V_i and V_j is the module V_i ⊕ V_j, with basis given by appending the basis of V_j to the basis of V_i. If we choose some g ∈ G, g will be represented as a matrix T in GL(V) through the map ρ(g) = T. Because we are concerned with cyclic groups C_{p^k}, every other matrix representation is determined by T if g is a generator. Therefore, we can associate a matrix to ρ: the matrix of T with respect to some basis of K[G], with g the generator of C_{p^k}. Then ρ is indecomposable if its Jordan normal form is composed of only one Jordan block. This is equivalent to ρ not being the direct sum of two non-zero representations, because the direct sum of two matrices A and B is the block diagonal matrix

A ⊕ B =
[ A 0 ]
[ 0 B ]

so if the Jordan normal form has more than one Jordan block, it is the direct sum of those Jordan blocks, and thus decomposable. One can use this connection between decomposing modules and the Jordan normal form of a matrix to prove the following well-known result about modular representations of cyclic groups that will be useful later on:

Lemma 2.2. For C_{p^k} over a field K, there are precisely p^k classes of indecomposable K[C_{p^k}]-modules. The i-th module M_i, where 1 ≤ i ≤ p^k, has dimension i.

From now on, we denote the n-dimensional indecomposable module by V_n. The matrix associated with each V_n is the n × n Jordan block with λ = 1. Multiplication by this matrix corresponds to the group action by the generator g of G. Now that we know what all indecomposable representations of C_{p^k} look like, we can compute direct sums and tensor products through operations on matrices, and we can start to study the representation ring. We are interested in determining the decomposition of a general element of Rep_⊗(K[G]) into the indecomposable modules V_n. The Jordan normal form proves to be extremely useful for computing such decompositions. For example, if we have the element V_2 ⊗ V_3 over F_5[C_5], we can compute J_2 ⊗ J_3, where J_n is the n-dimensional Jordan block, then take the Jordan normal form of the resulting matrix:

[ 1 1 ]   [ 1 1 0 ]   [ 1 1 0 1 1 0 ]   [ 1 1 0 0 0 0 ]
[ 0 1 ] ⊗ [ 0 1 1 ] = [ 0 1 1 0 1 1 ]   [ 0 1 1 0 0 0 ]
          [ 0 0 1 ]   [ 0 0 1 0 0 1 ] ~ [ 0 0 1 1 0 0 ]
                      [ 0 0 0 1 1 0 ]   [ 0 0 0 1 0 0 ]
                      [ 0 0 0 0 1 1 ]   [ 0 0 0 0 1 1 ]
                      [ 0 0 0 0 0 1 ]   [ 0 0 0 0 0 1 ]

In this case, we find that the Jordan normal form is composed of 2 Jordan blocks, of sizes 4 and 2. Therefore, V_2 ⊗ V_3 = V_4 ⊕ V_2.
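The decomposition procedure just illustrated (form the Kronecker product of unipotent Jordan blocks, then read off the Jordan block sizes over F_p) can be automated. A sketch under the paper's conventions, with function names of my own choosing; it uses the standard fact that for N = M − I the number of Jordan blocks of size ≥ r equals rank(N^{r−1}) − rank(N^r), with all ranks computed over F_p:

```python
import numpy as np

def jordan_block(n):
    """n x n unipotent Jordan block J_n: 1's on the diagonal and superdiagonal."""
    return np.eye(n, dtype=np.int64) + np.eye(n, k=1, dtype=np.int64)

def rank_mod_p(M, p):
    """Rank of an integer matrix over F_p, by Gaussian elimination mod p."""
    A = np.array(M, dtype=np.int64) % p
    rows, cols = A.shape
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if A[i, c] != 0), None)
        if pivot is None:
            continue
        A[[r, pivot]] = A[[pivot, r]]                      # move pivot row up
        A[r] = (A[r] * pow(int(A[r, c]), p - 2, p)) % p    # scale pivot to 1 (p prime)
        for i in range(rows):
            if i != r and A[i, c]:
                A[i] = (A[i] - A[i, c] * A[r]) % p         # clear the rest of column c
        r += 1
    return r

def decompose(M, p):
    """Jordan block sizes of a unipotent matrix M over F_p (all eigenvalues are 1).

    Number of blocks of size >= r is rank(N^(r-1)) - rank(N^r), where N = M - I.
    """
    n = M.shape[0]
    N = (M - np.eye(n, dtype=np.int64)) % p
    ranks = [n]                       # ranks[r] = rank of N^r over F_p
    P = np.eye(n, dtype=np.int64)
    for _ in range(n):
        P = (P @ N) % p
        ranks.append(rank_mod_p(P, p))
    sizes = []
    for r in range(1, n + 1):
        at_least_r  = ranks[r - 1] - ranks[r]
        at_least_r1 = ranks[r] - ranks[r + 1] if r < n else 0
        sizes += [r] * (at_least_r - at_least_r1)
    return sorted(sizes, reverse=True)

# The example from the text: V_2 (x) V_3 over F_5[C_5] decomposes as V_4 (+) V_2.
M = np.kron(jordan_block(2), jordan_block(3))
print(decompose(M, 5))  # [4, 2]
```

Running the same product over F_3 instead gives blocks [3, 3], matching the p | n case of the Hughes–Kemper formula quoted in Section 4.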
3 Representation Rings of Cyclic Groups

The structure of the representation ring Rep_⊗(G) of a group G completely specifies how all the representations of G interact with each other, so we are interested in finding a simple way to describe its structure. Theorem 3.2 is our main result, and it does precisely this. To prove Theorem 3.2, we need Definition 3.1 and Lemma 3.1.

Definition 3.1. Given a linear endomorphism A : V → V, a Jordan chain of length k in a vector space V is a sequence of non-zero vectors v_1, ..., v_k ∈ V that satisfies A v_k = λ v_k and A v_{i−1} = λ v_{i−1} + v_i for 2 ≤ i ≤ k.

Lemma 3.1. Denote w_j ⊗ w_k by w_{jk}. When starting at w_{00}, the number of Jordan chains ending at w_{jk} in W_j ⊗ W_k is given by C(j+k, j) mod p, which corresponds to the number of Jordan blocks of maximal dimension in the decomposition of W_j ⊗ W_k.

Proof. It is well-known that the matrix of some linear transformation T : V → V is a Jordan block of dimension n with respect to a basis {v_1, ..., v_n} if and only if {v_1, ..., v_n} is a Jordan chain for T. In representation theory, we consider T as the matrix representation of a group action g, and we can use this fact to find Jordan blocks in a tensor product of modules. To simplify computation, we consider the operator g − 1 instead of g. We can do this because g acts on the basis of W_n as follows: w_0 ↦ w_1 ↦ w_2 ↦ ... ↦ w_n (this is because λ = 1), so we have that g(w_{jk}) = w_{j,k+1} + w_{j+1,k} + w_{jk} and (g − 1)(w_{jk}) = w_{j,k+1} + w_{j+1,k}. When repeatedly applying a group action to find a Jordan chain, we only care about the highest-degree terms in each step, so we can disregard the w_{jk} term by using g − 1 instead of g. The number of Jordan chains ending with each basis element w_{ab}, which we will denote J(w_{ab}), is J(w_{a−1,b}) + J(w_{a,b−1}). Notice that this relation is the same as the relation in Pascal's Triangle, and in fact, because we know J(w_{00}) = C(0, 0) = 1, the indices match as well, so we have that J(w_{ab}) = C(a+b, b) mod p.

Theorem 3.2.
Consider the representation ring Rep_⊗(F_p[C_{p^k}]). This ring has two equivalent bases, given by {V_i | 0 < i ≤ p^k} and {⊗_{0 ≤ i < k} V_{p^i+1}^{⊗a_i} | 0 ≤ a_i < p}.

Proof. The basis {V_i | 0 < i ≤ p^k} trivially spans Rep_⊗(F_p[C_{p^k}]), and because it has p^k elements, it is a basis. Thus, we must show that M = {⊗_{0 ≤ i < k} V_{p^i+1}^{⊗a_i} | 0 ≤ a_i < p} is a basis. Notice that M has p^k elements, as there are k choices for i and p choices for each a_i. Therefore, it remains to prove that M spans Rep_⊗(F_p[C_{p^k}]), which composes the remainder of the proof. For simplicity, let W_i = V_{i+1}. We will show that one can find values of i and a_i such that the equation

⊗_i W_{p^i}^{⊗a_i} = W_k ⊕ (⊕_{i=0}^{k−1} b_i W_i)    (1)

holds for each k, because if it does, then we can substitute an element generated from M for each W_i with i < k, which expresses W_k in terms of elements of M. We will proceed by induction. Assume that ⊗_i W_{p^i}^{⊗ā_i} = W_{k−p^j} ⊕ (⊕_{i=0}^{k−p^j−1} b_i W_i), where ā_i = a_i if i ≠ j and ā_j = a_j − 1. Tensoring both sides of the equation by W_{p^j}, we obtain

⊗_i W_{p^i}^{⊗a_i} = (W_{p^j} ⊗ W_{k−p^j}) ⊕ (W_{p^j} ⊗ ⊕_{i=0}^{k−p^j−1} b_i W_i).    (2)

We will show that the right side of Equation (2) contains a nonzero W_k term. The length of a maximal Jordan chain in W_m ⊗ W_n is m + n, so if we can find a maximal Jordan chain in W_{p^j} ⊗ W_{k−p^j}, we will have a Jordan block of dimension k in the decomposition of the tensor product W_{p^j} ⊗ W_{k−p^j}, and we will be done. Because we are looking for a maximal Jordan chain, we have only one possible starting point in the basis of W_{p^j} ⊗ W_{k−p^j}, namely w_{00}. By Lemma 3.1, there are C(p^j + (k − p^j), p^j) ≡ C(k, p^j) mod p Jordan chains that start at w_{00} and end at w_{p^j, k−p^j}, all of which are maximal. Therefore, we must show that C(k, p^j) ≢ 0 mod p. By Lucas' Theorem, this holds if and only if every digit in the base-p representation of k is greater than or equal to the corresponding digit in the base-p representation of p^j.
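The Lucas' Theorem criterion used in this step can be made concrete. A small sketch, with a function name of my own choosing, that computes a binomial coefficient mod p by multiplying digit-wise binomials in base p:

```python
from math import comb

def binom_mod_p(n, m, p):
    """C(n, m) mod p via Lucas' Theorem, for prime p.

    Peel off base-p digits of n and m; the product of digit-wise binomials
    is zero mod p exactly when some digit of m exceeds that of n.
    """
    result = 1
    while n or m:
        n, nd = divmod(n, p)
        m, md = divmod(m, p)
        if md > nd:
            return 0  # a digit of m exceeds the corresponding digit of n
        result = result * comb(nd, md) % p
    return result

# The proof needs C(k, p^i) nonzero mod p when the base-p digit of k at
# position i is at least 1: e.g. k = 10 = (20)_5, p^i = 5 = (10)_5.
print(binom_mod_p(10, 5, 5))  # 2, i.e. C(10, 5) = 252 = 2 mod 5
print(binom_mod_p(10, 1, 5))  # 0, since the units digit of 10 in base 5 is 0
```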
However, the greatest digit in the base-p representation of p^j is always 1, so for every nonzero value of k it is possible to choose a j such that the corresponding digit in the base-p representation of k is at least 1. Finally, we must show that we can actually construct every value of k from 1 to p^k with this induction, because our increment on k is not necessarily by 1. We will prove this by induction on each value of i. Denote the base-p representation of n by n_p. In general, we can construct the number k = (c + 1)p^i, where 0 < c + 1 < p^{k−i} and p ∤ c + 1, by incrementing cp^i by p^i, which is possible because ((c + 1)p^i)_p has a digit of (c + 1 mod p) ≥ 1 and (p^i)_p has a digit of 1. The base case for each value of i is c = 0, that is, k = p^i, which is automatically true because p^i is already in the monomial basis M. Therefore, we can construct every value of k by finding the maximum i such that p^i | k, then incrementing k − p^i by p^i.

Theorem 3.2 is interesting to us not only because it describes an alternative basis of the representation ring Rep_⊗(F_p[C_{p^k}]), but because it gives us a way to describe the underlying structure of the representation ring as the quotient of a well-understood ring by some relations. For each W_{p^i}, we have W_{p^i}^{⊗p} = W_0, where W_0 is the trivial representation, the multiplicative identity in the representation ring. Additionally, we can decompose tensor products into direct sums using the Jordan normal form strategy. Thus, by writing W_{p^i}^{⊗p} as a direct sum of lower-dimensional terms, we obtain a relation of the form W_0 = ⊕ W_k. We can repeat this for each W_{p^i} until we have k relations. Finally, we can write

Rep_⊗(F_p[C_{p^k}]) ≅ Z[W_1, W_p, ..., W_{p^{k−1}}] / ( ⊕_i W_{k_i} = W_0, ⊕_j W_{k_j} = W_0, ... ),

where the W_{k_i} and W_{k_j} terms represent the direct sum decomposition of each W_{p^i}^{⊗p}.
4 Future Work

One feature of representations that we used frequently throughout this paper is the relative ease with which we can decompose tensor products of representations into direct sums. In fact, besides finding such decompositions algorithmically using the Jordan normal form, many formulas are known for decomposing tensor products of representations into direct sums, such as the following formulas, given by Hughes and Kemper:

V_2 ⊗ V_n ≅ V_{n−1} ⊕ V_{n+1} if p ∤ n, and V_2 ⊗ V_n ≅ V_n ⊕ V_n if p | n,    (3)

and

V_{p−1} ⊗ V_i = V_{p−i} ⊕ (i − 1)V_p.    (4)

However, the decompositions of symmetric and exterior powers of modules are much less studied. Himstedt and Symonds recently made progress on this problem by proving the following equation for computing exterior powers:

Λ^r(V_{2^{n−1}+s}) ≅ ⊕_{2i+j=r} Ω^{i+j}_{2^n}(Λ^i(V_s) ⊗ Λ^j(V_{2^{n−1}−s})) ⊕ tV_{2^n}    (5)

Their results hold only for cyclic groups of order 2^n over the field F_2, so it would be interesting to attempt to generalize their formula to odd primes as well.

5 Acknowledgments

I would first like to thank my mentor, Hood Chatham. He provided very valuable guidance along the way, and the results in this paper wouldn't have been possible without his ideas and support. I would like to thank Dr. John Rickert, my academic tutor, who gave me vital feedback about giving presentations and writing papers. I would like to thank the MIT math department for their help and support, along with the head math mentor, Dr. Tanya Khovanova, as well as the MIT faculty members who coordinated the RSI math projects, Dr. Slava Gerovitch and Dr. Davesh Maulik. I would like to thank MIT, CEE, and RSI for giving me the opportunity to come to the Research Science Institute. Lastly, I would like to thank and recognize my sponsors, Mr. John Yochelson, Ms. Zuzana Steen, and Ms. Audrey Gerson, for funding my RSI experience.

References

P. Webb. A Course in Finite Group Representation Theory. edu/~webb/RepBook/RepBookLatex.pdf. Accessed 27 July 2018.

N. Dupré.
Representation Theory Workshop. rep_workshop.pdf, 2015. Accessed 5 July 2018.

A. I. Shtern. Indecomposable representation. index.php?title=Indecomposable_representation&oldid=17010. Accessed 24 July 2018.

I. Hughes and G. Kemper. Symmetric powers of modular representations, Hilbert series and degree bounds. Communications in Algebra, 2007.

F. Himstedt and P. Symonds. Exterior and symmetric powers of modules for cyclic 2-groups. Journal of Algebra, 2014.
https://chiralpedia.com/blog/part-2-fundamental-concepts-of-chirality/
Part 2: Fundamental Concepts of Chirality

Stereochemistry / By Valliappan Kannappan / #stereochemistry / August 30, 2025

“From left- and right-handedness to life’s molecular signatures—chirality explained”

Introduction

Building on the overview of chirality, this section delves into core concepts: symmetry elements in molecules, the definitions of enantiomers and diastereomers, and the phenomenon of optical activity. Understanding these fundamentals is essential for grasping how stereochemistry manifests and is measured. We will also explore how chirality is quantified via optical rotation and how instruments like polarimeters help distinguish enantiomers.
By the end of this part, readers should be comfortable with terms like enantiomer, diastereomer, racemate, specific rotation, and polarimetry, setting the stage for stereochemical nomenclature in Part 3.

Elements of Symmetry and Chirality

A molecule’s chirality is intimately related to its symmetry (or lack thereof). The presence of certain symmetry elements will render a molecule achiral, even if it contains stereocenters. Key symmetry elements to consider are:

– Mirror Plane (σ): An internal plane that reflects half of the molecule into the other half. If a molecule has a mirror plane, it is superimposable on its mirror image (hence achiral). For example, meso-tartaric acid has two stereocenters but also an internal mirror plane, making it achiral despite its chiral centers.

– Center of Inversion (i): A point through which all parts of the molecule reflect to an equivalent opposite part. Molecules with a center of inversion are also achiral.

– Rotation-Reflection Axis (Sn): A combined symmetry operation of rotation followed by reflection. In particular, an S2 is equivalent to an inversion center, and an S1 is a mirror plane. Chirality is often characterized as the absence of any Sn axis; a chiral molecule cannot have an Sn symmetry element for any n.

In practice, a quick test for chirality is to look for an internal mirror plane: if one exists, the molecule is achiral (or meso). If not, and the molecule is not identical to its mirror image, it is chiral. For instance, 2-butanol is chiral (no symmetry plane, one stereocenter), whereas 2,3-butanediol has stereocenters but one stereochemical configuration (the meso form) possesses a mirror plane, making it achiral. Ethambutol, an antitubercular agent, exists in multiple stereoisomeric forms; among them, the meso-isomer is distinguished by its internal plane of symmetry. This symmetry renders it achiral, despite its having stereocenters. To explore the concept further, check out the chiralpedia blog on meso-compounds and plane of symmetry @ .
Thus, chirality can arise not only from single stereocenters but also from axial chirality (as in biaryl compounds like certain drugs or natural products), planar chirality, and helical chirality; none of these have symmetry elements that make them superimposable on their mirror forms. These advanced types will be touched upon later in the series; the unifying principle is the absence of improper symmetry connecting a molecule to its mirror image. For a detailed discussion of axial chirality (atropisomerism), refer to the chiralpedia blog @ .

Symmetry Elements and Chirality

Enantiomers and Diastereomers

Enantiomers are a pair of stereoisomers that are non-superimposable mirror images of each other. They have identical connectivity and (in the absence of chiral environments) identical physical properties (melting point, boiling point, NMR spectra, etc.), except for how they interact with plane-polarized light and other chiral substances. A classic example is the R- and S-enantiomers of limonene: one enantiomer (R-(+)-limonene) smells of oranges, the other (S-(–)-limonene) smells of lemons, illustrating that enantiomers can interact differently with our chiral olfactory receptors. Another example: D-glucose and L-glucose are enantiomers; our metabolic enzymes (which are chiral) can metabolize D-glucose readily, whereas L-glucose is not recognized (and is essentially calorie-free). Importantly, enantiomers rotate plane-polarized light in equal magnitude but opposite directions (more on this below). In pharmacology, enantiomers often have different activities (e.g., one may fit a receptor better). Read more @ . Enantiomers are sometimes designated as “left-” or “right-handed” forms of a molecule.

Diastereomers

Diastereomers are stereoisomers that are not mirror images of each other. This situation arises when molecules have multiple stereocenters. Two examples illustrate this concept: tartaric acid and ephedrine.
Illustration 1: Tartaric acid

Tartaric acid has two stereocenters. The molecule can exist as RR, SS (which are enantiomers of each other), and RS (which is a meso form, identical to SR after rotation, and achiral). The RR and RS forms are diastereomers: they are both stereoisomers of tartaric acid but are not mirror images. Diastereomers have different physical and chemical properties (unlike enantiomers). Diastereomeric relationship: I and III are diastereomers. For more details, consult the blog @

Illustration 2: Ephedrine and pseudoephedrine

Ephedrine and pseudoephedrine are related as diastereomers (two stereocenters each, differing in configuration at one center). They have noticeably different melting points and pharmacological profiles. In pharmaceuticals, diastereomers can often be separated by conventional means (because of differing solubilities, etc.), and if both are biologically active, they might be developed as separate drugs, or one chosen over the other. Geometric isomers (like cis/trans or E/Z isomers of double bonds) are also a type of diastereomer. The FDA guidance explicitly notes that geometric isomers and diastereomers should be treated as distinct drugs unless they interconvert in vivo, because they often have distinct pharmacology. A poignant example: cisplatin (cis-diamminedichloroplatinum(II)) is a potent anticancer drug, whereas the trans isomer (transplatin) is ineffective as an anticancer agent; a clear case where geometric diastereomers have different biological effects.
For instance, a racemic compound might crystallize in a different form than enantiopure samples (some racemates form a distinct crystal lattice containing both enantiomers). In drug development, a racemic mixture might be easier or harder to crystallize or purify than a single enantiomer, so sometimes a racemate is used in a formulation by design (we will discuss the pros and cons in Part 6). An interesting note: some racemic mixtures can undergo spontaneous resolution upon crystallization, where the enantiomers crystallize separately, a phenomenon Pasteur exploited. But in general, separating a racemate into its enantiomers (resolution) is non-trivial.

Optical Activity and Specific Rotation

One of the earliest methods to characterize chirality is through a property called optical activity. A substance is optically active if it can rotate the plane of plane-polarized light. Enantiomers rotate light to an equal degree but in opposite directions: one is dextrorotatory (rotates light clockwise, denoted “+” or “d”), and the other is levorotatory (rotates light counterclockwise, denoted “–” or “l”). Notably, the direction (+/–) of optical rotation is entirely empirical and is not directly related to the R/S configuration (which is determined by structure, as we’ll see in Part 3). For example, (R)-limonene is (+) (smells like orange), whereas (S)-limonene is (–); but for other molecules, (R)- could be (–) and (S)- (+): there is no universal correlation.

Polarimetry: The instrument used to measure optical rotation is a polarimeter. It consists of a light source emitting monochromatic light (typically the sodium D line at 589 nm), a polarizer to create plane-polarized light, a sample tube, and an analyzer (a second polarizing filter that can be rotated). When an optically active sample is in the tube, the plane of polarization is rotated by some angle α. The analyst rotates the analyzer until light passes through with maximum intensity; the angle needed is the optical rotation.
Polarimetry is often done at a specified temperature (usually 20 °C) and wavelength (the sodium D line, labeled as the subscript D). The specific rotation [α] is defined as the standardized rotation angle for a sample with a path length l (usually 1 decimeter) and concentration c (in g/mL for solutions, or using density for neat liquids). Mathematically, [α]λT = α / (l × c), where T is the temperature (°C) and λ is the wavelength used (often the D line). For example, if a 1 dm tube containing a solution of 1 g/mL of a chiral compound produces an observed rotation of +12°, the specific rotation [α]20D = +12° (by definition under those conditions). If the tube length or concentration differ, one computes [α] by scaling the observed α. Specific rotation is an intrinsic property of a chiral substance (at a given wavelength and temperature) – an identity parameter that can be compared to literature values to confirm purity or absolute configuration. E.g., pure (S)-ibuprofen has a specific rotation of about +54° (in methanol, 20 °C, D line), whereas racemic ibuprofen would have [α] ≈ 0° because the contributions cancel (external compensation). Optical Activity Significance: Optical rotation was historically crucial: it was evidence of molecular asymmetry before X-ray crystallography existed. Pasteur’s separated tartaric acid enantiomers exhibited equal and opposite rotations, proving the existence of enantiomers. In modern pharma labs, polarimetry is still used for quick checks of enantiomeric purity or identity. However, it provides no detail on which enantiomer (R or S) you have – that requires knowing the absolute configuration by other methods (see Part 3 and Part 8). Polarimetry in Practice: Consider that many early drugs were characterized by optical rotation. L-epinephrine (adrenaline) was identified as levorotatory, and its specific rotation is part of pharmacopoeia standards. If a batch deviated, that might indicate racemization or impurity.
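The scaling just described – normalizing an observed rotation by path length and concentration – can be sketched in a few lines of Python. This is a minimal illustration of the definition in the text; the helper name `specific_rotation` is our own, not a standard API:

```python
# Specific rotation [alpha] = alpha_obs / (l * c), as defined in the text:
# l is the tube path length in decimeters, c the concentration in g/mL.
# A minimal sketch; the function name is our own invention.

def specific_rotation(alpha_obs_deg, path_dm=1.0, conc_g_per_ml=1.0):
    """Normalize an observed polarimeter rotation to [alpha]."""
    return alpha_obs_deg / (path_dm * conc_g_per_ml)

# The worked example from the text: 1 dm tube, 1 g/mL, observed +12 degrees.
print(specific_rotation(12.0))            # 12.0
# A reading of +6 degrees in a 1 dm tube at 0.5 g/mL gives the same [alpha]:
print(specific_rotation(6.0, 1.0, 0.5))   # 12.0
```

The second call shows why [α] is an identity parameter: different measurement conditions on the same substance collapse to the same standardized value.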
Even today, pharmacopoeias list [α] for chiral drugs as an identification criterion. Polarimetry and Chirality Quantification A key term is enantiomeric excess (ee), which quantifies optical purity. It is defined as ee = (% major enantiomer) − (% minor enantiomer). A racemate has 0% ee; an enantiomerically pure sample has 100% ee. If you measure an optical rotation that is 50% of the literature value for the pure enantiomer, the sample is presumably a 75:25 enantiomeric mixture (50% ee). Modern chiral analysis (see Part 8) often uses chiral chromatography or NMR, but optical rotation provides a quick estimate of enantiomeric composition if [α] of the pure enantiomers is known. Enantiomers in a Chiral Environment While enantiomers share identical properties in achiral environments, they behave differently in chiral settings, and biology offers powerful illustrations of this contrast. Two striking examples: – Carvone enantiomers: (R)-(–)-carvone smells like spearmint, (S)-(+)-carvone smells like caraway. Our nose’s receptors are chiral protein pockets that distinguish them. – Drug receptors: Thalidomide exists as an enantiomeric pair. The (R)-enantiomer has the desired sedative effect, while the (S)-enantiomer harbors embryo-toxic and teratogenic effects. Among the enantiomers of the beta-blocker propranolol, (S)-propranolol is about 130 times as active as the (R)-enantiomer. Another case is warfarin, an anticoagulant. Warfarin’s enantiomers are both anticoagulant, but S-warfarin is ~4 times more potent and is metabolized by a different enzyme than R-warfarin. This affects dosing and drug interactions – a quintessential demonstration that even when both enantiomers are “active,” their pharmacokinetics and dynamics can diverge. Diastereomers in Pharmacy Diastereomers, because they have different shapes (not mirror images), can bind to targets in unrelated ways or be processed differently.
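The enantiomeric-excess arithmetic defined above (ee from composition, and the optical-purity estimate from rotations) can be sketched as follows. The helper names are ours, a sketch of the text's definitions rather than any standard chemistry library:

```python
# ee% = %major - %minor, and the optical-purity estimate
# ee% ~ 100 * alpha_observed / alpha_pure, per the definitions in the text.
# A sketch with invented helper names.

def ee_from_fractions(major, minor):
    """Enantiomeric excess (%) from the two enantiomer amounts."""
    return 100.0 * abs(major - minor) / (major + minor)

def ee_from_rotation(alpha_observed, alpha_pure):
    """Optical-purity estimate of ee (%) from rotations."""
    return 100.0 * alpha_observed / alpha_pure

# The text's example: a rotation at 50% of the pure-enantiomer value
# suggests 50% ee, i.e. a 75:25 mixture of enantiomers.
print(ee_from_rotation(27.0, 54.0))   # 50.0 (using ibuprofen's ~ +54 deg)
print(ee_from_fractions(75, 25))      # 50.0
```

Both routes agree on the 75:25 example, which is exactly the consistency the text relies on when using rotation as a quick purity estimate.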
For example, the antidepressant venlafaxine has a chiral center and is used as a racemate. Its O-desmethyl metabolite (ODV) is also chiral. One diastereomeric pair (venlafaxine + ODV) might be more active or longer-lived than the other. While this is complex, it reminds us that when multiple stereocenters are in play, the combinatorial possibilities (2^n stereoisomers for n centers) include many diastereomers, each potentially a unique chemical entity. Regulatory guidance is to treat each as separate unless interconversion is proven. Summary (Part 2) – Chirality and Symmetry: Chiral molecules have no symmetry that makes them superimposable on their mirror image (no mirror plane, inversion center, etc.). Achiral molecules either lack stereocenters or have symmetry (like meso compounds) that renders them superimposable on the mirror image. – Enantiomers: Stereoisomers that are mirror images (e.g. D- vs L-glucose). They share most properties except interactions with other chiral entities (including plane-polarized light and chiral receptors/solvents). Enantiomers rotate plane-polarized light in equal and opposite directions. – Diastereomers: Stereoisomers that are not mirror images (e.g. cis- vs trans-2-butene, or threo- vs erythro-diastereomers). [For a more detailed discussion of this nomenclature, consult the blog article @.] They have different physical properties and can be separated more easily. Geometric isomers and molecules with multiple stereocenters may produce diastereomers. – Optical Activity: A hallmark of chirality. Measured by polarimetry, quantified as specific rotation [α]. Enantiomer pairs have opposite rotations. Racemic mixtures are optically inactive overall. – Polarimetry: Important for characterizing chiral compounds. Historically crucial, and still useful for quick purity checks. However, it doesn’t identify which enantiomer is which (that requires knowing absolute configuration or comparing rotation sign to literature).
– Pharmaceutical examples: Many chiral drugs illustrate these concepts: one enantiomer of a drug can be therapeutically active while the other is less active or causes side effects (e.g., d- vs l-propranolol, thalidomide enantiomers). Diastereomeric drugs (like cis/trans isomers or compounds with >1 stereocenter) likewise may have one form approved and the other not, due to efficacy or safety differences. Suggested Reading IUPAC Compendium of Chemical Terminology (Gold Book): definitions of enantiomer, diastereoisomer, optical rotation, racemate. (Provides clear, authoritative definitions.) Organic Chemistry (Clayden, Greeves, Warren, Wothers), chapter on stereochemistry. (An accessible discussion of symmetry elements, enantiomers vs diastereomers, and optical activity, with many illustrations.) Introduction to Chirality: Understanding the Basics. Chiral Pharmacology: The Mirror Image of Drug Development. Atropisomers: things are tight, single bond won’t rotate. The meso compounds: finding the plane of symmetry. (Narrates Pasteur’s experiment in detail, connecting optical activity with molecular chirality.) P. Y. Wang, et al. (1980). Science, 209, 1420–1421. (Brief report on D- vs L-glucose metabolism, illustrating enantiomer recognition in biology.) Health Canada Guidance, “Stereochemical Issues in Chiral Drug Development” (2000), sections on terminology and analysis. (A regulatory perspective on defining and handling stereochemical forms, reinforcing many concepts in this part.) Development of New Stereoisomeric Drugs, 1992. Chiral twins – Identical? … But not really! Chiral Analysis: Mapping the Essentials. Erythro- and Threo- prefixes: the (same-) or (opposite-) side? The Fundamentals of Chiral Resolution: Why Chirality Matters. Authors: Valliappan Kannappan, Chandramouli R
Copyright © 2025 Chiralpedia
12217
https://www.ebsco.com/research-starters/zoology/oyster-toadfish
Oyster toadfish The oyster toadfish is a unique species found in the coastal waters of the western Atlantic Ocean, ranging from Cape Cod to Miami. Named for its bumpy, warty skin that resembles oyster shells, this fish can grow up to 17 inches long and weigh around two kilograms. Characterized by their light brown or greenish skin with darker blotches and striped fins, oyster toadfish are bottom-dwellers that primarily feed on crustaceans and mollusks. They have garnered interest from researchers due to their remarkable ability to tolerate high levels of water pollution and survive in low-oxygen environments. Oyster toadfish typically spawn from April to October, with males playing a significant role in guarding the fertilized eggs. While they exhibit aggressive behavior when hunting, they face predation from larger marine animals and birds. Although they are edible, care must be taken due to their poisonous spines. Despite facing threats from human activities, oyster toadfish are not considered a threatened species and are increasingly being used in experimental studies related to environmental health. Published in: 2024. Oyster toadfish were given the name oyster because they have bumpy, warty skin, like the outside of oyster shells. Because oyster toadfish are able to tolerate high levels of pollution, researchers are becoming more and more interested in studying these creatures. Oyster toadfish are found in the North Atlantic Ocean. Kingdom: Animalia Phylum: Chordata Class: Actinopterygii Order: Batrachoidiformes Family: Batrachoididae Genus: Opsanus Species: tau Oyster toadfish may grow to be up to 17 inches (43 centimeters) long and weigh 79 ounces (around two kilograms). Their scaleless skin is light brown or light green with many darker blotches. Their fins are striped much like gulf toadfish.
Their dorsal and anal, or back and belly, fins are diagonally striped with dark brown, while their caudal and pectoral, or tail and side, fins are vertically striped with the same color. Like other toadfish, the pectoral fins of oyster toadfish are also rounded and fan-like. Male oyster toadfish are typically a little longer than the females. They also have a slightly longer life span. Male oyster toadfish generally live eight years, while female oyster toadfish only live five years. Like other fish, oyster toadfish need oxygen to survive. Unlike humans, they cannot breathe oxygen through the air. Instead, they use the oxygen which is in the water. Oyster toadfish take water into their mouths, use the oxygen they need, and release the waste chemicals through their gills. One interesting thing about toadfish is that they are able to survive on very little oxygen for a long time. They are even able to survive outside of water for several hours. Oyster toadfish are found inshore in shallow waters with rocky bottoms, reefs, jetties, wrecks, and frequently among litter. They inhabit coastal waters of the western Atlantic Ocean from Cape Cod in New England south to Miami, Florida. They are unusual in their ability to tolerate badly polluted water. Like other toadfish, oyster toadfish are bottom-dwellers. As carnivorous, or meat-eating, fish, oyster toadfish feed mainly on bottom-dwelling crustaceans and mollusks. Their large, wide mouths are equipped with sharp teeth that can easily devour prey. They often live up to their belligerent, or aggressive, reputation, when they viciously attack their prey. Oyster toadfish spawn from April through October. Although oyster toadfish live in shallow waters, they move to even shallower waters during mating season. Spawning begins when the female oyster toadfish releases her eggs near a rock cavity, or pile of man-made debris. The males then fertilize those eggs. This process of releasing and fertilizing eggs is known as spawning. 
Once the eggs have been fertilized, they drift down and attach themselves to the surfaces below them. Although it is ideal for the eggs to attach themselves to natural substances, such as rock cavities or shells, they often attach themselves to man-made objects, such as cans, pipes, or litter, which are also present in the water. Male oyster toadfish spend the month after spawning guarding their broods, or groups of eggs. They may endanger their own lives by staying with their nest during a low tide. After this incubation, or growth, period, the young oyster toadfish are ready to hatch. In general, oyster toadfish are preyed on by larger sea animals and some birds, but they are also threatened by humans. Oyster toadfish are becoming important in experimental studies because of their high tolerance for pollution. When caught by fishermen, oyster toadfish may grunt and erect their poisonous spines. Although oyster toadfish are edible, their bodies have to be handled with great care to avoid being pricked by the poisonous spines of their fins. Oyster toadfish are not a threatened species. Bibliography “Oyster Toadfish - Facts, Diet, Habitat, & Pictures on Animalia.bio.” Animalia, 2024, animalia.bio/oyster-toadfish. Accessed 8 May 2024. Parson, Will. “Oyster Toadfish Opsanus Tau.” Chesapeake Bay Program, 2024, www.chesapeakebay.net/discover/field-guide/entry/oyster-toadfish. Accessed 8 May 2024.
12218
https://math.stackexchange.com/questions/439093/n-lines-in-the-plane
geometry - $n$ Lines in the Plane - Mathematics Stack Exchange
$n$ Lines in the Plane — Asked 12 years, 2 months ago; Modified 12 years ago; Viewed 787 times.

How am I to "[u]se induction to show that $n$ straight lines in the plane divide the plane into $\frac{n^2+n+2}{2}$ regions"? It is assumed here that no two lines are parallel and that no three lines have a common point. Further, this is not a non-Euclidean problem, but I wouldn't mind a discussion on the non-Euclidean nature of the problem. I was thinking it might be easier to show using the unit sphere. A GOOD REFERENCE I FOUND THAT WILL BE USEFUL TO ANYONE COMING ACROSS THIS PROBLEM IS Concrete Mathematics, Graham, Knuth, Patashnik pp. 4–8.

Tags: geometry, recurrence-relations, induction, recursion. Edited Sep 27, 2013 at 18:37 by MJD; asked Jul 8, 2013 at 18:41 by Trancot.

Comments:
– Could you elaborate on what you mean by "the non-Euclidean nature of the problem", and what you're hoping to get? I wasn't sure what to say because I didn't understand what you were asking. – MJD, Jul 8, 2013 at 18:46
– Well, say whatever you'd like. I mean two lines being parallel probably needs to be adjusted. Would this work in that case? – Trancot, Jul 8, 2013 at 18:54
– If you're in a space where there are no parallel lines, you can disregard that condition.
If you're in a hyperbolic space where there are multiple parallels, it doesn't change. – MJD, Jul 8, 2013 at 19:18
– Is this not a duplicate of this question? – robjohn♦, Sep 27, 2013 at 19:36
– @MJD Why did you edit it that way? You've just robbed the lazy mathematician of information. Do you really think that people are going to look that reference up? Having it up like I had earlier gives people access to a good source of knowledge, which they themselves might not be able to find on their own. I'm thinking freshman undergraduates, and other people without the background knowledge necessary to just go and get it through the library. I think it ought to be as it was. – Trancot, Sep 27, 2013 at 21:11

1 Answer (score 3, by MJD, answered Jul 8, 2013 at 18:45):

To prove it by induction, you first prove the base case: 0 lines divide the plane into 1 region. Then you suppose that $n$ lines divide the plane into $\frac{1}{2}(n^2+n+2)$ regions, and show that $n+1$ lines divide the plane into $\frac{1}{2}\left((n+1)^2+(n+1)+2\right)$ regions. Since $\frac{1}{2}\left((n+1)^2+(n+1)+2\right)-\frac{1}{2}(n^2+n+2)=n+1$, you need to show that adding the $(n+1)$th line adds exactly $n+1$ regions, or equivalently that the $(n+1)$th line can be made to pass through $n+1$ of the existing regions, dividing each one into two regions. Does that help?

Comments:
– A bit.
I'll work on it. – Trancot, Jul 8, 2013 at 18:56
– How do you "show" this last condition? – Trancot, Jul 8, 2013 at 19:00
– Hint: How many lines does the new line intersect? – Hagen von Eitzen, Jul 8, 2013 at 19:07
– @MJD See my most recent edit. – Trancot, Sep 27, 2013 at 15:34
– It is. I can prove it in two ways. First, in countries that are party to the Berne Convention, any original work of authorship is copyrighted as soon as it is fixed in a tangible medium of expression. And second, because on page iv, as you know, or should have known, it says “Copyright © 1989 by Addison-Wesley Publishing Company” and then follows with a paragraph that reads “All rights reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted, in any form or by any means, [etc.], without the prior written permission of the publisher.” So cut it out. – MJD, Sep 28, 2013 at 1:52
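The closed form in the answer above can be checked against the recurrence it describes (the new line, crossing all previous lines, adds one region per existing region it passes through). A quick numeric sketch, with helper names of our own choosing:

```python
# L(n) = (n^2 + n + 2) / 2 versus the recurrence L(n) = L(n-1) + n, L(0) = 1.
# A sketch to verify the induction step; not part of the original post.

def regions_closed_form(n):
    """Regions of the plane cut by n lines in general position."""
    return (n * n + n + 2) // 2

def regions_recurrence(n):
    """The k-th line crosses the k-1 earlier lines, adding k new regions."""
    r = 1  # zero lines leave one region (the base case)
    for k in range(1, n + 1):
        r += k
    return r

# The two formulations agree for many n, as the induction argument predicts.
assert all(regions_closed_form(n) == regions_recurrence(n) for n in range(50))
print(regions_closed_form(4))  # 4 lines in general position -> 11 regions
```

The `assert` is exactly the induction step in computational form: each increment in the loop is the `n+1` new regions counted in the answer.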
12219
https://www.quora.com/How-do-you-find-the-range-of-y-x-sqrt-1-x-2
How to find the range of y = x + sqrt (1 - x^2) - Quora

How do you find the range of y = x + sqrt (1 - x^2)?

Hachchoo, studied at Indian Institute of Engineering Science and Technology (IIEST) · 5y:

y = x + √(1 − x²). Now, 1 − x² ≥ 0 [to find the domain of y], or x² − 1 ≤ 0, or (x − 1)(x + 1) ≤ 0, or x ∈ [−1, 1]. Also, y′ = 1 + [1/(2√(1 − x²))](−2x) [differentiating y w.r.t. x], or y′ = 1 − x/√(1 − x²). Now since x ∈ [−1, 1], x² ∈ [0, 1], so x² ≤ 1, or 1 − x² ≥ 0. If x ∈ (−1, 0), then x < 0, so x/√(1 − x²) < 0 [since √(1 − x²) > 0 in any case], and 1 − x/√(1 − x²) is overall a positive quantity. So for x ∈ [−1, 0), y′ > 0, i.e. y is an INCREASING function. If x ∈ [0, 1], then x ≥ x². Now, to check where y′ = 0: 1 − x/√(1 − x²) = 0, or x = √(1 − x²). Squaring both sides we get x² = 1 − x², or x² = 1/2, or x = +1/√2 or x = −1/√2. But don’t be fooled 😏😏 – the original equation was x = √(1 − x²), and squaring any equation to solve it may give rise to EXTRA solutions which may or may not satisfy the original equation.
So all the obtained solutions must be substituted back into the original equation, and any extraneous root eliminated. Here x cannot be −1/√2: since √(1 − x²) is always non-negative, x = −1/√2 would make the LHS of x = √(1 − x²) negative while the RHS is positive, which is impossible. So x = +1/√2 only.

Check y″ at x = +1/√2. From y′ = 1 − x/√(1 − x²),

y″ = −[√(1 − x²) − x·(−2x)/(2√(1 − x²))]/(1 − x²) = −1/[(1 − x²)√(1 − x²)].

At x = +1/√2 we have 1 − x² > 0 and √(1 − x²) > 0, so overall y″ < 0 and there is a local maximum there. Also f(1) = 1 < f(1/√2) (check it yourself).

Check that y′ < 0 on part of (0, 1): 1 − x/√(1 − x²) < 0, i.e. [√(1 − x²) − x]/√(1 − x²) < 0. Since √(1 − x²) > 0 for any x ∈ (0, 1), the numerator must be negative: √(1 − x²) < x. Squaring both sides leaves the inequality unchanged (both sides are positive here), so 1 − x² < x², i.e. x² > 1/2, so x ∈ (1/√2, 1).

Since y is increasing for x < 0, the maximum of y on [−1, 1] is y(1/√2), and the minimum is the smaller of y(−1) and y(1), which is obviously y(−1). So the range of y is [y(−1), y(1/√2)]. The graph of y = x + √(1 − x²) confirms this: the maximum height is y(1/√2) = √2 at x = 1/√2 ≈ 0.707, and the minimum height is y(−1) = −1. So the range of y is [−1, √2].

B.L. Srivastava · 5y

Given y = x + √(1 − x²) …
…(1). Clearly y is defined when 1 − x² ≥ 0, i.e. (x − 1)(x + 1) ≤ 0, so x ∈ [−1, 1]. For the range of y, square equation (1): (y − x)² = 1 − x², i.e. 2x² − 2xy + (y² − 1) = 0, which is quadratic in x. Since x is real, the discriminant b² − 4ac ≥ 0: 4y² − 8(y² − 1) ≥ 0, i.e. y² − 2 ≤ 0, i.e. (y − √2)(y + √2) ≤ 0, so y ∈ [−√2, √2]. y attains its maximum √2 at x = 1/√2, which lies inside [−1, 1]. (Note, though, that squaring discards the constraint y − x = √(1 − x²) ≥ 0, so the lower endpoint −√2 is extraneous: y(−1/√2) = 0, and the actual minimum is y(−1) = −1.)

Enrico Gregorio, Associate professor in Algebra · 4y

The function is defined over [−1, 1]. The derivative is

y′ = 1 − x/√(1 − x²) = (√(1 − x²) − x)/√(1 − x²),

which doesn't exist at ±1, and vanishes where √(1 − x²) = x with x ≥ 0. Squaring gives 1 − x² = x², hence x = 1/√2 (the negative solution cannot be considered). Hence the points to look at are −1, 1/√2, 1. Since y(−1) = −1, y(1/√2) = √2, and y(1) = 1, we can state that the minimum is −1 and the maximum is √2.

Sohel Zibara · 4y

Since the domain of definition of the function is the closed interval [−1, 1], and since the function is continuous on that interval, both the maximum and minimum exist and are located either at the endpoints of the interval or at points where the derivative is zero. f(−1) = −1 and f(1) = 1. f′(x) = 0 only at x = 1/√2 (the candidate x = −1/√2 comes from squaring and is extraneous); f(−1/√2) = 0 and f(1/√2) = √2. We now have all the data we need and can therefore deduce that min f(x) = −1 and max f(x) = √2.

Aayush Garg · 4y

Differentiate y with respect to x and set it equal to zero: dy/dx = 1 + (−2x)/(2√(1 − x²)) = 1 − x/√(1 − x²). dy/dx is defined for x strictly between −1 and 1.
Setting dy/dx = 0 (any solutions must lie between −1 and 1; if not, we discard them): 1 − x/√(1 − x²) = 0, so 1 = x/√(1 − x²), i.e. x = √(1 − x²). Squaring both sides: x² = 1 − x², so 2x² = 1 and x² = 1/2. Taking square roots, x = √(1/2) or x = −√(1/2). Now substitute these values into the original equation to get y. For x = √(1/2): y = √(1/2) + √(1 − 1/2) = √(1/2) + √(1/2) = 2√(1/2) = √2. For x = −√(1/2): y = −√(1/2) + √(1/2) = 0, so this squaring-introduced candidate is not an extremum of the original function.

John Steele · 4y

Several answers ignore that sqrt(1 − x²) is normally taken as the positive square root. The derivative is dy/dx = 1 − x/√(1 − x²). If |x| > 1, we are dealing with imaginary numbers, but we must separately consider −1 < x < 0 and 0 < x < 1. In the latter interval, the second term of the derivative is negative, a zero exists, and I agree with everyone else's answer: the maximum is √2. For −1 < x < 0, the derivative is everywhere positive, there is no zero, so extrema can only exist at the endpoints. At zero, we get y = 1, less than the maximum in the other segment. At x = −1, we get y = −1, which is the minimum. The claimed minimum at x = −√2/2 is a false root: plugged into the original equation it evaluates to y = 0. For confirmation, you may wish to plot the original equation in Desmos.

Ramakrishnan Parthasarathy · 4y

y = x + √(1 − x²) is simply the sum of x and z = √(1 − x²) on the unit circle x² + z² = 1. The easiest way to do this is to put x = cos θ with θ ∈ [0, π], so that z = sin θ ≥ 0 matches the non-negative square root. Then

y = cos θ + sin θ = √2·[(1/√2)cos θ + (1/√2)sin θ] = √2·sin(π/4 + θ),

which on θ ∈ [0, π] has a maximum of √2 when sin(π/4 + θ) = 1, at θ = π/4, and a minimum of −1 at θ = π (i.e. x = −1). Think of the point on the unit circle where the maximum of y = x + z occurs.

Abdullah Shaleheen Oyon · 5y

Related: What is the range of f(x) = √(−x² + x)?

f(x) = √(−x² + x). Let y = f(x), so f⁻¹(y) = x. Evaluating the inverse function: y² = −(x² − x) = −(x² − x + 1/4) + 1/4 = −(x − 1/2)² + 1/4, so x = √(1/4 − y²) + 1/2, i.e. f⁻¹(x) = √(1/4 − x²) + 1/2. Now you can easily evaluate the domain of f⁻¹(x), which is the range of f(x): the domain of f⁻¹ is −1/2 ≤ x ≤ 1/2. Note: f is a square root function, so its values can't be negative (the graph also shows this). Therefore the range of f(x) is 0 ≤ f(x) ≤ 1/2.
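Ramakrishnan Parthasarathy's trigonometric substitution above is easy to sanity-check numerically: with x = cos θ and θ ∈ [0, π], the claim is that cos θ + sin θ = √2·sin(θ + π/4), with maximum √2 and minimum −1 over that range of θ. A minimal sketch (the grid size is an arbitrary choice of mine):

```python
import math

# With x = cos(t), t in [0, pi], we have sqrt(1 - x^2) = sin(t) >= 0,
# so y = x + sqrt(1 - x^2) = cos(t) + sin(t) = sqrt(2) * sin(t + pi/4).
ys = []
for k in range(100001):
    t = math.pi * k / 100000
    lhs = math.cos(t) + math.sin(t)
    rhs = math.sqrt(2) * math.sin(t + math.pi / 4)
    assert abs(lhs - rhs) < 1e-9  # the identity holds at every sample
    ys.append(lhs)

print(round(max(ys), 6))  # ~ 1.414214 = sqrt(2), at t = pi/4
print(round(min(ys), 6))  # ~ -1.0, at t = pi (i.e. x = -1)
```

Restricting θ to [0, π] is what keeps the substitution faithful to the non-negative square root; without that restriction the substitution reports an extraneous minimum of −√2.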
Enrico Gregorio, Associate professor in Algebra · 5y

Related: What is the range of the function y = √x/(1 + x²)?

The function is defined for x ≥ 0, and its value at x = 0 is 0. For x ≠ 0, consider (x² + 1)/x = x + 1/x. It's not difficult to show that the range of this is [2, ∞): indeed, the equation x + x⁻¹ = a, with a > 0, becomes x² − ax + 1 = 0, which has positive solutions for a² − 4 ≥ 0, that is, a ≥ 2. Hence the range of x/(1 + x²) (when x > 0) is (0, 1/2], and the range of f(x) = √(x/(1 + x²)) is [0, 1/√2].

With calculus: determine the domain, compute the derivative, find the critical points, find the limit for x → ∞, and determine the values at the critical points.

If your function is instead f(x) = (√x)/(1 + x²), it's better to study g(t) = t/(1 + t⁴), defined for t ≥ 0, which has the same range. Since g′(t) = (1 + t⁴ − 4t⁴)/(1 + t⁴)² = (1 − 3t⁴)/(1 + t⁴)², the only critical points are 0 (a boundary of the domain) and t = 3^(−1/4). The latter is obviously a point of maximum; since the limit at ∞ is zero and g(0) = 0, we conclude that the range is [0, 3^(3/4)/4], because g(3^(−1/4)) = 3^(−1/4)/(1 + 1/3) = 3^(3/4)/4.
Gregory Allen, MS in Mathematics, University of Florida · 7y

Related: What is the range of √(1 + x − y²)?

What you posted is an expression; only relations have a range. For what you posted to be a relation, it would have to have at least two variables and an = sign in it somewhere. Did you mean something like y² = √(1 + x)? Since you can't take the square root of a negative number, the expression under the square root has to be ≥ 0, so the constraint would be 1 + x ≥ 0, i.e. x ≥ −1, or x ∈ [−1, ∞). If that isn't the relation you had in mind, try updating your question or posting the correct relation in a comment and I'll take another look.

Rory Barrett, M.Sc in Mathematics, University of Auckland · 5y

Related: How do you find the range of f(x) = √(x − 2) + 1/(x − 5)?

√(x − 2) has range y ≥ 0; 1/(x − 5) has range y ≠ 0. Let's see if √(x − 2) + 1/(x − 5) has all of ℝ as its range. If it does, then every equation of the type √(x − 2) + 1/(x − 5) = k, where k is real, can be solved. Let k be an arbitrary real number.
If √(x − 2) + 1/(x − 5) = k, then (x − 5)√(x − 2) + 1 = k(x − 5). Hence k(x − 5) − (x − 5)√(x − 2) = 1. Hence (x − 5)(k − √(x − 2)) = 1. Hence (x − 5)²(k − √(x − 2))² = 1. Hence (x − 5)²(k² − 2k√(x − 2) + x − 2) = 1. Hence (x − 5)²(k² + x − 2) − 1 = 2k(x − 5)²√(x − 2). Hence ((x − 5)²(k² + x − 2) − 1)² = (2k(x − 5)²√(x − 2))². This leads to a polynomial of degree 6, which will not have all of ℝ as its range. This is getting messy. Here is the graph, have fun. Try differentiating √(x − 2) + 1/(x − 5) and finding the turning points!

Philip Lloyd, Specialist Calculus Teacher · 4y

Related: What is the range of √(x(x − 1)(x − 2))?

This is quite interesting. To understand what happens to possible y values it is better to choose some specific numbers and see if we can get real y values. The graph is interesting and shows the above reasoning well.

Gordon M. Brown, Math Tutor at San Diego City College · 2y

Related: How do you differentiate Y = −√((1 − y²)/(1 − x²))?

Let's begin by pointing out that you could not have done much worse to make clear to your audience the expression you want to differentiate! Firstly, Y is not the same variable as y. This error alone would be embarrassing in a seventh-grader, let alone a student of calculus! Second, your radicand does not contain sufficient grouping symbols to make clear what you mean by it. You are literally making us all guess at what you want. Lastly, you never tell us whether you intend to differentiate with respect to x or with respect to y.
In the image that follows, I am assuming that you want to differentiate implicitly the equation y = −√[(1 − y²)/(1 − x²)] with respect to x. If that is not what you intended, well, whose fault is that? (Click on the image below to expand it as necessary.)
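Stepping back to the original question: the consensus of the calculus answers above is that the range of y = x + √(1 − x²) is [−1, √2]. A quick brute-force sample over the domain confirms this numerically (a rough sketch, not a proof; the grid size is my own choice):

```python
import math

# Sample y = x + sqrt(1 - x^2) densely over its domain [-1, 1].
ys = [x / 100000 + math.sqrt(1 - (x / 100000) ** 2)
      for x in range(-100000, 100001)]

print(round(min(ys), 6))  # -1.0, attained at x = -1
print(round(max(ys), 6))  # ~ 1.414214 = sqrt(2), near x = 1/sqrt(2)
```

Note that the sampled minimum is −1, not the −√2 produced by the discriminant and unrestricted trig-substitution arguments, in line with John Steele's answer.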
12220
https://www.khanacademy.org/math/in-in-grade-10-ncert/x573d8ce20721c073:quadratic-equations/x573d8ce20721c073:nature-of-roots/v/discriminant-for-types-of-solutions-for-a-quadratic
12221
https://www.cuemath.com/geometry/collinear-vectors/
Collinear Vectors - Definitions, Conditions, Examples

Collinear Vectors

Collinear vectors are one of the important concepts in vector algebra. When two or more given vectors lie along the same line, they are called collinear vectors. Two parallel vectors can be considered collinear, since they point in exactly the same or exactly opposite directions. In this article, let's learn about collinear vectors, their definition, and the conditions of vector collinearity, with solved examples.

1. What Are Collinear Vectors?
2. Conditions of Collinear Vectors
3. FAQs on Collinear Vectors

What Are Collinear Vectors?

Any two given vectors can be considered collinear if they are parallel to the same line. Thus, two vectors are collinear if and only if they lie along the same line or are parallel to each other. For two vectors to be parallel to one another, the condition is that one of the vectors should be a scalar multiple of the other. In the diagram above, the vectors that are parallel to the same line are collinear to each other, and the intersecting vectors are non-collinear.

Conditions of Collinear Vectors

For any two vectors to be collinear, they need to satisfy certain conditions. Here are the important conditions of vector collinearity:

Condition 1: Two vectors p⃗ and q⃗ are collinear if there exists a scalar n such that p⃗ = n · q⃗.

Condition 2: Two vectors p⃗ and q⃗ are collinear if and only if the ratios of their corresponding coordinates are equal. This condition is not valid if one of the components of a given vector is equal to zero.
Condition 3: Two vectors p⃗ and q⃗ are collinear if their cross product is equal to the zero vector. This condition applies only to three-dimensional (spatial) problems.

Proof of Condition 3: Consider two collinear vectors a⃗ = (a_x, a_y, a_z) and b⃗ = (n·a_x, n·a_y, n·a_z). Their cross product is

a⃗ × b⃗ = | i  j  k ; a_x  a_y  a_z ; b_x  b_y  b_z |
= i(a_y·b_z − a_z·b_y) − j(a_x·b_z − a_z·b_x) + k(a_x·b_y − a_y·b_x)
= i(a_y·n·a_z − a_z·n·a_y) − j(a_x·n·a_z − a_z·n·a_x) + k(a_x·n·a_y − a_y·n·a_x)
= 0i + 0j + 0k = 0⃗,

because each bracket cancels once b⃗ = n·a⃗ is substituted.

Related Articles on Collinear Vectors

Check out the following pages related to collinear vectors: Adding Vectors Calculator, Angle Between Two Vectors Calculator, Handling Vectors Specified in the i-j Form, Triangle Inequality in Vectors, Subtracting Two Vectors.

Important Notes on Collinear Vectors

Any two given vectors can be considered collinear if they are parallel to the same line; that is, two vectors are collinear if and only if they lie along the same line or are parallel to each other.

Examples on Collinear Vectors

Example 1: Find whether the given vectors are collinear: P⃗ = (3, 4, 5), Q⃗ = (6, 8, 10).

Solution: Two vectors are collinear if the ratios of their corresponding coordinates are equal. P₁/Q₁ = 3/6 = 1/2, P₂/Q₂ = 4/8 = 1/2, P₃/Q₃ = 5/10 = 1/2. Since P₁/Q₁ = P₂/Q₂ = P₃/Q₃, the vectors P⃗ and Q⃗ are collinear.
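Condition 3 translates directly into code. The sketch below (helper names `cross` and `collinear` are my own) computes the 3D cross product componentwise and applies it to the vectors of Example 1:

```python
def cross(a, b):
    """3D cross product a x b."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def collinear(a, b):
    """Collinear iff the cross product is the zero vector."""
    return cross(a, b) == (0, 0, 0)

P = (3, 4, 5)
Q = (6, 8, 10)                  # Q = 2 * P, so P and Q are collinear
print(collinear(P, Q))          # True
print(collinear(P, (1, 0, 0)))  # False: cross product is (0, 5, -4)
```

The exact equality test is fine for integer components; with floating-point vectors one would compare the cross product's magnitude against a small tolerance instead.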
Example 2: Find whether the given vectors are collinear: P⃗ = i + j + k, Q⃗ = −i − j − k.

Solution: Two vectors are collinear if one vector is a scalar multiple of the other. Q⃗ = −i − j − k = −(i + j + k) = −P⃗, so Q⃗ is a scalar multiple of P⃗. Also, since P₁/Q₁ = P₂/Q₂ = P₃/Q₃ = −1, the vectors P⃗ and Q⃗ are collinear.

FAQs on Collinear Vectors

What Are Collinear Vectors?

Any two given vectors can be considered collinear if they are parallel to the same line; that is, two vectors are collinear if and only if they lie along the same line or are parallel to each other. For two vectors to be parallel to one another, the condition is that one of the vectors should be a scalar multiple of the other.

How Do You Know if a Vector Is Collinear?

For any two vectors to be collinear, they need to satisfy certain conditions:

Condition 1: Two vectors p⃗ and q⃗ are collinear if there exists a scalar n such that p⃗ = n · q⃗.

Condition 2: Two vectors p⃗ and q⃗ are collinear if and only if the ratios of their corresponding coordinates are equal. This condition is not valid if one of the components of a given vector is equal to zero.

Condition 3: Two vectors p⃗ and q⃗ are collinear if their cross product is equal to the zero vector. This condition applies only to three-dimensional (spatial) problems.

Are Parallel and Collinear Vectors the Same?
Yes, parallel vectors and collinear vectors are the same. Two vectors are collinear if they have the same direction or are parallel or anti-parallel; two vectors are parallel if they point in the same direction or in exactly opposite directions.

How Do You Prove Three Position Vectors Are Collinear?

Consider three line segments PQ, QR and PR. If PQ + QR = PR, then the three points P, Q and R are collinear. The three line segments can be translated to the respective vectors PQ⃗, QR⃗ and PR⃗, whose magnitudes equal the lengths of the three line segments mentioned here.

Give an Example of Collinear Vectors

Consider two vectors P⃗ = (3, 4, 5) and Q⃗ = (6, 8, 10). Two vectors are collinear if the ratios of their corresponding coordinates are equal: P₁/Q₁ = 3/6 = 1/2, P₂/Q₂ = 4/8 = 1/2, P₃/Q₃ = 5/10 = 1/2. Since P₁/Q₁ = P₂/Q₂ = P₃/Q₃, the vectors P⃗ and Q⃗ are collinear.

What Are Non-Collinear Vectors?

Vectors are non-collinear when they lie in the same plane but do not act along the same line of action.

How Do You Find Collinear Vectors in 3 Dimensions?

Two vectors P⃗ and Q⃗ are collinear if their cross product is equal to the zero vector. This condition applies only to three-dimensional (spatial) problems.
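The caveat in Condition 2 about zero components matters in practice: a naive ratio test divides by zero. A safer sketch of Condition 1 (the function name `is_scalar_multiple` is my own) finds a scalar candidate from a nonzero component and verifies it without dividing componentwise:

```python
def is_scalar_multiple(p, q, tol=1e-9):
    """True if p = n * q for some scalar n (safe with zero components)."""
    # Derive the candidate scalar from the first nonzero component of q.
    n = None
    for pi, qi in zip(p, q):
        if abs(qi) > tol:
            n = pi / qi
            break
    if n is None:  # q is the zero vector; collinear only if p is too
        return all(abs(pi) <= tol for pi in p)
    # Verify p = n * q in every component.
    return all(abs(pi - n * qi) <= tol for pi, qi in zip(p, q))

print(is_scalar_multiple((3, 0, 5), (6, 0, 10)))  # True: n = 1/2
print(is_scalar_multiple((3, 4, 5), (6, 8, 11)))  # False
```

The first example would break the plain ratio test (0/0 in the middle component), yet the vectors are clearly collinear.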
12222
https://www.youtube.com/watch?v=EbV_oDEZ8xc
Determine intervals of continuity for function with square root in denominator Ms. Hearn 10000 subscribers 29 likes Description 4827 views Posted: 2 Sep 2016 To find intervals of continuity for a function, which is a composition of a rational function, a square root function, and a polynomial function, we must find the domain of the function. In this case, that involves solving a quadratic inequality. I will show you how to do so using a sign chart and also using the graph of the quadratic function. 51 comments Transcript: so use interval notation to describe the intervals on which the function is continuous in the case of a composition of a rational, square root, and polynomial function we can just find the domain to find the domain since it's a rational function the denominator is not allowed to be zero okay what that means is that x^2 - 9 is not allowed to be zero because the square root of 0 is 0 what else we know is because of the radical we need x^2 - 9 to not be negative either okay so if x^2 - 9 is not allowed to be zero and it's not allowed to be negative that means it can only be positive so what we're going to do is we're going to find all the x values for which x^2 - 9 is positive greater than zero so what this goes back to is solving what's called a quadratic inequality and there are a couple of different techniques for solving a quadratic inequality but they all basically involve having a zero on one side and having standard form so that's good we're in good shape there and then does anyone have any suggestions how did you think about solving that inequality finding where that's positive okay you decided to factor so let's see if we just factor that's (x - 3)(x + 3) we want to know where is that greater than zero how do you determine where it's greater than zero finding where it equals zero is a step you can use then what do you do after that plug in values in between very good okay so the technique that she's using is called a
sign chart and we can set up a sign chart so that on the bottom we have the X values and on the top we have the result of the product of x - 3 times x + 3 and what she's saying is we want to mark off whatever X values cause that expression to equal zero and the reason why is we're interested in where it's positive or negative and so on every interval between the zeros it's either going to be positive or negative all right so um what would cause this to equal zero -3 would cause the x + 3 factor to be zero and 3 would cause the other factor to be zero okay notice I put them in order negatives to the left so on the interval from negative Infinity to -3 for example we could test the value -4 by plugging it into the expression and seeing if we get a positive or a negative on the interval from -3 to 3 zero is a nice easy number to plug in on the interval from 3 to Infinity we could use four we could use five whatever I'm just going to pick five x - 3 if you plug in -4 you're going to get a positive or a negative negative and then -4 + 3 is also negative negative times negative is positive so we know that every result not just plugging in -4 but any number we pick on that interval would result in a positive value okay plugging in zero we would get 0 - 3 is negative 0 + 3 is positive negative times positive is negative if we plug in 5 5 - 3 is positive 5 + 3 is positive positive times positive is positive so let's answer the question built into this is a question where is this expression greater than zero is the same as asking where is it positive negative Infinity to -3 we have all positives and from 3 to positive Infinity very good okay so that's the sign chart approach and what we just found is that the domain of G and I'm going to denote that domain dom of G is equal to negative Infinity to -3 union 3 to Infinity not including the endpoints because it would be zero at those points okay I just want to mention that x^2 - 9 if you think of it as a graph what would it look like it
would be a parabola shifted down nine units right what would the X intercepts be -3 and 3 right and where does it have positive y values negative Infinity to -3 it has positive y values and 3 to Infinity it has positive y values so another way that people solve these is to visualize x^2 - 9 and where it would be positive on its graph so I'm not graphing G and that's important to acknowledge what I'm doing is making a visual representation of just this part that's under the radical the radicand x^2 - 9 this has nothing to do with what the graph of G of X looks like or even the square root of x^2 - 9 this is just to see where the values are positive and you see because if I lay that parabola over the sign chart it just corresponds to what we found here positive negative positive
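The sign-chart test from the transcript amounts to evaluating x^2 - 9 at one sample point per interval between its zeros; a minimal Python sketch (the helper and interval labels are illustrative, not from the video):

```python
def continuous_intervals():
    """Find where g is defined, i.e. where the radicand x**2 - 9 is strictly positive."""
    # zeros of x**2 - 9 are -3 and 3; test one point per interval, as on the sign chart
    tests = {"(-inf, -3)": -4, "(-3, 3)": 0, "(3, inf)": 5}
    return {name: (x * x - 9 > 0) for name, x in tests.items()}

print(continuous_intervals())
# positive on (-inf, -3) and (3, inf), matching the domain found in the video
```

Each test point stands in for its whole interval, since the sign of a polynomial can only change at a zero.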
12223
https://matfiz24.pl/kolo-okrag/pole-kola-dlugosc-okregu-zadania
Area of a circle, diameter, circumference of a circle in problems - MatFiz24.pl MatFiz24.pl » Circle and circumference » Area of a circle, circumference – easy and standard problems

Problem. Compute the area and the circumference of a circle with diameter 10 cm.
Solution: The diameter is the longest segment inside a circle that passes through its center. If the diameter is 10 cm, the radius is half as long, i.e. 5 cm. From there, apply the known formulas for the circumference and the area of a circle.

Problem. Find the radius of a circle with area \(36\pi\ \text{cm}^2\).
Solution: Given: \(P = \pi r^2\), \(P = 36\pi\ \text{cm}^2\). At this point you can either rearrange the area formula to isolate r, or substitute the given data directly into it. We choose the second way. Substituting the numerical value of the area for P: \(36\pi = \pi r^2\). Dividing both sides by \(\pi\) gives \(36 = r^2\), so \(r = \sqrt{36} = 6\ \text{cm}\).
Answer: The radius of the circle is 6 cm.

Problem. Find the radius of a circle with area \(100\pi\).

Problem. Find the radius of a circle with area \(30\ \text{cm}^2\). (Take π ≈ 3.)
Solution: Given: \(P = \pi r^2\), \(P = 30\ \text{cm}^2\). Sought: r = ?
Now rearrange the area formula to extract the radius, or substitute \(30\ \text{cm}^2\) for P directly. We choose the second way: \(30 = \pi r^2\). Swapping sides and dividing by π: \(r^2 = \frac{30}{\pi} = \frac{30}{3} = 10\), so \(r = \sqrt{10}\). This radius is approximate, because in the computation we substituted the approximate value 3 for π. With π kept exact the radius would be \(r = \sqrt{\frac{30}{\pi}}\).
Answer: The radius of the circle is \(\sqrt{10}\).

Problem. Find the diameter of a circle with area 4π, then compute the circumference of this circle.

Problem. Find the radius, and then the area, of a circle whose circumference is 28π.

Problem. Compute the area of a circle with circumference 10π.

Problem. Before baking, Mom uses a glass to cut circles of diameter 8 cm out of a rectangular sheet of dough measuring 32 cm × 56 cm, then uses a shot glass to cut a hole of diameter 4 cm out of each circle. What percentage of the dough does she use on the first pass when baking the ring cookies?

Problem. Compute the area of the blue circle and of the green and orange rings. Read the radii off the figure. (Content available with a paid subscription.)

Problem. What distance does a skateboard travel if its wheel, of radius 4 cm, has made 300 revolutions? (Content available with a paid subscription.)

Problem. Janek decided to surprise his mom by making a delicious cake. Compute the diameter of the circular cake, knowing that a rolling pin of radius 5 cm turned 3 times over its surface. (Take π = 3 in the computations.) (Content available with a paid subscription.)

Problem. The diameter of a tractor's large wheel is 1.60 m, while the diameter of the small wheel is 0.5 m. How many times more revolutions will the small wheel make than the large wheel over a distance of 400 m? (Take π = 3 in the computations.) (Content available with a paid subscription.)

Problem. How many times does the area increase, and how many times the circumference, if the radius grows 4 times? (Content available with a paid subscription.)

Problem. What is the length of segment x (see figure)? A. 4 B. 5 C. \(\sqrt{5}\) D. \(2\sqrt{2}\) (Content available with a paid subscription.)

Circle and circumference – Table of contents: 1. Introduction to the circle 2. The number Pi: π ≈ 3.14 3. Formula for the circumference and area of a circle 4. Formula for the area of a circular sector and arc length 5. Basic problems on the circle
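The first two worked solutions above are direct applications of \(P = \pi r^2\) and \(O = 2\pi r\); a quick check in code (illustrative, not from the site):

```python
import math

# Problem 1: circle with diameter 10 cm
r = 10 / 2                       # radius is half the diameter -> 5 cm
area = math.pi * r ** 2          # P = pi * r^2
circumference = 2 * math.pi * r  # O = 2 * pi * r

# Problem 2: recover the radius from the area 36*pi cm^2
r2 = math.sqrt(36 * math.pi / math.pi)  # r = sqrt(P / pi)
print(round(r2, 6))  # 6.0
```

Substituting into the formula and solving, as the solutions do by hand, is exactly the inverse step `sqrt(P / pi)`.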
12224
https://wstein.org/edu/2007/spring/ent/ent-html/node58.html
Continued Fractions

A continued fraction is an expression of the form

\[ a_0 + \cfrac{1}{a_1 + \cfrac{1}{a_2 + \cfrac{1}{a_3 + \cdots}}} \]

In this book we will assume that the \(a_i\) are real numbers and \(a_i > 0\) for \(i \geq 1\), and the expression may or may not go on indefinitely. More general notions of continued fractions have been extensively studied, but they are beyond the scope of this book. We will be most interested in the case when the \(a_i\) are all integers. We denote the continued fraction displayed above by \([a_0, a_1, a_2, \ldots]\). For example, \([2,1,2,1,1,4,1,1,6] \approx 2.7183\) and \([3,7,15,1,292] \approx 3.1415927\). The second two examples were chosen to foreshadow that continued fractions can be used to obtain good rational approximations to irrational numbers. Note that the first approximates e and the second π.

Continued fractions have many applications. For example, they provide an algorithmic way to recognize a decimal approximation to a rational number. Continued fractions also suggest a sense in which e might be ``less complicated'' than π (see Example 5.2.4 and Section 5.3).

In Section 5.1 we study continued fractions of finite length and lay the foundations for our later investigations. In Section 5.2 we give the continued fraction procedure, which associates to a real number x a sequence \(a_0, a_1, \ldots\) of integers such that \(x = \lim_{n\to\infty}[a_0, a_1, \ldots, a_n]\). We also prove that if \(a_0, a_1, \ldots\) is any infinite sequence of positive integers, then the sequence \([a_0], [a_0,a_1], [a_0,a_1,a_2], \ldots\) converges; more generally, we prove that if the \(a_n\) are arbitrary positive real numbers and \(\sum a_n\) diverges then \([a_0, a_1, \ldots, a_n]\) converges. In Section 5.4, we prove that a continued fraction with integer terms is (eventually) periodic if and only if its value is a non-rational root of a quadratic polynomial, then discuss open questions concerning continued fractions of roots of irreducible polynomials of degree greater than 2. We conclude the chapter with applications of continued fractions to recognizing approximations to rational numbers (Section 5.5) and writing integers as sums of two squares (Section 5.6).

The reader is encouraged to read more about continued fractions in [Hardy-Wright, Ch. X], [Khinchin], [Burton, §13.3], and [Niven-Zuckerman-Montgomery, Ch. 7].

Subsections: Finite Continued Fractions; Partial Convergents; The Sequence of Partial Convergents; Every Rational Number is Represented; Infinite Continued Fractions; The Continued Fraction Procedure; Convergence of Infinite Continued Fractions; The Continued Fraction of e; Preliminaries; Two Integral Sequences; A Related Sequence of Integrals; Extensions of the Argument; Quadratic Irrationals; Periodic Continued Fractions; Continued Fractions of Algebraic Numbers of Higher Degree; Recognizing Rational Numbers From Their Decimal Expansion; Sums of Two Squares; Exercises

William 2007-06-01
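The continued fraction procedure mentioned for Section 5.2 (take the floor, subtract it off, invert the fractional part, repeat) can be sketched for rational inputs, where it terminates (illustrative code, not the book's):

```python
from fractions import Fraction

def continued_fraction(x: Fraction):
    """Return the partial quotients [a0, a1, ...] of a rational x."""
    terms = []
    while True:
        a = x.numerator // x.denominator  # floor of x
        terms.append(a)
        frac = x - a                      # fractional part in [0, 1)
        if frac == 0:
            return terms                  # terminates exactly when x is rational
        x = 1 / frac                      # invert and repeat

print(continued_fraction(Fraction(355, 113)))  # [3, 7, 16]
```

Using exact `Fraction` arithmetic avoids the floating-point drift that would otherwise corrupt the later partial quotients.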
12225
https://www.cuemath.com/calculators/cube-root-calculator/
Cube Root Calculator What Is a Cube Root Calculator? A cube root calculator is a free online tool that calculates the cube root of a given number. Cuemath's cube root calculator helps you find the cube root of any number in a few seconds, whether or not it is a perfect cube. Note: Please enter up to six digits. How to Use the Cube Root Calculator? Follow the steps given below to use the calculator: What Is the Cube Root of a Number? The cube root of a number is a number which, when multiplied by itself three times, gives the original number. Solved Example on Cube Root Calculator Solved Example 1: What is the cube root of 64? Solution: The cube root of 64 is 4 because when 4 is multiplied by itself three times, it gives 64: 4 × 4 × 4 = 64. Therefore, the cube root of 64 is 4, which is written as ∛64 = 4. Let us see how our cube root calculator gives the answer. Enter the number 64 in the box. The answer is given in the form of these steps: ∛64 = (64)^(1/3) = 4. Here, 64 is a perfect cube because it has an exact cube root. Similarly, there are certain numbers which are not perfect cubes because their cube root is in decimal form. For example, the cube root of 34 is approximately 3.24. Now, you can try the calculator to find the cube root of the following numbers:
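The calculator's step, n^(1/3), can be reproduced directly; a small sketch (the negative-number branch is an added assumption, since the page only shows positive inputs):

```python
def cube_root(n: float) -> float:
    """Cube root via exponentiation; negatives handled by factoring out the sign,
    because a float raised to 1/3 is undefined for negative bases in Python."""
    return -((-n) ** (1 / 3)) if n < 0 else n ** (1 / 3)

print(round(cube_root(64), 6))  # 4.0  (perfect cube)
print(round(cube_root(34), 2))  # 3.24 (not a perfect cube)
```

Rounding masks the tiny floating-point error in `64 ** (1/3)`, which is not exactly 4.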
12226
https://stats.libretexts.org/Bookshelves/Introductory_Statistics/Mostly_Harmless_Statistics_(Webb)/11%3A_Analysis_of_Variance/11.03%3A_Two-Way_ANOVA_(Factorial_Design)
11.3: Two-Way ANOVA (Factorial Design) Last updated: Mar 12, 2023 Page ID: 24073 Rachel Webb Portland State University

Two-way analysis of variance (two-way ANOVA) is an extension of one-way ANOVA. It can be used to compare the means across two independent variables or factors from two or more populations. It can also be used to test for interaction between the two independent variables. We will not be doing the sum of squares calculations by hand. These numbers will be given to you in a partially filled-out ANOVA table, or an Excel output will be given in the problem. There are three sets of hypotheses for testing the equality of k population means from two independent variables, and for testing for interaction between the two variables (two-way ANOVA):

| | |
| --- | --- |
| Row Effect (Factor A): | H0: The row variable has no effect on the average ___________________. H1: The row variable has an effect on the average ___________________. |
| Column Effect (Factor B): | H0: The column variable has no effect on the average ___________________. H1: The column variable has an effect on the average ___________________. |
| Interaction Effect (A×B): | H0: There is no interaction effect between the row variable and column variable on the average ___________________. H1: There is an interaction effect between the row variable and column variable on the average ___________________. |

These ANOVA tests are always right-tailed F-tests. The F-test (for two-way ANOVA) is a statistical test for testing the equality of k independent quantitative population means from two nominal variables, called factors. The two-way ANOVA also tests for interaction between the two factors. Assumptions: The populations are normal. The observations are independent. The variances from each population are equal. The groups must have equal sample sizes.
The formulas for the F-test statistics are:

| | |
| --- | --- |
| Factor 1: | F_A = MSA / MSE, with df_A = a − 1 and df_E = ab(n − 1) |
| Factor 2: | F_B = MSB / MSE, with df_B = b − 1 and df_E = ab(n − 1) |
| Interaction: | F_{A×B} = MS_{A×B} / MSE, with df_{A×B} = (a − 1)(b − 1) and df_E = ab(n − 1) |

Where: SSA = sum of squares for factor A, the row variable; SSB = sum of squares for factor B, the column variable; SS_{A×B} = sum of squares for interaction between factors A and B; SSE = sum of squares of error, also called sum of squares within groups; a = number of levels of factor A; b = number of levels of factor B; n = number of subjects in each group.

It will be helpful to make a table. Figure 11-5 is called a two-way ANOVA table. Since the computations for the two-way ANOVA are tedious, this text will not cover performing the calculations by hand. Instead, we will concentrate on completing and interpreting the two-way ANOVA tables.

A farmer wants to see if there is a difference in the average height for two new strains of hemp plants. They believe there also may be some interaction with different soil types so they plant 5 hemp plants of each strain in 4 types of soil: sandy, clay, loam and silt. At α = 0.01, analyze the data shown, using a two-way ANOVA as started below in Figure 11-6. See below for raw data.

Rough drawings from memory were futile. He didn't even know how long it had been, beyond Ford Prefect's rough guess at the time that it was "a couple of million years" and he simply didn't have the maths. Still, in the end he worked out a method which would at least produce a result. He decided not to mind the fact that with the extraordinary jumble of rules of thumb, wild approximations and arcane guesswork he was using he would be lucky to hit the right galaxy, he just went ahead and got a result. He would call it the right result. Who would know? As it happened, through the myriad and unfathomable chances of fate, he got it exactly right, though he of course would never know that.
He just went up to London and knocked on the appropriate door. "Oh. I thought you were going to phone me first." (Adams, 2002)
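Given mean squares from a completed table, the three F statistics above reduce to simple ratios; a sketch with made-up numbers (the MS values are hypothetical, not from the hemp example, whose raw data is not shown here):

```python
# Hypothetical mean squares from a filled-in two-way ANOVA table
MSA, MSB, MSAB, MSE = 50.0, 12.0, 8.0, 4.0
a, b, n = 2, 4, 5  # 2 strains, 4 soil types, 5 plants per cell (as in the hemp design)

F_A  = MSA / MSE   # row effect,    df = (a-1, a*b*(n-1))
F_B  = MSB / MSE   # column effect, df = (b-1, a*b*(n-1))
F_AB = MSAB / MSE  # interaction,   df = ((a-1)*(b-1), a*b*(n-1))

print(F_A, F_B, F_AB)            # 12.5 3.0 2.0
print((a - 1) * (b - 1), a * b * (n - 1))  # 3 32
```

Each F is compared against a right-tail critical value from the F distribution with the degrees of freedom shown in the comments.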
12227
https://www.varsitytutors.com/sat_math-45-45-90-right-isosceles-triangles-problem-3051
Test: SAT Math

1. An isosceles triangle has a base of 6 and a height of 4. What is the perimeter of the triangle?
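The answer is not shown on the page; the standard computation (the height splits the isosceles triangle into two congruent right triangles) would be:

```python
import math

base, height = 6, 4
# the height bisects the base, giving two 3-4-5 right triangles
leg = math.hypot(base / 2, height)  # length of each equal side
perimeter = base + 2 * leg
print(perimeter)  # 16.0
```

Since 3-4-5 is a Pythagorean triple, each equal side is 5 and the perimeter is 6 + 5 + 5 = 16.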
12228
https://www.mathfunworksheets.com/worksheets/proportion-worksheets/
Proportion Worksheets

Proportion Worksheets are valuable resources that help students understand the concept of proportional relationships through structured practice. These worksheets focus on solving proportion problems using cross multiplication, equivalent ratios, and word problems. Designed for students in grade 6 through grade 8, they support the development of essential math skills in a clear and engaging format.

What is a Proportion? A proportion is an equation that says two ratios or fractions are equal to each other. Proportion Worksheets provided include Identifying, Forming, and Solving Proportions. Grade 6 students would find these worksheets useful. Each worksheet begins with easy-to-follow instructions and examples. Students learn how to identify proportional relationships and solve for missing values. The problems gradually increase in difficulty, helping students build confidence while improving accuracy. Teachers can use these worksheets in class to introduce new topics, assign them for homework, or use them during test prep sessions. Proportion problems often appear in real-world contexts such as recipes, scale drawings, and speed-time-distance calculations. These worksheets include word problems that connect math to everyday life. This makes learning more meaningful and helps students apply their knowledge in practical situations.
Download and practice these worksheets. Related Worksheets: Ratio Worksheets, Percent Worksheets, Fraction Worksheets

Proportion Worksheets:
- Identify Proportions: Worksheet #1, Worksheet #2, Worksheet #3
- Identify Proportions - Graph: Worksheet #1, Worksheet #2, Worksheet #3
- Forming Proportions: Worksheet #1, Worksheet #2, Worksheet #3
- Solving Proportions: Worksheet #1, Worksheet #2, Worksheet #3
- Solving Proportions - Algebraic Expressions: Worksheet #1, Worksheet #2, Worksheet #3
- Finding Proportional: Worksheet #1, Worksheet #2, Worksheet #3
- Constant of Proportionality: Worksheet #1, Worksheet #2, Worksheet #3

As students advance in grade, the complexity of the problems increases. For example, a grade 6 student may work on simple ratios, while a grade 8 student might solve multi-step word problems involving algebraic expressions. This gradual progression ensures that learners stay challenged and continue to grow. Moreover, these worksheets are designed to support various learning styles. Some worksheets include visual aids like tables and graphs, while others emphasize step-by-step calculations. Many also provide answer keys and guided solutions, allowing students to check their work and learn independently. In conclusion, these worksheets are essential for building a strong understanding of proportional reasoning. They align with grade-level standards, support classroom and at-home learning, and prepare students for success in higher-level math. With regular practice, students develop the confidence and skills needed to solve proportion problems quickly and accurately.
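Solving a proportion by cross multiplication, the technique the worksheets drill, can be sketched as (illustrative helper, not from the site):

```python
from fractions import Fraction

def solve_proportion(a, b, c):
    """Solve a/b = c/x for x by cross multiplication: a*x = b*c."""
    return Fraction(b * c, a)

# e.g. 3/4 = 9/x  ->  x = 12
print(solve_proportion(3, 4, 9))  # 12
```

Returning a `Fraction` keeps the answer exact even when the missing value is not a whole number.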
12229
http://en.chinaculture.org/library/2008-02/01/content_26414.htm
Hua Luogeng: An Outstanding Chinese Mathematician

Hua Luogeng (1910-1985) was a famous Chinese mathematician and an academician of the Chinese Academy of Sciences. He was born on December 11, 1910 in Jintan, Jiangsu Province. After graduating from Jintan Junior High School, Hua Luogeng taught himself diligently. He began teaching at Tsinghua University in 1930. In 1936, Hua Luogeng visited and studied at Cambridge University in England. Back home, he continued teaching at the Southwest Associated University. In 1946, he went to the United States, beginning research work at universities such as Princeton. He went back to China in 1950. In the following years, he served as a professor at Tsinghua University and the CAS Institute of Mathematics.

Hua was a founder and pioneer of many fields in new China's mathematics research. He wrote more than 200 theses and monographs, many of which have become classic documents of immortal value. Besides pure mathematics research, Hua also did a great deal of work in applied mathematics. He made mathematics serve the national economy and became the first Chinese scientist to closely combine mathematical theory with practical production, obtaining enormous economic results in many fields.
12230
https://thecorestandards.org/Math/Content/G/
Common Core State Standards Initiative: Geometry Kindergarten Identify and describe shapes. CCSS.Math.Content.K.G.A.1 Describe objects in the environment using names of shapes, and describe the relative positions of these objects using terms such as above, below, beside, in front of, behind, and next to. CCSS.Math.Content.K.G.A.2 Correctly name shapes regardless of their orientations or overall size. CCSS.Math.Content.K.G.A.3 Identify shapes as two-dimensional (lying in a plane, "flat") or three-dimensional ("solid"). Analyze, compare, create, and compose shapes. CCSS.Math.Content.K.G.B.4 Analyze and compare two- and three-dimensional shapes, in different sizes and orientations, using informal language to describe their similarities, differences, parts (e.g., number of sides and vertices/"corners") and other attributes (e.g., having sides of equal length). CCSS.Math.Content.K.G.B.5 Model shapes in the world by building shapes from components (e.g., sticks and clay balls) and drawing shapes. CCSS.Math.Content.K.G.B.6 Compose simple shapes to form larger shapes. For example, "Can you join these two triangles with full sides touching to make a rectangle?" Grade 1 Reason with shapes and their attributes. CCSS.Math.Content.1.G.A.1 Distinguish between defining attributes (e.g., triangles are closed and three-sided) versus non-defining attributes (e.g., color, orientation, overall size); build and draw shapes to possess defining attributes.
CCSS.Math.Content.1.G.A.2 Compose two-dimensional shapes (rectangles, squares, trapezoids, triangles, half-circles, and quarter-circles) or three-dimensional shapes (cubes, right rectangular prisms, right circular cones, and right circular cylinders) to create a composite shape, and compose new shapes from the composite shape.1 CCSS.Math.Content.1.G.A.3 Partition circles and rectangles into two and four equal shares, describe the shares using the words halves, fourths, and quarters, and use the phrases half of, fourth of, and quarter of. Describe the whole as two of, or four of the shares. Understand for these examples that decomposing into more equal shares creates smaller shares. Grade 2 Reason with shapes and their attributes. CCSS.Math.Content.2.G.A.1 Recognize and draw shapes having specified attributes, such as a given number of angles or a given number of equal faces.1 Identify triangles, quadrilaterals, pentagons, hexagons, and cubes. CCSS.Math.Content.2.G.A.2 Partition a rectangle into rows and columns of same-size squares and count to find the total number of them. CCSS.Math.Content.2.G.A.3 Partition circles and rectangles into two, three, or four equal shares, describe the shares using the words halves, thirds, half of, a third of, etc., and describe the whole as two halves, three thirds, four fourths. Recognize that equal shares of identical wholes need not have the same shape. Grade 3 Reason with shapes and their attributes. CCSS.Math.Content.3.G.A.1 Understand that shapes in different categories (e.g., rhombuses, rectangles, and others) may share attributes (e.g., having four sides), and that the shared attributes can define a larger category (e.g., quadrilaterals). Recognize rhombuses, rectangles, and squares as examples of quadrilaterals, and draw examples of quadrilaterals that do not belong to any of these subcategories. CCSS.Math.Content.3.G.A.2 Partition shapes into parts with equal areas. 
Express the area of each part as a unit fraction of the whole. For example, partition a shape into 4 parts with equal area, and describe the area of each part as 1/4 of the area of the shape. Grade 4 Draw and identify lines and angles, and classify shapes by properties of their lines and angles. CCSS.Math.Content.4.G.A.1 Draw points, lines, line segments, rays, angles (right, acute, obtuse), and perpendicular and parallel lines. Identify these in two-dimensional figures. CCSS.Math.Content.4.G.A.2 Classify two-dimensional figures based on the presence or absence of parallel or perpendicular lines, or the presence or absence of angles of a specified size. Recognize right triangles as a category, and identify right triangles. CCSS.Math.Content.4.G.A.3 Recognize a line of symmetry for a two-dimensional figure as a line across the figure such that the figure can be folded along the line into matching parts. Identify line-symmetric figures and draw lines of symmetry. Grade 5 Graph points on the coordinate plane to solve real-world and mathematical problems. CCSS.Math.Content.5.G.A.1 Use a pair of perpendicular number lines, called axes, to define a coordinate system, with the intersection of the lines (the origin) arranged to coincide with the 0 on each line and a given point in the plane located by using an ordered pair of numbers, called its coordinates. Understand that the first number indicates how far to travel from the origin in the direction of one axis, and the second number indicates how far to travel in the direction of the second axis, with the convention that the names of the two axes and the coordinates correspond (e.g., x-axis and x-coordinate, y-axis and y-coordinate). CCSS.Math.Content.5.G.A.2 Represent real world and mathematical problems by graphing points in the first quadrant of the coordinate plane, and interpret coordinate values of points in the context of the situation. Classify two-dimensional figures into categories based on their properties. 
CCSS.Math.Content.5.G.B.3 Understand that attributes belonging to a category of two-dimensional figures also belong to all subcategories of that category. For example, all rectangles have four right angles and squares are rectangles, so all squares have four right angles. CCSS.Math.Content.5.G.B.4 Classify two-dimensional figures in a hierarchy based on properties. Grade 6 Solve real-world and mathematical problems involving area, surface area, and volume. CCSS.Math.Content.6.G.A.1 Find the area of right triangles, other triangles, special quadrilaterals, and polygons by composing into rectangles or decomposing into triangles and other shapes; apply these techniques in the context of solving real-world and mathematical problems. CCSS.Math.Content.6.G.A.2 Find the volume of a right rectangular prism with fractional edge lengths by packing it with unit cubes of the appropriate unit fraction edge lengths, and show that the volume is the same as would be found by multiplying the edge lengths of the prism. Apply the formulas V = l w h and V = b h to find volumes of right rectangular prisms with fractional edge lengths in the context of solving real-world and mathematical problems. CCSS.Math.Content.6.G.A.3 Draw polygons in the coordinate plane given coordinates for the vertices; use coordinates to find the length of a side joining points with the same first coordinate or the same second coordinate. Apply these techniques in the context of solving real-world and mathematical problems. CCSS.Math.Content.6.G.A.4 Represent three-dimensional figures using nets made up of rectangles and triangles, and use the nets to find the surface area of these figures. Apply these techniques in the context of solving real-world and mathematical problems. Grade 7 Draw, construct, and describe geometrical figures and describe the relationships between them.
CCSS.Math.Content.7.G.A.1 Solve problems involving scale drawings of geometric figures, including computing actual lengths and areas from a scale drawing and reproducing a scale drawing at a different scale. CCSS.Math.Content.7.G.A.2 Draw (freehand, with ruler and protractor, and with technology) geometric shapes with given conditions. Focus on constructing triangles from three measures of angles or sides, noticing when the conditions determine a unique triangle, more than one triangle, or no triangle. CCSS.Math.Content.7.G.A.3 Describe the two-dimensional figures that result from slicing three-dimensional figures, as in plane sections of right rectangular prisms and right rectangular pyramids. Solve real-life and mathematical problems involving angle measure, area, surface area, and volume. CCSS.Math.Content.7.G.B.4 Know the formulas for the area and circumference of a circle and use them to solve problems; give an informal derivation of the relationship between the circumference and area of a circle. CCSS.Math.Content.7.G.B.5 Use facts about supplementary, complementary, vertical, and adjacent angles in a multi-step problem to write and solve simple equations for an unknown angle in a figure. CCSS.Math.Content.7.G.B.6 Solve real-world and mathematical problems involving area, volume and surface area of two- and three-dimensional objects composed of triangles, quadrilaterals, polygons, cubes, and right prisms. Grade 8 Understand congruence and similarity using physical models, transparencies, or geometry software. CCSS.Math.Content.8.G.A.1 Verify experimentally the properties of rotations, reflections, and translations: CCSS.Math.Content.8.G.A.1.a Lines are taken to lines, and line segments to line segments of the same length. CCSS.Math.Content.8.G.A.1.b Angles are taken to angles of the same measure. CCSS.Math.Content.8.G.A.1.c Parallel lines are taken to parallel lines. 
CCSS.Math.Content.8.G.A.2 Understand that a two-dimensional figure is congruent to another if the second can be obtained from the first by a sequence of rotations, reflections, and translations; given two congruent figures, describe a sequence that exhibits the congruence between them. CCSS.Math.Content.8.G.A.3 Describe the effect of dilations, translations, rotations, and reflections on two-dimensional figures using coordinates. CCSS.Math.Content.8.G.A.4 Understand that a two-dimensional figure is similar to another if the second can be obtained from the first by a sequence of rotations, reflections, translations, and dilations; given two similar two-dimensional figures, describe a sequence that exhibits the similarity between them. CCSS.Math.Content.8.G.A.5 Use informal arguments to establish facts about the angle sum and exterior angle of triangles, about the angles created when parallel lines are cut by a transversal, and the angle-angle criterion for similarity of triangles. For example, arrange three copies of the same triangle so that the sum of the three angles appears to form a line, and give an argument in terms of transversals why this is so. Understand and apply the Pythagorean Theorem. CCSS.Math.Content.8.G.B.6 Explain a proof of the Pythagorean Theorem and its converse. CCSS.Math.Content.8.G.B.7 Apply the Pythagorean Theorem to determine unknown side lengths in right triangles in real-world and mathematical problems in two and three dimensions. CCSS.Math.Content.8.G.B.8 Apply the Pythagorean Theorem to find the distance between two points in a coordinate system. Solve real-world and mathematical problems involving volume of cylinders, cones, and spheres. CCSS.Math.Content.8.G.C.9 Know the formulas for the volumes of cones, cylinders, and spheres and use them to solve real-world and mathematical problems. 
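A small worked example (not part of the standards text; a hypothetical Python sketch) of how standard 8.G.B.8 applies the Pythagorean Theorem to find the distance between two points in a coordinate system:

```python
import math

def distance(p, q):
    # The segment from p to q is the hypotenuse of a right triangle
    # whose legs are the horizontal and vertical differences.
    dx = q[0] - p[0]
    dy = q[1] - p[1]
    return math.hypot(dx, dy)  # equivalent to sqrt(dx**2 + dy**2)

print(distance((1, 2), (4, 6)))  # legs 3 and 4, so this prints 5.0
```

The point values here are made up for illustration; any pair of coordinates works the same way.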
12231
https://math.stackexchange.com/questions/4072304/how-can-i-prove-this-for-abc-0?rq=1
How can I prove this for $a+b+c=0$

Asked Mar 22, 2021 at 18:08 by Akeceno; edited Mar 22, 2021 at 19:05 by DMcMor. Viewed 116 times.

We wish to prove that $$\left(\frac{a-b}{c}+\frac{b-c}{a}+\frac{c-a}{b}\right)\left(\frac{c}{a-b}+\frac{a}{b-c}+\frac{b}{c-a}\right)=9,$$ for $a+b+c=0$ and $$abc \neq 0,\quad a\neq b,\quad b\neq c,\quad c \neq a.$$ I tried working with only $2$ fractions at a time, switching some of the variables (e.g. $a=-b-c$) and then adding them up, and multiplying first, but I could not go further. Thank you in advance.

Tags: algebra-precalculus, fractions

Comments:

- "What do you want to prove?" (Viera Čerňanová, Mar 22, 2021 at 18:32)
- "If $a,b,c$ are real, then the two brackets are opposite to each other. You have $A\times(-A)$, which is not 9." (Viera Čerňanová, Mar 22, 2021 at 18:34)

2 Answers

Answer (Tanny Sieben, answered Mar 22, 2021 at 19:05, edited at 19:20):

You can expand directly, by noticing that the first parenthesis is $$\frac{ab(a-b) + bc(b-c) + ca(c-a)}{abc} = -\frac{(a-b)(b-c)(c-a)}{abc}.$$ So the expression becomes $$\frac{a^3+b^3+c^3 + 3abc - a^2b - a^2c - b^2a - b^2c - c^2a - c^2b}{abc}.$$ Collecting $a^3, -a^2b, -a^2c$ (and similarly for $b$ and $c$), this becomes $$\frac{a^2(a - b - c) + b^2(b-c-a) + c^2(c - a - b)}{abc} + 3.$$ Now $a-b-c = 2a$ by the condition (since $b+c=-a$), so this becomes just $$2\,\frac{a^3+b^3+c^3}{abc} + 3.$$ But there is an identity $a^3+b^3+c^3 - 3abc = (a+b+c)(a^2+b^2+c^2 - ab - bc - ca)$, so we get $a^3+b^3+c^3 = 3abc$. Thus the initial expression is $2\cdot 3 + 3 = 9$.

Answer (sirous, answered Mar 22, 2021 at 19:38):

Comment: As commented by user376343 we have: $$B=-A\times A=-\left[a+b+c-\left(\frac ab+\frac bc+\frac ca\right)\right]^2=-\left(\frac ab+\frac bc+\frac ca\right)^2$$ $$\frac ab+\frac bc+\frac ca=\frac{ca^2+ab^2+bc^2}{abc}\geq 3$$ This inequality becomes equality if $a=1$, $b=2$ and $c=3$. I checked it for $a+b+c=0$ and it is not equal to 3; it is a negative value. I think the correct constraint is: $a$, $b$ and $c$ positive, the product must have a negative sign, and the sign of the inequality must be $\geq$.
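A quick numeric sanity check of the identity, added here as an illustration (not from the original thread). The sample values $a=1$, $b=2$, $c=-3$ are an arbitrary choice satisfying $a+b+c=0$ with distinct nonzero values; exact rational arithmetic avoids floating-point noise:

```python
from fractions import Fraction

# Arbitrary values with a + b + c = 0, abc != 0, and a, b, c pairwise distinct.
a, b, c = Fraction(1), Fraction(2), Fraction(-3)

first = (a - b) / c + (b - c) / a + (c - a) / b    # first bracket
second = c / (a - b) + a / (b - c) + b / (c - a)   # second bracket

print(first * second)  # prints 9
```

Note that neither bracket is individually constant (here they are 10/3 and 27/10); only their product is fixed at 9.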
12232
https://www.indeed.com/career-advice/finding-a-job/what-is-investment-management
What Is Investment Management? Definition, Duties and Tips

Written by Andrew Juma. Updated June 9, 2025.

Video: What Is a Portfolio Manager? See If This Son Can Explain His Dad's Job. What does a loan portfolio manager's job entail? We tested Kiev on how much he knows about his dad Mike's career as a portfolio manager.

Investment management places companies and clients on a path to meet their financial goals and improve relationships with existing clients. This concept accounts for financial analysis, the selection of profitable stocks and the development of your portfolio. Successfully managing your investments can lead to a higher potential of yielding a larger return on investment in the future. In this article, we answer the question "What is investment management?" and describe what investment managers do, plus we explain the differences between investment management and investment banking and discuss the skills you need to become an investment manager.

What is investment management?

Investment management is the act of overseeing your financial assets, along with the investments that make up a portion of your portfolio.
You decide on the short- and long-term strategy that consists of obtaining and selling assets listed in your portfolio. Your strategy can also consist of budgeting and tax services rendered to you. This term can also be described as wealth management, but the intent of it is to serve the clients whose money you're working with. Investors can be made up of one person or an institution like the government, an insurance company or one that administers retirement plans. The goal of investment management is to maximize returns while minimizing risk by diversifying assets on a portfolio-wide basis across multiple asset classes and markets. Related: 44 Investment Management Interview Questions (With Answers)

What do investment managers do?

Investment managers work with investors' money to help them reach their financial goals. They come up with ways to allocate stocks and bonds that align with the client's goals, buy and sell investments when necessary, oversee the performance of the portfolio and report results back to their clients. Customers generally create an investment account to work with an investment manager, but managers can assist them in transferring money from a retirement account if they believe it's in their best interest. They also address questions concerning risks on specific investments and factors that affect risk, including the stock market conditions and its performance. A client may want to consult with an investment manager if they're trying to:

- Save for retirement
- Save and finance their children's college fund
- Save for a big purchase, such as a car, boat or house

Related: 12 Reasons To Work as an Investment Manager (Plus Tips)

How does investment management work?

Below is the process an investment manager follows:

1. Identify the investor's risk tolerance

The process begins by identifying the investor's risk tolerance, which is how much risk they can handle before they start losing sleep at night.
This helps determine how much money you divide into stocks versus bonds or other securities that may provide more stability during a market downturn. Typically, newer investors tend to be more risk-averse, while more experienced investors are comfortable with additional risk. Related: How To Write an Investment Manager Resume (With Example)

2. Choose investments for a client

Once an investment manager determines what portion of the investment portfolio they separate into each type of asset class, they then decide which specific stocks or bonds to purchase for their client. This is where research is important, and it varies greatly depending on available time. Some clients prefer total control over their investments, while others prefer someone else to decide for them. There are also six asset classes in which you can invest. These are:

- Stocks: Stocks are the amount of money returned to shareholders if an organization's assets were all turned into liquid cash and paid to those same shareholders.
- Bonds: This is a type of fixed income that allows people to accrue value on this asset over a long period.
- Money market instruments: These are short-term investments that someone can have in their portfolio with the intent to increase their value quickly.
- Real estate: This type of investment is often a high-risk and high-reward type since real estate markets can be volatile compared to other types.
- Commodities: These are the types of goods that many people use regularly. They're often an indicator of the success of the economy since everyone purchases them.
- Financial derivatives: This is a contract that derives its value from an asset or group of assets that underlie it.

Related: Asset Manager vs. Investment Manager: Definitions and Differences

3. Monitor performance

After buying or selling an asset, the investment manager monitors its performance against specific benchmarks or milestones set out by the client during the investment analysis phase.
They report everything back to the client so they can keep track of their investments and make necessary changes. Some clients may take an active role in changing their portfolio to meet their financial goals, while others allow an investment manager to make these decisions. Related: How Much Does an Investment Fund Manager Make?

Benefits of investment management

There are several benefits to investment management. First, investment managers can help a wide variety of clients make successful investment decisions, increasing their income. Second, investment management allows clients to continue their work while the manager works with their investments. This means the client can save time that they may otherwise spend managing their investment portfolio. Finally, investment managers have the background necessary to provide good advice for investment decisions and minimize risk for clients, especially those who are risk-averse. Related: What Does Risk Averse Mean in Investing? (With Examples)

Investment management versus investment banking

It's important to know the difference between investment management and investment banking if you plan to work in the finance industry. Some of the differences between investment management and investment banking include:

Varying job responsibilities

An investment manager is accountable for working with clients to manage the wealth they currently own by analyzing financial performance and giving their insight on recommendations regarding the purchasing and selling of stock. On the other hand, investment bankers work on generating capital to address corporate finance objectives and help navigate complex financial situations like mergers and acquisitions and initial public offerings.

Types of clients

An investment manager's clients can be an individual or an institution, such as a pension fund or an insurance company, whereas an investment banker primarily works with corporations on financial matters.
Both of these positions can work with the government depending on how it wants to achieve its goals.

Number of hours worked

Investment managers have varied work hours because their schedule depends on when the stock market opens and closes. Investment managers can maintain a busy schedule if they have clients working in different time zones, such as European and Asian markets, while investment bankers have extended hours that require them to work past a 40-hour workweek, so make sure you consider this before proceeding on this career path. Related: Risk Avoidance vs. Risk Mitigation: What's the Difference?

Skills for investment managers

Investment managers possess a variety of skills to perform well in this role. A few skills you can include on a resume include:

Self-confidence

An investment manager needs the confidence to perform the responsibilities of this role. Confidence increases your ability to motivate yourself while working in high-risk situations regarding the wealth of your clients. The more you successfully meet the needs of your clients, the more confidence you can build based on your previous experience. Confident managers understand their own strengths and weaknesses and recognize where they can improve. Related: Risk Management: A Definitive Guide

Time management skills

Investment managers are mindful of the times when markets open and close to correctly track the performance of stock traded during that day. They also prioritize tasks and allocate resources effectively to complete projects on time. Try to delegate tasks to lower-level staff members so you can decrease your workload and focus on high-priority tasks. Related: How To Get a Job in Asset Management

Analytical skills

You employ analytical skills to review and properly measure the impact of your efforts on your client's investments. This position requires you to examine statistics and trends to decide whether to buy or sell a certain stock or bond.
You typically gather enough evidence from the data you compile to propose a solution about the next steps the client takes with their investments. Related: What Is a Real Estate Asset Manager? (Definition and Duties)

Problem-solving skills

Investment managers identify and address problems in their portfolio strategies and make adjustments accordingly. Speak with your clients to determine if they know the risks of their investments. This may give you a guideline to address problems before they arise and prepare you to solve them accordingly. Be sure to check in with your clients regularly to find out how they're doing and to keep a close relationship with them. This way, you can establish long-term trust with them that allows you to collaborate further moving forward.

Communication skills

Communication skills highlight your ability to process and act on information presented to you. You can practice active listening with your clients so you can ensure you're hearing their viewpoint and focusing on their feedback instead of your response to it. Take note of your nonverbal gestures and cues to show that you're engaged in the subject matter you're discussing. You can also use your communication skills to explain the reasoning behind a decision you consider and the recommendations you give to clients.

Tips for becoming an investment manager

Here's a list of tips to help you become an investment manager: Pursue bachelor's and master's degree programs. A bachelor's program in accounting, finance or economics can position you to get an entry-level job that makes you an attractive candidate for a master's program in accounting, risk management or finance. Search for financial analyst positions. An investment manager usually gets their start as a financial analyst.
Try networking with an investment manager through your university or by contacting them directly for an informational interview, where they can offer advice on the best path to take for the career. Get a certification. Investment managers tend to need an employer to sponsor certifications, so you can wait until you're working full-time at a company before taking the steps to earn one. Obtain licensure. Investment managers need specific licenses to perform their roles legally. These include a Series 65 License, which prepares you to work with clients since it gives you the authority to provide financial advice.

The information on this site is provided as a courtesy and for informational purposes only. Indeed is not a career or legal advisor and does not guarantee job interviews or offers.
12233
https://www.sciencedirect.com/science/article/pii/S0097316525000561
Complete 3-term arithmetic progression free sets of small size in vector spaces and other abelian groups
Journal of Combinatorial Theory, Series A, Volume 215, October 2025, 106061
Bence Csajbók, Zoltán Lóránt Nagy
Open access under a Creative Commons license
Abstract
A subset S of an abelian group G is called 3-AP free if it does not contain a three-term arithmetic progression. Moreover, S is called complete 3-AP free if it is maximal w.r.t. set inclusion. One of the most central problems in additive combinatorics is to determine the maximal size of a 3-AP free set, which is necessarily complete. In this paper we are interested in the minimum size of complete 3-AP free sets. We define and study saturation w.r.t. 3-APs and present constructions of small complete 3-AP free sets and 3-AP saturating sets for several families of vector spaces and cyclic groups.
Keywords: progression-free set, complete cap, saturation, finite field, cyclic group
1. Introduction
Studying the maximum possible size of a subset of a vector space over a finite field which contains either no (non-trivial) solution to a given linear equation or not too many collinear points is a classical yet vibrant research area.
The most notable examples are the Sidon sets and the so-called cap-set problem. The latter is to determine the largest subset of F_3^n containing no complete line, or in other terms, no arithmetic progression of length 3 (3-AP), or no collinear triple. In general, point sets of finite affine or projective spaces with no three on a line are called caps. Recently Ellenberg and Gijswijt proved a breakthrough result regarding caps in F_3^n, building on the ideas of Croot, Lev and Pach, and they also proved that the size of a 3-AP free set of F_p^n is always bounded from above by (p − δ_p)^n for some small constant δ_p > 0 depending only on p. For the general case of abelian groups, the maximum size of 3-AP free sets was discussed in the classical paper of Frankl, Graham and Rödl. A nice and general overview of additive combinatorial and extremal results concerning arithmetic progressions is given by Shkredov. A finite point set (with respect to a property) is called complete if it is not contained as a subset in a larger point set satisfying the same property. In the extremal problems described above, usually constructions of maximum size are in the centre of attention. These are complete by definition. However, in several cases the whole spectrum of sizes matters for complete structures, and the structure of smallest size in particular. For example, in the case of complete caps in PG(n,q), the point set corresponds to the parity check matrix of a q-ary linear code with codimension n+1, Hamming distance 4, and covering radius 2. In this paper we investigate the less studied lower end of the spectrum of possible sizes of complete 3-AP free sets, the minimum size. We discuss the minimum size in arbitrary abelian groups of odd order and highlight the case of finite vector spaces. A 3-term arithmetic progression of the abelian group G, 3-AP for short, is a set of three distinct elements of G of the form g, g+d, g+2d, where g, d ∈ G.
We will call d the difference. If d is the difference of a 3-AP in G, then the order of d is at least 3. Let F_q denote the Galois field of q elements, and let o_q(a) denote the multiplicative order of a ∈ F_q. In order to have a 3-AP in F_q^n we need q to be odd, so we will only consider this case. A+B denotes the sumset {a+b : a ∈ A, b ∈ B} of the sets A and B, while A+˙B denotes the restricted sumset, where the summands must be distinct.
Definition 1.1 A ⊆ G is called 3-AP free if it does not contain a 3-AP of G. Moreover, A is complete 3-AP free if it is 3-AP free and not contained in a larger 3-AP free set.
The completeness of a 3-AP free set can be interpreted via saturation as well: S is complete 3-AP free if S is 3-AP free and a saturating set w.r.t. 3-APs.
Definition 1.2 For a subset S ⊆ G we say that S is 3-AP saturating, or in other words, S 3-AP saturates G, if for each x ∈ G∖S there is a 3-AP of G consisting of x and two elements of S. In a broader context, we say that S 3-AP saturates a set H ⊂ G if the similar condition holds for the elements of H∖S. In this paper we will mostly consider the problem of 3-AP saturation in groups of odd order.
Definition 1.3 For g ∈ G, if ℓ ∈ Z is positive, then ℓg = g+g+…+g (ℓ summands) ∈ G and (−ℓ)g = −(ℓg) ∈ G. If the order of G is odd, then we define (1/2)g as the unique element x ∈ G such that x+x = g, that is, x = (k+1)g, where the order of g is 2k+1.
Observation 1.4 A set A ⊆ G is 3-AP saturating iff for every x ∈ G∖A there is a 3-AP consisting of x and {a_1, a_2} ⊆ A such that (i) either x = 2a_1 − a_2, or (ii) 2x = a_1 + a_2. If G has odd order, then (ii) is equivalent to x = (1/2)a_1 + (1/2)a_2.
Saturation and completeness with respect to 3-APs were considered in integer sequences as well. The postage stamp problem seeks the greatest integer r = r_k such that there exists a set A_k of k positive integers together with 0 such that i ∈ A_k + A_k for all i = 0, 1, …, r. Mrose and, independently, Fried showed that r_k is at least (2/7)k² + O(k).
A set of non-negative integers A is called a basis of order two if A + A = N holds for the sumset. Note that a variation of Mrose's construction can be extended to an additive 2-basis of N. Such constructions lead to small sets S of F_p for which every element of F_p is an arithmetic mean of a pair of elements from S, hence S saturates the 3-APs. We will discuss this in Section 4. These constructions provide further motivation to introduce saturation with respect to a set of (coefficient) vectors. For a subset S of elements of a group G (written additively) we will write S* to denote S minus the neutral element.
Definition 1.5 (W-avoiding and W-saturating sets) Let W denote a set of vectors from F_q* × F_q*. We define W-avoiding and W-saturating sets in F_q^n as follows.
A ⊆ F_q^n is W-avoiding if there is no w = (λ_1, λ_2) ∈ W such that a = λ_1 a′ + λ_2 a″ has a non-trivial solution with a, a′, a″ pairwise distinct vectors of A.
A ⊆ F_q^n is W-saturating in F_q^n if for each x ∈ F_q^n∖A there exists a w = (λ_1, λ_2) ∈ W such that x = λ_1 a′ + λ_2 a″ for a pair (a′, a″) ∈ A², a′ ≠ a″.
A ⊆ F_q^n is complete W-avoiding if it is W-avoiding and W-saturating.
If W consists of a single vector, W = {w}, we omit the brackets for brevity.
Definition 1.6 (Avoiding and saturating sets in groups) Let W denote a subset of Z × Z. We define W-avoiding, W-saturating and complete W-avoiding sets in abelian groups G similarly to Definition 1.5. We will be mostly interested in the cases when W is one of (2,−1), (1,1) and (1,−1). If G has odd order, then we also define (1/2,1/2)-saturating sets.
Remark 1.7 For a subset A ⊆ F_q^n and for (λ_1, λ_2) ∈ F_q* × F_q* the following properties are equivalent: (i) A is (λ_1, λ_2)-avoiding, (ii) A is (1/λ_1, −λ_2/λ_1)-avoiding, (iii) A is (1/λ_2, −λ_1/λ_2)-avoiding. In a group G the following properties are equivalent: (i) A is 3-AP free, (ii) A is (2,−1)-avoiding, (iii) there are no three pairwise distinct elements x, y, z ∈ A such that 2x = y + z.
(If the order of G is odd, then it is equivalent to say that A is (1/2,1/2)-avoiding.) Similarly, for A ⊆ G∖{0} the following properties are equivalent in any abelian group G: (i) A is restricted sum-free, i.e., (A+˙A) ∩ A = ∅, (ii) A is (1,1)-avoiding, (iii) A is (1,−1)-avoiding.
Note that Definition 1.5 can be extended naturally to any set of vectors of the union of (F_q*)^t over t ≥ 2. Our main (but not only) focus will be the case W = {(2,−1)} for its correspondence to 3-APs, see Observation 1.4, and in general the case when W consists of a single vector. We will also apply the fact that the field F_{q^n} is itself a vector space over F_{q^h} for h | n. This will enable us to alter the dimension of the underlying structure at times, which will provide improvements on the estimates. If q = p^r, p prime, and W ⊆ F_p* × F_p*, then studying W-saturating and W-avoiding sets in the vector space F_q^n is equivalent to studying the same questions in the elementary abelian group F_p^{rn}.
Now we introduce the functions we wish to study.
Definition 1.8 For a group G we define the following:
(1) Let a(3-AP, G) denote the minimum size of a complete 3-AP free set of G.
(2) Let sat(3-AP, G) denote the minimum size of a 3-AP saturating set of G.
(3) Let a(W, G) denote the minimum size of a complete W-avoiding set of G. If there is no complete W-avoiding set of G, then put a(W, G) = ∞.
(4) Let sat(W, G) denote the minimum size of a W-saturating set of G.
Observe that in Z_5 there are no complete (2,−1)-avoiding sets.
Example 1.9 As an example, we show complete 3-AP free sets for G = F_3^2 and G = F_5^2 in Fig. 1. These are of minimum size, cf. Lemma 2.1. We note in advance that in F_q^2 we can always find complete 3-AP free sets of size q when −2 is not a square element in F_q, cf. Theorem 3.2.
Fig. 1. Complete 3-AP free sets of minimum size in F_3^2 and in F_5^2.
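The definitions above can be checked exhaustively in a tiny group. The following sketch (ours, not part of the paper) enumerates all subsets of F_3^2, tests 3-AP freeness and completeness directly from Definitions 1.1 and 1.2, and reports the sizes that complete 3-AP free sets of F_3^2 can take:

```python
from itertools import combinations, product

MOD = 3
POINTS = [p for p in product(range(MOD), repeat=2)]  # the 9 points of F_3^2

def is_3ap(x, y, z):
    # {x,y,z} pairwise distinct form a 3-AP iff one of them is the midpoint
    # of the other two: 2*mid = sum of the others, componentwise mod 3.
    triples = ((x, y, z), (y, x, z), (z, x, y))
    return any(all((2 * m[i] - a[i] - b[i]) % MOD == 0 for i in range(2))
               for m, a, b in triples)

def is_free(S):
    return not any(is_3ap(*t) for t in combinations(S, 3))

def is_complete(S):
    # maximal: every outside point forms a 3-AP with two elements of S
    return all(any(is_3ap(x, a, b) for a, b in combinations(S, 2))
               for x in POINTS if x not in S)

complete_sizes = sorted({len(S)
                         for r in range(1, len(POINTS) + 1)
                         for S in map(frozenset, combinations(POINTS, r))
                         if is_free(S) and is_complete(S)})
print(complete_sizes)  # [4]: every complete 3-AP free set of F_3^2 has 4 points
```

The search confirms that in F_3^2 every complete 3-AP free set has exactly 4 elements; for instance {(0,0), (1,0), (0,1), (1,1)} is one such set.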
Remark 1.10 Any complete 3-AP free set is 3-AP saturating and any complete W-avoiding set is W-saturating by definition, thus a(W, G) ≥ sat(W, G). Since (2,−1)-saturating sets clearly satisfy the 3-AP saturating property in view of Observation 1.4, we have
(1) sat(3-AP, G) ≤ sat((2,−1), G) ≤ a((2,−1), G).
Our main results are as follows.
Theorem 1.11 Let p be an odd prime and k a positive integer. Then we have
(1) √(2/3)·p^{2k−1} < a(3-AP, F_p^{4k−2}) ≤ p^{2k−1}, provided that −2 is not a square element in F_p. Also,
(2) √(2/3)·p^{n/2} < a(3-AP, F_p^n) ≤ a((2,−1), F_p)^n.
Observe that (2) of Theorem 1.11 motivates the study of a((2,−1), F_p), or in general a((2,−1), Z_m), where Z_m is the cyclic group of order m.
Theorem 1.12 Let m denote a positive odd integer. Then
(1) sat((2,−1), Z_m) < c_m·√m, where c_m ∈ [1,3] is a constant depending only on m.
(2) a((2,−1), Z_m) < c_m·√m for c_m ∈ [1,1.5], a constant depending only on m, provided that (2/3)(4^n − 1) < m < 4^n holds for some positive integer n.
(3) √(2m) − 0.5 < sat((1/2,1/2), Z_m) ≤ (√3.5 + o(1))·√m ≈ 1.87·√m.
(4) a((2,−1), Z_m) = ⌈√m⌉, provided that m is of the form m = 2^{2t} + 2^t + 1 for some positive integer t.
The close connection between the sat function and the size of the complete 3-AP free sets, see Remark 1.10, motivates the theorem below.
Theorem 1.13 Let 3 < p be a prime and k a positive integer. Then we have
(1) √(2/3)·p^k < sat(3-AP, F_p^{2k}) ≤ (4/3 + 2r/(3·o_p(−2)))·(p^k − 1), where r is the residue modulo 3 of the order o_p(−2) of −2 in F_p^×.
(2) √(2/3)·p^{k+1/2} < sat(3-AP, F_p^{2k+1}) ≤ (2/3)(p^{k+1} + p^k − 2) + r(p^{k+1} + p^k − 2)/(3·o_p(−2)), where r is the residue modulo 3 of the order o_p(−2) of −2 in F_p^×.
(3) p^k < sat((2,−1), F_p^{2k}) ≤ (3/2 + r/(2·o_p(−2)))·(p^k − 1), where r is the residue modulo 2 of the order o_p(−2) of −2 in F_p^×.
(4) p^{k+1/2} < sat((2,−1), F_p^{2k+1}) ≤ c_p·(3/2 + r/(2·o_p(−2)))·(p^k − 1)·√p, where r is the residue modulo 2 of the order o_p(−2) of −2 in F_p^× and c_p ≤ 3 is the same constant depending on p as in Theorem 1.12.
Remark 1.14 The same ideas as in Theorem 1.13 work to prove analogous results in Z_m^k when gcd(m,6) = 1.
Then o_p(−2) should be replaced by the multiplicative order of −2 in the ring Z_m. In the proofs, Theorem 3.10 and Theorem 3.11 should be used instead of Proposition 3.5 and Proposition 3.9, respectively.
Theorem 1.15 For abelian groups G of odd order n > 5, it holds that sat(3-AP, G) ≤ sat((1/2,1/2), G) ≤ √((n−1)ln(n−1)) + √(n−1) + 1.
The paper is organized as follows. In Section 2 we present some preliminary results and useful tools. First we prove lower bounds on the size of saturating and complete W-avoiding sets. Then we show the strength and limitations of direct product constructions, which will enable us to prove the upper bound results for vector spaces (Theorem 1.11, Theorem 1.13). To have upper bounds close to our lower bounds, we rely on further avoiding and saturating set constructions in finite fields which are small enough. Finally, we point out the relation of these results to results concerning caps and 3-AP covering sequences. In Section 3 we prove the upper bounds of Theorem 1.11 by analysing point sets of conics in the respective vector spaces. We also prove some general constructions of saturating sets in direct products of groups. Then we deduce the upper bounds of Theorem 1.13 and Theorem 1.11 (1). Section 4 is devoted to the proof of Theorem 1.12 and Theorem 1.15, where we provide constructions based on numeral systems, additive bases, Sidon sets and random constructions, using tools from additive number theory to design theory.
2. Preliminary results
2.1. Double counting and direct sum constructions
We begin this section by demonstrating some trivial lower bounds on the size of 3-AP saturating and W-saturating sets.
Proposition 2.1
(1) Suppose that H is a 3-AP saturating set in the abelian group G of odd order. Then |H| ≥ √((2/3)|G| + 1/36) + 1/6. Hence, sat(3-AP, F_q^n) > 0.8164·q^{n/2}.
(2) Suppose that H is a w-saturating set in the group G, where w ∈ Z × Z, or w = (λ_1, λ_2) ∈ F_q* × F_q* if G = F_q^n. Then |H| ≥ ⌈√|G|⌉. Hence, sat(w, F_q^n) ≥ ⌈q^{n/2}⌉.
(3) Suppose that H is a w-saturating set in the group G, where w = (1,1), or w = (1/2,1/2) if G has odd order, or w = (λ,λ) for some λ ∈ F_q* if G = F_q^n. Then |H| ≥ √(2|G| + 1/4) − 1/2. So in this case sat(w, F_q^n) > √2·q^{n/2} − 0.5.
Proof Part (1). By definition, for all x ∈ G∖H we have a ≠ b ∈ H s.t. x = (1/2)a + (1/2)b, or x = 2a − b, or x = −a + 2b; so each unordered pair of H saturates at most 3 elements. Thus by double counting, |G∖H| ≤ 3·(|H| choose 2), from which the lower bound follows.
Part (2). Let w = (λ_1, λ_2). By definition, for all x ∈ G∖H we have h ≠ h′ ∈ H s.t. x = λ_1 h + λ_2 h′. Then by double counting, |G| − |H| ≤ 2·(|H| choose 2). After rearranging, we get the desired bound.
Part (3). By double counting, |G| − |H| ≤ (|H| choose 2), and the bound follows after rearranging. □
Remark 2.2 We will see later that the lower bound above for sat((2,−1), G) is sharp in some cyclic groups, cf. Theorem 4.3, Theorem 4.13. Subsection 4.5 provides further instances when Proposition 2.1 (2) is sharp.
Next we show that the direct sum construction preserves certain properties concerning saturation and W-avoidance. Note however that saturation with respect to 3-APs is not preserved.
Proposition 2.3 (Avoiding and saturation properties in direct products)
(1) Suppose that H and H′ are subsets of the abelian groups G and G′, respectively.
(a) Assume that H and H′ are 3-AP free in the corresponding groups. Also, if G (or G′) has even order, then assume that the order of x − y is larger than 2 for any two distinct x, y ∈ H (resp. ∈ H′). Then H × H′ is 3-AP free in G × G′.
(b) If H and H′ are (1,1)-avoiding in the corresponding groups and 0_G ∉ H, 0_{G′} ∉ H′, then H × H′ is (1,1)-avoiding in G × G′.
(2) Suppose that H and H′ are W-avoiding subsets of the vector spaces F_q^m and F_q^n, respectively, for some W ⊆ F_q* × F_q*. Then H × H′ is W-avoiding in F_q^m × F_q^n, provided that λ_1 + λ_2 = 1 for all w = (λ_1, λ_2) ∈ W.
(3) Suppose that the set W consists of a single vector w = (λ_1, λ_2) ∈ F_q* × F_q* with λ_1 + λ_2 = 1. If H ⊆ F_q^m and H′ ⊆ F_q^n are W-saturating sets of the corresponding vector spaces, then H × H′ is also a W-saturating set in F_q^m × F_q^n.
(4) Suppose that w = (2,−1), or that G and G′ are of odd order and w = (1/2,1/2). If H ⊆ G and H′ ⊆ G′ are w-saturating sets of the corresponding groups, then H × H′ is also a w-saturating set in G × G′.
Note that the condition λ_1 + λ_2 = 1 holds if and only if w_1, w_2 and λ_1 w_1 + λ_2 w_2 ∈ F_q^n are collinear in the affine space F_q^n. Observe also that by choosing w = (2,−1) in part (3), the direct product will be 3-AP saturating as well.
Proof (1a) Assume to the contrary that there is a 3-AP (h_1,h_1′), (h_2,h_2′), (h_3,h_3′) ∈ H × H′ with difference (d,d′) ∈ G × G′. Since (d,d′) is not the neutral element of the group G × G′, w.l.o.g. we may assume that d is not the neutral element of G. Then h_1, h_2, h_3 is a 3-AP of G, contradicting the assumption on H; or the order of G is even and h_1 + h_3 = 2h_2 holds because the size of {h_1, h_2, h_3} is 2, i.e. h_1 = h_3 and hence the order of h_1 − h_2 is 2, a contradiction.
(1b) Assume to the contrary that (h_1,h_1′) = (h_2,h_2′) + (h_3,h_3′) for some h_1, h_2, h_3 ∈ H and h_1′, h_2′, h_3′ ∈ H′. W.l.o.g. we may assume h_2 ≠ h_3. Then h_1 = h_2 + h_3 with three distinct elements of H, contradicting the fact that H is (1,1)-avoiding (recall 0_G ∉ H).
Proof of (2). Assume to the contrary the existence of w = (λ_1, λ_2) ∈ W such that (h_1,h_1′) = λ_1(h_2,h_2′) + λ_2(h_3,h_3′) for some elements of H × H′. Hence h_1 = λ_1 h_2 + λ_2 h_3 and h_1′ = λ_1 h_2′ + λ_2 h_3′. Since (h_2,h_2′) ≠ (h_3,h_3′), we may assume w.l.o.g. that h_2 ≠ h_3. We have h_1 = λ_1 h_2 + λ_2 h_3, and this is a contradiction if h_1, h_2, h_3 are three pairwise distinct elements, since H is W-avoiding. If we had h_1 = h_2, then h_1(1 − λ_1) = h_3(1 − λ_1) and hence also h_1 = h_3, a contradiction since h_2 ≠ h_3 (and the same argument excludes h_1 = h_3 as well).
Proof of (3). Take any (g,g′) ∈ (F_q^m × F_q^n)∖(H × H′). If g ∉ H and g′ ∉ H′, then by the assumption there exist h_1, h_2 ∈ H and h_1′, h_2′ ∈ H′ such that g = λ_1 h_1 + λ_2 h_2 and g′ = λ_1 h_1′ + λ_2 h_2′, where g, h_1, h_2 and g′, h_1′, h_2′ are sets of pairwise distinct elements.
This in turn shows that (g,g′) = λ_1(h_1,h_1′) + λ_2(h_2,h_2′), where (g,g′), (h_1,h_1′), (h_2,h_2′) are pairwise distinct elements. We cannot have g ∈ H and g′ ∈ H′ at the same time, hence w.l.o.g. we may assume g ∈ H and g′ ∉ H′. Then by the assumption there exist h_1′, h_2′ ∈ H′ such that g′ = λ_1 h_1′ + λ_2 h_2′, where g′, h_1′, h_2′ are pairwise distinct elements. This in turn shows that (g,g′) = λ_1(g,h_1′) + λ_2(g,h_2′), where (g,g′), (g,h_1′), (g,h_2′) are pairwise distinct elements. (Here we use λ_1 + λ_2 = 1, so that λ_1 g + λ_2 g = g.)
The proof of (4) is the same as the proof of (3). □
A generalisation of some of the results above will be discussed in Subsection 4.5.
Corollary 2.4 The direct product of complete (2,−1)-avoiding sets is a complete 3-AP free set. Moreover, a((2,−1), G)·a((2,−1), H) ≥ a(3-AP, G × H). This highlights the importance of finding complete (2,−1)-avoiding sets A in G such that |A| ≤ √|G|.
The previous propositions motivate the distinction below.
Definition 2.5 A complete 3-AP free or a complete (λ, 1−λ)-avoiding set H ⊆ G is called small if |H| ≤ √|G|.
Proposition 2.6 Let H denote a W-avoiding, W′-saturating set in F_q^m.
(1) Then for each λ ∈ F_q* it holds that λH is W-avoiding and W′-saturating.
(2) If for each (λ_1, λ_2) ∈ W it holds that λ_1 + λ_2 = 1, then for each d ∈ F_q^m, H + d is W-avoiding. If for each (λ_1, λ_2) ∈ W′ it holds that λ_1 + λ_2 = 1, then for each d ∈ F_q^m, H + d is W′-saturating.
(3) Assume W′ = {(1,1)}, 0 ∈ H and λ ∈ F_q∖{0,1}. Then for each x ∈ F_q^m∖{0} there exist a, b ∈ λH such that x = (1/λ)a + (1/λ)b. In particular, λH is (1/λ, 1/λ)-saturating.
Proof Proof of (1). For some (λ_1, λ_2) ∈ W and a, b, c ∈ H, λ_1·λa + λ_2·λb = λc would imply λ_1 a + λ_2 b = c, a contradiction, which proves that λH is W-avoiding. Also, if x ∈ F_q^m∖λH, then x = λc for some c ∉ H, and hence c = λ_1 a + λ_2 b for some (λ_1, λ_2) ∈ W′. It follows that λH is W′-saturating.
Proof of (2). If we had λ_1(a + d) + λ_2(b + d) = c + d for some a, b, c ∈ H and (λ_1, λ_2) ∈ W, then also λ_1 a + λ_2 b = c, a contradiction.
If x ∉ H + d, then x = c + d for some c ∉ H, and hence there exists (λ_1, λ_2) ∈ W′ with λ_1 a + λ_2 b = c for some a, b ∈ H; then λ_1(a + d) + λ_2(b + d) = c + d = x, proving that H + d is W′-saturating.
Proof of (3). Take some x ≠ 0. If x ∉ H, then x = a + b for some a, b ∈ H, and hence x = (1/λ)(λa) + (1/λ)(λb). If x ∈ H, then x = (1/λ)(λ·0) + (1/λ)(λx). □
Proposition 2.7 Let q be odd. If H ⊆ F_q^m and H′ ⊆ F_q^n are (1,1)-saturating such that the corresponding zero vectors are contained in H and in H′, respectively, then (1/2)(2H × 2H′) is (1,1)-saturating in F_q^m × F_q^n.
Proof By (3) of Proposition 2.6 it follows that 2H is (1/2,1/2)-saturating in F_q^m, and the same holds in F_q^n for 2H′. Then by (3) of Proposition 2.3 it follows that (2H) × (2H′) is (1/2,1/2)-saturating in F_q^m × F_q^n. Since this subset contains the zero vector of F_q^m × F_q^n, the statement follows again by (3) of Proposition 2.6. □
Proposition 2.8 If H ⊆ F_q^m and H′ ⊆ F_q^n are (1,−1)-saturating such that the corresponding zero vectors are contained in H and in H′, then H × H′ is (1,−1)-saturating in F_q^m × F_q^n.
Proof Take any (g,g′) ∈ (F_q^m × F_q^n)∖(H × H′). If g ∉ H and g′ ∉ H′, then by the assumption there exist h_1, h_2 ∈ H and h_1′, h_2′ ∈ H′ such that g = h_1 − h_2 and g′ = h_1′ − h_2′, where g, h_1, h_2 and g′, h_1′, h_2′ are sets of pairwise distinct elements. This in turn shows that (g,g′) = (h_1,h_1′) − (h_2,h_2′), where (g,g′), (h_1,h_1′), (h_2,h_2′) are pairwise distinct elements. We cannot have g ∈ H and g′ ∈ H′ at the same time, hence w.l.o.g. we may assume g ∈ H and g′ ∉ H′. Then by the assumption there exist h_1′, h_2′ ∈ H′ such that g′ = h_1′ − h_2′, where g′, h_1′, h_2′ are pairwise distinct elements. This in turn shows that (g,g′) = (g,h_1′) − (0,h_2′), where (g,g′), (g,h_1′), (0,h_2′) are pairwise distinct elements. □
2.2. Relation to caps
Let q denote any (even or odd) prime power. We describe the relation of the results above to results concerning caps.
Definition 2.9 A cap of AG(n,q) is a point set meeting each line of AG(n,q) in at most two points. A cap is called complete if it cannot be extended to a larger cap.
A saturating set S of AG(n,q) is a point set with the property that for each P ∈ AG(n,q)∖S there exist two distinct points Q, R ∈ S such that P is incident with the line joining Q and R.
It is clear from the definitions above that a cap is complete if and only if it is also a saturating set. The lattice of affine subspaces of F_q^n is isomorphic to the subspace lattice of AG(n,q). For results on the (maximum) size of complete caps in AG(n,q), we refer to the literature and the references therein. A related problem is the smallest size of complete caps in finite affine and projective spaces. For the size of small complete caps, the theoretical lower bound is essentially sharp for q even, and in some cases also for q odd; see also the references therein for small complete caps for q odd.
Proposition 2.10 Put W = {(λ_1, λ_2) ∈ F_q* × F_q* : λ_1 + λ_2 = 1}. Then we obtain the following.
(1) Caps of AG(n,q) and W-avoiding sets of F_q^n are equivalent objects. In particular, by Proposition 2.3 Part (2), the direct sum of caps is a cap.
(2) Saturating sets of AG(n,q) and W-saturating sets of F_q^n are equivalent objects.
(3) Complete caps of AG(n,q) and complete W-avoiding sets of F_q^n are equivalent objects.
If q = 3 then W = {(2,2)}; if q = 4 then W = {(i, 1+i)}. Hence, by Part (3) of Proposition 2.3, the direct sum of saturating sets is a saturating set and the direct sum of complete caps is a complete cap for q ∈ {3,4}. □
2.3. Related results on solving linear equations in algebraic structures
We summarize some results which are connected to the theme of this paper in the sense that the subject is a subset of a set in which an equation of special form has no non-trivial solutions. Then we also mention some results of saturation type with respect to an equation. Most probably the leading examples for the first theme are the Sidon sets. A set of elements in an abelian group is called a Sidon set if all pairwise sums of its not necessarily distinct elements are distinct.
Equivalently, the equation a + b = c + d has only the trivial solution {a,b} = {c,d} in the set. Observe that Sidon sets are 3-AP free. Concerning Sidon sets, Erdős and Turán observed (see also Cilleruelo) that the point set of the parabola in F_q × F_q provides a Sidon set. Cilleruelo exhibited several further abelian groups admitting Sidon sets of size roughly equal to the square root of the order of the group. Building on his observations, Huang, Tait and Won showed that the largest Sidon sets in F_3^n are of size 3^{n/2}, provided that n is even. Small complete Sidon sets of abelian 2-groups are investigated in a recent paper of G. Nagy, who gave constructions obtained from ellipses and hyperbolas in the finite affine plane F_q × F_q. There is a strong connection between the vector space or cyclic group setting and the integer setting, where a non-trivial solution of a particular equation is forbidden within the interval [1,n] ⊂ Z. For Sidon sets, papers of Ruzsa discuss the case of small complete structures, while the work of Kiss, Sándor and Yang deals with small saturating sets with respect to 3-APs. They used the term 3-AP covering sequence for another related concept. Let A_0 = {a_1 < … < a_t} be a 3-AP free set of nonnegative integers; at each step, the current set {a_1 < … < a_l} is extended by the smallest integer a > a_l such that {a_1, …, a_l} ∪ {a} does not contain a 3-term arithmetic progression. Moreover, a sequence A of non-negative integers is called a 3-AP covering sequence if there exists an integer n_0 such that, if n > n_0, then there exist a_1, a_2 ∈ A such that a_1, a_2, n form a 3-term arithmetic progression. Fang made the following improvement.
Theorem 2.11 (Fang) There is a 3-AP covering sequence S of integers such that |S ∩ [1,n]|/√n ≤ 8/√5 ≈ 3.578 holds for all n. This constant cannot be improved below 1.77.
Note that this result is strongly related to our main problem, since it in turn shows that sat((2,−1), Z_n) ≤ (3.578 + o(1))·√n.
3.
Constructions in F_q × F_q and in other direct products
In this section q always denotes a prime power, and p denotes a prime.
Construction 3.1 Let P be the point set of the parabola {(x, x²) : x ∈ F_q}.
Theorem 3.2 Let q be an odd prime power. If −2 is not a square in F_q, then Construction 3.1 is a complete 3-AP free subset of F_q × F_q, hence a(3-AP, F_q^2) ≤ q.
Remark 3.3 Note that Construction 3.1 provides an infinite family of small complete 3-AP free subsets of F_q × F_q. As observed by Erdős and Turán, the parabola construction also provides a (dense) Sidon set.
Proof of Theorem 3.2 For each (a,b) ∈ F_q^2, b ≠ a², we prove that one of the following systems of equations has a solution (x,y) ∈ F_q × F_q:
x + y = 2a, x² + y² = 2b;  or  2y − x = a, 2y² − x² = b.
This implies that no point (a,b) outside P can be added to the construction without violating the 3-AP free property. A solution of the first system exists if and only if b − a² is a square in F_q. Indeed, in order to have a common solution, we should get a square value for the discriminant 16a² − 8·(4a² − 2b) = 16(b − a²) of x² + (x² − 4ax + 4a²) − 2b = 0. A solution of the second system exists if and only if 2(a² − b) is a square in F_q. Indeed, in order to have a common solution, we should get a square value for the discriminant 16a² − 8·(a² + b) = 8(a² − b) of 2y² − (4y² − 4ay + a²) − b = 0. If −2 is not a square, then either the first or the second discriminant will be a square, providing a solution to one of the systems. □
Theorem 3.2 in turn implies the upper bound of Theorem 1.11 (1) on complete 3-AP free sets in vector spaces, once one notes that −2 is not a square element in F_p if and only if it is not a square in F_{p^{2k+1}}.
Now we show some saturating set constructions.
Construction 3.4 Let ⟨−2⟩ denote the multiplicative (cyclic) subgroup of F_q^× generated by −2, where the characteristic is different from 2 and 3. Take a set of maximum size in each coset of ⟨−2⟩ for which the equations −2g = g′, 4g = g′ have no solutions within the set.
Let R denote the union of these sets. Let L denote the point set L = {(0,r) : r ∈ F_q*∖R} ∪ {(r,0) : r ∈ F_q*∖R} ⊂ F_q × F_q.
The following result is a reformulation of the upper bound of Theorem 1.13 (1).
Proposition 3.5 Construction 3.4 contains 2(q−1)·(o_q(−2) − ⌊o_q(−2)/3⌋)/o_q(−2) elements and it saturates the 3-APs of F_q × F_q.
Corollary 3.6 If 3 | o_q(−2), then |L| = (4/3)(q−1).
Proof (of Proposition 3.5) First, observe that the choice of R ensures that for all g ∈ F_q∖{0}, at least two of g, −2g, 4g are admissible coordinates in L. This implies that at most (q−1)·⌊o_q(−2)/3⌋/o_q(−2) elements are contained in R. On the other hand, choosing each element of the form (−2)^{3t−1} with 0 < 3t ≤ o_q(−2) in ⟨−2⟩, and applying a similar rule in each coset, yields equality in the bound above. The cardinality of the point set L follows. Now take any point (a,b) ∈ F_q × F_q, where a ≠ 0 ≠ b, and suppose that the addition of (a,b) to the construction does not create a 3-AP. Consider
(0,2b), (a,b), (2a,0); (−a,0), (0,b/2), (a,b); and (0,−b), (a/2,0), (a,b),
which form three disjoint 3-APs consisting of two points of the axes and (a,b). Here we use the fact that o_q(−2) > 2. Since at most one element of {a/2, −a, 2a} and of {b/2, −b, 2b} is contained in R, L will contain at least 4 of the points listed above; thus, together with (a,b), a 3-AP would be formed, a contradiction. Finally, suppose that a = 0 or b = 0. Then the addition of (a,b) would again provide at least one 3-AP, since (a,b) would induce (q−1)/2 pairs P, P′ on its axis for which (a,b) is the midpoint of P and P′, but the number of points of that axis in L is larger than q/2; thus by the pigeonhole principle, there would be a pair P, P′ ∈ L for which (a,b) is a midpoint, hence the addition of (a,b) is not allowed. □
Remark 3.7 If q is a power of the prime p, then the multiplicative order of −2 in F_q^× is the same as the multiplicative order of −2 in F_p^×.
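Construction 3.4 can be tested by brute force in a small case. The sketch below (ours, for illustration) takes q = 7, where o_7(−2) = 6 is divisible by 3, builds R by picking the elements (−2)^{3t−1}·(coset representative) in each cycle under multiplication by −2, and then verifies both the size formula and the saturation claim of Proposition 3.5 via Observation 1.4:

```python
from itertools import product

q = 7  # assumed example: a prime with char != 2, 3 and 3 | o_q(-2)

def mult_order(a, q):
    # multiplicative order of a in F_q^* (q prime)
    o, x = 1, a % q
    while x != 1:
        x = x * a % q
        o += 1
    return o

o = mult_order(-2, q)  # o_7(-2) = 6

# Build R: in every coset of <-2> (a cycle under multiplication by -2),
# keep the elements (-2)^(3t-1) * rep for 0 < 3t <= o.
units = set(range(1, q))
R, seen = set(), set()
for g in sorted(units):
    if g in seen:
        continue
    cycle, x = [], g
    while x not in cycle:
        cycle.append(x)
        x = x * -2 % q
    seen |= set(cycle)
    R |= {cycle[(3 * t - 1) % o] for t in range(1, o // 3 + 1)}

# The point set L of Construction 3.4
L = {(0, r) for r in units - R} | {(r, 0) for r in units - R}
assert len(L) == 2 * (q - 1) * (o - o // 3) // o  # size from Proposition 3.5

def saturated(S, x):
    # Observation 1.4: x is 3-AP saturated by S if x = 2a - b or 2x = a + b
    # for some distinct a, b in S (componentwise mod q).
    for a in S:
        for b in S:
            if a == b:
                continue
            if tuple((2 * u - v) % q for u, v in zip(a, b)) == x:
                return True
            if all((u + v - 2 * w) % q == 0 for u, v, w in zip(a, b, x)):
                return True
    return False

# Every point outside L is saturated, as Proposition 3.5 claims.
assert all(saturated(L, x) for x in product(range(q), repeat=2) if x not in L)
print(len(L))  # 8 = (4/3)(q - 1)
```

For q = 7 the single coset is the cycle 1, −2, 4, −8, … of length 6, R = {3, 4}, and the resulting 8-point set saturates all of F_7 × F_7, matching Corollary 3.6.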
Proposition 3.5 directly implies the upper bound of Theorem 1.13 (1), in view of the previous remark, if we apply it with q = p^k. To get the upper bound when the dimension of the vector space is odd (Theorem 1.13 (4)), we modify the construction in a way that it (2,−1)-saturates the whole space. This enables us to apply the direct sum construction once we have a suitable general upper bound on sat((2,−1), F_p).
Construction 3.8 Let ⟨−2⟩ denote the multiplicative (cyclic) subgroup of F_q^× generated by −2, where the characteristic is different from 2 and 3. Take a set of maximum size in each coset of ⟨−2⟩ for which the equation −2g = g′ has no solutions within the set. Let R* denote the union of these sets. Let L* denote the set of points L* = {(0,r) : r ∈ F_q*∖R*} ∪ {(r,0) : r ∈ F_q*}.
Proposition 3.9 Construction 3.8 contains 2(q−1) − (q−1)·⌊o_q(−2)/2⌋/o_q(−2) elements and it is a (2,−1)-saturating set (and hence a 3-AP saturating set) of F_q × F_q.
Proof One should observe that each point (a,b), a ≠ 0 ≠ b, is (2,−1)-saturated by the pairs (−a,0), (0,b/2) and (0,−b), (a/2,0), and at least one of these pairs will be contained in the construction. The cardinality of L* follows similarly to that of L in the proof of Proposition 3.5. □
The result above in turn implies the upper bound of Theorem 1.13 (3). Then Theorem 1.13 (4) follows from the direct sum construction, Proposition 2.3, applying it to Construction 3.8 with q = p^k and the (2,−1)-saturating set construction for F_p given in the next section (see also Theorem 1.12 (1)).
Along the same lines, one can prove the existence of 3-AP saturating sets in direct products of abelian groups. Let A and B denote two abelian groups (written additively) of orders a and b, respectively, such that gcd(ab, 6) = 1. For any element g which is not the neutral element of the group, put D_g = {g, −2g, 4g, −8g, …, −g/2}. Since gcd(6, ab) = 1, the elements g, −2g, 4g, −8g are pairwise distinct, so |D_g| ≥ 4.
For any element g which is not the neutral element of the group, we denote by R_g a subset of D_g of maximum size such that the equations x = −2y and x = 4y cannot be solved within R_g. Note that (1/3)|D_g| ≥ |R_g| = ⌊(1/3)|D_g|⌋ ≥ (1/5)|D_g|. Using these notations, we have
Theorem 3.10 Let A and B denote two abelian groups (written additively) of orders a and b, respectively, such that gcd(ab, 6) = 1. Then L = {(r, 0_B) : r ≠ 0_A, r ∉ R_g for each g ∈ A} ∪ {(0_A, r) : r ≠ 0_B, r ∉ R_g for each g ∈ B} is 3-AP saturating in A × B. On the size of L we have (4/3)(√|A×B| − 1) ≤ (2/3)(a + b − 2) ≤ |L| ≤ (4/5)(a + b − 2). □
Note that if |D_g| is the same for each g ∈ A and g ∈ B different from the neutral element, then the size of L can be expressed via a, b and |D_g| for a single g. This leads to the statement of Theorem 1.13 (2) by choosing A = F_p^{k+1} and B = F_p^k.
For any element g of a group A which is not the neutral element of the group, we define D_g as before and we denote by R_g* a subset of D_g of maximum size such that the equation x = −2y cannot be solved within R_g*. As before, the size of D_g is at least 4, and hence (1/2)|D_g| ≥ |R_g*| ≥ ⌊(1/2)|D_g|⌋ ≥ (2/5)|D_g|. Using these notations we have
Theorem 3.11 Let A and B denote two abelian groups (written additively) of orders a and b, respectively, such that a is odd and gcd(b, 6) = 1. Then L* = {(r, 0_B) : r ≠ 0_A, r ∈ A} ∪ {(0_A, r) : r ≠ 0_B, r ∉ R_g* for each g ∈ B} is a (2,−1)-saturating (and hence 3-AP saturating) set of A × B. On the size of L* we have a + (1/2)b − 3/2 ≤ |L*| ≤ a + (3/5)b − 8/5. □
4. Complete 3-AP free sets and saturation in abelian groups
4.1. Probabilistic upper bound on saturating sets
We start with a general bound using probabilistic arguments and prove Theorem 1.15. While it is off by a logarithmic factor from the lower bound, it is still the best we know in several cases (although not in vector spaces). It also highlights the algebraic nature of constructions meeting or being close to the lower bound.
Theorem 4.1 Suppose that the set H saturates the 3-APs in the abelian group G of order n, n>5 odd, and H is of minimum size. Then we have |H| ≤ √((n−1)ln(n−1)) + √(n−1) + 1.

Proof The proof follows the probabilistic argument of the second author on the size of saturating sets of projective planes. Let H_0 be a random subset of G where each element g∈G is chosen independently, uniformly at random with probability p. The parameter p will be determined later on. Let H_1 be the set of elements g∈G which can be obtained as 2g=h+h′ or g=2h−h′ for h,h′∈H_0. Let X denote the random variable which takes the cardinality of H_0 and Y the random variable which takes the cardinality of H_1. Then H_0 ∪ (G∖H_1) provides a set H that saturates the 3-APs in G. We determine the value of p which minimises the expected value of X+n−Y. Clearly, E(X)=pn. We call a pair g_1,g_2 induced by g if g_1+g_2=2g. Hence each element of G∖{g} is contained in exactly one pair induced by a fixed element g. If g∉H_1 then H_0 contains at most one element from each pair induced by g. Thus P(g∉H_1) < (1−p²)^{(n−1)/2}. By the linearity of expectation, we get E(X+n−Y) < n(p + (1−p²)^{(n−1)/2}). If p = √(ln(n−1)/(n−1)), this provides the existence of a set which saturates 3-APs and has cardinality at most n(√(ln(n−1)/(n−1)) + (1−ln(n−1)/(n−1))^{(n−1)/2}) < √((n−1)ln(n−1)) + √(n−1) + 1, taking into account that √(ln(n−1)/(n−1)) + 1/(n−1) < 1 and applying the Bernoulli bound (1+x/m)^m < e^x for x=−ln(n−1) and m=n−1. □

Actually this argument shows that sat((1/2,1/2),G) ≤ √((n−1)ln(n−1)) + √(n−1) + 1. The same bound can easily be obtained for w=(2,−1)-saturation as well.

Remark 4.2 Using the Lovász local lemma, one can prove that the probability P(g∉H_1) can be bounded from below by (1−p²)^{c(n−1)} for some positive constant c, which implies that the order of magnitude of a random construction obtained as above is Θ(√(n ln n)).

4.2.
Complete (2,−1)-avoiding sets of minimum size in cyclic groups via difference sets

The Singer difference sets of the cyclic group of order q²+q+1, q a prime power, provide maximal Sidon sets. These constructions inspire the construction below. We use without explicit reference the most well known facts concerning difference sets, according to the Handbook of Combinatorial Designs.

Theorem 4.3 Put M = 2^{2n}+2^n+1 and denote by D′ a Singer (M, 2^n+1, 1)-difference set of the cyclic group (Z_M,+). Then D′ is a complete 3-AP free subset of Z_M. Moreover, D′ is complete (2,−1)-avoiding of size ⌈√M⌉, so its size reaches the lower bound in Proposition 2.1 part (2).

Remark 4.4 Note that M = 2^{2n}+2^n+1 is a prime for n∈{1,3,9}, and in these cases we obtain complete 3-AP free subsets in the corresponding finite fields of size 2^{2n}+2^n+1. In general, n needs to be a power of 3 for this to hold. Indeed, M can be written as M = (2^{3n}−1)/(2^n−1), and if there exists a proper divisor d of 3n which is not a divisor of n, then gcd(d,n)<d. Now, by applying gcd(2^d−1, 2^n−1) = 2^{gcd(d,n)}−1, we get the identity (2) (2^{gcd(d,n)}−1) · r · (2^{3n}−1)/(2^d−1) = (2^n−1) · M, where 2^d−1 = r·(2^{gcd(d,n)}−1). Hence r | M. But r ≤ 2^d−1 < 2^{2n} < M, and r > 1 as gcd(d,n) < d, so M cannot be a prime in this case.

Proof of Theorem 4.3 According to the First Multiplier Theorem, for a translate D of D′ it holds that 2D=D. We will show that D is complete 3-AP free. Note that this implies that the translates of D are complete 3-AP free as well. First we show that 2a=b+c cannot hold for pairwise distinct a,b,c∈D. Indeed, it would imply a−b=c−a, contradicting the fact that D is a difference set. It follows that D is 3-AP free. Since for each a∈D we also have 2a∈D, the set D∪{0} contains the 3-AP {0,a,2a}. This shows 0∉D and that D saturates {0}. Now take any g∈Z_M∖D, g≠0. Then there exist a,b∈D such that a−b=g, and since 2D=D, we also have a=2c for some c∈D, that is, 2c=b+g. We cannot have c=b, since in that case c=g∈D, a contradiction.
The size of D reaches the lower bound in Proposition 2.1 (2) since 2^n < √M < 2^n+1 = |D|. □

Corollary 4.5 For p∈{7, 73, 262657}, the minimum size of a complete (2,−1)-avoiding subset of F_p is ⌈√p⌉.

In general one can prove the following, along the same lines.

Proposition 4.6 If D is a (v,k,λ)-difference set in the group G with numerical multiplier 2, then D saturates the 3-APs. Moreover, if λ=1 also holds, then D is a complete 3-AP free set.

Proposition 4.7 If D is a (k²+k+1, k+1, 1)-difference set in G, 0∈D, then D is complete (1,−1)-avoiding of size k+1 and hence its size reaches the lower bound in Proposition 2.1. It follows that a((1,−1), Z_{k²+k+1}) = k+1 if k is a prime power.

Proof By definition, if x∈G∖{0} then there exist y,z∈D such that x=y−z. Assume to the contrary that x=y−z for some pairwise distinct x,y,z∈D. Then x−0=y−z, contradicting the fact that D is a (k²+k+1, k+1, 1)-difference set. The last part follows from the existence of Singer difference sets. □

Note that 0∈D can always be obtained, since translates of D are difference sets as well.

4.3. (1/2,1/2)-saturating sets in cyclic groups via additive bases

We continue with upper bounds on sat(W, F_p) for W={(1/2,1/2)}. Here we refer to a construction which provides a good upper bound for the solution of the postage stamp problem, which is closely related to finite additive bases. Recall that the problem was described in the Introduction. Let [a,(t),b] denote {a+t·h : h∈Z} ∩ [a,b].

Construction 4.8 (Mrose) For an arbitrary positive integer t, take a set S of 7t+2 elements as S = ⋃_{j=1}^{5} A(j), where A(1):=[0,(1),t], A(2):=[2t,(t),3t²+t], A(3):=[3t²+2t,(t+1),4t²+2t−1], A(4):=[6t²+4t,(1),6t²+5t], A(5):=[10t²+7t,(1),10t²+8t].

Proposition 4.9 (Mrose) (S+S) ⊇ [0, 14t²+10t−1] holds for the Mrose construction S with parameter t.

We apply this classical construction to prove the following upper bound for sat(3−AP, F_p) via W-saturation for W={(1/2,1/2)}.

Proposition 4.10 Suppose that m is odd.
Then sat(3−AP, Z_m) ≤ sat((1/2,1/2), Z_m) ≤ (√3.5 + o(1))√m ≈ 1.87√m.

Proof Choose the least integer t such that 14t²+10t−1 ≥ m holds, i.e., 14t²+10t−1 ≥ m ≥ 14(t−1)²+10(t−1)−1. Consider the set S (mod m) obtained in Construction 4.8. Since S+S = Z_m, we also have {s/2 + s′/2 : s,s′∈S} = Z_m, since gcd(2,m)=1. Note that for x∉S with x = s/2 + s′/2, s,s′∈S, we cannot have s=s′, and hence x is saturated by two distinct elements of S. Hence S is a (1/2,1/2)-saturating set in Z_m of size |S| = 7t+2, while m ≥ 14t²−18t+3 > (2/7)|S|² − 4|S|. From this, we get that sat((1/2,1/2), Z_m) < 7 + √(49 + 3.5m). □

4.4. Complete (2,−1)-avoiding and (2,−1)-saturating sets in cyclic groups

We will say that S (2,−1)-saturates [x,y] if for each z∈[x,y] there exist a,b∈S such that z=2a−b.

Remark 4.11 Every integer 1 ≤ k ≤ (4/3)(4^n−1) can be written in a unique way as k = k_l·4^l + ⋯ + k_0·4^0, where k_i∈{1,2,3,4} and 0 ≤ l ≤ n−1. This representation of positive integers is known as the bijective base-4 numeral system.

Construction 4.12 Let H_l = {v_{l−1}·4^{l−1} + ⋯ + v_0·4^0 : v_i∈{2,3} for i=0,1,…,l−1}, and K_l = {v_{l−1}·4^{l−1} + ⋯ + v_0·4^0 : v_i∈{1,2,3,4} for i=0,1,…,l−1}, so K_l is the set of integers with exactly l digits in the bijective base-4 numeral system. Note that K_l = [(1/3)(4^l−1), (4/3)(4^l−1)], so |K_l| = 4^l. The smallest integer of H_l is (2/3)(4^l−1), the largest one is 4^l−1, and |H_l| = 2^l.

Theorem 4.13 (1) The set H_i ∪ H_{i+1} ∪ … ∪ H_j (2,−1)-saturates any subset of K_i ∪ K_{i+1} ∪ … ∪ K_j for every pair of positive integers i ≤ j. Given a positive integer n, let m denote an integer such that 4^{n−1} < m ≤ 4^n. (2) If m = 4^n, then consider the elements of H_n and K_n as representatives for the elements of Z_{4^n}. The 2^n elements corresponding to H_n form a complete (2,−1)-avoiding set in Z_m. (3) If (1/3)(4^n−1)+1 ≤ m < 4^n, then consider any interval [x,y], H_n ⊆ [x,y] ⊆ K_n, of size m as a representative for Z_m. Then the elements corresponding to H_n form a (2,−1)-saturating set of size less than √(3m) in Z_m.
If (2/3)(4^n−1) < m, then H_n corresponds to a complete (2,−1)-avoiding set. (4) If 4^{n−1} < m ≤ (1/3)(4^n−1), then let 1 ≤ k ≤ n−1 be maximal such that m ≤ (4^n−4^{k−1})/3. Then S := H_{k−1} ∪ H_k ∪ … ∪ H_{n−1} (2,−1)-saturates I := [(1/3)(4^{k−1}−1), (1/3)(4^{k−1}−1)+m−1] and has size less than √(3m). Considering I as representatives for Z_m, the same holds for the elements corresponding to S.

Proof We start with proving (1). It is enough to prove that H_t saturates K_t. If k has t digits, then consider the t-digit numbers a and b according to Table 4.1.

Table 4.1. The values of a_i and b_i, i ≤ t−1, determined by the value of k_i.

k_i | 1 | 2 | 3 | 4
a_i | 2 | 2 | 3 | 3
b_i | 3 | 2 | 3 | 2

Note that k_i = 2a_i − b_i, and hence k = (2a_{t−1}−b_{t−1})·4^{t−1} + ⋯ + (2a_0−b_0)·4^0 = 2∑_{i=0}^{t−1} a_i·4^i − ∑_{i=0}^{t−1} b_i·4^i = 2a−b, with a,b∈H_t. It follows that b, a, and k=2a−b form a 3-AP with difference a−b.

To prove (2) and (3), first we show that S := ∪_{i=1}^{∞} H_i is 3-AP free in Z. Suppose to the contrary that a<b<c are three elements of S forming an arithmetic progression. Assume that r is maximal such that the coefficients a_r, b_r, c_r of 4^r are not all equal in the expressions of a, b, c as above. Then clearly a_r ≤ b_r ≤ c_r. If a_r = b_r = 2, then c_r = 3 and b−a is at most 4^{r−1}+…+1, while c−b is at least 4^r − 4^{r−1} − … − 1. It follows that b−a ≠ c−b. A similar argument works when a_r = 2 and b_r = c_r = 3.

If we consider the set K_n with modulo 4^n addition, then the (2,−1)-saturation property clearly holds. We show that the (2,−1)-avoiding property holds as well. Suppose to the contrary that a<b<c are three elements of H_n forming an arithmetic progression when considered modulo 4^n. Then for some difference 0<d<4^n we have a ≡ c+d (mod 4^n) and either c−b=d or b−a=d. From a ≡ c+d (mod 4^n) it follows that d ≥ 4^n + (2/3)(4^n−1) − (4^n−1) = (2/3)(4^n−1)+1, and hence d=c−b and d=b−a are impossible because (4^n−1) − (2/3)(4^n−1) = (1/3)(4^n−1) ≥ max{c−b, b−a}. This proves (2). The first part of (3) follows from the fact that √(3m) ≥ √(4^n+2) > 2^n = |H_n|.
To prove the second part, suppose to the contrary that a<b<c are three elements of H_n forming an arithmetic progression when considered modulo m. Then for some difference 0<d<m we have a ≡ c+d (mod m). Since m > (2/3)(4^n−1), it follows that d > (1/3)(4^n−1), and hence d=c−b and d=b−a are impossible because (4^n−1) − (2/3)(4^n−1) = (1/3)(4^n−1) ≥ max{c−b, b−a}.

To prove (4), assume that 1 ≤ k ≤ n−1 is maximal such that 3m ≤ 4^n−4^{k−1}. It follows that 3m > 4^n−4^k. First note that S := H_{k−1} ∪ … ∪ H_{n−1} has size 2^{k−1}+…+2^{n−1} = 2^{k−1}(1+…+2^{n−k}) = 2^{k−1}(2^{n−k+1}−1) = 2^n − 2^{k−1}. Since S (2,−1)-saturates Z := K_{k−1} ∪ … ∪ K_{n−1} and I := [(1/3)(4^{k−1}−1), (1/3)(4^{k−1}−1)+m−1] ⊆ Z, it follows that S (2,−1)-saturates I. We want to show |S| < √(3m), that is, (2^n − 2^{k−1})² < 3m. Clearly, it is enough to prove 4^n + 4^{k−1} − 2^{n+k} < 3m, which follows from 4^n + 4^{k−1} − 2^{n+k} < 4^n − 4^k < 3m. □

4.5. Constructions in abelian groups of composite order

Theorem 4.14 Let G denote a commutative group and H a subgroup of G. Put S = {a_1, a_2, …, a_s} ⊆ H of size s and T = {b_1+H, b_2+H, …, b_t+H} ⊆ G/H of size t. Let w_1, w_2 ∈ Z be such that w_1+w_2=1 holds. Define X = {a_i+b_j : i∈{1,…,s}, j∈{1,…,t}} ⊆ G. (1) If S and T are (w_1,w_2)-saturating in the groups H and G/H, respectively, then X is (w_1,w_2)-saturating in G. (2) Assume that S and T are (w_1,w_2)-avoiding in the groups H and G/H, respectively, and the order of x−y is not a divisor of w_1 in the group H (G/H) for each x,y∈S (for each x,y∈T). Then X is (w_1,w_2)-avoiding in G. (3) Assume that S and T are complete (w_1,w_2)-avoiding in the groups H and G/H, respectively, and the order of x−y is not a divisor of w_1 in the group H (G/H) for each x,y∈S (for each x,y∈T). Then X is complete (w_1,w_2)-avoiding in G.

Proof First suppose that S and T are (w_1,w_2)-saturating sets. Take some c∈G. Then c=a+b, where a∈H and b+H is an element of G/H. By the saturation property, there exist distinct b_i+H, b_j+H ∈ T such that w_1(b_i+H) + w_2(b_j+H) = b+H. If w_1 b_i + w_2 b_j = a′+b for some a′∈H, then take distinct a_f, a_g ∈ S such that w_1 a_f + w_2 a_g = a−a′. Then w_1(a_f+b_i) + w_2(a_g+b_j) = c.
If a_f+b_i = a_g+b_j, then a_f − a_g = b_j − b_i ∈ H, a contradiction since i≠j. It follows that X saturates c∈G. Now assume that the conditions of part (2) hold and w_1(a_f+b_i) + w_2(a_g+b_j) = a_h+b_k for some elements a_f+b_i, a_g+b_j, a_h+b_k of X. Thus w_1(b_i+H) + w_2(b_j+H) = b_k+H, and hence {b_i+H, b_j+H, b_k+H} is a set of size at most 2. If it has size 2, then we may assume b_i ≠ b_k. Then the order of (b_i+H) − (b_k+H) divides w_1, a contradiction. If b_i+H = b_j+H = b_k+H, then w_1 a_f + w_2 a_g = a_h. It follows that the size of {a_f, a_g, a_h} is at most 2. If it has size 2, then we may assume a_f ≠ a_h. Then the order of a_f − a_h divides w_1, a contradiction. Consequently, a_f = a_g = a_h holds, and hence a_f+b_i = a_g+b_j = a_h+b_k. The third part is a direct consequence of the first two. □

In the next result Z_r is considered as {0,1,…,r−1}, with operation the usual addition in Z modulo r.

Corollary 4.15 Put S = {a_1, a_2, …, a_s} ⊆ Z_m and T = {b_1, b_2, …, b_t} ⊆ Z_n. Let w_1, w_2 ∈ Z be such that w_1+w_2=1 holds. Define X = {a_i·n + b_j : i∈{1,…,s}, j∈{1,…,t}} ⊆ Z_{nm}. (1) If S and T are (w_1,w_2)-saturating, then X is (w_1,w_2)-saturating in Z_{nm}. (2) Assume that S and T are (w_1,w_2)-avoiding, the difference of distinct elements of S is not divisible by m, and the difference of distinct elements of T is not divisible by n. Then X is (w_1,w_2)-avoiding in Z_{nm}. (3) Assume that S and T are complete (w_1,w_2)-avoiding, the difference of distinct elements of S is not divisible by m, and the difference of distinct elements of T is not divisible by n. Then X is complete (w_1,w_2)-avoiding in Z_{nm}.

Example 4.16 {0,1,2} is complete (3,−2)-avoiding in Z_9, and hence a((3,−2), Z_{9^n}) = 3^n.

Example 4.17 {0,1} is complete (2,−1)-avoiding in Z_4, and hence a((2,−1), Z_{4^n}) = 2^n, as we already saw in the previous section.

5. Concluding remarks and open problems

In this paper, we proved that in a large family of vector spaces, the minimum size of a complete 3-AP free set is equal to a small absolute constant multiple of the lower bound.
However, it remained an open question to decide whether this is true for every vector space F_q^n, q>2.

Problem 5.1 (Minimum size complete 3-AP free sets) Is it true that a(3−AP, F_q^n) < C·√(q^n) for an absolute constant C, that is, the natural lower bound is tight up to a constant factor?

Concerning cyclic groups, we pose the following

Problem 5.2 Is it true that a(3−AP, Z_m) < a((2,−1), Z_m) < c_m·√m holds for c_m ∈ [1, 1.5], a constant depending only on m, for all large enough values of m?

The first inequality follows from the definition (see Remark 1.10), while we proved the second inequality for a dense set of natural numbers m in Theorem 1.12. It would also be interesting to see an improvement on the constant c_m.

Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgement The authors would like to thank the referees for their helpful suggestions.

Data availability No data was used for the research described in the article.

References

N. Alon, A. Shapira, Linear equations, arithmetic progressions and hypergraph property testing, Theory Comput. 1 (1) (2005), pp. 177-216.
N. Anbar, D. Bartoli, M. Giulietti, I. Platoni, Small complete caps from singular cubics, J. Comb. Des. 22 (10) (2014), pp. 409-424.
D. Bartoli, G. Faina, S. Marcugini, F. Pambianco, A construction of small complete caps in projective spaces, J. Geom. 108 (2017), pp. 215-246.
D. Bartoli, M. Giulietti, G. Marino, O. Polverino, Maximum scattered linear sets and complete caps in Galois spaces, Combinatorica 38 (2017), pp. 255-278.
A. Cossidente, B. Csajbók, G. Marino, F. Pavese, Small complete caps in PG(4n+1, q), Bull. Lond. Math. Soc. 55 (2023), pp. 522-535.
Y.G. Chen, On AP3-covering sequences, C. R. Math. 356 (2) (2018), pp. 121-124.
J. Cilleruelo, Combinatorial problems in finite fields and Sidon sets, Combinatorica 32 (5) (2012), pp. 497-511.
C. Colbourn, J. Dinitz (Eds.), Handbook of Combinatorial Designs, CRC Press, Boca Raton, FL (2007).
E. Croot, V.F. Lev, P.P. Pach, Progression-free sets in Z_4^n are exponentially small, Ann. Math. 185 (2017), pp. 331-337.
I. Czerwinski, A. Pott, Sidon sets, sum-free sets and linear codes, arXiv preprint arXiv:2304.07906 (2023).
A.A. Davydov, M. Giulietti, S. Marcugini, F. Pambianco, New inductive constructions of complete caps in PG(N, q), q even, J. Comb. Des. 18 (2010), pp. 177-201.
A.A. Davydov, P.R. Östergård, Recursive constructions of complete caps, J. Stat. Plan. Inference 95 (1-2) (2001), pp. 167-173.
J.V.D. De Bruyn, D. Gijswijt, On the size of subsets of F_q^n avoiding solutions to linear systems with repeated columns, Electron. J. Comb. 30 (4) (2023).
Y. Edel, J. Bierbrauer, Large caps in small spaces, Des. Codes Cryptogr. 23 (2001), pp. 197-212.
S. Eberhard, F. Manners, The apparent structure of dense Sidon sets, Electron. J. Comb. (2023), P1-33.
Y. Edel, S. Ferret, I. Landjev, L. Storme, The classification of the largest caps in AG(5,3), J. Comb. Theory, Ser. A 99 (1) (2002), pp. 95-110.
J.S. Ellenberg, D. Gijswijt, On large subsets of F_q^n with no three-term arithmetic progression, Ann. Math. 185 (2017), pp. 339-343.
C. Elsholtz, P.P. Pach, Caps and progression-free sets in Z_m^n, Des. Codes Cryptogr. 88 (10) (2020), pp. 2133-2170.
P. Erdős, P. Turán, On a problem of Sidon in additive number theory, and on some related problems, J. Lond. Math. Soc. 16 (4) (1941), pp. 212-215.
J.H. Fang, A note on AP3-covering sequences, Period. Math. Hung. 83 (2021), pp. 67-70.
P. Frankl, R.L. Graham, V. Rödl, On subsets of abelian groups with no 3-term arithmetic progression, J. Comb. Theory, Ser. A 45 (1) (1987), pp. 157-161.
K. Fried, Rare bases for finite intervals of integers, Acta Sci. Math. 52 (3-4) (1988), pp. 303-305.
M. Giulietti, Small complete caps in PG(N, q), q even, J. Comb. Des. 15 (2007), pp. 420-436.
M. Giulietti, Small complete caps in Galois affine spaces, J. Algebraic Comb. 25 (2007), pp. 149-168.
C.S. Güntürk, M.B. Nathanson, A new upper bound for finite additive bases, Acta Arith. 124 (3) (2006), pp. 235-255.
L. Habsieger, On finite additive 2-bases, Trans. Am. Math. Soc. 366 (12) (2014), pp. 6629-6646.
J.W. Hirschfeld, L. Storme, The packing problem in statistics, coding theory and finite projective spaces: update 2001, in: Finite Geometries: Proceedings of the Fourth Isle of Thorns Conference, Springer US, Boston, MA (2001), pp. 201-246.
G. Hofmeister, Thin bases of order two, J. Number Theory 86 (1) (2001), pp. 118-132.
Y. Huang, M. Tait, R. Won, Sidon sets and 2-caps in F_3^n, Involve 12 (6) (2019), pp. 995-1003.
S.Z. Kiss, Cs. Sándor, Q.H. Yang, On generalized Stanley sequences, Acta Math. Hung. 154 (2018), pp. 501-510.
B. Kovács, Z.L. Nagy, Avoiding intersections of given size in finite affine spaces AG(n,2), J. Comb. Theory, Ser. A 209 (2025), Article 105959.
M. Mimura, N. Tokushige, Solving linear equations in a vector space over a finite field, Discrete Math. 344 (12) (2021), Article 112603.
A. Mrose, Untere Schranken für die Reichweiten von Extremalbasen fester Ordnung, Abh. Math. Semin. Univ. Hambg. 48 (1979), pp. 118-124.
Z.L. Nagy, Saturating sets in projective planes and hypergraph covers, Discrete Math. 341 (4) (2018), pp. 1078-1083.
G.P. Nagy, Thin Sidon sets and the nonlinearity of vectorial Boolean functions, arXiv preprint arXiv:2212.05887 (2022).
M. Redman, L. Rose, R. Walker, A small maximal Sidon set in Z_2^n, SIAM J. Discrete Math. 36 (3) (2022), pp. 1861-1867.
I.Z. Ruzsa, Solving a linear equation in a set of integers I, Acta Arith. 65 (3) (1993), pp. 259-282.
I.Z. Ruzsa, A small maximal Sidon set, in: Analytic and Elementary Number Theory: A Tribute to Mathematical Legend Paul Erdős (1998), pp. 55-58.
F. Pambianco, L. Storme, Small complete caps in spaces of even characteristic, J. Comb. Theory, Ser. A 75 (1996), pp. 70-84.
L. Sauermann, Finding solutions with distinct variables to systems of linear equations over F_p, Math. Ann. 386 (1-2) (2023), pp. 1-33.
I.D. Shkredov, Szemerédi's theorem and problems on arithmetic progressions, Russ. Math. Surv. 61 (6) (2006), p. 1101.
R.M. Smullyan, Theory of Formal Systems, Princeton University Press (1961).
F. Tyrrell, New lower bounds for cap sets, Discrete Anal. 20 (2023), pp. 1-18.

1 Current address: Department of Computer Science, ELTE Eötvös Loránd University, Budapest, Hungary.
2 This paper was supported by the János Bolyai Research Scholarship of the Hungarian Academy of Sciences and by the Italian National Group for Algebraic and Geometric Structures and their Applications (GNSAGA-INdAM).

3 The author is supported by the Hungarian Research Grant (NKFIH) No. PD 134953 and No. K 124950 and the University Excellence Fund of Eötvös Loránd University.

© 2025 The Author(s). Published by Elsevier Inc.
12234
https://www.calculatorsoup.com/calculators/math/standard-form-calculator.php
Calculator Soup® Online Calculators

Standard Form Calculator

Calculator Use Find the standard form of a positive or negative number with the standard form calculator. Convert from number format to standard form, a decimal multiplied by a power of 10.

What is Standard Form Standard form is a way of writing a number so it is easier to read. It is often used for very large or very small numbers. Standard form is like scientific notation and is typically used in science and engineering. A number is written in standard form when it is represented as a decimal number times a power of 10. As an example, consider the speed of light, which travels at about 671,000,000 miles per hour. Written in standard form this number is equivalent to 6.71 × 10^8.

How to Convert a Number to Standard Form The standard form of a number is a × 10^b, where a is a number with 1 ≤ |a| < 10 and b is the power of 10 required so that the standard form is mathematically equivalent to the original number.

Example: Convert 459,608 to Standard Form: 459,608 = 4.59608 × 10^5

Example: Convert 0.000380 to Standard Form: 0.000380 = 3.80 × 10^-4

Additional Resources See the Scientific Notation Calculator to add, subtract, multiply and divide numbers in scientific notation or E notation. To round significant figures use the Significant Figures Calculator. If you need a scientific calculator see our resources on scientific calculators.

Cite this content, page or calculator as: Furey, Edward, "Standard Form Calculator" from CalculatorSoup - Online Calculators. Last updated: October 21, 2023. © 2006 - 2025 CalculatorSoup® All rights reserved.
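The conversion rule described above (find the power of 10, then divide) can be sketched in a few lines of Python. This is our own illustration of the procedure, not code from the calculator; the function name `to_standard_form` is an assumption.

```python
# Sketch of the conversion: write x as a * 10**b with 1 <= |a| < 10.
import math

def to_standard_form(x):
    if x == 0:
        return 0.0, 0
    b = math.floor(math.log10(abs(x)))  # b: the power of 10 required
    a = x / 10 ** b                     # a: the decimal part, 1 <= |a| < 10
    return a, b

print(to_standard_form(671_000_000))    # speed of light in mph: a = 6.71, b = 8
print(to_standard_form(0.000380))       # a is approximately 3.8, b = -4
```

Note that floating-point rounding can make `a` differ from the exact decimal in the last digits; for exact decimal output one would work with the digit string instead.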
12235
https://odp.library.tamu.edu/math150-2nd-ed/chapter/5-2-properties-and-graphs-of-exponential-functions/
5.2 Properties and Graphs of Exponential Functions – Functions, Trigonometry, and Systems of Equations (Second Edition)
Functions, Trigonometry, and Systems of Equations (Second Edition)

5.2 Properties and Graphs of Exponential Functions

Of all of the functions we study in this text, exponential functions are possibly the ones which impact everyday life the most. This section introduces exponential functions, while the rest of the chapter explores their properties more thoroughly. Up to this point, we have dealt with functions which involve terms like x^3, x^{3/2}, or x^π – in other words, terms of the form x^p where the base of the term, x, varies but the exponent of each term, p, remains constant. In this chapter, we study functions of the form f(x)=b^x, where the base b is a constant and the exponent x is the variable. We start our exploration of these functions with the time-honored classic, f(x)=2^x. We make a table of function values, plot enough points until we are more or less confident with the shape of the curve, and connect the dots in a pleasing fashion.

Partial Table and Graph of f(x)

A few remarks about the graph of f(x)=2^x are in order. As x→−∞ and takes on values like x=−100 or x=−1000, the function f(x)=2^x takes on values like f(−100)=2^{−100}=1/2^{100} or f(−1000)=2^{−1000}=1/2^{1000}. In other words, as x→−∞, 2^x ≈ 1/(very big (+)) ≈ very small (+). That is, as x→−∞, 2^x→0^+.
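The partial table of values and the end behavior just described can be sketched numerically. The following is an illustrative sketch (the particular x-values are our choice, not the text's):

```python
# Build the kind of partial table used to sketch f(x) = 2**x.
def f(x):
    return 2.0 ** x

xs = [-3, -2, -1, 0, 1, 2, 3]
table = {x: f(x) for x in xs}
print(table)  # {-3: 0.125, -2: 0.25, ..., 3: 8.0}

# Numerically illustrate the end behavior: as x -> -infinity, 2**x -> 0+,
# and as x -> +infinity, 2**x grows without bound.
print(f(-100))  # a very small positive number, 1/2**100
print(f(100))   # a very large number, 2**100
```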
This produces the x-axis, y=0, as a horizontal asymptote to the graph as x→−∞. On the flip side, as x→∞, we find f(100)=2^{100}, f(1000)=2^{1000}, and so on, thus 2^x→∞. We note that by connecting the dots in a pleasing fashion, we are implicitly using the fact that f(x)=2^x is not only defined for all real numbers, but is also continuous. Moreover, we are assuming f(x)=2^x is increasing: that is, if a<b, then 2^a<2^b. While these facts are true, the proofs of these properties are best left to Calculus. For us, we assume these properties in order to state that the domain of f is (−∞,∞), the range of f is (0,∞), and f is increasing, so f is one-to-one, hence invertible. Suppose we wish to study the family of functions f(x)=b^x. Which bases b make sense to study? We find that we run into difficulty if b<0. For example, if b=−2, then the function f(x)=(−2)^x has trouble, for instance, at x=1/2 because (−2)^{1/2}=√(−2) is not a real number. In general, if x is any rational number with an even denominator, then (−2)^x is not defined, so we must restrict our attention to bases b≥0. What about b=0? The function f(x)=0^x is undefined for x≤0 because we cannot divide by 0 and 0^0 is an indeterminate form. For x>0, 0^x=0, so the function f(x)=0^x is the same as the function f(x)=0, x>0. As we know everything about this function, we ignore this case. The only other base we exclude is b=1, because the function f(x)=1^x=1 for all real numbers x. We are now ready for our definition of exponential functions.

Definition 5.3 An exponential function is a function of the form f(x)=b^x where b is a real number, b>0, b≠1. The domain of an exponential function is (−∞,∞).

NOTE: More specifically, f(x)=b^x is called the base b exponential function.

We leave it to the reader to verify that if b>1, then the exponential function f(x)=b^x will share the same basic shape and characteristics as f(x)=2^x. What if 0<b<1? Consider g(x)=(1/2)^x.
We could certainly build a table of values and connect the points, or we could take a step back and note that g(x)=(1/2)^x=(2^{−1})^x=2^{−x}=f(−x) where f(x)=2^x. Per Section 1.6, the graph of f(−x) is obtained from the graph of f(x) by reflecting it across the y-axis.

Relationship between y=2^x and y=2^{−x}

We see that the domain and range of g match those of f, namely (−∞,∞) and (0,∞), respectively. Like f, g is also one-to-one. Whereas f is always increasing, g is always decreasing. As a result, as x→−∞, g(x)→∞, and on the flip side, as x→∞, g(x)→0^+. It shouldn't be too surprising that for all choices of the base 0<b<1, the graph of y=b^x behaves similarly to the graph of g. We summarize the basic properties of exponential functions in the following theorem.

Theorem 5.3 Properties of Exponential Functions Suppose f(x)=b^x.
The domain of f is (−∞,∞) and the range of f is (0,∞).
(0,1) is on the graph of f and y=0 is a horizontal asymptote to the graph of f.
f is one-to-one, continuous and smooth.
If b>1: f is always increasing; as x→−∞, f(x)→0^+; as x→∞, f(x)→∞. The graph of f resembles:
Graph of General Exponential Function for y=f(x)=b^x, b>1
If 0<b<1: f is always decreasing; as x→−∞, f(x)→∞; as x→∞, f(x)→0^+. The graph of f resembles:
Graph of General Exponential Function for y=f(x)=b^x, 0<b<1

Exponential functions also inherit the basic properties of exponents from Theorem 4.3. We formalize these below and use them as needed in the coming examples.

Theorem 5.4 Algebraic Properties of Exponential Functions Let f(x)=b^x be an exponential function (b>0, b≠1) and let u and w be real numbers.
Product Rule: b^{u+w} = b^u b^w
Quotient Rule: b^{u−w} = b^u / b^w
Power Rule: (b^u)^w = b^{uw}

In addition to base 2, which is important to computer scientists, two other bases are used more often than not in scientific and economic circles. The first is base 10. Base 10 is called the common base and is important in the study of intensity (sound intensity, earthquake intensity, acidity, etc.)
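The rules in Theorem 5.4 are easy to spot-check numerically. The base and exponents below are arbitrary choices made for this sketch:

```python
# Spot-check the algebraic properties in Theorem 5.4 for one base b
# and a pair of exponents u, w (arbitrary choices).
import math

b, u, w = 3.0, 1.7, -0.4

product  = math.isclose(b ** (u + w), (b ** u) * (b ** w))  # Product Rule
quotient = math.isclose(b ** (u - w), (b ** u) / (b ** w))  # Quotient Rule
power    = math.isclose((b ** u) ** w, b ** (u * w))        # Power Rule

print(product, quotient, power)  # True True True
```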
The second base is an irrational number, e. Like √2 or π, the decimal expansion of e neither terminates nor repeats, so we represent this number by the letter e. A decimal approximation of e is e≈2.718, so the function f(x)=e^x is an increasing exponential function. The number e is called the natural base for lots of reasons, one of which is that it naturally arises in the study of growth functions in Calculus. We will more formally discuss the origins of e in Section 5.7. It is time for an example.

Example 5.2.1
Example 5.2.1.1a Graph the following functions by starting with a basic exponential function and using transformations, Theorem 1.12. Track at least three points and the horizontal asymptote through the transformations. F(x)=2(1/3)^{x−1}

Solution: Graph F(x)=2(1/3)^{x−1}. The base of the exponent in F(x)=2(1/3)^{x−1} is 1/3, so we start with the graph of f(x)=(1/3)^x. To use Theorem 1.12, we first need to choose some control points on the graph of f(x)=(1/3)^x. Because we are instructed to track three points (and the horizontal asymptote, y=0) through the transformations, we choose the points corresponding to x=−1, x=0, and x=1: (−1,3), (0,1), and (1,1/3), respectively. Next, we need to determine how to modify f(x)=(1/3)^x to obtain F(x)=2(1/3)^{x−1}. The key is to recognize that the argument, or inside, of the function is the exponent, and the outside is anything outside the base of 1/3. Using these principles as a guide, we find F(x)=2f(x−1). Per Theorem 1.12, we first add 1 to the x-coordinates of the points on the graph of y=f(x), shifting the graph to the right 1 unit. Next, we multiply the y-coordinates of each point on this new graph by 2, vertically stretching the graph by a factor of 2. Looking point by point, we have (−1,3)→(0,3)→(0,6), (0,1)→(1,1)→(1,2), and (1,1/3)→(2,1/3)→(2,2/3). The horizontal asymptote, y=0, remains unchanged under the horizontal shift and the vertical stretch because 2⋅0=0. Below we graph y=f(x)=(1/3)^x on the left and y=F(x)=2(1/3)^{x−1} on the right.
Graphical Representation of Changes in Example 5.2.1.1a

As always, we can check our answer by verifying each of the points (0,6), (1,2), (2,2/3) is on the graph of F(x)=2(1/3)^{x−1} by checking F(0)=6, F(1)=2, and F(2)=2/3. We can check the end behavior as well; that is, as x→−∞, F(x)→∞ and as x→∞, F(x)→0. We leave these calculations to the reader.

Example 5.2.1.1b Graph the following functions by starting with a basic exponential function and using transformations, Theorem 1.12. Track at least three points and the horizontal asymptote through the transformations. G(t)=2−e^{−t}

Solution: Graph G(t)=2−e^{−t}. The base of the exponential in G(t)=2−e^{−t} is e, so we start with the graph of g(t)=e^t. Note that as e is an irrational number, we will use the approximation e≈2.718 when plotting points. However, when it comes to tracking and labeling said points, we do so with exact coordinates, that is, in terms of e. We choose points corresponding to t=−1, t=0, and t=1: (−1,e^{−1})≈(−1,0.368), (0,1), and (1,e)≈(1,2.718), respectively. Next, we need to determine how the formula for G(t)=2−e^{−t} can be obtained from the formula g(t)=e^t. Rewriting G(t)=−e^{−t}+2, we find G(t)=−g(−t)+2. Following Theorem 1.12, we first multiply the t-coordinates of the graph of y=g(t) by −1, effecting a reflection across the y-axis. Next, we multiply each of the y-coordinates by −1, which reflects the graph about the t-axis. Finally, we add 2 to each of the y-coordinates of the graph from the second step, which shifts the graph up 2 units. Tracking points, we have (−1,e^{−1})→(1,e^{−1})→(1,−e^{−1})→(1,−e^{−1}+2)≈(1,1.632), (0,1)→(0,1)→(0,−1)→(0,1), and (1,e)→(−1,e)→(−1,−e)→(−1,−e+2)≈(−1,−0.718). The horizontal asymptote is unchanged by the reflections, but is shifted up 2 units: y=0→y=2. We graph g(t)=e^t below on the left and the transformed function G(t)=−e^{−t}+2 below on the right. As usual, we can check our answer by verifying the indicated points do, in fact, lie on the graph of y=G(t), along with checking end behavior.
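The point-tracking above can be re-traced in code as a sanity check: each control point (t, y) on g(t) = e^t is sent to (−t, −y + 2), which should land on the graph of G. This is a sketch of that verification, not part of the text:

```python
# Verify the transformed control points from Example 5.2.1.1b lie on G.
import math

def g(t):
    return math.e ** t          # basic function g(t) = e**t

def G(t):
    return 2 - math.e ** (-t)   # transformed function G(t) = 2 - e**(-t)

control = [(-1, g(-1)), (0, g(0)), (1, g(1))]
# Reflection across the y-axis, reflection across the t-axis, shift up 2:
transformed = [(-t, -y + 2) for (t, y) in control]

for t, y in transformed:
    assert math.isclose(G(t), y)   # every transformed point is on y = G(t)

print(transformed)  # approximately [(1, 1.632), (0, 1.0), (-1, -0.718)]
```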
We leave these details to the reader.

Graphical Representation of Changes in Example 5.2.1.1b

Example 5.2.1.2 Write a formula for the graph of the function below. Assume the base of the exponential is 2.

Graph for Example 5.2.1.2

Solution: Write a formula for the graph of the function above. Assume the base of the exponential is 2. We are told to assume the base of the exponential function is 2, thus we assume the function F(x) is the result of transforming the graph of f(x)=2^x using Theorem 1.12. This means we are tasked with finding values for a, b, h, and k so that F(x)=a f(bx−h)+k=a⋅2^{bx−h}+k. Because the horizontal asymptote to the graph of y=f(x)=2^x is y=0 and the horizontal asymptote to the graph of y=F(x) is y=4, we know the vertical shift is 4 units up, so k=4. Next, looking at how the graph of F approaches its horizontal asymptote, it stands to reason the graph of f(x)=2^x undergoes a reflection across the x-axis, meaning a<0. For simplicity, we assume a=−1 and see if we can find values for b and h that go along with this choice. Because (−1,0) and (0,−4) are on the graph of F(x)=−2^{bx−h}+4, we know F(−1)=0 and F(0)=−4. From F(−1)=0, we have −2^{−b−h}+4=0 or 2^{−b−h}=4=2^2. Hence, −b−h=2 is one solution. Next, using F(0)=−4, we get −2^{−h}+4=−4 or 2^{−h}=8=2^3. From this, we have −h=3 so h=−3. Putting this together with −b−h=2, we get −b+3=2 so b=1. Hence, one solution to the problem is F(x)=−2^{x+3}+4. To check our answer, we leave it to the reader to verify F(−1)=0, F(0)=−4, as x→−∞, F(x)→4 and as x→∞, F(x)→−∞. Because we made a simplifying assumption (a=−1), we may well wonder if our solution is the only solution. Indeed, we started with what amounts to three pieces of information and set out to determine the value of four constants. We leave this for a thoughtful discussion in Exercise 14. Our next example showcases an important application of exponential functions: economic depreciation.
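Before moving on, the candidate formula from Example 5.2.1.2 can be checked numerically against the data read off the graph (a quick sketch of the check left to the reader):

```python
# Verify F(x) = -2**(x+3) + 4 against the graph data:
# F(-1) = 0, F(0) = -4, and horizontal asymptote y = 4 as x -> -infinity.
def F(x):
    return -(2.0 ** (x + 3)) + 4

print(F(-1))   # 0.0
print(F(0))    # -4.0

# As x -> -infinity, 2**(x+3) -> 0, so F(x) -> 4 (the horizontal asymptote).
print(F(-50))  # very close to 4
```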
Example 5.2.2
Example 5.2.2.1 The value of a car can be modeled by V(t)=25(0.8)^t, where t≥0 is the number of years the car is owned and V(t) is the value in thousands of dollars. Calculate and interpret V(0), V(1), and V(2).

Solution: Calculate and interpret V(0), V(1), and V(2). We find V(0)=25(0.8)^0=25⋅1=25, V(1)=25(0.8)^1=25⋅0.8=20 and V(2)=25(0.8)^2=25⋅0.64=16. t represents the number of years the car has been owned, so t=0 corresponds to the purchase price of the car. V(t) returns the value of the car in thousands of dollars, so V(0)=25 means the car is worth $25,000 when first purchased. Likewise, V(1)=20 and V(2)=16 mean the car is worth $20,000 after one year of ownership and $16,000 after two years, respectively.

Example 5.2.2.2 The value of a car can be modeled by V(t)=25(0.8)^t, where t≥0 is the number of years the car is owned and V(t) is the value in thousands of dollars. Compute and interpret the average rate of change of V over the intervals [0,1], [0,2] and [1,2].

Solution: Compute and interpret the average rate of change of V over the intervals [0,1], [0,2] and [1,2]. Recall that to find the average rate of change of V over an interval [a,b], we compute (V(b)−V(a))/(b−a). For the interval [0,1], we find (V(1)−V(0))/(1−0)=(20−25)/1=−5, which means over the course of the first year of ownership, the value of the car depreciated, on average, at a rate of $5000 per year. For the interval [0,2], we compute (V(2)−V(0))/(2−0)=(16−25)/2=−4.5, which means over the course of the first two years of ownership, the car lost, on average, $4500 per year in value. Finally, we find for the interval [1,2], (V(2)−V(1))/(2−1)=(16−20)/1=−4, meaning the car lost, on average, $4000 in value per year between the first and second years. Notice that the car lost more value over the first year ($5000) than it did the second year ($4000), and these losses average out to the average yearly loss over the first two years ($4500 per year).
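The values and average rates of change just computed can be reproduced in a few lines (a sketch of the arithmetic, with rounding to tidy floating-point noise):

```python
# Depreciation model from Example 5.2.2: V(t) = 25 * 0.8**t (thousands of $).
def V(t):
    return 25 * 0.8 ** t

values = [round(V(t), 6) for t in range(3)]
print(values)  # [25.0, 20.0, 16.0]

def avg_rate(a, b):
    """Average rate of change of V over [a, b]."""
    return (V(b) - V(a)) / (b - a)

print(round(avg_rate(0, 1), 6))  # -5.0  ($5000/yr over the first year)
print(round(avg_rate(0, 2), 6))  # -4.5  ($4500/yr on average over two years)
print(round(avg_rate(1, 2), 6))  # -4.0  ($4000/yr over the second year)
```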
Example 5.2.2.3 The value of a car can be modeled by V(t)=25(0.8)^t, where t≥0 is the number of years the car is owned and V(t) is the value in thousands of dollars. Determine and interpret V(1)/V(0), V(2)/V(1) and V(2)/V(0).

Solution: Determine and interpret V(1)/V(0), V(2)/V(1) and V(2)/V(0). We compute: V(1)/V(0)=20/25=0.8, V(2)/V(1)=16/20=0.8, and V(2)/V(0)=16/25=0.64. The ratio V(1)/V(0)=0.8 can be rewritten as V(1)=0.8V(0), which means that the value of the car after 1 year, V(1), is 0.8 times, or 80% of, the initial value of the car, V(0). Similarly, the ratio V(2)/V(1)=0.8, rewritten as V(2)=0.8V(1), means the value of the car after 2 years, V(2), is 0.8 times, or 80% of, the value of the car after one year, V(1). Finally, the ratio V(2)/V(0)=0.64, or V(2)=0.64V(0), means the value of the car after 2 years, V(2), is 0.64 times, or 64% of, the initial value of the car, V(0). Note that this last result tracks with the previous answers. Because V(1)=0.8V(0) and V(2)=0.8V(1), we get V(2)=0.8V(1)=0.8(0.8V(0))=0.64V(0). Also note it is no coincidence that the base of the exponential, 0.8, has shown up in these calculations, as we'll see in the next problem.

Example 5.2.2.4 The value of a car can be modeled by V(t)=25(0.8)^t, where t≥0 is the number of years the car is owned and V(t) is the value in thousands of dollars. For t≥0, find and interpret V(t+1)/V(t) and V(t+k)/V(t).

Solution: For t≥0, find and interpret V(t+1)/V(t) and V(t+k)/V(t). Using properties of exponents, we find V(t+1)/V(t) = 25(0.8)^{t+1}/(25(0.8)^t) = (0.8)^{t+1−t} = 0.8. Rewriting, we have V(t+1)=0.8V(t). This means after one year, the value of the car, V(t+1), is only 80% of the value it was a year ago, V(t). Similarly, we find V(t+k)/V(t) = 25(0.8)^{t+k}/(25(0.8)^t) = (0.8)^{t+k−t} = (0.8)^k which, rewritten, says V(t+k)=V(t)(0.8)^k. This means in k years' time, the value of the car, V(t+k), is only (0.8)^k times what it was worth k years ago, V(t). These results shouldn't be too surprising.
Verbally, the function V(t)=25(0.8)^t says to multiply 25 by t factors of 0.8. Therefore, for each additional year, we are multiplying the value of the car by an additional factor of 0.8.

Example 5.2.2.5 The value of a car can be modeled by V(t)=25(0.8)^t, where t≥0 is the number of years the car is owned and V(t) is the value in thousands of dollars. Compute and interpret (V(1)−V(0))/V(0), (V(2)−V(1))/V(1), and (V(2)−V(0))/V(0).

Solution: Compute and interpret (V(1)−V(0))/V(0), (V(2)−V(1))/V(1), and (V(2)−V(0))/V(0). We compute (V(1)−V(0))/V(0)=(20−25)/25=−0.2, (V(2)−V(1))/V(1)=(16−20)/20=−0.2 and (V(2)−V(0))/V(0)=(16−25)/25=−0.36. The ratio (V(1)−V(0))/V(0) computes the ratio of the difference in the value of the car after the first year of ownership, V(1)−V(0), to the initial value, V(0). We find this to be −0.2, or a 20% decrease in value. This makes sense, as we know from our answer to number 3 that the value of the car after 1 year, V(1), is 80% of the initial value, V(0). Indeed: (V(1)−V(0))/V(0) = V(1)/V(0) − V(0)/V(0) = V(1)/V(0) − 1, and because V(1)/V(0)=0.8, we get (V(1)−V(0))/V(0) = 0.8−1 = −0.2. Likewise, the ratio (V(2)−V(1))/V(1)=−0.2 means the value of the car has lost 20% of its value over the course of the second year of ownership. Finally, the ratio (V(2)−V(0))/V(0)=−0.36 means that over the first two years of ownership, the car has depreciated 36% of its initial purchase price. Again, this tracks with the result of number 3, which tells us that after two years, the car is only worth 64% of its initial purchase price.

Example 5.2.2.6 The value of a car can be modeled by V(t)=25(0.8)^t, where t≥0 is the number of years the car is owned and V(t) is the value in thousands of dollars. For t≥0, find and interpret (V(t+1)−V(t))/V(t) and (V(t+k)−V(t))/V(t).

Solution: For t≥0, find and interpret (V(t+1)−V(t))/V(t) and (V(t+k)−V(t))/V(t).
Using properties of fractions and exponents, we get: (V(t+1)−V(t))/V(t) = (25(0.8)^{t+1} − 25(0.8)^t)/(25(0.8)^t) = 25(0.8)^{t+1}/(25(0.8)^t) − 25(0.8)^t/(25(0.8)^t) = 0.8−1 = −0.2, so after one year, the value of the car, V(t+1), has lost 20% of the value it had a year ago, V(t). Similarly, we find: (V(t+k)−V(t))/V(t) = (25(0.8)^{t+k} − 25(0.8)^t)/(25(0.8)^t) = 25(0.8)^{t+k}/(25(0.8)^t) − 25(0.8)^t/(25(0.8)^t) = (0.8)^k − 1, so after k years' time, the value of the car, V(t+k), has decreased by (1−(0.8)^k)⋅100% of the value it had k years ago, V(t).

Example 5.2.2.7 The value of a car can be modeled by V(t)=25(0.8)^t, where t≥0 is the number of years the car is owned and V(t) is the value in thousands of dollars. Graph y=V(t) starting with the graph of f(t)=(0.8)^t and using transformations.

Solution: Graph y=V(t) starting with the graph of f(t)=(0.8)^t and using transformations. To graph y=25(0.8)^t, we start with the basic exponential function f(t)=(0.8)^t. The base b=0.8 satisfies 0<b<1, therefore the graph of y=f(t) is decreasing. We plot the y-intercept (0,1) and two other points, (−1,1.25) and (1,0.8), and label the horizontal asymptote y=0. To obtain the graph of y=25(0.8)^t=25f(t), we multiply all of the y-values in the graph by 25 (including the y-value of the horizontal asymptote) in accordance with Theorem 1.10 to obtain the points (−1,31.25), (0,25) and (1,20). The horizontal asymptote remains the same, because 25⋅0=0. Finally, we restrict the domain to [0,∞) to fit with the applied domain given to us.

Graphical Representation of Changes in Example 5.2.2.7

Example 5.2.2.8 The value of a car can be modeled by V(t)=25(0.8)^t, where t≥0 is the number of years the car is owned and V(t) is the value in thousands of dollars. Interpret the horizontal asymptote of the graph of y=V(t).

Solution: Interpret the horizontal asymptote of the graph of y=V(t). We see from the graph of V that its horizontal asymptote is y=0. This means as the car gets older, its value diminishes to 0.
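The identities from numbers 4 and 6 hold for every t, not just the ones we happened to check by hand. A quick sketch confirming this over a handful of arbitrary t and k values:

```python
# Check the general identities: V(t+1)/V(t) = 0.8 for every t, and
# (V(t+k) - V(t))/V(t) = 0.8**k - 1, independent of t.
import math

def V(t):
    return 25 * 0.8 ** t

for t in [0, 1.5, 7, 20]:
    assert math.isclose(V(t + 1) / V(t), 0.8)
    for k in [1, 2, 5]:
        assert math.isclose((V(t + k) - V(t)) / V(t), 0.8 ** k - 1)

print("constant unit multiplier confirmed: b = 0.8")
```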
Example 5.2.2.9 The value of a car can be modeled by V(t)=25(0.8)^t, where t≥0 is the number of years the car is owned and V(t) is the value in thousands of dollars. Using technology and your graph, determine how long it takes for the car to depreciate to (a) one half its original value and (b) one quarter of its original value. Round your answers to the nearest hundredth.

Solution: Using technology and your graph, determine how long it takes for the car to depreciate to (a) one half its original value and (b) one quarter of its original value. Round your answers to the nearest hundredth. We know the value of the car, brand new, is $25,000, so when we are asked to find when the car depreciates to one half and one quarter of this value, we are trying to find when the value of the car dips to $12,500 and $6,250, respectively. V(t) is measured in thousands of dollars, so this translates to solving the equations V(t)=12.5 and V(t)=6.25. Because we have yet to develop any analytic means to solve equations like 25(0.8)^t=12.5 (remember t is in the exponent here), we are forced to approximate solutions to this equation numerically or use a graphing utility. Choosing the latter, we graph y=V(t) along with the lines y=12.5 and y=6.25 and look for intersection points. We find y=V(t) and y=12.5 intersect at (approximately) (3.106, 12.5), which means the car depreciates to half its initial value in (approximately) 3.11 years. Similarly, we find the car depreciates to one-quarter its initial value after (approximately) 6.21 years.

Graph of V(t)=25f(t) for Example 5.2.2.9

Some remarks about Example 5.2.2 are in order. First, the function in the previous example is called a decay curve. Increasing exponential functions are used to model growth curves, and we shall see several different examples of those in Section 5.7. Second, as seen in numbers 3 and 4, V(t+1)=0.8V(t).
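The intersection estimates in number 9 can be confirmed with logarithms, a tool developed later in this chapter; the sketch below previews that approach. Solving 25(0.8)^t = 12.5 gives t = log(0.5)/log(0.8), and similarly for the quarter value:

```python
# Half- and quarter-value times for V(t) = 25*0.8**t via logarithms
# (previewing the analytic methods of Sections 5.5-5.6).
import math

t_half    = math.log(0.5)  / math.log(0.8)   # solves 0.8**t = 0.5
t_quarter = math.log(0.25) / math.log(0.8)   # solves 0.8**t = 0.25

print(round(t_half, 2))     # 3.11
print(round(t_quarter, 2))  # 6.21

# The quarter-value time is exactly twice the half-value time,
# matching the footnote's observation.
print(math.isclose(t_quarter, 2 * t_half))  # True
```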
That is to say, the function V has a constant unit multiplier, in this case 0.8, because to obtain the function value V(t+1), we multiply the function value V(t) by 0.8. It is no coincidence that the multiplier here is the base of the exponential, 0.8. Indeed, exponential functions of the form f(x)=a⋅b^x have a constant unit multiplier, b. To see this, note f(x+1)/f(x) = a⋅b^{x+1}/(a⋅b^x) = b^1 = b. Hence f(x+1)=f(x)⋅b. This will prove useful to us in Section 5.7 when making decisions about whether or not a data set represents exponential growth or decay. We close this section with another important application of exponential functions, Newton's Law of Cooling.

Example 5.2.3
Example 5.2.3.1 According to Newton's Law of Cooling, the temperature of coffee T(t) (in degrees Fahrenheit) t minutes after it is served can be modeled by T(t)=70+90e^{−0.1t}. Compute and interpret T(0).

Solution: Compute and interpret T(0). T(0)=70+90e^{−0.1(0)}=160, thus the temperature of the coffee when it is served is 160∘F.

Example 5.2.3.2 According to Newton's Law of Cooling, the temperature of coffee T(t) (in degrees Fahrenheit) t minutes after it is served can be modeled by T(t)=70+90e^{−0.1t}. Sketch the graph of y=T(t) using transformations.

Solution: Sketch the graph of y=T(t) using transformations. To graph y=T(t) using transformations, we start with the basic function f(t)=e^t. As in Example 5.2.1, we track the points (−1,e^{−1})≈(−1,0.368), (0,1), and (1,e)≈(1,2.718), along with the horizontal asymptote y=0, through each of the transformations. To use Theorem 1.12, we rewrite T(t)=70+90e^{−0.1t}=90e^{−0.1t}+70=90f(−0.1t)+70. Following Theorem 1.12, we first divide the t-coordinates of each point on the graph of y=f(t) by −0.1, which results in a horizontal expansion by a factor of 10 as well as a reflection about the y-axis. Next, we multiply the y-values of the points on this new graph by 90, which effects a vertical stretch by a factor of 90.
Last but not least, we add 70 to all of the y-coordinates of the points on this second graph, which shifts the graph upwards 70 units. Tracking points, we have (−1,e^{−1})→(10,e^{−1})→(10,90e^{−1})→(10,90e^{−1}+70)≈(10,103.112), (0,1)→(0,1)→(0,90)→(0,160), and (1,e)→(−10,e)→(−10,90e)→(−10,90e+70)≈(−10,314.62). The horizontal asymptote y=0 is unaffected by the horizontal expansion, reflection about the y-axis, and the vertical stretch, but the vertical shift moves the horizontal asymptote up 70 units: y=0→y=70. After restricting the domain to t≥0, we get the graph on the right.

Graphical Representation of Changes in Example 5.2.3.2

Example 5.2.3.3 According to Newton's Law of Cooling, the temperature of coffee T(t) (in degrees Fahrenheit) t minutes after it is served can be modeled by T(t)=70+90e^{−0.1t}. Determine and interpret the behavior of T(t) as t→∞.

Solution: Determine and interpret the behavior of T(t) as t→∞. We can determine the behavior of T(t) as t→∞ in two ways. First, we can employ the number sense developed in Chapter 3. That is, as t→∞, we get T(t)=70+90e^{−0.1t} ≈ 70+90e^{very big (−)}. As e>1, e^{very big (−)} ≈ very small (+). The larger t becomes, the smaller e^{−0.1t} becomes, so the term 90e^{−0.1t} ≈ very small (+). Hence, T(t)=70+90e^{−0.1t} ≈ 70 + very small (+) ≈ 70. Alternatively, we can look to the graph of y=T(t). We know the horizontal asymptote is y=70, which means as t→∞, T(t)≈70. In either case, we find that as time goes by, the temperature of the coffee cools to 70∘ Fahrenheit, ostensibly room temperature.

5.2.1 Section Exercises

In Exercises 1 – 8, sketch the graph of g by starting with the graph of f and using transformations. Track at least three points of your choice and the horizontal asymptote through the transformations. State the domain and range of g.
1. f(x)=2^x and g(x)=2^x − 1
2. f(x)=(1/3)^x and g(x)=(1/3)^{x−1}
3. f(x)=3^x and g(x)=3^{−x}+2
4. f(x)=10^x and g(x)=10^{(x+1)/2}−20
5. f(t)=(0.5)^t and g(t)=100(0.5)^{0.1t}
6. f(t)=(1.25)^t and g(t)=1−(1.25)^{t−2}
7. f(t)=e^t and g(t)=8−e^{−t}
8. f(t)=e^t and g(t)=10e^{−0.1t}

In Exercises 9 – 12, the graph of an exponential function is given. Find a formula for the function in the form F(x)=a⋅2^{bx−h}+k.

Graph for Exercise 9: Points: (−1,1), (0,2), (1,5/2); Asymptote: y=3
Graph for Exercise 10: Points: (5/2,1/2), (3,1), (7/2,2); Asymptote: y=0
Graph for Exercise 11: Points: (−1/2,6), (0,3), (1/2,3/2); Asymptote: y=0
Graph for Exercise 12

13. Find a formula for each graph in Exercises 9 – 12 of the form G(x)=a⋅4^{bx−h}+k. Did you change your solution methodology? What is the relationship between your answers for F(x) and G(x) for each graph?

14. In Example 5.2.1 number 2, we obtained the solution F(x)=−2^{x+3}+4 as one formula for the given graph by making a simplifying assumption that a=−1. This exercise explores whether there are any other solutions for different choices of a. Show G(x)=−4⋅2^{x+1}+4 also fits the data for the given graph, and use properties of exponents to show G(x)=F(x). (Use the fact that 4=2^2…) With help from your classmates, find solutions to Example 5.2.1 number 2 using a=−8, a=−16 and a=−1/2. Show all your solutions can be rewritten as F(x)=−2^{x+3}+4. Using properties of exponents and the fact that the range of 2^x is (0,∞), show that any function of the form f(x)=−a⋅2^{bx−h}+k for a>0 can be rewritten as f(x)=−2^c 2^{bx−h}+k=−2^{bx−h+c}+k. Relabeling, this means every function of the form f(x)=−a⋅2^{bx−h}+k with four parameters (a, b, h, and k) can be rewritten as f(x)=−2^{bx−H}+k, a formula with just three parameters: b, H, and k. Conclude that every solution to Example 5.2.1 number 2 reduces to F(x)=−2^{x+3}+4.

In Exercises 15 – 20, write the given function as a nontrivial decomposition of functions as directed.
15. For f(x)=e^{−x}+1, find functions g and h so that f=g+h
16. For f(x)=e^{2x}−x, find functions g and h so that f=g−h
17. For f(t)=t^2 e^{−t}, find functions g and h so that f=gh
18. For r(x)=(e^x−e^{−x})/(e^x+e^{−x}), find functions f and g so r=f/g
19. For k(x)=e^{−x^2}, find functions f and g so that k=g∘f
20. For s(x)=e^{2x−1}, find functions f and g so s=g∘f
21. Show that the average rate of change of a function over the interval [x,x+2] is the average of the average rates of change of the function over the intervals [x,x+1] and [x+1,x+2]. Can the same be said for the average rate of change of the function over [x,x+3] and the average of the average rates of change over [x,x+1], [x+1,x+2], and [x+2,x+3]? Generalize.
22. Which is larger: e^π or π^e? How do you know? Can you find a proof that doesn't use technology?

Section 5.2 Exercise Answers can be found in the Appendix.

See the discussion of real number exponents in Section 4.2. ↵
or, as we defined real number exponents in Section 4.2, if x is an irrational number …↵
Meaning, graph some more examples on your own. ↵
Recall that this means the graph of f has no sharp turns or corners. ↵
The digital world is composed of bits which take on one of two values: 0 or "off" and 1 or "on." ↵
This is the only solution. f(x)=2^x, so the equation 2^{−b−h}=2^2 is equivalent to the functional equation f(−b−h)=f(2). f is one-to-one, so we know this is true only when −b−h=2. ↵
It turns out that for any function f, the average rate of change over the interval [x,x+2] is the average of the average rates of change of f over [x,x+1] and [x+1,x+2]. See Exercise 21. ↵
It turns out that it takes exactly twice as long for the car to depreciate to one-quarter of its initial value as it takes to depreciate to half its initial value. Can you see why? ↵
We will discuss this in greater detail in Section 5.7. ↵
We will discuss this in greater detail in Section 5.7. ↵
We will discuss this in greater detail in Section 5.7.
↵

definition: A function of the form b raised to the power x, where b is a positive real number not equal to one, and x is any real number.

License: Functions, Trigonometry, and Systems of Equations (Second Edition) Copyright © 2025 by Texas A&M Mathematics Department is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.
https://www.quora.com/What-is-the-solution-to-the-equation-x-2-x-1-0
What is the solution to the equation x^2 - x + 1 = 0? - Quora Something went wrong. Wait a moment and try again. Try again Skip to content Skip to search Sign In Mathematics Quadratic Formula Equation Solving and Ineq... Problem Solving Basic Algebra Algebra Problem Solving Solving Quadratic Equatio... Algebra Quadratic Polynomials 5 What is the solution to the equation x^2 - x + 1 = 0? All related (47) Sort Recommended Allen Ries Math Major University of Alberta · Author has 25.1K answers and 9.7M answer views ·10mo What is the solution to the equation x^2 - x + 1 = 0? x²-x+1=0 x²-x = -1 x²-x+1/4 = -0.75 (x-0.5)² = -0.75 (x-0.5) = ±√-0.75 x = 0.5±√-0.75 Upvote · 9 1 Sponsored by Grammarly Stuck on the blinking cursor? Move your great ideas to polished drafts without the guesswork. Try Grammarly today! Download 99 34 Related questions More answers below What is the solution to the equation [math]\displaystyle x^3 + x^2 - x - 1=0[/math]? What is the solution for the equation X+X=X^2-X=1 when X is equal to 0 and when X is equal to -1/2? What is the solution of (1-x-x^2-…) × (2-x-x^2-…) =0? Can anyone solve the equation x^2-x+1=0? What are the solutions for the equationmath+(x^2+\frac{1}{x^2})+(x+\frac{1}{x})=0[/math]? Donald Hartig PhD in Mathematics, University of California, Santa Barbara (Graduated 1970) · Author has 7.4K answers and 2.8M answer views ·10mo [math]0=x^2-x+1=(x-\frac12)^2+1-\frac14[/math] [math]\hspace{4ex}\implies x=\frac12\pm\frac12\sqrt{3}\,{\rm i}[/math] Upvote · Gary Russell Professor Emeritus, University of Iowa · Author has 6K answers and 3.1M answer views ·Updated Jun 3 Related What will be the solution of the equation, X³-x²+1=0? According to the Rational Root Theorem, the only two possible rational roots are -1 and +1. Neither work, so all roots are irrational or complex. Since a cubic has at least one real root, we must have an irrational root. There do exist several explicit algorithms for the cubic. 
However, they are not easy to follow and still require you to take a square root. This post illustrates a numerical approach to get one root, followed by the use of the Quadratic Formula for the last two roots. It assumes that you know the Vieta Rules for cubics and quadratics. It also assumes that you know first-semester calculus.

Let y = f(x) = x^3 - x^2 + 1. Graphing the function, we see that there is a root near -0.6. To find this root, we use the Newton Method. (See link at the end of this post.) Suppose that x = q is an estimate of a root of f(x) = 0. Then a better estimate is

q = q - f(q)/(df/dx)

where df/dx is evaluated at q. You iterate, letting the next q be the previous q. In this case,

f(x) = x^3 - x^2 + 1
df/dx = 3x^2 - 2x = x(3x - 2)

So, starting this at q = -0.6, you get q = -0.754879 after three iterations.

Now, call the three roots of f(x): r, s, t. We know r = -0.754879. From the Vieta Rules for cubics, we know that

r + s + t = 1
rst = -1

or

s + t = 1 - r
st = -1/r

Accordingly, due to the Vieta Rules for quadratics, s and t are the roots of

g(x) = x^2 + (r - 1)x - 1/r

with discriminant

D = (r - 1)^2 + 4/r

(D is negative here, so s and t are complex.) The roots are

s = (1 - r)/2 + sqrt(D)/2
t = (1 - r)/2 - sqrt(D)/2

or

s = 0.87744 - 0.744859 i
t = 0.87744 + 0.744859 i

COMMENT: For more information on the Newton Method, click on this link.
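The recipe above (Newton iteration for the real root, then the Quadratic Formula via the Vieta relations for the complex pair) can be sketched in a few lines of Python; the function names here are ours, not the poster's:

```python
# Newton's method for the real root of f(x) = x^3 - x^2 + 1, then the
# Vieta relations + Quadratic Formula for the remaining complex pair.
import cmath

def f(x):
    return x**3 - x**2 + 1

def fprime(x):
    return 3 * x**2 - 2 * x   # = x(3x - 2)

def newton(q, iterations=20):
    """Iterate q <- q - f(q)/f'(q) starting from the estimate q."""
    for _ in range(iterations):
        q = q - f(q) / fprime(q)
    return q

r = newton(-0.6)              # real root, approximately -0.754878
# Vieta: r + s + t = 1 and rst = -1, so s and t solve
# x^2 + (r - 1)x - 1/r = 0 with discriminant D = (r - 1)^2 + 4/r.
D = (r - 1) ** 2 + 4 / r      # negative, so s and t are complex
s = (1 - r) / 2 + cmath.sqrt(D) / 2
t = (1 - r) / 2 - cmath.sqrt(D) / 2
```

Running this reproduces the values quoted in the answer to about six decimal places.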
Newton's method - Wikipedia

Algorithm for finding zeros of functions. [Figure: An illustration of Newton's method.] In numerical analysis, the Newton–Raphson method, also known simply as Newton's method, named after Isaac Newton and Joseph Raphson, is a root-finding algorithm which produces successively better approximations to the roots (or zeroes) of a real-valued function. The most basic version starts with a real-valued function f, its derivative f′, and an initial guess x₀ for a root of f. If f satisfies certain assumptions and the initial guess is close, then

x₁ = x₀ − f(x₀)/f′(x₀)

is a better approximation of the root than x₀. Geometrically, (x₁, 0) is the x-intercept of the tangent of the graph of f at (x₀, f(x₀)): that is, the improved guess, x₁, is the unique root of the linear approximation of f at the initial guess, x₀. The process is repeated as

xₙ₊₁ = xₙ − f(xₙ)/f′(xₙ)

until a sufficiently precise value is reached. The number of correct digits roughly doubles with each step. This algorithm is first in the class of Householder's methods, and was succeeded by Halley's method. The method can also be extended to complex functions and to systems of equations.

The purpose of Newton's method is to find a root of a function. The idea is to start with an initial guess at a root, approximate the function by its tangent line near the guess, and then take the root of the linear approximation as a next guess at the function's root. This will typically be closer to the function's root than the previous guess, and the method can be iterated.
[Figure: xₙ₊₁ is a better approximation than xₙ for the root x of the function f (blue curve).]

The best linear approximation to an arbitrary differentiable function f(x) near the point x = xₙ is the tangent line to the curve, with equation

f(x) ≈ f(xₙ) + f′(xₙ)(x − xₙ).

The root of this linear function, the place where it intercepts the x-axis, can be taken as a closer approximate root xₙ₊₁:

xₙ₊₁ = xₙ − f(xₙ)/f′(xₙ).

Iteration typically improves the approximation. The process can be started with any arbitrary initial guess x₀, though it will generally require fewer iterations to converge if the guess is close to one of the function's roots. The method will usually converge if f′(x₀) ≠ 0. Furthermore, for a root of multiplicity 1, the convergence is at least quadratic (see Rate of convergence) in some sufficiently small neighbourhood of the root: the number of correct digits of the approximation roughly doubles with each additional step. More details can be found in §

Ted Hopp, Math hobbyist and Ph.D. in CS:

Related: What is the solution to the equation [math]x^3 + x^2 - x - 1 = 0[/math]?

The first step I always take in trying to solve higher-order polynomials by hand (after looking for common factors among all the terms) is to try various groupings of terms, factor out powers of [math]x[/math], and see if anything useful develops.
For instance, we can group based on the parity of the exponent (evens and odds):

[math]x^3 + x^2 - x - 1 = (x^3 - x) + (x^2 - 1) = x(x^2 - 1) + (x^2 - 1)[/math]

Well, that looks useful because now we can factor again:

[math]x(x^2 - 1) + (x^2 - 1) = (x+1)(x^2 - 1)[/math]

The second factor can itself be factored by recognizing that it is the difference of two squares. So now we have:

[math](x+1)(x^2 - 1) = (x+1)(x+1)(x-1) = (x+1)^2(x-1)[/math]

This is a complete factorization (all factors are constants or linear in [math]x[/math]), so we can simply read off the roots: x = −1 (a double root) and x = 1.

Note that we could have eventually arrived here by trying a different grouping at the start:

[math]x^3 + x^2 - x - 1 = (x^3 + x^2) - (x + 1) = x^2(x+1) - (x + 1)[/math]

and so forth.
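The factorization above is easy to check numerically; a double root at −1 means the derivative must also vanish there. A quick sketch in plain Python (the helper names are ours):

```python
# Numeric check of x^3 + x^2 - x - 1 = (x + 1)^2 (x - 1):
# the polynomial vanishes at x = -1 (double root, so p'(-1) = 0 too)
# and at x = 1, and the two forms agree at arbitrary sample points.
def p(x):
    return x**3 + x**2 - x - 1

def p_prime(x):
    return 3 * x**2 + 2 * x - 1

def factored(x):
    return (x + 1) ** 2 * (x - 1)

# Integer samples, so the comparison below is exact.
samples = [-3, -1, 0, 1, 2, 5]
agree = all(p(x) == factored(x) for x in samples)
```

Checking `p_prime(1)` shows it is nonzero, confirming that x = 1 is a simple root while x = −1 is the double one.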
12237
https://www.excelforum.com/excel-formulas-and-functions/1187566-exponential-trendline-formula.html
Exponential Trendline formula

Forum: Microsoft Office Application Help - Excel Help forum, Excel Formulas & Functions
06-01-2017, 04:04 AM, #1
stopherlogic (Registered User):

Hi, I am having problems breaking down the exponential trendline equation in the attached graph. Would anyone be able to give me some pointers? [Attachment: Exp.PNG]

06-01-2017, 08:31 AM, #2
6StringJazzer (Administrator):

Re: Exponential Trendline formula

Since you have an exponential trendline, I'll assume you know what an exponential trendline is. The equation for an exponential curve is y = a·e^(bx), where a and b are arbitrary constants and e is the mathematical constant e. In your curve, the constants have been adjusted so that a = 3E+07 ≡ 3.0 × 10^7 ≡ 30,000,000 and b = 1.8299. The constant a controls the scaling in the vertical direction and b controls the scaling in the horizontal direction.

06-01-2017, 08:34 AM, #3
MrShorty (Forum Guru):

Re: Exponential Trendline formula

What specifically are you having trouble with?
It looks like a straightforward exponential equation y = B·exp(M·x) (you should be able to show that this is the same as ln(y) = ln(B) + M·x, which is the actual "linear" equation the regression algorithm uses).

Pointer 1: What is the exact value of B? Excel's chart trendlines have a bad habit of giving results to one or two significant figures, like in this case. That could be 3.00000E+7, or that could be 2.50000E+7, or that could be 3.40000E+7. If you are going to use chart trendlines, always expand the number format to show more significant figures.

Pointer 2: This is the same equation as the LOGEST() function uses (technically y = B·M^x, or ln(y) = ln(B) + x·ln(M), but you should be able to show that these are equivalent). If this is something you will do frequently, I would recommend that you become familiar with using the LOGEST() (or LINEST()) function.

From there, I'm not sure what to suggest, without a better idea of what you are having trouble with.
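The calculation behind an exponential chart trendline (and LOGEST) is ordinary least squares on ln(y) = ln(a) + b·x, i.e. fitting y = a·e^(bx). A minimal sketch of that in Python, outside Excel; the data points below are made up for illustration:

```python
# Least-squares fit of ln(y) = ln(a) + b*x, the "linear" form of an
# exponential trendline y = a * exp(b * x).
import math

def fit_exponential(xs, ys):
    """Return (a, b) minimising the squared error of ln(y) = ln(a) + b*x."""
    n = len(xs)
    ln_ys = [math.log(y) for y in ys]
    sx = sum(xs)
    sy = sum(ln_ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * ly for x, ly in zip(xs, ln_ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    ln_a = (sy - b * sx) / n
    return math.exp(ln_a), b

# Synthetic data generated from y = 3e7 * exp(1.8299 x);
# the fit recovers both constants.
a_true, b_true = 3.0e7, 1.8299
xs = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
ys = [a_true * math.exp(b_true * x) for x in xs]
a, b = fit_exponential(xs, ys)
```

On noise-free data like this the fit returns the generating constants up to floating-point error, which illustrates MrShorty's point: the full-precision coefficient is available from the regression even when the chart label rounds it to 3E+07.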
12238
https://www.justia.com/products-liability/types-of-products-liability-claims/manufacturing-defects/
Manufacturing Defects Supporting Products Liability Legal Claims

Unlike design defects, manufacturing defects usually exist in one or a few items, rather than every product in a line. They are aspects of the product that the manufacturer did not design or intend. Instead, they occur when a product deviates from its intended design, regardless of how much care the manufacturer took to design the product, select materials, and oversee its production. A manufacturing defect occurs during the construction phase of products. For example, an airbag that lacks the proper mechanism to deploy probably has a manufacturing defect. Similarly, a bottle of cough syrup that is contaminated probably has a manufacturing defect. Any manufacturing defect that causes an injury can give rise to a products liability lawsuit.

Manufacturing defect = an unintentional deviation from the product's design during the manufacturing process

Generally, manufacturing and quality assurance controls limit the number of defective products that are shipped to consumers. However, occasionally a badly manufactured product bypasses the systems that are in place to make sure that it is not defective. The two most common defects involve low-quality materials and poor workmanship while assembling components to make the finished product. Often, the manufacturing defect could be eliminated if a more careful worker or better-quality materials were used to create the product. If an injury-producing problem would still be there, whether or not the product was put together well, the issue is probably a design defect rather than a manufacturing defect.
When a badly manufactured product leaves the manufacturer and causes injury when used for its intended purpose, the manufacturer is liable for any injuries that result under the principle of strict liability. Liability arises even if the manufacturer was very reasonable and careful when putting together the product. A plaintiff trying to prove strict liability need only show that the product was defective and that the defect caused his or her injuries.

Proving Manufacturing Defects

Although suing under a theory of strict liability is more straightforward in some ways than suing for negligence, it can be challenging to prove that a manufacturing defect was the actual and proximate cause of the accident. Sometimes, a plaintiff's own actions contribute to an accident, and the defendant argues that the plaintiff's actions, not the defect, caused his or her injuries. Suppose, for example, there is a head-on collision before which the plaintiff was swerving in and out of traffic and moved into the oncoming traffic lane in order to pass the car in front of her. Even if there is a manufacturing defect in the steering such that the plaintiff cannot get out of the way of an oncoming car, the defendant will have a strong argument that the plaintiff's comparative negligence in driving into the oncoming lane contributed to the accident.

After an accident, it is important for an injured person to retain the product that caused the injury so that an expert can examine it and determine whether it has manufacturing defects. What happens if the product is so badly damaged that an expert cannot evaluate whether it malfunctioned? In some jurisdictions, a plaintiff may use the "malfunction doctrine" to show causation when the product is too damaged after the accident to determine if it had flaws.
This doctrine allows a plaintiff to show that the circumstances of an accident indicate that a defect caused the accident and to present evidence eliminating all other possible causes to show that a flaw must have existed at the time of the sale.

The Malfunction Doctrine

In some jurisdictions, a plaintiff may establish a manufacturing defect with only circumstantial evidence by showing that the injury occurred without abnormal use or another reasonable cause.

A defendant manufacturer can defend a manufacturing defect lawsuit in two primary ways: arguing modification or arguing assumption of the risk. With modification, the defendant must prove that the product was changed since it left the defendant's possession. To prove assumption of the risk, the defendant will need to demonstrate the plaintiff knew of the hazard that hurt him or her and chose to engage in the activity nonetheless.
12239
https://www.studocu.com/en-us/document/the-university-of-texas-at-dallas/fluid-mechanics/dimensional-analysis-and-buckingham-pi-theorem-in-fluid-mechanics/116978721
Understanding Dimensional Analysis & Buckingham Pi Theorem (ENG 540)

The Buckingham Pi theorem serves as a foundational tool in dimensional analysis, particularly in fluid mechanics, for simplifying complex relationships among variables by deriving dimensionless groups. This document delves into the application of the theorem through case studies, highlighting its role in determining the pressure drop in pipe flow, analyzing drag on various geometries like cylinders and spheres, and exploring the lift and drag forces on airfoils. By systematically employing dimensional analysis, it demonstrates how to formulate functional relationships that reduce the number of independent variables, thereby streamlining experimental design and improving predictive accuracy in engineering applications. Key themes include the principles of similitude, the importance of identifying repeating variables, and effective strategies for experimental correlation, culminating in a deeper understanding of fluid dynamics phenomena. The work emphasizes the practical implications of dimensional analysis in optimizing engineering solutions and validating empirical results through rigorous theoretical underpinnings.
Original title: Dimensional Analysis and Buckingham Pi Theorem in Fluid Mechanics
Course: Fluid Mechanics (MECH 3315)
University: The University of Texas at Dallas
Academic year: 2024/2025

Preview text:

Contents
9 Introduction
9 Buckingham Pi Theorem
9 Repeating Variables Method
9 Similitude and Model Development
9 Correlation of Experimental Data
9 Application to Case Studies
  9.6 DA of Flow in a Round Pipe
  9.6 DA of Flow through Area Change
  9.6 DA of Pump and Fan Laws
  9.6 DA of Flat Plate Boundary Layer
  9.6 DA of Drag on Cylinders and Spheres
  9.6 DA of Lift and Drag on Airfoils
9 Summary
Problems

9 INTRODUCTION

Fluid mechanics problems can be dealt with by using analytical, computational, and experimental approaches to understand the distribution of fluid and flow properties and the interaction of the fluid with its surroundings. In the preceding chapters you learned how to apply mass, momentum, and energy balances and the Bernoulli equation.
In this chapter, we focus our attention on some of the experimental tools engineers use to solve fluid mechanics problems. These tools, known as dimensional analysis, similitude, and modeling, are very powerful but surprisingly easy to apply. Perhaps you remember that the case study results were based on dimensional analysis and experiments. In a later section of this chapter we will show you how dimensional analysis can be used to arrive at the design formulas of those case studies. We will also show you how the tools you learn about in this chapter allow an engineer to efficiently organize and understand the results of experiments.

9 DIMENSIONAL ANALYSIS AND SIMILITUDE

Did you know that most experimental studies in fluid mechanics involve the use of scale models? Examples are shown in Figure 9. Perhaps it is obvious that a scale model must be geometrically similar to an actual device or system, but what guarantee do we have that the flow field that occurs with a scale model is similar to the flow field of engineering interest? How do we apply data gathered from an experiment on a model to the design of a full-scale device? Dimensional analysis (abbreviated DA) is the source of answers to these and many other questions involving experimental research. It is the foundation of the theories of similitude and modeling, and it provides a means to design an efficient experimental program.

DA makes use of the principle that all terms of a physical equation must have the same dimensions. Engineers take advantage of this by routinely checking a proposed formula to make sure it is dimensionally consistent. However, this particular application of DA, while helpful, is really of secondary importance compared to using DA to understand the behavior of a physical system without the need for complex mathematics. Most engineers associate DA with the Buckingham Pi theorem, published in 1914 by E. Buckingham.
With the aid of the Pi theorem, DA may be used to determine the

[Figure 9: (A) Model of an aircraft in a wind tunnel. (B) Scale model of a section of the Mississippi River. (C) Model of San Antonio, Texas, used for determining wind patterns in this urban environment.]

9 BUCKINGHAM PI THEOREM

Δp is related to frictional losses (viscous dissipation). Since we expect frictional losses to increase with an increase in viscosity, pipe length, and pipe wall roughness, it makes sense to include μ, L, and e in our model. The inclusion of V̄ and D should also seem reasonable to you if you remember that the volume flowrate, i.e., the volume of liquid moving through the pipe per unit time, is given by the product of the average liquid velocity and the cross-sectional area of the pipe. Finally, we include density as a variable to be able to relate the volume flowrate to the mass flowrate.

The next step in the DA is to write the proposed functional relationship mathematically as

Δp = f(L, D, e, V̄, ρ, μ)    (9)

According to Eq. 9, the pressure drop is a function of six independent variables. By the principle of dimensional consistency, the unknown function f must combine the independent variables in such a way that it has dimensions of pressure. Suppose we try to determine f by experiment. To explore the effects of the six variables on Δp we systematically vary each independent variable while holding the other independent variables fixed. This can be time-consuming and expensive. Furthermore, the experiments may be difficult to perform. For example, how can we vary viscosity significantly? Is it possible to reduce the number of independent variables in Eq. 9?

Buckingham's work established the theoretical basis for a process to reduce the number of independent variables in a functional relationship like Eq. 9 to a minimum. The Pi theorem is applicable to a function, f, of k dimensional variables, u_k, in the form

u_1 = f(u_2, u_3, ..., u_k)    (9)

For example, in Eq. 9 the dependent variable u_1 is the pressure drop and there are six independent variables, so the total number of variables is k = 7. The Pi theorem states that there exists a functional relationship between at most k − r dimensionless Pi groups, Π_(k−r), of the form

Π_1 = g(Π_2, Π_3, ..., Π_(k−r))    (9)

where r is the number of base dimensions needed to describe those parameters. Thus the Pi theorem proves that the number of independent variables in any functional relationship may be reduced from k to k − r if the relationship is expressed in terms of dimensionless groups.

[Figure 9: The parameters that affect pressure drop in horizontal flow through a round pipe: pipe length L, diameter D, wall roughness e, average flow velocity V̄, a fluid with viscosity μ and density ρ, and the pressure drop Δp between stations 1 and 2.]

The r base dimensions in most fluid mechanics problems are M, L, t, and T, representing mass, length, time, and temperature. In the absence of thermal effects, a temperature scale T is unnecessary. What happens when we apply the Pi theorem to our pipe flow problem? The seven physical parameters in pipe flow can be written in terms of base dimensions as

{Δp} = M L^-1 t^-2, {L} = {D} = {e} = L, {V̄} = L t^-1, {ρ} = M L^-3, and {μ} = M L^-1 t^-1

Notice that we did not use the base dimension for temperature, T, and it was not necessary to introduce force, F, as a base dimension, since M, L, and t can be combined to form the dimension of force. Thus, the base dimensions for pipe flow are M, L, and t and we have r = 3. Table 9, which is a list of common fluid and flow properties and their base dimensions, can be of assistance in this process.

According to the Pi theorem, using dimensionless groups reduces the number of independent variables in any functional relationship from k to k − r. For pipe flow, since k = 7 and r = 3, the theorem suggests replacing Eq.
9 with a functional relationship:  1 = g( 2 ,  3 ,  4 ) where the four Pi groups are dimensionless algebraic combinations of the original set of seven physical parameters. By design, the first Pi group includes the dependent variable, # 538 9 DIMENSIONAL ANALYSIS AND SIMILITUDE The implications of the Pi theorem are twofold. First, in performing experiments it is necessary to vary only the value of a di- mensionless group rather than of each physical parameters it contains. Second, by working with dimensionless groups, there are k − r independent variables rather than k, a substantial reduction. TABLE 9 Base Dimensions for Common Fluid and Flow Properties Property Dimensions Property Dimensions Acceleration Lt− 2 Momentum M Lt− 1 Angle Dimensionless Power M L 2 t− 3 Angular momentum M L 2 t− 2 Pressure M L− 1 t− 2 Angular velocity t− 1 Specific heat L 2 t− 2 T − 1 Area L 2 Specific weight M L− 2 t− 2 Density M L− 3 Strain Dimensionless Energy M L 2 t− 2 Stress M L− 1 t− 2 Force M Lt− 2 Surface tension Mt− 2 Frequency t− 1 Temperature T Heat M L 2 t− 2 Time t Length L Torque M L 2 t− 2 Mass M Velocity Lt− 1 Modulus of elasticity M L− 1 t− 2 Viscosity (dynamic) M L− 1 t− 1 Moment of a force M L 2 t− 2 Viscosity (kinematic) L 2 t− 1 Moment of inertia (area) L 4 Volume L 3 Moment of inertia (mass) M L 2 Work M L 2 t− 2 9 REPEATING VARIABLE METHOD The primary goal of DA is to determine the form of each dimensionless group predicted by the Pi theorem. With experience, this may be done by inspection, since we know that each group is dimensionless and that every valid physical parameter must appear in at least one dimensionless group. In general, however, we recommend the use of the re- peating variable method to construct dimensionless groups. It is quick and easy to im- plement, and it provides the advantage of using a set procedure with less chance of error. The required procedure, stated formally here, is explained in more detail in the subse- quent example. 
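The counting in the theorem (k variables, r independent base dimensions, k − r groups) can be checked mechanically before stepping through the procedure: write each parameter's (M, L, t) exponents as a matrix column and compute the matrix rank. A minimal sketch in Python for the pipe-flow parameter list above; the variable names and the helper function are mine, not from the text.

```python
from fractions import Fraction

# Dimension exponents (M, L, t) for the pipe-flow parameters:
# dp = pressure drop, then L, D, e, Vbar, rho, mu.
# Example: {dp} = M L^-1 t^-2 -> column (1, -1, -2).
columns = {
    "dp":   (1, -1, -2),
    "L":    (0,  1,  0),
    "D":    (0,  1,  0),
    "e":    (0,  1,  0),
    "Vbar": (0,  1, -1),
    "rho":  (1, -3,  0),
    "mu":   (1, -1, -1),
}

def rank(rows):
    """Matrix rank by Gaussian elimination over exact rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Rows = base dimensions (M, L, t); columns = the k = 7 parameters.
matrix = [[col[d] for col in columns.values()] for d in range(3)]
k = len(columns)
r = rank(matrix)
print(k, r, k - r)  # 7 3 4 -> four dimensionless groups
```

The rank here equals the number of independent base dimensions, so the same computation also flags the degenerate case in which r is smaller than the number of base dimensions that appear.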
Repeating Variable Method

1. List all physical parameters, assigning one as the dependent variable, and express a relationship between this variable and the others in the form of Eq. 9.2. Let k be the total number of variables, including the dependent variable.
2. Represent each variable in terms of its base dimensions, forming a base dimensions table. Let r be the total number of different base dimensions needed.
3. Choose r independent variables to serve as repeating variables, making sure that all base dimensions are included. There are (k − r) nonrepeating independent variables left, with each appearing in a dimensionless Pi group.
4. Form a Pi group first for the nonrepeating dependent variable by multiplying it by each repeating variable raised to a power. Choose the exponents of each of the r repeating variables to make the overall product and resulting dependent Pi group dimensionless. Repeat for the remaining (k − r − 1) nonrepeating independent variables, forming a Pi group for each.
5. Check each Pi group to be sure it is dimensionless, and rearrange to obtain a standard form if known.
6. Express the (k − r) Pi groups in the functional form of Eq. 9.3.

Let us now look at each of these steps in more detail by executing the complete procedure for the pipe flow shown earlier in Figure 9.2.

# Step 1

We are told that the pressure drop Δp depends on the pipe length L, diameter D, wall roughness e, average velocity V̄, liquid density ρ, and viscosity μ. The dependent variable is Δp, so the desired functional relationship between the physical parameters is Eq. 9.1:

Δp = f(L, D, e, V̄, ρ, μ)

Counting all the physical parameters we see that k = 7.

Comments on Step 1: To simply list all the physical parameters that influence a flow problem is deceptively difficult; yet this step is critical and usually a challenge for an inexperienced engineer.
We are required to list every fluid and flow property, geometric parameter, and external agent that exerts an influence on the phenomenon of interest. Constructing this list is guided by an understanding of the theoretical and practical aspects of fluid mechanics, and most importantly by experience. A review of published work on the flow of interest is helpful.

If a noninfluential parameter is inadvertently included in a DA, an extra dimensionless group containing that parameter will result. Experiments will, however, show the extra group is superfluous and can be ignored. For example, if we had included gravity in our pipe flow DA, we would have obtained an extra dimensionless group that would have been shown by experiments to have no influence on Δp. On the other hand, if an important parameter is left out, the corresponding dimensionless group will be missing. The effect of a missing group may not be discovered except in hindsight, when mysterious variations in experimental data are finally understood. Therefore, we recommend that you include every parameter in step 1 that can be reasonably expected to influence the flow. This is why wall roughness is included for pipe flow. Does it seem reasonable that the frictional pressure drop in a rough pipe might be different from that in a smooth pipe?

Be careful to avoid including redundant parameters in your parameter list. For example, choosing to include both pipe diameter and cross-sectional area is inappropriate because the effect of a change in the value of one of these parameters completely determines the change in the other. Tradition dictates that diameter be used in pipe flow. The same thinking applies to including both the absolute viscosity μ and the kinematic viscosity ν of a fluid along with the density. The two viscosities are not independent. We recommend using μ rather than ν, but never both.
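The redundancy of carrying both μ and ν can be made mechanical with simple exponent bookkeeping — a sketch (the tuple encoding is mine): dividing two quantities subtracts their (M, L, t) exponents, and μ/ρ lands exactly on the kinematic-viscosity entry of Table 9.1.

```python
# Bookkeeping sketch: represent a dimension as exponents of (M, L, t),
# so {mu} = M L^-1 t^-1 is (1, -1, -1). Dividing quantities subtracts
# exponents, which makes redundancy checks mechanical.
MU  = (1, -1, -1)   # dynamic viscosity, M L^-1 t^-1
RHO = (1, -3,  0)   # density,           M L^-3

def divide(a, b):
    return tuple(x - y for x, y in zip(a, b))

NU = divide(MU, RHO)      # kinematic viscosity nu = mu / rho
print(NU)                 # (0, 2, -1), i.e. L^2 t^-1, as in Table 9.1
```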
# Step 2

We represented each variable in pipe flow in terms of base dimensions earlier. We therefore use those results to construct the following base dimensions table:

Δp: M L⁻¹ t⁻²    L: L    D: L    e: L    V̄: L t⁻¹    ρ: M L⁻³    μ: M L⁻¹ t⁻¹

By inspection, the number of base dimensions used in the table is r = 3. Since there are 7 variables and 3 base dimensions, we anticipate finding a total of k − r = 7 − 3 = 4 dimensionless groups.

Comment on Step 2: It is usually straightforward to represent each physical parameter in terms of its base dimensions and determine the total number of base dimensions in a problem. Table 9.1 can be consulted as part of this process. The recommended default set of base dimensions is M, L, t, with T included when needed in thermal problems.

# Step 3

We must now choose r = 3 independent variables out of the set L, D, e, V̄, ρ, μ to serve as repeating variables, with only one stated constraint: that all base dimensions be included. From the base dimensions table, it appears that a reasonable choice is (D, V̄, ρ). The remaining nonrepeating variables Δp, L, e, μ will each appear in a dimensionless group.

# Step 4

The first Pi group always involves the dependent variable. We form this group for pipe flow by writing a product of the pressure drop with each of the repeating variables raised to a power. Since we will pick these exponents to make the Pi group dimensionless, we have

Π₁ = (Δp)¹(D)^A(V̄)^B(ρ)^C = (M L⁻¹ t⁻²)¹(L)^A(L t⁻¹)^B(M L⁻³)^C = M⁰L⁰t⁰

Note that we have substituted the dimensions of each variable in the Pi group from the base dimensions table. We can ensure that the first Pi group is dimensionless by equating exponents of each base dimension as follows:

(M)^(1+C) = M⁰, (L)^(−1+A+B−3C) = L⁰, and (t)^(−2−B) = t⁰

The equations for the exponents are

1 + C = 0, −1 + A + B − 3C = 0, and −2 − B = 0

and these are satisfied by choosing C = −1, B = −2, A = 0.
Thus the first Pi group is

Π₁ = (Δp)¹(D)⁰(V̄)⁻²(ρ)⁻¹ = Δp/(ρV̄²)

The remaining three Pi groups containing the three nonrepeating independent variables (L, e, μ) are constructed following the same procedure. The order in which the remaining groups are created does not matter. The Pi group containing pipe length L is constructed by writing

Π₂ = (L)¹(D)^A(V̄)^B(ρ)^C = (L)¹(L)^A(L t⁻¹)^B(M L⁻³)^C = M⁰L⁰t⁰

so the equations for the exponents are C = 0, 1 + A + B − 3C = 0, and −B = 0. By inspection, A = −1, B = 0, and C = 0, so the second Pi group is

Π₂ = (L)¹(D)⁻¹(V̄)⁰(ρ)⁰ = L/D

The next Pi group, containing wall roughness e, is constructed as follows:

Π₃ = (e)¹(D)^A(V̄)^B(ρ)^C = (L)¹(L)^A(L t⁻¹)^B(M L⁻³)^C = M⁰L⁰t⁰

The equations determining the required exponents are the same as those for pipe length, since wall roughness and pipe length have the same dimensions. Thus the third Pi group is

Π₃ = (e)¹(D)⁻¹(V̄)⁰(ρ)⁰ = e/D

The last Pi group, containing viscosity μ, is constructed as follows:

Π₄ = (μ)¹(D)^A(V̄)^B(ρ)^C = (M L⁻¹ t⁻¹)¹(L)^A(L t⁻¹)^B(M L⁻³)^C = M⁰L⁰t⁰

The equations for the exponents are 1 + C = 0, −1 + A + B − 3C = 0, and −1 − B = 0, so the resulting exponents are C = −1, B = −1, and A = −1. Thus the final Pi group is

Π₄ = (μ)¹(D)⁻¹(V̄)⁻¹(ρ)⁻¹ = μ/(DV̄ρ)

The complete set of four Pi groups for pipe flow is

Π₁ = Δp/(ρV̄²), Π₂ = L/D, Π₃ = e/D, Π₄ = μ/(DV̄ρ)

Comment on Step 4: In this particular case, once pipe diameter has been selected as a repeating parameter, it is possible to anticipate that the dimensionless groups containing pipe length and wall roughness must simply involve the division of each by the pipe diameter, since this immediately forms a dimensionless group. In effect, the pipe diameter has been chosen as the length scale for this analysis. Thus you can form these groups by inspection as indicated earlier.
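Because all four groups share the same repeating set (D, V̄, ρ), the exponent systems above can be solved once in closed form. A small sketch (function and variable names are mine):

```python
# (M, L, t) exponent tuples for the nonrepeating pipe-flow variables.
DIMS = {
    "dp": (1, -1, -2),   # pressure drop
    "L":  (0,  1,  0),   # pipe length
    "e":  (0,  1,  0),   # wall roughness
    "mu": (1, -1, -1),   # viscosity
}

def pi_exponents(q):
    """Exponents (A, B, C) making q * D^A * Vbar^B * rho^C dimensionless.
    From the repeating set D = (0,1,0), Vbar = (0,1,-1), rho = (1,-3,0):
      M: qM + C = 0     t: qt - B = 0     L: qL + A + B - 3C = 0
    """
    qM, qL, qt = q
    C = -qM
    B = qt
    A = -qL - B + 3 * C
    return A, B, C

for name, q in DIMS.items():
    print(name, pi_exponents(q))
# dp -> (0, -2, -1):  Pi1 = dp / (rho * Vbar**2)
# L  -> (-1, 0, 0):   Pi2 = L / D
# e  -> (-1, 0, 0):   Pi3 = e / D
# mu -> (-1, -1, -1): Pi4 = mu / (D * Vbar * rho)
```

The closed form exists only because this particular repeating set makes the system triangular; a different repeating set would call for a general 3×3 linear solve.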
With experience, you may find that you can do the entire DA this way.

# Step 5

We now check each Pi group to be sure it is dimensionless, referring to the base dimensions table as needed. In this case it is obvious that the second and third Pi groups are dimensionless. Checking the other two, we have

{Π₁} = {Δp/(ρV̄²)} = (M L⁻¹ t⁻²)/[(M L⁻³)(L² t⁻²)] = (M L⁻¹ t⁻²)/(M L⁻¹ t⁻²) = 1

{Π₄} = {μ/(DV̄ρ)} = (M L⁻¹ t⁻¹)/[(L)(L t⁻¹)(M L⁻³)] = (M L⁻¹ t⁻¹)/(M L⁻¹ t⁻¹) = 1

We conclude that all four Pi groups are dimensionless and that we have carried out the procedure correctly. The next step is to rearrange individual Pi groups to put them into standard form. In this case the fourth Pi group is the inverse of the Reynolds number, so we will invert it and write it as

Π₄ = ρV̄D/μ

Comment on Step 5: To rearrange a Pi group, it is necessary to know the standard forms of dimensionless groups in fluid mechanics. In Chapter 3 we discussed the important dimensionless groups in fluid mechanics and the context in which you are likely to encounter them. You might wish to reread Section 3 at this time.

# Step 6

The final step in a DA is to write a relationship between the dependent Pi group and the remaining groups in the form of Eq. 9.3. For pipe flow, the relationship between the dimensionless pressure drop and the remaining groups is therefore

Δp/(ρV̄²) = g(L/D, e/D, ρV̄D/μ)

EXAMPLE 9.3

Suppose that the fireball shown in Figure 9 has a blast radius R at a time t∗ seconds after initiation and that the blast radius depends only on the total energy E released by the bomb and on the initial density ρ₀ of the air in the atmosphere. See if you can recreate Taylor's DA for this problem.

SOLUTION

We apply the repeating variable method to implement the Buckingham Pi theorem by means of the standard six-step procedure.

Step 1. We are told that the radius R of the fireball after the explosion depends on the elapsed time t∗, total energy released E, and initial gas density ρ₀.
The desired functional relationship between these physical parameters is R = f(E, ρ₀, t∗). There are four parameters, so k = 4.

Step 2. The base dimensions table is:

R: L    E: M L² t⁻²    ρ₀: M L⁻³    t∗: t

The base dimensions are M, L, and t, so r = 3 and we will have 4 − 3 = 1 dimensionless group.

Step 3. The only possible choice for the required set of 3 repeating variables is all three independent variables E, ρ₀, t∗. This selection includes all three base dimensions.

Step 4. The first and only Pi group in this problem is found by writing

Π₁ = (R)¹(E)^A(ρ₀)^B(t∗)^C = (L)¹(M L² t⁻²)^A(M L⁻³)^B(t)^C = M⁰L⁰t⁰

To obtain a dimensionless group we must have

(M)^(A+B) = M⁰, (L)^(1+2A−3B) = L⁰, and (t)^(−2A+C) = t⁰

which gives the following equations for the exponents: A + B = 0, 1 + 2A − 3B = 0, and −2A + C = 0. Solving these we find A = −1/5, B = 1/5, C = −2/5. The single Pi group in this problem is the dimensionless blast radius

Π₁ = R(ρ₀)^(1/5)/[E^(1/5)(t∗)^(2/5)]

Figure 9 Schematic for Example 9.3.

Step 5. Checking this group to see if it is dimensionless, we find

{Π₁} = {R(ρ₀)^(1/5)/[E^(1/5)(t∗)^(2/5)]} = L(M L⁻³)^(1/5)/[(M L² t⁻²)^(1/5) t^(2/5)] = (L^(2/5) M^(1/5))/(M^(1/5) L^(2/5)) = 1

So the dimensional analysis appears to be correct.

Step 6. There is only one Pi group here. With a little thought perhaps you can convince yourself that in a problem with a single Pi group, dimensional consistency demands that the single Pi group be equal to a dimensionless constant. Thus the relationship between blast radius and the other physical parameters in this problem must be

Π₁ = R(ρ₀)^(1/5)/[E^(1/5)(t∗)^(2/5)] = C

where C is a dimensionless constant. Through further investigation Taylor determined that this constant is approximately 1. The original New Mexico test had a 20 kiloton (8 × 10¹³ J) yield, so E = 8 × 10¹³ J.
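Since Π₁ = C rearranges to R = C(E t∗²/ρ₀)^(1/5), the estimate that follows can be sketched numerically. Two assumptions of mine here: C ≈ 1, and standard sea-level air density ρ₀ ≈ 1.23 kg/m³.

```python
# Blast-radius estimate from Taylor's single Pi group, taking C ~ 1.
E = 8e13      # energy released, J (20 kiloton yield)
t = 0.015     # time after detonation, s
rho0 = 1.23   # sea-level air density, kg/m^3 (assumed value)

R = (E * t**2 / rho0) ** (1 / 5)
print(round(R))  # -> 108 (meters)
```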
At t∗ = 0.015 s after detonation, we can calculate that R = 108 m by using an air density of ρ₀ = 1.23 kg/m³. Compare this estimate with the photograph in Figure 9.

Figure 9 Atomic fireball in New Mexico 0.015 s after ignition in 1945. Note the scale and the fact that Taylor's analysis was confirmed.

EXAMPLE 9.4

Consider the open channel flow of water shown in Figure 9. Under the flow conditions illustrated, a structure known as a hydraulic jump forms, causing a change of depth from d₁ upstream to d₂ downstream as shown in the schematic. By inspection we have A = −1 and B = 0, so the first Pi group is

Π₁ = d₂/d₁

The second Pi group containing V is found from

Π₂ = (V)¹(d₁)^A(g)^B = (L t⁻¹)¹(L)^A(L t⁻²)^B = L⁰t⁰

To obtain a dimensionless group we must have (L)^(1+A+B) = L⁰ and (t)^(−1−2B) = t⁰, which yields 1 + A + B = 0 and −1 − 2B = 0. By inspection we have A = −1/2, B = −1/2, so the second Pi group is

Π₂ = V/√(g d₁)

Comparing this with Eq. 3 we see that it is the Froude number.

Step 5. Checking these groups to see whether they are dimensionless, we see immediately that the first is. To check the second one we write

{Π₂} = {V/√(g d₁)} = (L t⁻¹)/(L² t⁻²)^(1/2) = 1

So the dimensional analysis appears to be correct.

Step 6. There are two Pi groups, so in a hydraulic jump the relationship between downstream depth and the other physical parameters is

d₂/d₁ = g(V/√(g d₁))

Thus we find that the depth ratio is solely determined by the Froude number.

# 9.4 SIMILITUDE AND MODEL DEVELOPMENT

Experimental modeling is a fundamental tool in the design of fluid devices and systems, and in the solution of many fluid mechanics problems.
Experiments are used extensively to validate the design of airplanes, ships, buildings, bridges, and harbors, where it is important to confirm that the device or system will perform as anticipated before incurring the expense of construction. The large size of such engineering projects makes it impractical and uneconomical to build full-scale prototypes of proposed designs. Thus, the use of models becomes mandatory. It is critical for an engineer to understand the issues involved in performing experiments with a model rather than the actual device or system of interest. In most cases a model is smaller than the actual device; however, it is sometimes prudent to build a large-scale model of a device that is too small to permit measurements to be taken with conventional sensors. In this section we discuss the design of scale model experiments and the methods for using experimental data obtained from a model study to predict the performance of a full-scale device or system.

There are three similarity conditions that must be met in an experiment using models. The model flow field must be geometrically, kinematically, and dynamically similar to the full-scale prototype it is intended to represent. When all three conditions are satisfied, we achieve complete similarity between the model and full-scale flow. It is then possible to use the experimental results to predict what will occur with the full-scale device. Let us now describe these conditions in more detail and explain how to achieve each one.

By definition, geometric similarity requires that a scale model have the precise shape of the full-scale device or system of interest, with each of the model's physical dimensions in a fixed ratio to the corresponding dimension of the full-scale prototype. For example, a 1/10-scale model has each of its dimensions reduced by a factor of 10.
Achieving geometric similarity is the first condition needed to ensure that the fluid dynamic phenomena experienced in the full-scale flow are also present in experiments conducted using the model. Although this looks like a straightforward requirement, it may be impossible to reproduce the surface finish of a full-scale device in a small-scale model. For example, the thousands of rivet heads of an aircraft wing are impossible to incorporate into a wind tunnel model. It is up to the engineer to decide whether this loss of perfect geometric similarity is important. The question to ask is, Does the absence of the feature affect the flow field in a significant way? If it does not, there is no need for concern. If it does, then the engineer must find a way to account for the effect of the missing feature. The missing rivets on a model wing may affect the onset of turbulent flow because the roughness of the model surface is different. Perhaps the model surface could be roughened artificially to produce the missing flow disturbances. This potential problem in using models is called scale effect.

The second condition, called kinematic similarity, is satisfied if the velocity vectors in the model flow field have the same direction as those in the full-scale flow, with the magnitudes of corresponding vectors related by a single velocity scale factor. The third condition, dynamic similarity, is achieved if all forces in the model system have the same direction as those in the full-scale device with the magnitudes of corresponding forces related by a single force scale factor. It is not obvious how to achieve these remaining two conditions, but DA provides the necessary insight.

We are interested in performing experiments on a geometrically similar model in a way that results in the achievement of complete similarity. This means that the set of experimental operating parameters must be picked so that kinematic and dynamic similarity occur in the model flow field.
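How DA fixes a model operating condition can be previewed with a one-line calculation: matching the Reynolds number between full scale and model determines the model velocity. A sketch using the tidal-channel numbers from the example later in this section (variable names are mine):

```python
# Dynamic similarity sketch: match Re = rho*V*L/mu between a seawater
# prototype and a 1/100-scale freshwater model, then solve for the
# model velocity.
rho_fs, mu_fs = 1025.0, 1.07e-3   # seawater: kg/m^3, kg/(m-s)
V_fs, L_fs = 0.5, 10.0            # full-scale velocity (m/s), length (m)

rho_m, mu_m = 998.0, 1.0e-3       # fresh water in the model
L_m = L_fs / 100                  # 1/100-scale model length

Re_fs = rho_fs * V_fs * L_fs / mu_fs
V_m = Re_fs * mu_m / (rho_m * L_m)
print(round(Re_fs / 1e6, 1), round(V_m))  # 4.8 (millions) and 48 m/s
```

The uncomfortably large model velocity is the point: exact Reynolds matching at small scale is often impractical, which is one source of the scale effects discussed above.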
To see how to pick these parameters, suppose a DA has been performed on a full-scale device or system operating under the proposed design conditions. The DA will yield a relationship between all relevant dimensionless groups of the form given by Eq. 9.3:

Π₁^FS = g(Π₂^FS, Π₃^FS, ..., Π_{k−r}^FS)

If the values of the independent dimensionless groups on the right-hand side of this equation and of the corresponding model relationship are identical, then the remaining dimensionless group must have an identical value in the model and full-scale flow. Thus we can write

Π₁^FS = Π₁^M

This is the desired relationship between dependent dimensionless groups, which permits the results from an experiment to be applied to the full-scale device.

In many important applications, geometric similarity is achieved, but complete kinematic and dynamic similarity are not possible. This type of scale effect occurs when nongeometric Pi groups cannot be matched between the full-scale flow and a model. The magnitude of influence of scale effects must be considered when one is designing model tests. When scale effects cannot be avoided, care must be taken in interpreting results. This is illustrated in Example 9.6.

Figure 9 The Gossamer Condor, the first human-powered aircraft to demonstrate sustained, maneuverable flight, won a prize in 1977 for its developer, Paul MacCready. If the wing span for the human-powered vehicle is 100 ft, the model wing span must be 12 ft.

EXAMPLE 9.6

The model of a tidal channel in a coastline study is scaled to 1/100 of actual size. Fresh water is to be used in place of seawater in the model. Assuming that the Reynolds number must be matched, what model velocity is needed to ensure dynamic similarity? Will similarity also be achieved for free surface effects related to the Weber and Froude numbers? In your calculations, note that the appropriate velocity and length scales for the actual tidal channel are V = 0.5 m/s and L = 10 m, respectively.

SOLUTION

To maintain dynamic similarity, the Reynolds number of the model must be the same as that of the actual channel. For seawater (Appendix A), ρ = 1025 kg/m³ and μ = 1.07 × 10⁻³ kg/(m·s), so

Re = ρVL/μ = (1025 kg/m³)(0.5 m/s)(10 m)/(1.07 × 10⁻³ kg/(m·s)) = 4.8 × 10⁶

The length scale of the model channel is L_model = L(1/100) = (10 m)/100 = 0.1 m. For the fresh water in the model (Appendix A), ρ = 998 kg/m³ and μ = 1 × 10⁻³ kg/(m·s), so the Reynolds number for the model is

Re_m = (ρVL/μ)_m = (998 kg/m³)V_m(0.1 m)/(1 × 10⁻³ kg/(m·s)) = 4.8 × 10⁶

Solving for V_m we obtain

V_m = (4.8 × 10⁶)[1 × 10⁻³ kg/(m·s)]/[(998 kg/m³)(0.1 m)] = 48 m/s

To check for similarity of surface effects, we must calculate the Weber and Froude numbers for the full-scale flow and the model. Surface tension for both seawater and fresh water is found in Appendix A to be 7.28 × 10⁻² N/m. The Weber numbers for the tidal channel and the model are found by using Eq. 3 to be

We = ρV²L/σ = (1025 kg/m³)(0.5 m/s)²(10 m)/(7.28 × 10⁻² N/m) = 3.5 × 10⁴

and

We_m = (ρV²L/σ)_m = (998 kg/m³)(48 m/s)²(0.1 m)/(7.28 × 10⁻² N/m) = 3.2 × 10⁶

respectively. Thus, surface tension effects can be safely neglected for both the tidal channel and model even though the Weber numbers do not exactly match. Next use Eq. 3 to calculate Fr:

Fr = V/√(gL) = (0.5 m/s)/√((9.81 m/s²)(10 m)) = 5 × 10⁻²

Fr_m = (V/√(gL))_m = (48 m/s)/√((9.81 m/s²)(0.1 m)) = 48

Understanding Dimensional Analysis & Buckingham Pi Theorem (ENG 540)
Course: Fluid Mechanics (MECH 3315), The University of Texas at Dallas

# 9 DIMENSIONAL ANALYSIS AND SIMILITUDE

9.1 Introduction
9.2 Buckingham Pi Theorem
9.3 Repeating Variables Method
9.4 Similitude and Model Development
9.5 Correlation of Experimental Data
9.6 Application to Case Studies
9.6.1 DA of Flow in a Round Pipe
9.6.2 DA of Flow through Area Change
9.6.3 DA of Pump and Fan Laws
9.6.4 DA of Flat Plate Boundary Layer
9.6.5 DA of Drag on Cylinders and Spheres
9.6.6 DA of Lift and Drag on Airfoils
9.7 Summary
Problems

# 9.1 INTRODUCTION

Fluid mechanics problems can be dealt with by using analytical, computational, and experimental approaches to understand the distribution of fluid and flow properties and the interaction of the fluid with its surroundings. In the preceding chapters you learned how to apply mass, momentum, and energy balances and the Bernoulli equation. In this chapter, we focus our attention on some of the experimental tools engineers use to solve fluid mechanics problems. These tools, known as dimensional analysis, similitude, and modeling, are very powerful but surprisingly easy to apply. Perhaps you remember that the case study results were based on dimensional analysis and experiments. In a later section of this chapter we will show you how dimensional analysis can be used to arrive at the design formulas of those case studies. We will also show you how the tools you learn about in this chapter allow an engineer to efficiently organize and understand the results of experiments.
Did you know that most experimental studies in fluid mechanics involve the use of scale models? Examples are shown in Figure 9.1. Perhaps it is obvious that a scale model must be geometrically similar to an actual device or system, but what guarantee do we have that the flow field that occurs with a scale model is similar to the flow field of engineering interest? How do we apply data gathered from an experiment on a model to the design of a full-scale device? Dimensional analysis (abbreviated DA) is the source of answers to these and many other questions involving experimental research. It is the foundation of the theories of similitude and modeling, and it provides a means to design an efficient experimental program.

DA makes use of the principle that all terms of a physical equation must have the same dimensions. Engineers take advantage of this by routinely checking a proposed formula to make sure it is dimensionally consistent. However, this particular application of DA, while helpful, is really of secondary importance compared to using DA to understand the behavior of a physical system without the need for complex mathematics. Most engineers associate DA with the Buckingham Pi theorem, published in 1914 by E. Buckingham. With the aid of the Pi theorem, DA may be used to determine the number and form of the dimensionless groups describing any fluid system. As discussed in Chapter 3, a dimensionless group is a unitless algebraic combination of several of the physical parameters of a problem.

Figure 9.1 (A) Model of an aircraft in a wind tunnel. (B) Scale model of a section of the Mississippi River. (C) Model of San Antonio, Texas, used for determining wind patterns in this urban environment.
Recall that the most important dimensionless group in fluid mechanics, the Reynolds number, is given by the product of density ρ, a fluid velocity scale V, and a length scale L, all divided by viscosity μ. Thus we have Re = ρVL/μ. There are numerous other groups that arise from dimensional analysis. For example, dividing the length of a pipe by its diameter yields the dimensionless group L/D. A given fluid system may have from one to five or more dimensionless groups, depending on its complexity. The value of each dimensionless group is a pure number determined by the specific values of the problem parameters. With experience it is possible to anticipate the behavior of a physical system simply by knowing the values of the dimensionless groups that describe it.

You already know from the case studies that the use of dimensionless groups allows an engineer to classify a fluid mechanics problem, relate it to work done by others, and select an effective solution method. DA is also essential when one is conducting and analyzing experiments. Proper selection of the values of each dimensionless group for a scale model ensures that a condition known as similitude is achieved. Similitude guarantees that a particular experimental model is similar to the true physical system it is intended to simulate and also assures us that experimental data obtained with the model can be scaled and applied to a full-scale prototype. Finally, DA provides an understanding of how to minimize the total number of experiments and enhances the correlation and efficient compilation of experimental data.

The Buckingham Pi theorem is introduced in the following section, followed by a discussion of DA using the repeating variable method of constructing dimensionless groups. Next we demonstrate the use of dimensional groups in similitude and model development, and in the correlation of experimental data.
Finally, we revisit the case stud- ies of Chapter 3 and demonstrate the power of DA and empirical correlations in the de- sign of fluid mechanics devices and systems. W e do this by showing how the empirical relationships first presented in the case studies without explanation can now be seen to be based on dimensional analysis. 9.2 BUCKINGHAM PI THEOREM W e begin our discussion of the Pi theorem by considering a practical problem in the de- sign of a piping system. What size pump is required to move a particular liquid at a de- sired flowrate through a pipe? It can be shown that the power needed is a function of the pressure drop down the pipe, (see the case study in Section 3.3.1), which in turn depends on the viscous dissipation of energy in the flow (see Section 2.7.1). The power required also depends on the mass flowrate, defined as the mass of liquid moving through the pipe per unit time. How can the Pi theorem contribute to answering this question? The first step in a DA of pipe flow is to determine which fluid and flow properties might influence the pressure drop. As illustrated in Figure 9.2, we postulate that the pressure drop p may depend upon the pipe length L, diameter D, wall roughness e, average liquid velocity ¯ V, liquid density ρ, and viscosity µ. The wall roughness, defined as the average height of random protuberances, depends on the type of pipe and how long it has been in service. How did we decide on this list of parameters? W e know that 536 9 D I M E N S I O N A L A N A LY S I S A N D S I M I L I T U D E Too long to read on your phone? Save to read later on your computer Save to a Studylist 9.2 BUCKINGHAM PI THEOREM 537 p is related to frictional losses (viscous dissipation). Since we expect frictional loses to increase with an increase in viscosity, pipe length, and pipe wall roughness, it makes sense to include µ,L, and e in our model. 
The inclusion of ¯ V and D should also seem reasonable to you if you remember that the volume flowrate, i.e., the volume of liquid moving through the pipe per unit time, is given by the product of the average liquid ve- locity and the cross-sectional area of the pipe. Finally, we include density as a variable to be able to relate the volume flowrate to the mass flowrate. The next step in the DA is to write the proposed functional relationship mathemat- ically as p=f(L,D,e,¯ V,ρ,µ)(9.1) According to Eq. 9.1, the pressure drop is a function of six independent variables. By the principle of dimension consistency, the unknown function f must combine the indepen- dent variables in such a way that it has dimensions of pressure. Suppose we try to deter- mine f by experiment. T o explore the effects of the six variables on p we systemati- cally vary each independent variable while holding the other independent variables fixed. This can be time-consuming and expensive. Furthermore, the experiments may be difficult to perform. For example, how can we vary viscosity significantly? Is it possible to reduce the number of independent variables in Eq. 9.1? Buckingham’s work established the theoretical basis for a process to reduce the number of independent variables in a functional relationship like Eq. 9.1 to a minimum. The Pi theorem is applicable to a function, f, of k dimensional variables, u k, in the form u 1=f(u 2,u 3,...,u k)(9.2) For example, in Eq. 9.2 the dependent variable u 1 is the pressure drop and there are six independent variables, so the total number of variables is k=7. The Pi theorem states that there exists a functional relationship between at most k−r dimensionless Pi groups,k−r, of the form 1=g(2,3,...,k−r)(9.3) where r is the number of base dimensions needed to describe those parameters. 
Thus the Pi theorem proves that the number of independent variables in any functional relationship may be reduced from k to k − r if the relationship is expressed in terms of dimensionless groups.

Figure 9.2: The parameters that affect pressure drop in horizontal flow through a round pipe (pipe of length L, diameter D, and wall roughness e; average flow velocity V̄; fluid with viscosity μ and density ρ; pressure drop Δp = p_2 − p_1).

The r base dimensions in most fluid mechanics problems are M, L, t, and T, representing mass, length, time, and temperature. In the absence of thermal effects, a temperature scale T is unnecessary. What happens when we apply the Pi theorem to our pipe flow problem? The seven physical parameters in pipe flow can be written in terms of base dimensions as

{Δp} = M L^-1 t^-2,  {L} = {D} = {e} = L,  {V̄} = L t^-1,  {ρ} = M L^-3,  and  {μ} = M L^-1 t^-1

Notice that we did not use the base dimension for temperature, T, and it was not necessary to introduce force, F, as a base dimension, since M, L, and t can be combined to form the dimension of force. Thus, the base dimensions for pipe flow are M, L, and t, and we have r = 3. Table 9.1, which is a list of common fluid and flow properties and their base dimensions, can be of assistance in this process.

According to the Pi theorem, using dimensionless groups reduces the number of independent variables in any functional relationship from k to k − r. For pipe flow, since k = 7 and r = 3, the theorem suggests replacing Eq. 9.1 with a functional relationship

Π_1 = g(Π_2, Π_3, Π_4)

where the four Pi groups are dimensionless algebraic combinations of the original set of seven physical parameters. By design, the first Pi group includes the dependent variable. The implications of the Pi theorem are twofold.
First, in performing experiments it is necessary to vary only the value of a dimensionless group rather than of each physical parameter it contains. Second, by working with dimensionless groups, there are k − r independent variables rather than k, a substantial reduction.

TABLE 9.1 Base Dimensions for Common Fluid and Flow Properties

Property                      Dimensions
Acceleration                  L t^-2
Angle                         Dimensionless
Angular momentum              M L^2 t^-1
Angular velocity              t^-1
Area                          L^2
Density                       M L^-3
Energy                        M L^2 t^-2
Force                         M L t^-2
Frequency                     t^-1
Heat                          M L^2 t^-2
Length                        L
Mass                          M
Modulus of elasticity         M L^-1 t^-2
Moment of a force             M L^2 t^-2
Moment of inertia (area)      L^4
Moment of inertia (mass)      M L^2
Momentum                      M L t^-1
Power                         M L^2 t^-3
Pressure                      M L^-1 t^-2
Specific heat                 L^2 t^-2 T^-1
Specific weight               M L^-2 t^-2
Strain                        Dimensionless
Stress                        M L^-1 t^-2
Surface tension               M t^-2
Temperature                   T
Time                          t
Torque                        M L^2 t^-2
Velocity                      L t^-1
Viscosity (dynamic)           M L^-1 t^-1
Viscosity (kinematic)         L^2 t^-1
Volume                        L^3
Work                          M L^2 t^-2

Since the first Pi group includes the dependent variable, Π_1 includes the pressure drop. Thus in pipe flow the theorem instructs us to think in terms of a relationship between a dimensionless pressure drop and three other dimensionless groups made up of the remaining physical parameters. We conclude that in performing experiments on pipe flow, it is necessary to vary only the values of three dimensionless groups rather than of the six physical parameters they contain.

EXAMPLE 9.1

The drag force F_D on a soccer ball is thought to depend on the velocity of the ball V, diameter D, air density ρ, and viscosity μ. Determine the number of Pi groups that can be formed from these five parameters.

SOLUTION

We begin by writing F_D = f(V, D, ρ, μ), and establishing that the total number of variables is k = 5. The dimensions of these variables are

{F_D} = M L t^-2,  {V} = L t^-1,  {D} = L,  {ρ} = M L^-3,  and  {μ} = M L^-1 t^-1

Thus, the base dimensions are M, L, and t, and r = 3. The expected number of Pi groups is k − r = 5 − 3 = 2.
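The counting in Example 9.1 can be checked mechanically: write each parameter's exponents of (M, L, t) as a column of a dimension matrix; r is the rank of that matrix, and the number of Pi groups is k − r. A minimal pure-Python sketch (the `rank` routine below is a generic Gaussian elimination introduced for illustration, not something from the text):

```python
from fractions import Fraction

def rank(rows):
    """Rank of a small integer matrix via exact Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    rk = 0
    for col in range(len(m[0])):
        pivot = next((r for r in range(rk, len(m)) if m[r][col] != 0), None)
        if pivot is None:
            continue
        m[rk], m[pivot] = m[pivot], m[rk]
        for r in range(len(m)):
            if r != rk and m[r][col] != 0:
                f = m[r][col] / m[rk][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[rk])]
        rk += 1
    return rk

# Columns: F_D, V, D, rho, mu; rows: exponents of M, L, t
dim_matrix = [
    [ 1,  0, 0,  1,  1],   # M
    [ 1,  1, 1, -3, -1],   # L
    [-2, -1, 0,  0, -1],   # t
]

k = len(dim_matrix[0])   # total number of variables
r = rank(dim_matrix)     # number of independent base dimensions
print(k - r)             # expected number of Pi groups: 2
```

The same check applied to the seven pipe-flow parameters gives r = 3 and k − r = 4, matching the text.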
EXAMPLE 9.2

Suppose your company has a policy that engineers investigating the influence of an independent variable on a particular flow must use at least 10 different values of each independent variable during testing. How many experiments would be required to investigate the influence of the six independent variables for pipe flow listed in Eq. 9.1? How many experiments are required if we make use of the Pi theorem with r = 3?

SOLUTION

If we require 10 experiments for each independent variable, we will need 10^k experiments to fully describe the influence of k independent variables. In the pipe flow problem, with k = 6, we would need to perform a million experiments. Once we understand the power of the Pi theorem, however, we realize that we can collect equivalent data with only 10^(k−r), or a thousand, experiments. Will your knowledge of the Pi theorem save your company time and money?
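The counts in Example 9.2 are just powers of ten; a two-line sanity check:

```python
k, r = 6, 3                    # independent variables and base dimensions (pipe flow)
print(10 ** k, 10 ** (k - r))  # 1000000 1000: a million runs vs. a thousand
```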
9.3 REPEATING VARIABLE METHOD

The primary goal of DA is to determine the form of each dimensionless group predicted by the Pi theorem. With experience, this may be done by inspection, since we know that each group is dimensionless and that every valid physical parameter must appear in at least one dimensionless group. In general, however, we recommend the use of the repeating variable method to construct dimensionless groups. It is quick and easy to implement, and it provides the advantage of using a set procedure with less chance of error. The required procedure, stated formally here, is explained in more detail in the subsequent example.

Repeating Variable Method

1. List all physical parameters, assigning one as the dependent variable, and express a relationship between this variable and the others in the form of Eq. 9.2. Let k be the total number of variables, including the dependent variable.
2. Represent each variable in terms of its base dimensions, forming a base dimensions table. Let r be the total number of different base dimensions needed.
3. Choose r independent variables to serve as repeating variables, making sure that all base dimensions are included. There are (k − r) nonrepeating independent variables left, with each appearing in a dimensionless Pi group.
4. Form a Pi group first for the nonrepeating dependent variable by multiplying it by each repeating variable raised to a power. Choose the exponents of each of the r repeating variables to make the overall product and resulting dependent Pi group dimensionless. Repeat for the remaining (k − r − 1) nonrepeating independent variables, forming a Pi group for each.
5. Check each Pi group to be sure it is dimensionless, and rearrange to obtain a standard form if known.
6. Express the (k − r) Pi groups in the functional form of Eq. 9.3.
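Step 4 of the repeating variable method reduces to a small linear system: pick exponents a, b, c so that Δp · V̄^a · D^b · ρ^c has zero net exponent in each of M, L, and t. A sketch for the dependent Pi group of the pipe-flow problem, taking V̄, D, and ρ as the repeating variables (a common choice) and solving with exact arithmetic rather than by inspection:

```python
from fractions import Fraction

def solve3(A, rhs):
    """Solve a 3x3 linear system A x = rhs by Gauss-Jordan elimination."""
    m = [[Fraction(v) for v in row] + [Fraction(b)] for row, b in zip(A, rhs)]
    for i in range(3):
        p = next(r for r in range(i, 3) if m[r][i] != 0)
        m[i], m[p] = m[p], m[i]
        m[i] = [v / m[i][i] for v in m[i]]
        for r in range(3):
            if r != i:
                m[r] = [v - m[r][i] * w for v, w in zip(m[r], m[i])]
    return [row[3] for row in m]

# Exponents of (M, L, t); columns are the repeating variables V_bar, D, rho
A = [
    [ 0, 0,  1],   # M
    [ 1, 1, -3],   # L
    [-1, 0,  0],   # t
]
dp = [1, -1, -2]   # {delta_p} = M L^-1 t^-2

# Require A @ [a, b, c] = -dp so that delta_p * V_bar**a * D**b * rho**c
# is dimensionless in every base dimension.
a, b, c = solve3(A, [-v for v in dp])
print(a, b, c)   # -2 0 -1, i.e. Pi_1 = delta_p / (rho * V_bar**2)
```

The resulting group Δp/(ρV̄²) is a standard dimensionless pressure coefficient, consistent with step 5's advice to rearrange into a known form.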
Let us now look at each of these steps in more detail by executing the complete procedure for the pipe flow shown earlier in Figure 9.2.

Step 1

We are told that the pressure drop Δp depends on the pipe length L, diameter D, wall roughness e, average velocity V̄, liquid density ρ, and viscosity μ. The dependent variable is Δp, so the desired functional relationship between the physical parameters is Eq. 9.1:

Δp = f(L, D, e, V̄, ρ, μ)

Counting all the physical parameters we see that k = 7.

Comments on Step 1: To simply list all the physical parameters that influence a flow problem is deceptively difficult; yet this step is critical and usually a challenge for an inexperienced engineer. We are required to list every fluid and flow property, geometric parameter, and external agent that exerts an influence on the phenomenon of interest. Constructing this list is guided by an understanding of the theoretical and practical
12240
https://www.riddles.com/2415
The Telephone Box
12241
https://math.stackexchange.com/questions/2787565/integral-with-singularity-on-the-real-axis-with-complex-integration
Integral with singularity on the real axis with complex integration - Mathematics Stack Exchange
Integral with singularity on the real axis with complex integration

Asked May 19, 2018. Modified May 19, 2018. Viewed 2k times. Score: 2.

I don't understand how to solve an integral like this with singularities on the real axis:
$$\int_{-\infty}^{\infty} \frac{1}{x^2-1}\,dx$$
From what I know, the integrand should vanish when we have large $|x|$, but what contour should I take? I saw a solution with the contour circling below the singularity at $-1$ and circling above the singularity at $+1$, but I didn't understand why we should take this kind of contour. Do the answers differ for different contours we take? I think I need a careful explanation of how to evaluate this integral because maybe I am missing something. Thank you for the attention.

Tags: integration, complex-analysis, improper-integrals, complex-integration

Asked by Slayer147; edited by Michael Hardy.

Comments:
- James S. Cook: You can put little half-loops which approach the singularities in the limit. There is a technology to deal with such cases.
- James S. Cook: For example, see Section 7.5 of supermath.info/GuideToGamelin.pdf. I think you would benefit from thinking about fractional residues.
- theyaoster: It looks like the integral diverges.
However, you can use residue calculus to compute its principal value.
- Slayer147: @BrianYao That is what I am asking; I don't understand what the right contour to take would be, whether the answer would be ambiguous, and how to proceed with this method.
- theyaoster: I see. I will post an answer later today.

2 Answers

Answer (score 2), by theyaoster:

Why this contour works: To give you an outline of why this contour was chosen in the first place, it will be easier to think about it by partitioning the entire contour $C$ into six continuous curves:

1. The upper semicircle $\Gamma$ of radius $R$ centered at the origin.
2. The interval of the real line $[-R, -1-\varepsilon]$.
3. The lower semicircle $\gamma_{-1}$ of radius $\varepsilon$ centered at $-1$.
4. The interval of the real line $[-1+\varepsilon, 1-\varepsilon]$.
5. The upper semicircle $\gamma_1$ of radius $\varepsilon$ centered at $1$.
6. The interval of the real line $[1+\varepsilon, R]$.

Note that these are listed in order when traversing around the contour counterclockwise. $C$ is just the union of these six pieces. A general outline of one approach to this problem consists of four main steps (letting $f(z) = 1/(z^2-1)$):

1. Use the Residue Theorem (or Cauchy's Theorem, if there are no singularities inside the contour) to obtain the value of $\int_C f(z)\,dz$.
2. Use the ML estimate (or other method) to show that $\int_\Gamma f(z)\,dz \to 0$ as $R \to \infty$. While $\Gamma$ does not need to be a semicircle for the math to work out, it is easier to reason about an upper bound on $f(z)$ over simple curves, and the semicircle is relatively simple to work with in many cases. Also, the length of $\Gamma$ is simply $\pi R$.
3. Use either the ML estimate or the Fractional Residue Theorem to evaluate the integrals over the small semicircles (in this case we have two: one over $\gamma_{-1}$, and one over $\gamma_1$).
4. Take the limit as $R \to \infty$ and $\varepsilon \to 0$, to obtain
$$\mathrm{PV}\int_{-\infty}^{\infty} f(z)\,dz = \lim_{R\to\infty,\,\varepsilon\to 0}\left(\int_{-R}^{-1-\varepsilon} f(z)\,dz + \int_{-1+\varepsilon}^{1-\varepsilon} f(z)\,dz + \int_{1+\varepsilon}^{R} f(z)\,dz\right) = \lim_{R\to\infty,\,\varepsilon\to 0}\left(\int_C f(z)\,dz - \int_\Gamma f(z)\,dz - \int_{\gamma_{-1}} f(z)\,dz - \int_{\gamma_1} f(z)\,dz\right).$$

The quantity you are trying to compute is on the left side of this equation, and from the steps above, you can compute each of the limits on the right side. Thus, this method gives you the answer you want. The final equation comes from the fact that the integral of $f(z)$ over $C$ is equal to the sum of the integrals of $f(z)$ over each of the six parts.

Other contours: The contour you mentioned is not the only contour applicable. For instance, you could have two upper semicircles around the singularities instead of one lower semicircle and one upper semicircle; call this contour $C'$. This will get you the same answer, for two subtle reasons:

1. The singularity $-1$ is now not on the interior of the contour, so the value of the integral over $C'$ differs from that of the integral over $C$.
2. When traversing $C'$ counterclockwise, we now traverse the semicircle around $-1$ in the clockwise direction, whereas in $C$, we would traverse the lower semicircle in a counterclockwise direction. This affects the value of the fractional residue, since a clockwise direction corresponds to a negative angle.

As mentioned earlier, the choice of $\Gamma$ being a semicircle makes the argument in the second step simpler. Additionally, the choice of $\gamma_{-1}$ and $\gamma_1$ being semicircles allows us to apply the Fractional Residue Theorem. Finally, we use a half disk whose boundary contains part of the real line, since the original integral we are trying to evaluate is over the reals, from $-\infty$ to $\infty$. Let me know if there is anything you are still unclear about.
Answered May 19, 2018 by theyaoster.

Comments:
- Slayer147: With the choice of $\gamma_{-1}$ and $\gamma_1$ given in the beginning of your answer I get $\int_{\gamma_{-1}} f(z)\,dz = \int_{\gamma_1} f(z)\,dz = -\frac{i\pi}{2}$, $\int_C f(z)\,dz = -i\pi$ and $\int_\Gamma f(z)\,dz = 0$, so that $\mathrm{PV}\int_{-\infty}^{\infty} f(z)\,dz = 2i\pi$. Is that correct?
- theyaoster: Almost! Note that on the right side of the equation in step 4, there are negative signs in front of some of the integrals.
- Slayer147: Oh yeah, $\mathrm{PV}\int_{-\infty}^{\infty} f(z)\,dz = 0$.
- theyaoster: Yep, you got it. :)

Answer (score 1), by tien lee:

$$\int_{-\infty}^{\infty} \frac{1}{x^2-1}\,dx$$
This integral has undefined points within the boundaries, at $-1$ and $1$. Note that if there exists $b$ with $a < b < c$ and $f(b)$ undefined, then
$$\int_a^c f(x)\,dx = \int_a^b f(x)\,dx + \int_b^c f(x)\,dx$$
So,
$$\int_{-\infty}^{\infty} \frac{dx}{x^2-1} = \int_{-\infty}^{-1} \frac{dx}{x^2-1} + \int_{-1}^{1} \frac{dx}{x^2-1} + \int_{1}^{\infty} \frac{dx}{x^2-1}$$
with antiderivative
$$\int \frac{dx}{x^2-1} = -\frac{1}{2}\ln|x+1| + \frac{1}{2}\ln|x-1| + C$$
But when we compute the boundaries, each of the three pieces diverges:
$$\int_{-\infty}^{-1} \frac{dx}{x^2-1},\qquad \int_{-1}^{1} \frac{dx}{x^2-1},\qquad \int_{1}^{\infty} \frac{dx}{x^2-1}$$
So $\int_{-\infty}^{\infty} \frac{dx}{x^2-1}$ diverges.

(Edited and answered May 19, 2018 by tien lee.)

Comment by Slayer147: With complex variable integration this integral doesn't diverge, though your answer is right if we talk only about real variable integration methods.
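The principal-value bookkeeping above can be checked numerically without any contour at all: using the real antiderivative $F(x) = \frac12\ln\left|\frac{x-1}{x+1}\right|$ from the second answer, summing the three $\varepsilon$-excluded pieces shows the symmetric limit is $0$. A sketch (the choices of `R` and `eps` are illustrative):

```python
import math

def F(x):
    # Antiderivative of 1/(x**2 - 1): (1/2) * ln|(x - 1)/(x + 1)|
    return 0.5 * math.log(abs((x - 1) / (x + 1)))

def pv(R, eps):
    # Integral over [-R, -1-eps] + [-1+eps, 1-eps] + [1+eps, R],
    # i.e. the symmetric epsilon-excluded approximation to the PV.
    return ((F(-1 - eps) - F(-R))
            + (F(1 - eps) - F(-1 + eps))
            + (F(R) - F(1 + eps)))

print(pv(1e6, 1e-9))  # close to 0; exactly 0 in the limit R -> inf, eps -> 0
```

Analytically the sum collapses to $\ln\frac{2+\varepsilon}{2-\varepsilon} + \ln\frac{R-1}{R+1}$, so both terms vanish in the limit, confirming the value $0$ reached in the comments.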
12242
https://puzzling.stackexchange.com/questions/109318/deriving-a-3x3-grid-from-another-one
mathematics - Deriving a 3x3 grid from another one - Puzzling Stack Exchange
Deriving a 3x3 grid from another one

Asked Apr 8, 2021. Viewed 1k times. Score: 10.

A $3\times 3$ grid $G$ is filled with every number from $1$ to $9$. Now a new $3\times 3$ grid $H$ is formed, such that $H_{ij}$ is the number of neighbors of $G_{ij}$ that are greater than $G_{ij}$. Two cells are neighbors if they are adjacent horizontally, vertically or diagonally. Given the grid $H$ shown below, can you derive the contents of grid $G$?

Tags: mathematics, logical-deduction, combinatorics, no-computers, grid-deduction

Asked by Dmitry Kamenetsky; edited by bobble.

4 Answers

Answer (score 6), by Tesla Daybreak:

Solution:

    5 6 7
    4 8 2
    9 3 1

Deduction process: Basically, $9 \to 8 \to 1 \to 2 \to 3 \to 4 \to 7 \to 5, 6$. First, notice $9$ must correspond to a $0$ in $H$, so it's in the $(3,1)$ position. Now every other number in $H$ is at least $1$, so $8$ must be adjacent to $9$ and correspond to a $1$ in $H$, hence at $(2,2)$. A similar reasoning puts $1, 2$ at $(3,3), (2,3)$ respectively. Now $3$ must be at $(3,2)$, otherwise it would have at most one adjacent cell smaller than it, which is not true for all other ones. Continuing in this manner, $4$ must be at $(2,1)$, and then $7, 5, 6$ follow. (At this point one can even brute-force it.) This is probably not the fastest way though.

Comment by Tesla Daybreak: Yeah, I was trying to edit in some logic. The difficulty is that at some point, the problem becomes simple enough that I would try to brute-force it and abandon logic...
edited Apr 8, 2021 at 3:54; answered Apr 8, 2021 at 3:44 by Tesla Daybreak

Comment (Tesla Daybreak, Apr 8, 2021 at 3:58): Yeah, I was trying to edit in some logic. The difficulty is that at some point the problem becomes simple enough that I would try to brute-force it and abandon logic...

Answer (score 7, by Cyrille Corpet):

I have the same final answer as the others, but using only one repeated method, namely: if there is only one 0, it must be the greatest remaining number, where we keep track, for each undiscovered cell, of the number of undiscovered neighbours that are greater. This gives the following first step (x marks an undiscovered cell):

G = [x x x; x x x; 9 x x], and the new H = [2 2 1; 3 0 4; x 2 3]

We still have only one 0, so it must be the maximum, i.e. 8, and we repeat this down to 1:

G = [x x x; x 8 x; 9 x x], H = [1 1 0; 2 x 3; x 1 2]
G = [x x 7; x 8 x; 9 x x], H = [1 0 x; 2 x 2; x 1 2]
G = [x 6 7; x 8 x; 9 x x], H = [0 x x; 1 x 1; x 1 2]
G = [5 6 7; x 8 x; 9 x x], H = [x x x; 0 x 1; x 1 2]
G = [5 6 7; 4 8 x; 9 x x], H = [x x x; x x 1; x 0 2]
G = [5 6 7; 4 8 x; 9 3 x], H = [x x x; x x 0; x x 1]
G = [5 6 7; 4 8 2; 9 3 x], H = [x x x; x x x; x x 0]
G = [5 6 7; 4 8 2; 9 3 1], H = [x x x; x x x; x x x]

answered Apr 8, 2021
at 15:36 by Cyrille Corpet

Comments:
- Dmitry Kamenetsky (Apr 8, 2021 at 18:46): This is a very nice and general method.
- Rad80 (Apr 8, 2021 at 19:33): I ended up using this method, but didn't formalize it so well. It's not general, though, because you could have multiple zeros in H.
- Cyrille Corpet (Apr 8, 2021 at 19:51): @Rad80 Yes, I noticed that, but it worked for this one, so I didn't bother making it stronger.
- Dmitry Kamenetsky (Apr 9, 2021 at 1:19): @Rad80 It turns out that grids H that have a unique solution G only have a single 0. Furthermore, there is always a single path connecting 1 to 2 to 3 ... all the way to N^2. So it seems that this method will work for any such puzzle, no matter how large.
- Rad80 (Apr 9, 2021 at 8:19): That's really nice! Can you share the idea of the proof?

Answer (score 5, by bobble):

Step 1: Wherever 9 goes in G, none of its neighbors will be larger. Therefore H in this spot will be 0, so the 9 in G is in R3C1. The middle cell is adjacent to all the others. Since its H is 1, only one other cell can be greater than it. Therefore its G is 8.

Step 2: Wherever 1 goes in G, it is smaller than all of its surroundings, so its H number would be the maximum possible. This is only possible in R3C3. Then wherever 2 goes, its H would be the maximum possible (considering the fact that 2 > 1), so it goes in R2C3. Similar logic places the 3 in R3C2.

Step 3: The same logic places the 4 and the 5.

Step 4/solution: Our H tells us that R1C3 can only be smaller than one neighbor. Therefore it can't have a 6 in G, because then the 7 would go in R1C2 and R1C3 would be smaller than the neighboring 7 and 8.
Therefore R1C3 is 7, R1C2 is 6, and the puzzle is complete.

edited Apr 8, 2021 at 3:54; answered Apr 8, 2021 at 3:44 by bobble

Comments:
- Tesla Daybreak (Apr 8, 2021 at 3:45): I'm sure you got this earlier than I did :) I was too lazy to draw pictures.
- Dmitry Kamenetsky (Apr 8, 2021 at 3:49): Actually H can have a 0 without it being a 9 in G. You just need to be surrounded by everything smaller than you, i.e. a local maximum, but not necessarily the global maximum. So you need to refine step 1.
- bobble (Apr 8, 2021 at 3:50): My point was that the 9 can't go anywhere but that corner, because wherever it goes must be a 0.
- Dmitry Kamenetsky (Apr 8, 2021 at 3:52): Oh I see. Yes, that's true. It wouldn't work if there were multiple zeroes in H.
- Dmitry Kamenetsky (Apr 8, 2021 at 3:58): Interestingly, all such 3x3 puzzles that have a unique solution (784 of them) only have a single 0.

Answer (score 1, by Kevin):

I came up with the same solution as the other answers, but I did it a different way.

The solution:

5 6 7
4 8 2
9 3 1

Steps: First notice that the center square touches all other squares. Since it's a 1, that means that there is only 1 number greater than it in the entire puzzle, so it must be an 8. Then, notice the 0 in the corner. Since it's adjacent to the 8, and no adjacent values are greater than it, it must be the 9. From this point, we can follow a path of squares whose greater neighbors have already been revealed.
Since we have filled in the highest numbers already, each of these squares must have the next-highest value.

- The 1 in the top-right corner is adjacent to the 8, so this must be a 7.
- The 2 in the top-middle is adjacent to the 7 and 8, so this must be a 6.
- The 2 in the top-left is adjacent to the 6 and 8, so this must be a 5.
- The 4 in the middle-left is adjacent to the 5, 6, 8, and 9, so it must be a 4.
- The 3 in the bottom-middle is adjacent to the 4, 8, and 9, so it must be a 3.
- The 4 in the middle-right is adjacent to the 3, 6, 7, and 8, so it must be a 2.
- And finally, the 3 in the bottom-right is adjacent to the 2, 3, and 8 (and is also the only remaining square), so it must be a 1.

Edit: I realized that this method is not guaranteed to work. For example, the 1 adjacent to the 8 could have been a 6 instead of a 7, as long as there wasn't a 7 adjacent to it. I'll leave this answer up as it's interesting that it still worked.

edited Apr 8, 2021 at 16:54; answered Apr 8, 2021 at 14:11 by Kevin
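The "unique zero" method from Cyrille Corpet's answer mechanizes nicely: repeatedly find the single undiscovered cell whose count of greater undiscovered neighbors is zero, assign it the largest remaining value, and decrement the counts of its undiscovered neighbors. A sketch (function and variable names are mine; the H grid is the one reconstructed from the answers):

```python
def solve(H):
    """Recover G from H by repeatedly placing the largest remaining
    value on the unique cell whose remaining count is zero."""
    n = len(H)
    remaining = [row[:] for row in H]   # greater-undiscovered-neighbor counts
    G = [[None] * n for _ in range(n)]
    for value in range(n * n, 0, -1):
        zeros = [(i, j) for i in range(n) for j in range(n)
                 if G[i][j] is None and remaining[i][j] == 0]
        if len(zeros) != 1:
            raise ValueError("method requires a unique zero at every step")
        i, j = zeros[0]
        G[i][j] = value
        # 'value' is now discovered: every undiscovered neighbor has one
        # fewer greater undiscovered neighbor.
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if (di, dj) != (0, 0) and 0 <= ni < n and 0 <= nj < n \
                        and G[ni][nj] is None:
                    remaining[ni][nj] -= 1
    return G

H = [[2, 2, 1],
     [4, 1, 4],
     [0, 3, 3]]
print(solve(H))   # → [[5, 6, 7], [4, 8, 2], [9, 3, 1]]
```

As noted in the comments, this only works when a unique zero exists at every step, which happens to hold for every 3x3 instance of this puzzle with a unique solution.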
12243
https://www.scirp.org/journal/paperinformation?paperid=70175
Forward (Δ) and Backward (∇) Difference Operators Basic Sets of Polynomials and Their Effectiveness in Reinhardt and Hyperelliptic Domains

Journal of Applied Mathematics and Physics, Vol. 4 No. 8, August 2016

Saheed Abayomi Akinbode, Aderibigbe Sheudeen Anjorin
Lagos State University, Ojo (LASU), Apapa, Nigeria

DOI: 10.4236/jamp.2016.48173

Abstract

We generate, from a given basic set of polynomials in several complex variables, new basic sets of polynomials generated by the application of the Δ and ∇ operators to the set. All relevant properties relating to the effectiveness in Reinhardt and hyperelliptic domains of these new sets are properly deduced. The case of classical orthogonal polynomials is investigated in detail and the results are given in a table. Notations are provided at the end of the paper.

Keywords: Effectiveness, Cannon Condition, Cannon Sum, Cannon Function, Reinhardt Domain, Hyperelliptic Domain

Cite as: Akinbode, S. and Anjorin, A. (2016) Forward (Δ) and Backward (∇) Difference Operators Basic Sets of Polynomials and Their Effectiveness in Reinhardt and Hyperelliptic Domains. Journal of Applied Mathematics and Physics, 4, 1630-1642. doi: 10.4236/jamp.2016.48173.

Received 7 July 2016; accepted 26 August 2016; published 29 August 2016

1. Introduction

Recently, there has been an upsurge of interest in investigations of basic sets of polynomials. The inspiration has been the need to understand the common properties satisfied by these polynomials, which are crucial to gaining insight into the theory of polynomials.
For instance, in numerical analysis, knowledge of basic sets of polynomials gives information about the region of convergence of the series of these polynomials in a given domain. Namely, for a particular differential equation admitting a polynomial solution, one can deduce the range of convergence of the polynomial set. This is an advantage in numerical analysis which can be exploited to reduce the computational time. Besides, if the basic set of polynomials satisfies the Cannon condition, then their fast convergence is guaranteed. The problem of derived and integrated sets of basic sets of polynomials in several variables has recently been treated by A. El-Sayed Ahmed and Kishka. In their work, complex variables in complete Reinhardt domains and hyperelliptical regions were considered for effectiveness of the basic set. Also, the problem of effectiveness of the difference sets of one and several variables in the disc D(R) and in the polydisc has recently been treated by A. Anjorin and M.N. Hounkonnou. In this paper, we investigate the effectiveness, in Reinhardt and hyperelliptic domains, of the sets of polynomials generated by the forward (Δ) and backward (∇) difference operators acting on basic sets. These operators are very important as they underlie the discrete schemes used in numerical analysis. Furthermore, their compositions form most of the second-order difference equations of mathematical physics, the solutions of which are orthogonal polynomials. Let us first examine some basic definitions and properties of basic sets, useful in the sequel. Definition 1.1 Let be an element of the space of several complex variables. The hyperelliptic region of radii is denoted by and its closure by, where Definition 1.2 An open complete Reinhardt domain of radii is denoted by and its closure by, where The unspecified domains and are considered for both the Reinhardt and hyperelliptic domains. These domains are of radii.
Making a contraction of this domain, we get the domain, where stands for the right-limits. Thus, the function of the complex variables, which is regular in the domain, can be represented by the power series (1), where represents the multi-indices of non-negative integers for the function F(z). We have (2), where is the radius of the considered domain. Then for hyperelliptic domains, t being the radius of convergence in the domain, assuming and, whenever. Since, we have (1), where also, using the above function of the complex variables, which is regular in the domain and can be represented by the power series (1), we obtain (3). Hence, we have for the series. Since can be taken arbitrarily near to, we conclude that, with and. Definition 1.3 A set of polynomials is said to be basic when every polynomial in the complex variables can be uniquely expressed as a finite linear combination of the elements of the basic set. Thus, the set will be basic if and only if there exists a unique row-finite matrix such that, where is the matrix of coefficients of the set, are multi-indices of non-negative integers, is the matrix of operators deduced from the associated set, and is the infinite unit matrix of the basic set, the inverse of which is. We have (4). Thus, for the function given in (1), we get the series, which is the associated basic series of F(z). Let be the number of non-zero coefficients in the representation (4). Definition 1.4 A basic set satisfying the condition (5) is called a Cannon basic set. If not, the set is called a general basic set. Now, let be the degree of the polynomial of highest degree in the representation (4). Since the elements of a basic set are linearly independent, where is a constant, the condition (5) for a basic set to be a Cannon set implies the following condition (6). For any function of several complex variables there is formally an associated basic series.
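Definition 1.3 can be made concrete in one variable with a small numerical sketch. The basic set p_k(x) = (x+1)^k below is my own illustrative choice, not an example from the paper: its coefficient matrix P in the monomial basis is lower triangular, and its inverse supplies the unique representation of every monomial in the set, mirroring the unique-representation property of a basic set.

```python
from math import comb

n = 4
# Basic set p_k(x) = (x + 1)^k, k = 0..3: row k of P holds the monomial
# coefficients of p_k, namely the binomial coefficients C(k, j).
P = [[comb(k, j) for j in range(n)] for k in range(n)]

# Its inverse is the signed binomial matrix:
#   x^k = sum_j (-1)^(k - j) C(k, j) p_j(x)
Pbar = [[(-1) ** (k - j) * comb(k, j) if j <= k else 0 for j in range(n)]
        for k in range(n)]

# Unique-representation property: P @ Pbar is the identity matrix.
prod = [[sum(P[i][k] * Pbar[k][j] for k in range(n)) for j in range(n)]
        for i in range(n)]
identity = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
print(prod == identity)   # → True
print(Pbar[2])            # → [1, -2, 1, 0], i.e. x^2 = p_0 - 2*p_1 + p_2
```

The check for row 2 can be done by hand: p_0 - 2 p_1 + p_2 = 1 - 2(x+1) + (x+1)^2 = x^2.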
When the associated basic series converges uniformly to the function in some domain, in the classical terminology of Whittaker, the basic sets of polynomials are classified according to the classes of functions represented by their associated basic series and also to the domain in which they are represented. To study the convergence property of such basic sets of polynomials in complete Reinhardt domains and in hyperelliptic regions, we consider the following notations for the Cannon sum: (7) for Reinhardt domains and (8) for hyperelliptic regions.

2. Basic Sets of Polynomials Generated by the ∇ and Δ Operators

Now, we define the forward difference operator Δ acting on the monomial such that, where E is the shift operator and I the identity operator, so that Δ = E − I. Considering the monomial, we then have, where, by definition. Similarly, we define the backward difference operator ∇ acting on the monomial such that (9). Equivalently, in terms of the lag operator L, we get ∇ = I − L. Remark that the advantage which comes from defining polynomials in the lag operator stems from the fact that they are isomorphic to the set of ordinary algebraic polynomials. Thus, we can rely upon what we know about ordinary polynomials to treat problems concerning lag-operator polynomials. So, (10). The Cannon functions for the basic sets of polynomials in the complete Reinhardt domain and in hyperelliptic regions are defined accordingly. Concerning the effectiveness of the basic set in a complete Reinhardt domain, we have the following results: Theorem 2.1 A necessary and sufficient condition for a Cannon set to be effective in the domain is that; effective in the closed domain is that. Theorem 2.2 The necessary and sufficient condition for the Cannon basic set of polynomials of several complex variables to be effective in the closed hyperelliptic region is that, where. The Cannon basic set of polynomials of several complex variables will be effective in the open region if and only if.
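As a quick one-variable sanity check of the operators just defined (a sketch in my own notation, not the paper's): Δf(x) = f(x+1) − f(x) = (E − I)f(x) and ∇f(x) = f(x) − f(x−1) = (I − E⁻¹)f(x), and Δ applied to a degree-n monomial yields a degree-(n−1) polynomial, which is why the Δ- and ∇-images of a polynomial set are again polynomial sets.

```python
def forward_diff(f):
    """Δf(x) = f(x + 1) - f(x) = (E - I)f(x)."""
    return lambda x: f(x + 1) - f(x)

def backward_diff(f):
    """∇f(x) = f(x) - f(x - 1) = (I - E^{-1})f(x)."""
    return lambda x: f(x) - f(x - 1)

cube = lambda x: x ** 3            # the monomial x^3

# Δ x^3 = 3x^2 + 3x + 1, a polynomial of degree 2:
print([forward_diff(cube)(x) for x in range(4)])      # → [1, 7, 19, 37]
# ∇ is Δ shifted by one step: ∇f(x) = Δf(x - 1)
print([backward_diff(cube)(x) for x in range(1, 5)])  # → [1, 7, 19, 37]
```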
We also get, for a given polynomial set and considering the monomial, the following statement.

Theorem 2.3 The sets of polynomials and are basic.

Proof: To prove the first part of this theorem, it is sufficient to show that the initial sets of polynomials, from which the new sets are generated, are linearly independent. Suppose there exists a linear relation of the form (11) for at least one i. Then it would follow that the set is not linearly independent, and hence not basic. Consequently (11) is impossible. Since the elements are polynomials, each of them can be represented in the required form, and the representation of any given polynomial is unique. So the set is a basic set. Changing Δ to ∇ leads to the same conclusion.

We obtain the following result. Theorem 2.4 The Cannon set of polynomials in several complex variables is effective in the closed complete Reinhardt domain and in the closed Reinhardt region. Proof: In a complete Reinhardt domain, for the forward difference operator Δ, the Cannon sum of the monomial is given by, where is a constant. It follows that the Cannon function satisfies the required bound. Similarly, for the backward difference operator ∇, the Cannon sum is bounded, since the Reinhardt domain is complete, and the result follows.

Theorem 2.5 If the Cannon basic set of polynomials of several complex variables, for which condition (5) is satisfied, is effective in the region, then the Δ-set and ∇-set of polynomials associated with it will be effective in the region. Proof: The Cannon sum of the forward difference operator Δ of the set has the stated form, where is a constant. By an argument similar to the case of the Reinhardt domain we obtain the result, since the Cannon function is non-negative. The same holds for the backward difference operator ∇.

3.
Examples

Let us illustrate the effectiveness in Reinhardt and hyperelliptic domains, taking some examples. First, suppose that the set of polynomials is given as above. Now consider the new polynomial obtained from the polynomial defined above. Hence, by Theorem 2.4, where is a constant, the Cannon function yields the stated condition. Similarly for the operator ∇.

Table 1. Region of effectiveness: (1) disc; (2) hyperelliptic; (3) Reinhardt domain, for the monomials and the Chebyshev (first kind), Chebyshev (second kind) and Hermite polynomials.

Implication: The new sets are nowhere effective since the parent sets are nowhere effective. By replacing the radii in the Reinhardt domain by those of the hyperelliptic domain, we obtain the same condition of effectiveness as in the Reinhardt domain for both operators Δ and ∇ in the hyperelliptic domain. The notations (12)-(15) below are relevant to the table. Finally, for the classical orthogonal polynomials, the explicit results of computation are given in Table 1. Thus, in this paper, we have provided new sets of polynomials, generated by the ∇ and Δ operators, which satisfy all properties of basic sets related to their effectiveness in specified regions such as hyperelliptic and Reinhardt domains. Namely, the new basic sets are effective in the complete Reinhardt domain as well as in the closed Reinhardt domain. Furthermore, we have proved that if the Cannon basic set is effective in the hyperelliptic domain, then the new set is also effective in the hyperelliptic domain.

Appendix: Key Notations

1) = Cannon sum of the new Δ-set in the Reinhardt domain.
2) = Cannon sum of the new ∇-set in the Reinhardt domain.
3) = Cannon sum of the new Δ-set in the hyperelliptic domain.
4) = Cannon sum of the new ∇-set in the hyperelliptic domain.
5) = Cannon function of the new Δ-set in the Reinhardt domain.
6) = Cannon function of the new ∇-set in the Reinhardt domain.
7) = Cannon function of the new Δ-set in the hyperelliptic domain.
8) = Cannon function of the new ∇-set in the hyperelliptic domain.
9), 10), 11) where is a constant; is a coefficient corresponding to the polynomial set, and is a coefficient corresponding to the polynomial set. We should note that or.

Conflicts of Interest

The authors declare no conflicts of interest.

References

El-Sayed Ahmed, A. and Kishka, Z.M.G. (2003) On the Effectiveness of Basic Sets of Polynomials of Several Complex Variables in Elliptical Regions. Proceedings of the Third International ISAAC Congress, 1, 265-278.
El-Sayed Ahmed, A. (2013) On the Convergence of Certain Basic Sets of Polynomials. Journal of Mathematics and Computer Science, 3, 1211-1223.
El-Sayed Ahmed, A. (2006) Extended Results of the Hadamard Product of Simple Sets of Polynomials in Hypersphere. Annales Societatis Mathematicae Polonae, 2, 201-213.
Adepoju, J.A. and Nassif, M. (1983) Effectiveness of Transposed Inverse Sets in Faber Regions. International Journal of Mathematics and Mathematical Sciences, 6, 285-295.
Whittaker, J.M. (1949) Sur les séries de bases de polynômes quelconques. Quarterly Journal of Mathematics (Oxford), 224-239.
Sayed, K.A.M. (1975) Basic Sets of Polynomials of Two Complex Variables and Their Convergence Properties. PhD Thesis, Assiut University, Assiut.
Sayed, K.A.M. and Metwally, M.S. (1998) Effectiveness of Similar Sets of Polynomials of Two Complex Variables in Polycylinders and in Faber Regions.
International Journal of Mathematics and Mathematical Sciences, 21, 587-593.
Abdul-Ez, M.A. and Constales, D. (1990) Basic Sets of Polynomials in Clifford Analysis. Journal of Complex Variables and Applications, 14, 177-185.
Abdul-Ez, M.A. and Sayed, K.A.M. (1990) On Integral Operators Sets of Polynomials of Two Complex Variables. Bull. Belg. Math. Soc. Simon Stevin, 64, 157-167.
Abdul-Ez, M.A. (1996) On the Representation of Clifford Valued Functions by the Product System of Polynomials. Journal of Complex Variables and Applications, 29, 97-105.
Abul-Dahab, M.A., Saleem, A.M. and Kishka, Z.M. (2015) Effectiveness of Hadamard Product of Basic Sets of Polynomials of Several Complex Variables in Hyperelliptical Regions. Electronic Journal of Mathematical Analysis and Applications, 3, 52-65.
Mursi, M. and Makar, B.H. (1955) Basic Sets of Polynomials of Several Complex Variables I. The Second Arab Sci. Congress, Cairo, 51-60.
Mursi, M. and Makar, B.H. (1955) Basic Sets of Polynomials of Several Complex Variables II. The Second Arab Sci. Congress, Cairo, 61-68.
Nassif, M. (1971) Composite Sets of Polynomials of Several Complex Variables. Publicationes Mathematicae Debrecen, 18, 43-53.
Nassif, M. and Adepoju, J.A. (1984) Effectiveness of the Product of Simple Sets of Polynomials of Two Complex Variables in Polycylinders and in Faber Regions. Journal of Natural Sciences and Mathematics, 24, 153-171.
Metwally, M.S. (1999) Derivative Operator on a Hypergeometric Function of the Complex Variables. Southwest Journal of Pure and Applied Mathematics, 2, 42-46.
Mikhail, N.N. and Nassif, M. (1954) On the Differences and Sum of Simple Sets of Polynomials. Assiut University Bulletin of Science and Technology. (In Press)
Makar, R.H. (1954) On Derived and Integral Basic Sets of Polynomials. Proceedings of the American Mathematical Society, 5, 218-225.
Makar, R.H. and Wakid, O.M. (1977) On the Order and Type of Basic Sets of Polynomials Associated with Functions of Trices.
Periodica Mathematica Hungarica, 8, 3-7.
Boas, R.P. and Buck, R.C. (1986) Polynomial Expansions of Analytic Functions II. In: Ergebnisse der Mathematik und ihrer Grenzgebiete, Neue Folge, Vol. 19 of Moderne Funktionentheorie, Springer-Verlag, Berlin.
Kumuyi, W.F. and Nassif, M. (1986) Derived and Integrated Sets of Simple Sets of Polynomials in Two Complex Variables. Journal of Approximation Theory, 47, 270-283.
Newns, W.F. (1953) On the Representation of Analytic Functions by Infinite Series. Philosophical Transactions of the Royal Society A, 245, 439-468.
Kishka, Z.M.G. (1993) Power Set of Simple Set of Polynomials of Two Complex Variables. Bulletin de la Société Royale des Sciences de Liège, 62, 361-372.
Kishka, Z.M.G. and El-Sayed Ahmed, A. (2003) On the Order and Type of Basic and Composite Sets of Polynomials in Complete Reinhardt Domains. Periodica Mathematica Hungarica, 46, 67-79.
Hounkonnou, M.N., Hounga, C. and Ronveaux, A. (2000) Discrete Semi-Classical Orthogonal Polynomials: Generalized Charlier. Journal of Computational and Applied Mathematics, 114, 361-366.
Lorente, M. (2001) Raising and Lowering Operators, Factorization and Differential/Difference Operators of Hypergeometric Type. Journal of Physics A: Mathematical and General, 34, 569-588.
Anjorin, A. and Hounkonnou, M.N. (2006) Effectiveness of the Difference Sets of One and Several Variables in Disc D(R) and Polydisc. (ICMPA) Preprint 07.
This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.
12244
https://arxiv.org/abs/2401.03232
arXiv:2401.03232 (math) [Submitted on 6 Jan 2024]
Title: Generalization of the Apollonius theorem for simplices and related problems
Authors: Michael N. Vrahatis
Abstract: The Apollonius theorem gives the length of a median of a triangle in terms of the lengths of its sides. A straightforward generalization of this theorem to m-simplices in n-dimensional Euclidean space, for n greater than or equal to m, is given. Based on this, generalizations of properties related to the medians of a triangle are presented. In addition, applications of the generalized Apollonius theorem and of the results related to the medians are given for obtaining: (a) the minimal spherical surface that encloses a given simplex or a given bounded set, (b) the thickness of a simplex, which provides a measure of how well shaped a simplex is, and (c) the convergence and error estimates of the root-finding bisection method applied on simplices.
Subjects: Geometric Topology (math.GT); Computational Geometry (cs.CG); Numerical Analysis (math.NA)
MSC classes: Primary 51M05, 52A40. Secondary 52A35, 53A07, 51M20, 51M25, 52A37
Cite as: arXiv:2401.03232 [math.GT] (or arXiv:2401.03232v1 [math.GT] for this version)
Submission history: From: Michael N. Vrahatis [v1] Sat, 6 Jan 2024 14:56:58 UTC (26 KB)
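The classical triangle case that the abstract starts from is easy to check numerically. The following is an illustrative sketch of the plain Apollonius theorem (not the paper's m-simplex generalization), with the function name my own:

```python
import math

def median_length(a, b, c):
    """Apollonius' theorem: the median drawn to side a of a triangle
    with side lengths a, b, c has length m_a = sqrt(2b^2 + 2c^2 - a^2)/2."""
    return 0.5 * math.sqrt(2 * b * b + 2 * c * c - a * a)
```

For the 3-4-5 right triangle, the median to the hypotenuse comes out as half the hypotenuse, 2.5, consistent with the classical property of right triangles.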
12245
https://www.youtube.com/watch?v=-iOSijcvKpA
Froude number and open channel flow Hubert Chanson 1390 subscribers 9 likes Description 2250 views Posted: 6 Mar 2023 The Froude number is a dimensionless number defined as the ratio of a characteristic velocity scale V to the square root of the gravity acceleration g times a characteristic length scale L. The dimensionless number is commonly used in open channel flows. In open channel flows, the Froude number may be derived from different fundamental principles:
- Dimensional considerations and the application of the Pi-Buckingham theorem to free-surface flows, because the gravity effects are important.
- Energy considerations and the backwater equation.
- Momentum considerations, e.g. the Bélanger equation for a hydraulic jump.
- The dimensionless specific energy and critical flow conditions at minimum specific energy.
The concepts of Froude number, critical flow conditions, hydraulic jump and backwater equation are essential to the understanding of open channel hydraulics. These are discussed in the relevant YouTube video movies in the same playlist on the Hubert Chanson YouTube channel:
- Applied Hydrodynamics
- Fundamentals of open channel hydraulics [Playlist]
- Advanced hydraulics of open channel flow [Playlist]
- Critical flow conditions in open channels
- Subcritical and supercritical flow in open channel
- Celerity of small wave and its propagation in open channel
- On the analogy between Mach and Froude numbers
- Physical modelling in hydraulic engineering (4) Froude similitude
References: BAKHMETEFF, B.A. (1912). "O Neravnomernom Dwijenii Jidkosti v Otkrytom Rusle." St Petersburg, Russia (in Russian). BELANGER, J.B. (1828). "Essai sur la Solution Numérique de quelques Problèmes Relatifs au Mouvement Permanent des Eaux Courantes." Carilian-Goeury, Paris, France, 38 pages & 5 tables (in French). BÉLANGER, J.B. (1841).
"Notes sur l'Hydraulique." ('Notes on Hydraulic Engineering.') Ecole Royale des Ponts et Chaussées, Paris, France, session 1841-1842, 223 pages (in French). CHANSON, H. (2004). "The Hydraulics of Open Channel Flow: An Introduction." Butterworth-Heinemann, 2nd edition, Oxford, UK, 630 pages (ISBN 978 0 7506 5978 9). CHANSON, H. (2009). "Development of the Bélanger Equation and Backwater Equation by Jean-Baptiste Bélanger (1828)." Journal of Hydraulic Engineering, ASCE, Vol. 135, No. 3, pp. 159-163 (DOI: 10.1061/(ASCE)0733-9429(2009)135:3(159)). CHANSON, H. (2012). "Momentum Considerations in Hydraulic Jumps and Bores." Journal of Irrigation and Drainage Engineering, ASCE, Vol. 138, No. 4, pp. 382-385 (DOI: 10.1061/(ASCE)IR.1943-4774.0000409). CHANSON, H. (2014). "Applied Hydrodynamics: An Introduction." CRC Press, Taylor & Francis Group, Leiden, The Netherlands, 448 pages & 21 video movies (ISBN 978-1-138-00093-3).

Transcript: Welcome to this talk on the Froude number and its application to open channel flow. The Froude number is a dimensionless number defined as the ratio of a characteristic velocity scale V to the square root of the gravity acceleration g times a characteristic length scale L. It is a dimensionless number commonly used in open channel flows. This photograph presents the free-surface flow of a river in France. In open channel flow, the Froude number may be derived from different fundamental principles: dimensional considerations, energy considerations and the backwater equation, momentum considerations, and the dimensionless specific energy and critical flow conditions. Let us review these developments.

Physical hydraulic modelling is a design technique used by engineers to optimise a structural design. It implies that the relevant dimensionless numbers are the same in physical model and prototype. In free-surface flows, gravity effects are always important, so Froude number modelling is applied, with the Froude number being the same in model and prototype. When the gravity acceleration is constant, the Froude similitude implies that the velocity scaling ratio is equal to the square root of the geometric (length) scaling ratio. This photograph illustrates a 1 in 13 scale model of a dam in France.

In gradually varied open channel flow, the longitudinal free-surface profile may be predicted using the differential form of the energy equation, called the backwater equation, first proposed in 1828. Its expression is shown here, where Sf is the friction slope, i.e. the slope of the total head line. After transformation, another form of the backwater equation is obtained in which the Froude number appears, the Froude number here being defined as the ratio of the velocity to the square root of g times the ratio of the cross-section area to the free-surface width.

In open channel flow, the transition from a supercritical to a subcritical flow is called a hydraulic jump. For a smooth horizontal rectangular channel of constant width, the application of the equations of conservation of mass and of momentum between the two sections 1 and 2 yields the classical result called the Bélanger equation: a first equation giving the ratio of the conjugate depths, and a second equation giving the ratio of the conjugate Froude numbers. In both we see the importance of the upstream Froude number Fr1. This photograph illustrates a hydraulic jump experiment, observed at a relatively large Reynolds number, with the flow direction from right to left.

A further consideration is the derivation of critical flow conditions at minimum specific energy. Critical flow conditions occur when the specific energy is minimum, which leads to the condition that one minus the Froude number squared equals zero, with this graph illustrating the relationship between specific energy and water depth. The concepts of Froude number, critical flow conditions, hydraulic jump and backwater equation are essential to the understanding of open channel hydraulics; they are discussed in a number of related YouTube video movies that you may find in the same channel.
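The relations at the heart of the talk — the Froude number definition, the Bélanger equation for the conjugate (sequent) depths across a hydraulic jump, and the Froude-similitude velocity scaling — can be sketched in a few lines of Python. This is an illustrative sketch; the function names are my own:

```python
import math

def froude_number(V, L, g=9.81):
    """Froude number Fr = V / sqrt(g * L), with V a characteristic
    velocity scale and L a characteristic length scale (e.g. the
    flow depth in a rectangular channel)."""
    return V / math.sqrt(g * L)

def sequent_depth_ratio(Fr1):
    """Belanger equation: ratio d2/d1 of the conjugate depths across a
    hydraulic jump in a smooth horizontal rectangular channel, as a
    function of the upstream Froude number Fr1."""
    return 0.5 * (math.sqrt(1.0 + 8.0 * Fr1 ** 2) - 1.0)

def froude_scaled_velocity(V_prototype, length_scale_ratio):
    """Froude similitude (constant g): the velocity scaling ratio
    equals the square root of the geometric scaling ratio."""
    return V_prototype * math.sqrt(length_scale_ratio)
```

For Fr1 = 1 (critical flow) the sequent depth ratio is 1, i.e. no jump; for Fr1 = 2 the downstream depth is about 2.37 times the upstream depth.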
12246
https://www.education.com/worksheet/article/interpreting-bar-chart-graphs/
Interpreting Bar Chart Graphs | Worksheet | Education.com
Worksheet: Interpreting Bar Chart Graphs. Learners practice using a bar graph to answer questions in this data and graphing worksheet. Students will examine a graph of fruit harvested by month, then answer seven questions designed to support them as they learn to both read and interpret data. This fifth-grade worksheet can be used with our lesson plan Interpreting Complex Graphs. Grade: Fifth Grade. Subjects: Math, Data and Graphing.
12247
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Map%3A_Organic_Chemistry_(Smith)/21%3A_Substitution_Reactions_of_Carbonyl_Compounds_at_the_Alpha_Carbon/21.09%3A_Direct_Enolate_Alkylation
21.9: Direct Enolate Alkylation
Last updated: Jun 5, 2019. Page ID: 28436
Enolates can act as nucleophiles in SN2-type reactions: overall, an α hydrogen is replaced with an alkyl group. This reaction is one of the more important reactions of enolates because a carbon-carbon bond is formed. These alkylations are subject to the same limitations as the SN2 reactions previously discussed. A good leaving group (chloride, bromide, iodide, tosylate) should be used. Also, secondary and tertiary leaving groups should not be used, because of poor reactivity and possible competition with elimination reactions. Lastly, it is important to use a strong base, such as LDA or sodium amide, for this reaction; using a weaker base such as hydroxide or an alkoxide leaves the possibility of multiple alkylations occurring.
Example 1: Alpha Alkylation
Mechanism: 1) Enolate formation; 2) SN2 attack.
Alkylation of Unsymmetrical Ketones
Unsymmetrical ketones can be regioselectively alkylated to form one major product, depending on the reagents. Treatment with LDA in THF at −78 °C tends to form the less substituted kinetic enolate. Using sodium ethoxide in ethanol at room temperature forms the more substituted thermodynamic enolate.
Problems
1) Please write the structure of the product for the following reactions.
Answers
1)
Contributors: William Reusch, Professor Emeritus (Michigan State U.), Virtual Textbook of Organic Chemistry; Prof. Steven Farmer (Sonoma State University)
12248
https://perso.lip6.fr/Mohab.Safey/Articles/LeSa22.pdf
Solving parametric systems of polynomial equations over the reals through Hermite matrices
Huu Phuoc Le, Mohab Safey El Din — Sorbonne Université, CNRS, LIP6, F-75005, Paris, France

Abstract
We design a new algorithm for solving parametric systems of equations having finitely many complex solutions for generic values of the parameters. More precisely, let f = (f1, …, fm) ⊂ Q[y][x] with y = (y1, …, yt) and x = (x1, …, xn), let V ⊂ C^t × C^n be the algebraic set defined by the simultaneous vanishing of the fi's, and let π be the projection (y, x) → y. Under the assumptions that f admits finitely many complex solutions when specializing y to generic values and that the ideal generated by f is radical, we solve the following algorithmic problem. On input f, we compute semi-algebraic formulas defining open semi-algebraic sets S1, …, Sℓ in the parameters' space R^t such that ∪_{i=1}^ℓ Si is dense in R^t and, for 1 ≤ i ≤ ℓ, the number of real points in V ∩ π^{-1}(η) is invariant when η ranges over Si.

This algorithm exploits special properties of some well-chosen monomial bases in the quotient algebra Q(y)[x]/I, where I ⊂ Q(y)[x] is the ideal generated by f in Q(y)[x], as well as the specialization property of the so-called Hermite matrices, which represent Hermite's quadratic forms. This allows us to obtain "compact" representations of the semi-algebraic sets Si by means of semi-algebraic formulas encoding the signature of a given symmetric matrix. When f satisfies extra genericity assumptions (such as regularity), we use the theory of Gröbner bases to derive complexity bounds both on the number of arithmetic operations in Q and on the degree of the output polynomials. More precisely, letting d be the maximal degree of the fi's and D = n(d−1)d^n, we prove that, on a generic input f = (f1, …, fn), one can compute those semi-algebraic formulas using Õ( binom(t+D, t) · 2^{3t} n^{2t+1} d^{3nt+2(n+t)+1} ) arithmetic operations in Q, and that the polynomials involved in these formulas have degree bounded by D. We report on practical experiments which illustrate the efficiency of this algorithm, both on generic parametric systems and on parametric systems coming from applications, since it allows us to solve systems which were out of reach for the current state of the art.

Keywords: Real algebraic geometry; Polynomial system solving; Real root classification; Hermite quadratic forms; Gröbner bases

Email addresses: huu-phuoc.le@lip6.fr (Huu Phuoc Le), mohab.safey@lip6.fr (Mohab Safey El Din)
Mohab Safey El Din and Huu Phuoc Le are supported by the ANR grants ANR-18-CE33-0011
Preprint submitted to Elsevier, February 2, 2024

1. Introduction
1.1. Problem statement and motivations
In the whole paper, Q, R and C denote respectively the fields of rational, real and complex numbers. Let f = (f1, …, fm) be a polynomial sequence in Q[y][x], where the indeterminates y = (y1, …, yt) are considered as parameters and x = (x1, …, xn) are considered as variables. We denote by V ⊂ C^t × C^n the (complex) algebraic set defined by f1 = · · · = fm = 0 and by VR its real trace V ∩ R^{t+n}. We also consider the projection on the parameter space y:

π : C^t × C^n → C^t, (y, x) ↦ y.

Further, we say that f satisfies Assumption (A) when the following holds.

Assumption A. There exists a non-empty Zariski open subset O ⊂ C^t such that π^{-1}(η) ∩ V is non-empty and finite for any η ∈ O.

In other words, assuming (A) ensures that, for a generic value η of the parameters, the sequence f(η, ·) defines a finite algebraic set and hence finitely many real points. Note that it is easy to prove that one can choose O in such a way that the number of complex solutions to the entries of f(η, ·) is invariant when η ranges over O (e.g. using the theory of Gröbner bases).
This is no longer the case when considering real solutions, whose number may vary when η ranges over O. By Hardt's triviality theorem , there exists a real algebraic proper subset R of R^t such that, for any non-empty connected open set U of R^t \ R and η ∈ U, π^{-1}(η) × U is homeomorphic to π^{-1}(U). This leads us to consider the following real root classification problem.

Problem 1 (Real root classification). On input f satisfying Assumption (A), compute semi-algebraic formulas (i.e. finitely many disjunctions of conjunctions of polynomial inequalities) defining semi-algebraic sets S1, …, Sℓ such that
(i) the number of real points in V ∩ π^{-1}(η) is invariant when η ranges over Si, for 1 ≤ i ≤ ℓ;
(ii) the union of the Si's is dense in R^t;
as well as at least one sample point ηi in each Si and the corresponding number of real points in V ∩ π^{-1}(ηi).

A collection of semi-algebraic formulas is said to solve Problem (1) for the input f if it defines a collection of semi-algebraic sets Si satisfying the above properties (i) and (ii). Our output will have the form {(Φi, ηi, ri) | 1 ≤ i ≤ ℓ}, where Φi is a semi-algebraic formula defining the set Si, ηi ∈ Q^t is a sample point of Si and ri is the corresponding number of real roots. A weak version of Problem (1) would be to compute only a set {η1, …, ηℓ} of sample points for a collection of semi-algebraic sets Si solving Problem (1), together with the corresponding numbers of real points in V ∩ π^{-1}(ηj). Problem (1) appears in many areas of engineering sciences such as robotics or medical imagery (see, e.g., [50, 10, 51, 19, 6]).

(Funding footnote, continued: Sesame, and ANR-19-CE40-0018 De Rerum Natura, the joint ANR-FWF ANR-19-CE48-0015 ECARP project, the PGMO grant CAMiSAdo, the European Union's Horizon 2020 research and innovative training network programme under the Marie Skłodowska-Curie grant agreement No. 813211 (POEMA), and the Grant FA8665-20-1-7029 of the EOARD-AFOSR.)
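As a toy instance of the classification problem above, take the single equation x² + y = 0 with one variable x and one parameter y: the open sets S1 = {y < 0} and S2 = {y > 0} are dense in R and carry 2 and 0 real roots respectively, which a discriminant computation recovers. This is a minimal sketch, with the function name my own:

```python
def real_root_count_quadratic(a, b, c):
    """Number of distinct real roots of a*x^2 + b*x + c (a != 0),
    read off from the sign of the discriminant."""
    disc = b * b - 4 * a * c
    if disc > 0:
        return 2
    return 1 if disc == 0 else 0

# Specializing x^2 + y at parameter values on each side of y = 0:
counts = {eta: real_root_count_quadratic(1, 0, eta) for eta in (-4, 4)}
```

The boundary {y = 0}, where the count changes, is exactly the kind of proper algebraic subset excluded from the dense union of the Si's.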
In this paper, we design a new algorithm whose arithmetic complexity improves the previously known bounds, and we report on practical experiments showing that its practical behaviour outperforms the current software state of the art. Before going further with a description of the prior works and our contributions, we introduce the complexity model which we use. We measure only the arithmetic complexity of algorithms, i.e., the number of arithmetic operations +, −, ×, ÷ in the base field Q, hence without taking into account the cost of real root isolation. We use the Landau notation:
• Let f : R^ℓ_+ → R_+ be a positive function. We let O(f) denote the class of functions g : R^ℓ_+ → R_+ such that there exist C, K ∈ R_+ with g(x) ≤ C f(x) for all ∥x∥ ≥ K, where ∥·∥ is a norm on R^ℓ.
• The notation Õ denotes the class of functions g : R^ℓ_+ → R_+ such that g ∈ O(f log^κ(f)) for some κ > 0.
Further, the notation ω always stands for the exponent of matrix multiplication, i.e., the smallest positive number such that the product of two matrices in Q^{N×N} can be done using O(N^ω) arithmetic operations in Q. The value of ω can be bounded from above by 2.37286, which is established in .

1.2. Prior works
A first approach to Problem (1) would be to compute a cylindrical algebraic decomposition (CAD) of R^t × R^n adapted to f using e.g. Collins' algorithm (and its more recent improvements) ; see . While, to our knowledge, there is no clear reference for this fact, the cylindrical structure of the cells of the CAD implies that their projections on the parameters' space R^t define semi-algebraic sets enjoying the properties needed to solve Problem (1). However, the doubly exponential complexity of CAD, both in terms of runtime and output size [14, 7], makes it difficult to use in practice.

A more popular approach consists in computing polynomials h1, …, hr in Q[y] such that ∪_{i=1}^r V(hi) ∩ R^t contains the boundaries of semi-algebraic sets S1, …, Sℓ enjoying the properties required to solve Problem (1). Next, one needs to compute semi-algebraic descriptions of the connected components of R^t \ ∪_{i=1}^r V(hi) as well as sample points in these connected components. This is basically the approach followed by (the hi's are called border polynomials) and (the set ∪_{i=1}^r V(hi) is called a discriminant variety) under the assumption that ⟨f⟩ is a radical ideal. Note that both and provide algorithms that can handle variants of Problem (1) allowing inequalities. In this paper, we focus on the situation where we only have equations in our input parametric system.

When ⟨f⟩ is radical and the restriction of π to V ∩ R^t × R^n is proper, one can easily prove, using a semi-algebraic version of Thom's isotopy lemma, that one can choose ∪_{i=1}^r V(hi) to be the set of critical values of the restriction of π to V (see e.g. ). If f is a regular sequence (hence m = n), the critical set of the restriction of π to V is defined as the intersection of V with the hypersurface defined by the vanishing of the determinant of the Jacobian matrix of f with respect to the variables x. When d dominates the degrees of the entries of f, Bézout's theorem allows us to state that the degree of this set is bounded above by n(d−1)d^n.

It is worth noticing that, usually, this approach is used only to solve the aforementioned weak version of Problem (1), as getting a semi-algebraic description of the connected components of R^t \ ∪_{i=1}^r V(hi) through CAD is too expensive when t ≥ 4 (still because of the doubly exponential complexity of CAD). Under the above assumptions and notation, the output degree of the polynomials in such formulas would be bounded by (n(d−1)d^n)^{2^{O(t)}}. An alternative would be to use parametric roadmap algorithms to do such computations, using e.g. [4, Chap. 16] to compute semi-algebraic representations of the connected components of R^t \ ∪_{i=1}^r V(hi).
Under the above extra assumptions, this would result in output formulas involving polynomials of degree bounded by (n(d−1)d^n)^{O(t^3)}, using (n(d−1)d^n)^{O(t^4)} arithmetic operations (see [4, Theorem 16.13]). Note that the output degrees are by several orders of magnitude larger than n(d−1)d^n, which bounds the degree of the set of critical values of the restriction of π to V. Hence, one topical algorithmic issue is to design an efficient algorithm for solving Problem (1) which would output semi-algebraic formulas of degree bounded by n(d−1)d^n (using a number of arithmetic operations polynomial in this quantity). At this stage of our exposition, it is not clear that this is doable.

Actually, admittedly "folklore" algorithms in symbolic computation already allow one to achieve such a result. Using the (probabilistic) algorithm of , one can compute a rational parametrization of V = V(f) with respect to the x-variables, i.e. a sequence of polynomials (w, v1, …, vn) in Q(y)[u], where u is a new variable, such that the constructible set Z ⊂ C^t × C^n of all points
(η, v1(η, ϑ)/(∂w/∂u)(η, ϑ), …, vn(η, ϑ)/(∂w/∂u)(η, ϑ)),
where (η, ϑ) ∈ C^t × C is such that w(η, ϑ) = 0 and η cancels neither ∂w/∂u nor any denominator of (w, v1, …, vn), is Zariski dense in V, i.e., the Zariski closure of Z coincides with V. The bi-rational equivalence between Z and its projection on the (u, y)-space implies that semi-algebraic formulas solving Problem (1) can be obtained through the computation of the subresultant sequence associated to (w, ∂w/∂u) (see e.g. [4, Chap. 4]). Combining the complexity results of for computing a rational parametrization of V with those of [4, Chap. 4] for computing subresultants, we obtain that this algorithm uses
Õ( binom(t + 2d^{2n}, t) · 2^{5t} d^{5nt+3n} )
arithmetic operations in Q, and that the semi-algebraic formulas computed by this algorithm involve polynomials in Q[y] of degree bounded by 2d^{2n}.
Recall that the degree of the critical locus of the restriction of π to V is bounded by n(d−1)d^n. Hence, computing semi-algebraic formulas solving Problem (1) involving polynomials of degrees in O(d^n), through an efficient algorithm reflecting this complexity gain, is still an open problem.

1.3. Main results
Basically, our main result is to provide a new algorithm solving Problem (1) when ⟨f⟩ is radical and Assumption (A) holds. Under some genericity assumptions, we prove that it outputs formulas involving polynomials of degree in O(d^n), with a better arithmetic complexity than what was previously known.

Theorem I. Let C[x, y]_d be the set of polynomials in C[x, y] having total degree bounded by d, and set D = n(d−1)d^n. There exists a non-empty Zariski open set F ⊂ C[x, y]_d^n such that for f = (f1, …, fn) ∈ F ∩ Q[x, y]^n, the following holds:
i) There exists an algorithm that computes a solution for the weak version of Problem (1) within Õ( binom(t+D, t) · 2^{3t} n^{2t+1} d^{2nt+n+2t+1} ) arithmetic operations in Q.
ii) There exists a probabilistic algorithm that returns the formulas of a collection of semi-algebraic sets solving Problem (1) within Õ( binom(t+D, t) · 2^{3t} n^{2t+1} d^{3nt+2(n+t)+1} ) arithmetic operations in Q in case of success.
iii) The semi-algebraic descriptions output by the above algorithm involve polynomials in Q[y] of degree bounded by D.

We note that the binomial coefficient binom(t+D, t) is bounded from above by D^t ≃ n^t d^{nt+t}. Therefore, the complexities given in items i) and ii) of Theorem I can be bounded by Õ(2^{3t} n^{3t} d^{3nt}) and Õ(2^{3t} n^{3t} d^{4nt}), respectively. We also implemented this algorithm to illustrate its practical behaviour and compare it with the state-of-the-art software within the Maple packages RootFinding[Parametric] and RegularChains[ParametricSystemTools]. We report on experiments showing that our implementation outperforms these packages, which is justified by our complexity result.
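To get a feel for the gap between the degree bounds discussed above, one can tabulate D = n(d−1)d^n against the 2d^{2n} bound of the subresultant-based "folklore" approach. A minimal sketch, with function names my own:

```python
from math import comb

def critical_degree_bound(n, d):
    """D = n*(d-1)*d^n: Bezout-type bound on the degree of the set of
    critical values of pi restricted to V (regular case)."""
    return n * (d - 1) * d ** n

def folklore_degree_bound(n, d):
    """2*d^(2n): output degree bound of the subresultant-based
    'folklore' algorithm recalled in the prior-works discussion."""
    return 2 * d ** (2 * n)

# For n = 3, d = 3: D = 162 versus 1458, and the binomial factor
# binom(t + D, t) appearing in Theorem I is at most D^t (here t = 2).
D = critical_degree_bound(3, 3)
```

For n = 3, d = 3 and t = 2 this gives comb(2 + 162, 2) = 13366 ≤ 162² = 26244, consistent with the bound binom(t+D, t) ≤ D^t used to simplify the complexity statements.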
The key ingredient on which one relies to obtain these results is a set of well-known properties of Hermite quadratic forms used to count the real roots of zero-dimensional ideals. The use of such quadratic forms for counting the number of real solutions was introduced in and then later on generalized by and used in . We refer to [4, Theorem 4.102] for the explicit relation between the number of real roots of a zero-dimensional algebraic set and the signature of these quadratic forms, and to [4, Algo. 8.43] for an algorithm computing these signatures.

We first slightly extend the definition of Hermite's quadratic forms and Hermite's matrices to the context of parametric systems; we call them parametric Hermite quadratic forms and parametric Hermite matrices. This is easily done since the ideal of Q(y)[x] generated by f, considering Q(y) as the base field, has dimension zero. We also establish natural specialization properties for these parametric Hermite matrices. Hence, a parametric Hermite matrix, similar to its zero-dimensional counterpart, allows one to count respectively the number of distinct real and complex roots at any parameter outside a strict algebraic subset of R^t, by evaluating the signature and rank of its specialization.

Based on this specialization property, we design two algorithms for solving Problem (1) and also its weak version for an input system f which satisfies Assumption (A) and generates a radical ideal. Our algorithm for the weak version of Problem (1) reduces to the following main steps.
(a) Compute a parametric Hermite matrix H associated to f ⊂ Q[y][x].
(b) Compute a set of sample points {η1, …, ηℓ} in the connected components of the semi-algebraic set of R^t defined by w ≠ 0, where w is derived from H. This is done through the so-called critical point method (see e.g. [4, Chap. 12] and references therein), which is adapted to obtain practically fast algorithms following . We will explain this step in detail in Section 3.
This algorithm takes as input s polynomials of degree D involving t variables and computes sample points per connected component of the semi-algebraic set defined by the non-vanishing of these polynomials using
$\widetilde{O}\left(\binom{D+t}{t}\, s^{t+1}\, 2^{3t}\, D^{2t+1}\right)$
arithmetic operations in Q.

(c) Compute the number ri of real points in V ∩ π^{-1}(ηi) for 1 ≤ i ≤ ℓ. This is done by simply evaluating the signature of the specialization of H at each ηi.

It is worth noting that, in the algorithm above, we obtain through parametric Hermite matrices a polynomial w that plays the same role as the discriminant varieties of or the border polynomials of . We will see in the section reporting experiments that our approach outperforms the other two on every example we consider. To return semi-algebraic formulas, our routine is basically the same, except that instead of computing sample points in the set {w ≠ 0}, one needs to consider all principal minors of the matrix H and compute sample points outside the union of the vanishing sets of all these polynomials.

Another contribution of this paper is to make clear how to perform step (a). For this, we rely on the theory of Gröbner bases. More precisely, we use specialization properties of Gröbner bases, similar to those already proven in . This leaves some freedom when running the algorithm: since we rely on Gröbner bases, one may choose monomial orderings which are more convenient for practical computations. In particular, the monomial basis of the quotient ring Q(y)[x]/I, where I is the ideal generated by f in Q(y)[x], depends on the choice of the monomial ordering used for Gröbner basis computations. We describe the behavior of our algorithm when choosing the graded reverse lexicographical ordering, whose interest for practical computations is explained in . Further, we denote by grevlex(x) the graded reverse lexicographical ordering applied to the sequence of variables x = (x1, . . . , xn) (with x1 ≻ · · · ≻ xn). We also denote by ≻lex the lexicographical ordering.
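To ground step (c), here is a self-contained sketch (plain Python, not the paper's implementation) of the classical zero-dimensional Hermite form in the simplest case of a single squarefree univariate polynomial: the Hermite matrix is the Hankel matrix of the Newton power sums of the roots, its rank counts the distinct complex roots, and its signature counts the distinct real roots.

```python
from fractions import Fraction

def power_sums(coeffs):
    """Newton power sums s_0..s_{2n-2} of the roots of the monic polynomial
    x^n + a_{n-1}x^{n-1} + ... + a_0, given coeffs = [a_0, ..., a_{n-1}]."""
    n = len(coeffs)
    a = [Fraction(c) for c in coeffs]
    s = [Fraction(n)]
    for k in range(1, 2 * n - 1):
        if k <= n:
            acc = -k * a[n - k]
            for i in range(1, k):
                acc -= a[n - i] * s[k - i]
        else:
            acc = Fraction(0)
            for i in range(1, n + 1):
                acc -= a[n - i] * s[k - i]
        s.append(acc)
    return s

def signature_and_rank(H):
    """Diagonalize a symmetric rational matrix by congruence; count pivot signs."""
    A = [[Fraction(x) for x in row] for row in H]
    m = len(A)
    pos = neg = 0
    for k in range(m):
        piv = next((i for i in range(k, m) if A[i][i] != 0), None)
        if piv is None:
            # all remaining diagonal entries are 0: create a nonzero one
            off = [(i, j) for i in range(k, m)
                   for j in range(i + 1, m) if A[i][j] != 0]
            if not off:
                break  # the remaining block is identically zero
            i, j = off[0]
            for c in range(m):
                A[i][c] += A[j][c]
            for r in range(m):
                A[r][i] += A[r][j]
            piv = i
        if piv != k:  # symmetric row/column swap into position k
            A[k], A[piv] = A[piv], A[k]
            for r in range(m):
                A[r][k], A[r][piv] = A[r][piv], A[r][k]
        d = A[k][k]
        pos, neg = (pos + 1, neg) if d > 0 else (pos, neg + 1)
        for i in range(k + 1, m):  # clear row/column k symmetrically
            f = A[i][k] / d
            for c in range(m):
                A[i][c] -= f * A[k][c]
            for r in range(m):
                A[r][i] -= f * A[r][k]
    return pos - neg, pos + neg

def hermite_counts(coeffs):
    """(#distinct real roots, #distinct complex roots) of a squarefree monic p."""
    n = len(coeffs)
    s = power_sums(coeffs)
    H = [[s[i + j] for j in range(n)] for i in range(n)]  # Hankel matrix
    return signature_and_rank(H)

assert hermite_counts([0, -1, 0]) == (3, 3)  # x^3 - x: roots -1, 0, 1
assert hermite_counts([1, 0]) == (0, 2)      # x^2 + 1: no real roots
assert hermite_counts([-1, 0, 0]) == (1, 3)  # x^3 - 1: one real root
```

In the parametric setting of steps (a)–(c), the entries of H are polynomials in the parameters y, and the signature is evaluated after specializing at each sample point ηi.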
We report, at the end of the paper, on the practical behavior of this algorithm. We compare with two Maple packages, RootFinding[Parametric] and RegularChains[ParametricSystemTools], which respectively implement the algorithms of and . In particular, our algorithm allows us to solve instances of Problem (1) which were not tractable by the state-of-the-art. Moreover, the actual degrees of the polynomials in the output formulas are bounded by n(d−1)d^n; we actually prove such a statement under some generic assumptions. Our main complexity result is stated below. Its proof is given in Subsection 6.2, where the generic assumptions in use are given explicitly.

Organization of the paper. Section 2 reviews fundamental notions of algebraic geometry and the theory of Gröbner bases that we use further. Next, in Section 3, we present a dedicated algorithm for computing at least one point per connected component of a semi-algebraic set defined by a list of inequations. Section 4 gives the definition and some useful properties of parametric Hermite matrices. In Section 5, we describe our algorithm for solving the real root classification problem using these parametric Hermite matrices. The complexity analysis of the algorithms mentioned above is given in Section 6. Finally, in Section 7, we report on the practical behavior of our algorithms and illustrate their practical capabilities.

2. Preliminaries

In the first paragraph, we fix some notation on ideals and algebraic sets and recall the definition of critical points associated to a given polynomial map. Next, we give the definitions of regular sequences, Hilbert series, Noether position and proper maps, which are used later in Subsection 6.1. The fourth paragraph recalls some basic properties of Gröbner bases and quotient algebras of zero-dimensional ideals. We refer to for an introductory study of the algorithmic theory of Gröbner bases.
In the last paragraphs, we recall respectively the definitions of zero-dimensional parametrizations and rational parametrizations, which go back to and are widely used in computer algebra (see e.g. [24, 26, 25]) to represent finite algebraic sets.

Algebraic sets and critical points. We consider a subfield F of C. Let I be a polynomial ideal of F[x1, . . . , xn]; the algebraic subset of C^n at which the elements of I vanish is denoted by V(I). Conversely, for an algebraic set V ⊂ C^n, we denote by I(V) ⊂ C[x1, . . . , xn] the radical ideal associated to V. Given any subset A of C^n, we denote by the Zariski closure of A the smallest algebraic set containing A. A map ϕ between two algebraic sets V ⊂ C^n and W ⊂ C^s is a polynomial map if there exist ϕ1, . . . , ϕs ∈ C[x1, . . . , xn] such that ϕ(η) = (ϕ1(η), . . . , ϕs(η)) for η ∈ V. An algebraic set V is equidimensional of dimension t if it is the union of irreducible algebraic sets of dimension t. Let ϕ be a polynomial map from V to another algebraic set W. The morphism ϕ is dominant if and only if the image of every irreducible component V′ of V by ϕ is Zariski dense in W, i.e. the Zariski closure of ϕ(V′) equals W.

Let φ ∈ C[x1, . . . , xn], which defines the polynomial function φ : C^n → C, (x1, . . . , xn) ↦ φ(x1, . . . , xn), and let V ⊂ C^n be a smooth equidimensional algebraic set. We denote by crit(φ, V) the set of critical points of the restriction of φ to V. If c is the codimension of V and (f1, . . . , fm) generates the vanishing ideal associated to V, then crit(φ, V) is the subset of V at which the Jacobian matrix associated to (f1, . . . , fm, φ) has rank less than or equal to c (see, e.g., [42, Subsection 3.1]).

Regular sequences & Hilbert series. Let F be a field and let (f1, . . . , fm) ⊂ F[x], where x = (x1, . . . , xn) and m ≤ n, be a homogeneous polynomial sequence. We say that (f1, . . . , fm) ⊂ F[x] is a regular sequence if for any i ∈ {1, . . . , m}, fi is not a zero-divisor in F[x]/⟨f1, . . . , fi−1⟩.
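The definition of crit(φ, V) can be made concrete on a toy example. Below is a small pure-Python check (not from the paper) for V the unit circle in C^2, which is smooth and equidimensional of codimension c = 1, with φ the projection on the first coordinate; the critical points are (±1, 0), where the Jacobian of (f, φ) drops rank.

```python
from fractions import Fraction

def rank(M):
    """Rank of a matrix via exact Gaussian elimination over the rationals."""
    A = [[Fraction(x) for x in row] for row in M]
    rows, cols, r = len(A), len(A[0]), 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(r + 1, rows):
            f = A[i][c] / A[r][c]
            for j in range(cols):
                A[i][j] -= f * A[r][j]
        r += 1
    return r

# V: the circle f = x1^2 + x2^2 - 1 = 0, smooth of codimension c = 1;
# phi = x1, the projection on the first coordinate.
def jac_f_phi(x1, x2):
    return [[2 * x1, 2 * x2],  # gradient of f
            [1, 0]]            # gradient of phi

# crit(phi, V): points of V where the Jacobian of (f, phi) has rank <= c = 1.
assert rank(jac_f_phi(1, 0)) == 1 and rank(jac_f_phi(-1, 0)) == 1  # critical
assert rank(jac_f_phi(0, 1)) == 2  # (0, 1) is a regular point of phi on V
```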
The notion of regular sequences is the algebraic analogue of complete intersections. In this paper, we focus particularly on the Hilbert series of homogeneous regular sequences, which is recalled below. Let I ⊂ F[x] be a homogeneous ideal. We denote by F[x]_r the set of homogeneous polynomials of degree r. Then F[x]_r and I ∩ F[x]_r are two F-vector spaces of dimensions dim_F(F[x]_r) and dim_F(I ∩ F[x]_r) respectively. The Hilbert series of I is defined as
$HS_I(z) = \sum_{r=0}^{\infty} \left(\dim_F(F[x]_r) - \dim_F(I \cap F[x]_r)\right) z^r.$

We now consider affine polynomial sequences. Note that one can define affine regular sequences by simply removing the homogeneity assumption on (f1, . . . , fm) from the above definition. However, as explained in [2, Sec 1.7], many important properties that hold for homogeneous regular sequences are no longer valid for affine ones. Therefore, in this paper, we use [2, Definition 1.7.2] for affine regular sequences, which is more restrictive but allows us to preserve results similar to the homogeneous case. We recall that definition below. For p ∈ F[x1, . . . , xn], we denote by H_p the homogeneous component of largest degree of p. A polynomial sequence (f1, . . . , fm) ⊂ F[x1, . . . , xn], not necessarily homogeneous, is called a regular sequence if and only if (H_{f1}, . . . , H_{fm}) is a homogeneous regular sequence.

Noether position & properness. Let F be a field and f = (f1, . . . , fn) ⊂ F[x1, . . . , xn+t]. The variables (x1, . . . , xn) are in Noether position with respect to the ideal ⟨f⟩ if their canonical images in the quotient algebra F[x1, . . . , xn+t]/⟨f⟩ are integral over F[xn+1, . . . , xn+t] and, moreover, F[xn+1, . . . , xn+t] ∩ ⟨f⟩ = ⟨0⟩. From a geometric point of view, Noether position is strongly related to the notion of proper map below (see ). Let V be the algebraic set defined by f ∈ R[y1, . . . , yt, x1, . . . , xn].
The restriction of the projection π : (y, x) ↦ y to V ∩ R^{t+n} is said to be proper if the inverse image of every compact subset of π(V ∩ R^{t+n}) is compact. If the variables x = (x1, . . . , xn) are in Noether position with respect to ⟨f⟩, then the projection π : V ∩ R^{t+n} → R^t, (y, x) ↦ y, is proper. A point η ∈ R^t is a non-proper point of the restriction of π to V if and only if π^{-1}(U) ∩ V ∩ R^{t+n} is not compact for any compact neighborhood U of η in R^t.

Gröbner bases and zero-dimensional ideals. Let F be a field with algebraic closure F̄. We denote by F[x] the polynomial algebra in the variables x = (x1, . . . , xn). We fix an admissible monomial ordering ≻ (see Section 2.2, ) over F[x]. For a polynomial p ∈ F[x], the leading monomial of p with respect to ≻ is denoted by lm≻(p). Given an ideal I ⊂ F[x], the initial ideal of I with respect to the ordering ≻ is the ideal ⟨lm≻(p) | p ∈ I⟩. A Gröbner basis G of I with respect to the ordering ≻ is a generating set of I such that the set of leading monomials {lm≻(g) | g ∈ G} generates the initial ideal ⟨lm≻(p) | p ∈ I⟩. For any polynomial p ∈ F[x], the remainder of the division of p by G using the monomial ordering ≻ is uniquely defined. It is called the normal form of p with respect to G and is denoted by NFG(p). A polynomial p is reduced by G if p coincides with its normal form with respect to G. A Gröbner basis G is said to be reduced if, for any g ∈ G, all terms of g are reduced modulo the leading terms of G. An ideal I is said to be zero-dimensional if the algebraic set V(I) ⊂ F̄^n is finite and non-empty. By [12, Sec. 5.3, Theorem 6], the quotient ring F[x]/I is then an F-vector space of finite dimension. The dimension of this vector space is also called the algebraic degree of I; it coincides with the number of points of V(I) counted with multiplicities [4, Sec. 4.5]. For any Gröbner basis G of I, the set of monomials in F[x] which are irreducible by G forms a monomial basis, which we call B, of this vector space.
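For a zero-dimensional ideal, this monomial basis B can be read off directly from the leading monomials of G: it consists of the monomials under the "staircase". A small illustrative sketch (monomials encoded as exponent tuples; the leading-monomial set is a made-up example, not from the paper):

```python
from itertools import product

def divides(m, m2):
    """Does the monomial m divide m2? (exponentwise comparison)"""
    return all(a <= b for a, b in zip(m, m2))

def quotient_basis(lead_monomials, nvars):
    """Monomials irreducible by the leading monomials, i.e. under the staircase.
    Finiteness is guaranteed because each variable has a pure power among the
    leading monomials (the zero-dimensional case)."""
    bounds = []
    for i in range(nvars):
        pures = [m[i] for m in lead_monomials
                 if all(m[j] == 0 for j in range(nvars) if j != i)]
        bounds.append(min(pures))  # exponent of the pure power x_i^{d_i}
    return sorted(m for m in product(*(range(b) for b in bounds))
                  if not any(divides(lm, m) for lm in lead_monomials))

# Leading monomials x1^2, x1*x2, x2^3 (a staircase in two variables):
lead = [(2, 0), (1, 1), (0, 3)]
B = quotient_basis(lead, 2)
# B = {1, x2, x2^2, x1}, so the quotient has dimension delta = 4
assert B == [(0, 0), (0, 1), (0, 2), (1, 0)]
```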
For any p ∈ F[x], the normal form of p by G can be interpreted as the image of p in F[x]/I and is a linear combination of elements of B (with coefficients in F). Therefore, operations in the quotient algebra F[x]/I such as vector additions or scalar multiplications can be computed explicitly using normal form reductions. In this article, while working with polynomial systems depending on parameters in Q[y][x], we frequently take F to be the rational function field Q(y) and treat polynomials in Q[y][x] as elements of Q(y)[x].

Zero-dimensional parametrizations. A zero-dimensional parametrization R with coefficients in Q consists of (a1, . . . , an) ∈ Q^n and a sequence of polynomials (w, v1, . . . , vn) ∈ (Q[u])^{n+1}, where u = a1x1 + · · · + anxn, such that w is square-free. The solution set of R, defined as
$Z(R) = \left\{ \left(\frac{v_1(\vartheta)}{w'(\vartheta)}, \ldots, \frac{v_n(\vartheta)}{w'(\vartheta)}\right) \in \mathbb{C}^n \ \middle|\ \vartheta \in \mathbb{C} \text{ such that } w(\vartheta) = 0 \right\},$
is finite. A finite algebraic set V ⊂ C^n is said to be represented by a zero-dimensional parametrization R if and only if V coincides with Z(R). Note that the cardinality of V equals the degree of w; we also call it the degree of the zero-dimensional parametrization. Note that it is possible to retrieve a polynomial parametrization by inverting the derivative w′ modulo w. Still, the rational parametrization whose denominators are the derivative of w is known to be better for practical computations, as it usually involves coefficients of smaller bit size (see ).

3. Computing sample points in semi-algebraic sets defined by the non-vanishing of polynomials

In this section, we study the following algorithmic problem. Given (g1, . . . , gs) in Q[y1, . . . , yt], compute at least one sample point per connected component of the semi-algebraic set S ⊂ R^t defined by g1 ≠ 0, . . . , gs ≠ 0. Such sample points will be encoded by zero-dimensional parametrizations as described in Section 2. The main result of this section, which will be used in the sequel of this paper, is the following.
Theorem II. Let (g1, . . . , gs) in Q[y1, . . . , yt] with D ≥ max_{1≤i≤s} deg(gi) and let S ⊂ R^t be the semi-algebraic set defined by g1 ≠ 0, . . . , gs ≠ 0. There exists a probabilistic algorithm which on input (g1, . . . , gs) outputs a finite family of zero-dimensional parametrizations R1, . . . , Rk, all of them of degree bounded by (2D)^t, which encode at most (2sD)^t points, such that ∪_{i=1}^{k} Z(Ri) meets every connected component of S, using
$\widetilde{O}\left(\binom{D+t}{t}\, s^{t+1}\, 2^{3t}\, D^{2t+1}\right)$
arithmetic operations in Q.

The rest of this section is devoted to the proof of this theorem.

Proof. By [19, Lemma 1], there exists a non-empty Zariski open set A × E ⊂ C^s × C such that for (a = (a1, . . . , as), e) ∈ (A × E) ∩ (R^s × R), the following holds. For I = {i1, . . . , iℓ} ⊂ {1, . . . , s} and σ = (σ1, . . . , σs) ∈ {−1, 1}^s, the algebraic sets V^{I,σ}_{a,e} ⊂ C^t defined by
g_{i1} + σ_{i1} a_{i1} e = · · · = g_{iℓ} + σ_{iℓ} a_{iℓ} e = 0
are either empty or (t−ℓ)-equidimensional and smooth, and the ideal generated by their defining equations is radical. Note that, by the transfer principle, one can choose instead of a scalar e an infinitesimal ε so that the algebraic sets V^{I,σ}_{a,ε} and their defining sets of equations satisfy the above properties. When, in the above equations, one leaves ε as a variable, one obtains equations defining an algebraic set in C^{t+1}. We denote by V^{I,σ}_{a,ε} the union of the (t+1−ℓ)-equidimensional components of this algebraic set. Further, we also assume that the ai's are chosen positive. Denote by S(ε) the extension of the semi-algebraic set S to R⟨ε⟩^t; similarly, the extension of any connected component C of S to R⟨ε⟩^t is denoted by C(ε). Now, remark that any connected component C(ε) of S(ε) contains a connected component of the semi-algebraic set S(ε)_a defined by:
(−a1ε ≥ g1 ∨ g1 ≥ a1ε) ∧ · · · ∧ (−asε ≥ gs ∨ gs ≥ asε).
Hence, we are led to compute sample points per connected component of S(ε)_a. These will be encoded by zero-dimensional parametrizations with coefficients in Q[ε].
By [4, Proposition 13.1], in order to compute sample points per connected component of S(ε)_a, it suffices to compute sample points in the real algebraic sets V^{I,σ}_{a,ε} ∩ R^t. To do that, since the algebraic sets V^{I,σ}_{a,ε} satisfy the above regularity properties, we can use the algorithm and geometric results of . To state these results, one needs to introduce some notation. Let Q be a real field, R be a real closure of Q and C be an algebraic closure of R. For an algebraic set V ⊂ C^t defined by h1 = · · · = hℓ = 0 (hi ∈ Q[y] with y = (y1, . . . , yt)) and M ∈ GL_t(R), we denote by V^M the set {M^{-1} · x | x ∈ V}, for 1 ≤ i ≤ ℓ by h_i^M the polynomial h_i(M · y), and by π_i the canonical projection (y1, . . . , yt) ↦ (y1, . . . , yi) (π_0 simply denotes (y1, . . . , yt) ↦ {•}). By slightly abusing notation, we also denote by π_i the projections from V^{I,σ}_{a,ε} to the first i coordinates (y1, . . . , yi). We consider the set of critical points of the restriction of π_i to V and denote this set by crit(π_i, V) for 1 ≤ i ≤ ℓ. By [41, Theorem 2], for a generic choice of M ∈ GL_t(R), the union of V^M ∩ π_{t−ℓ}^{-1}(0) with the sets crit(π_i, V^M) ∩ π_{i−1}^{-1}(0) (for 1 ≤ i ≤ t−ℓ) is finite and meets all connected components of V^M ∩ R^t. Because V satisfies the aforementioned regularity assumptions, crit(π_i, V^M) ∩ π_{i−1}^{-1}(0) is defined as the projection on the y-space of the solution set of the polynomials
h^M, (λ1, . . . , λℓ) · jac(h^M, i), u1λ1 + · · · + uℓλℓ = 1, y1 = · · · = y_{i−1} = 0,
where h = (h1, . . . , hℓ), λ1, . . . , λℓ are new variables (called Lagrange multipliers), jac(h^M, i) is the Jacobian matrix associated to h^M truncated by forgetting its first i columns, and the ui's are generically chosen (see also [42, App. B]). Assume that D is the maximum degree of the hj's and let E be the length of a straight-line program evaluating h.
Observe now that, setting the yj's to 0 (for 1 ≤ j ≤ i−1), and using [43, Theorem 1] combined with the degree estimates in [43, Section 5], we obtain that such systems can be solved using
$O\!\left(\left(\binom{t-i}{\ell} D^{\ell} (D-1)^{t-(i-1)-\ell}\right)^2 \left(E + (t+\ell)D + (t+\ell)^2\right)(t+\ell)\right)$
arithmetic operations in Q, and that they have at most
$\binom{t-i}{\ell} D^{\ell} (D-1)^{t-(i-1)-\ell}$
solutions. Going back to our initial problem, one then needs to solve polynomial systems which encode the set crit(π_i, V^{I,σ}_{a,ε}) of critical points of the restriction of π_i to V^{I,σ}_{a,ε}. Note that these systems have coefficients in Q[ε]. To solve such systems, we rely on , which consists in specializing ε to a generic value v ∈ Q, computing a zero-dimensional parametrization of the solution set of the obtained system (within the above arithmetic complexity over Q), and then using Hensel lifting and rational reconstruction to deduce from this parametrization a zero-dimensional parametrization with coefficients in Q(ε). By [44, Corollary 1] and multi-homogeneous bounds on the degree of the critical points of π_i restricted to V^{I,σ}_{a,ε} as in [43, Section 5], this lifting step has a cost of
$\widetilde{O}\!\left(\left((t+\ell)^4 + (t+\ell+1)E\right) \left(\binom{t-i}{\ell} D^{\ell} (D-1)^{t-(i-1)-\ell}\right)^2\right).$
Hence, all in all, computing one zero-dimensional parametrization for one critical locus uses
$\widetilde{O}\!\left(\left((t+\ell)^4 D + (t+\ell+1)E\right) \left(\binom{t-i}{\ell} D^{\ell} (D-1)^{t-(i-1)-\ell}\right)^2\right)$
arithmetic operations in Q. Note that, following , the degrees in ε of the numerators and denominators of the coefficients of these parametrizations are bounded by $\binom{t}{\ell} D^{\ell} (D-1)^{t-\ell}$. Summing up over all critical loci and using
$\sum_{i=0}^{t-\ell} \binom{t-i}{\ell} = \binom{t+1}{\ell+1},$
the computation for a fixed V^{I,σ}_{a,ε} uses
$\widetilde{O}\!\left(\left((t+\ell)^4 D + (t+\ell+1)E\right) \binom{t+1}{\ell+1}^2 \left(D^{\ell}(D-1)^{t-\ell}\right)^2\right)$
arithmetic operations in Q. Also, the number of points computed this way is dominated by
$\binom{t+1}{\ell+1} \left(D^{\ell} (D-1)^{t-\ell}\right).$
Note that the above quantity is upper bounded by (2D)^t and bounds the degree of the output zero-dimensional parametrizations. Taking the sum over all possible algebraic sets V^{I,σ}_{a,ε} and remarking that
• the number of index sets I of cardinality ℓ, summed over 0 ≤ ℓ ≤ t, is bounded by s^t;
• the number of sign vectors σ for a given ℓ is bounded by 2^t;
• the sum $\sum_{\ell=0}^{t} \binom{t+1}{\ell+1}^2$ equals $2\binom{2t+1}{t} - 1$,
one deduces that all these zero-dimensional parametrizations can be computed within
$\widetilde{O}\!\left(s^t\, 2^t \binom{2t+1}{t} \left((2t)^4 D + (2t+1)\Gamma\right) D^{2t}\right)$
arithmetic operations in Q (recall that Γ bounds the length of a straight-line program evaluating all the polynomials defining our semi-algebraic set S), which we simplify to
$\widetilde{O}\!\left(\Gamma\, s^t\, 2^{3t}\, D^{2t+1}\right).$
Similarly, using the above simplifications, the total number of points encoded by these zero-dimensional parametrizations is bounded above by (2sD)^t. At this stage, we have only obtained zero-dimensional parametrizations with coefficients in Q(ε). The above bound on the number of returned points is established, but it remains to show how to specialize ε in order to get sample points per connected component of S. To do that, given a parametrization Rε = (w, v1, . . . , vt) ⊂ Q(ε)[u]^{t+1}, we need to find a specialization value e for ε yielding a parametrization Re such that
• the number of real roots of the zero set associated to Re is the same as the number of real roots of the zero set associated to Rε;
• when η ranges over the interval ]0, e], the signs of the gi's at the zero set associated to Rη do not vary.
To do that, it suffices to choose e smaller than the smallest positive root of the resultant associated to (w, ∂w/∂u) and than the smallest positive roots of the resultants associated to w and
gi(v1/(∂w/∂u), . . . , vt/(∂w/∂u)).
The algebraic cost (i.e. the resultant computations) is dominated by the complexity estimates of the previous step. Finally, note that Γ can be bounded by $s\binom{D+t}{t}$ when the gi's are given in expanded form in the monomial basis.
Therefore, the arithmetic complexity for computing sample points of the semi-algebraic set defined by g1 ≠ 0, . . . , gs ≠ 0 can be bounded by
$\widetilde{O}\!\left(\binom{D+t}{t}\, s^{t+1}\, 2^{3t}\, D^{2t+1}\right).$

Remark 2. Observe that the coefficients of the rational parametrizations with coefficients in Q[ε] have bit sizes depending both on the maximum bit size τ of the coefficients of the input polynomials g1, . . . , gs and on the bit size of the generically chosen ai's. When substituting ε by a small enough rational number e, one obtains zero-dimensional parametrizations with coefficients in Q whose bit size depends on that of e as well. Admissible values for e depend on the magnitude of the real roots of the univariate resultants we exhibit in the above proof. Because we start with rational parametrizations of degree bounded by O(D)^t, assuming that the bit size of the ai's is bounded by O(D)^t (following reasonings like the one in ), one could show using standard quantitative results that the bit size of e may be τ D^{O(t)} (because e is obtained through the isolation of real roots of a univariate polynomial of degree D^{O(t)}). However, this is a worst-case analysis and, most of the time, we observe in practice that one can choose for e values of reasonable bit size.

We end this section with a corollary which is a consequence of the proof of [4, Theorem 13.18]. Basically, once we have the parametrizations computed by the algorithm on which Theorem II relies, one can compute sample points per connected component of the semi-algebraic set S within the same arithmetic complexity bounds. The idea is just to evaluate the gi's at these rational parametrizations and use bounds on the minimal distance between two roots of a univariate polynomial such as [4, Prop. 10.22]. Hence, the proof of the corollary below follows mutatis mutandis the same steps as the one of [4, Theorem 13.18].

Corollary 3. Let (g1, . . . , gs) in Q[y1, . . .
, yt] with D ≥ max_{1≤i≤s} deg(gi) and let S ⊂ R^t be the semi-algebraic set defined by g1 ≠ 0, . . . , gs ≠ 0. There exists a probabilistic algorithm which on input (g1, . . . , gs) outputs a finite set of points P in Q^t, of cardinality at most (2sD)^t, such that P meets every connected component of S, using
$\widetilde{O}\!\left(\binom{D+t}{t}\, s^{t+1}\, 2^{3t}\, D^{2t+1}\right)$
arithmetic operations in Q.

Note that, by contrast with Theorem II, the above corollary shows how to obtain output points with coordinates in Q.

4. Parametric Hermite matrices

In this section, we adapt the construction encoding Hermite quadratic forms, also known as Hermite matrices, to the context of parametric systems, and describe an algorithm for computing those parametric Hermite matrices.

4.1. Definition

Let K be a field and I ⊂ K[x] be a zero-dimensional ideal. Recall that the quotient ring AK = K[x]/I is a K-vector space of finite dimension [12, Section 5.3, Theorem 6]. For p ∈ K[x], we denote by Lp the multiplication map q ∈ AK ↦ p · q ∈ AK. Note that the map Lp is an endomorphism of AK as a K-vector space. The Hermite quadratic form associated to I is defined as the bilinear form that sends (p, q) ∈ AK × AK to the trace of L_{p·q} as an endomorphism of AK. We refer to [4, Chap. 4] for more details about Hermite quadratic forms. Now, let f = (f1, . . . , fm) be a polynomial sequence in Q[y][x]. We take the rational function field Q(y) as the base field K and denote by ⟨f⟩K the ideal generated by f in K[x]. We require that the system f satisfies Assumption (A). This leads to the following well-known lemma, which is the foundation for the construction of our parametric Hermite matrices.

Lemma 4. Assume that f satisfies Assumption (A). Then the ideal ⟨f⟩K is zero-dimensional.

Proof. Assume that there exists a coordinate xi, for 1 ≤ i ≤ n, such that ⟨f⟩ ∩ C[y, xi] = ⟨0⟩. We denote respectively by πi and π̃i the projections (y, x) ↦ (y, xi) and (y, xi) ↦ y. By the assumption above, the Zariski closure of πi(V) is the whole space C^{t+1}.
Then, we have the identity
$\mathbb{C}^{t+1} = \left(\tilde{\pi}_i^{-1}(O) \cup \tilde{\pi}_i^{-1}(\mathbb{C}^t \setminus O)\right) \cap \overline{\pi_i(V)},$
where O is the dense Zariski open subset of C^t required in Assumption (A). Since π̃i is a map from C^{t+1} to C^t, its fibers are of dimension at most 1. Therefore, we have that dim π̃i^{-1}(C^t \ O) ≤ 1 + dim(C^t \ O) ≤ t. As Assumption (A) holds and dim π̃i^{-1}(C^t \ O) ≤ t, we have that the dimension of π̃i^{-1}(O) intersected with the closure of πi(V) is t. This contradicts the identity above, whose left-hand side has dimension t+1. We conclude that, for 1 ≤ i ≤ n, ⟨f⟩ ∩ C[y, xi] ≠ ⟨0⟩. On the other hand, by Assumption (A), the Zariski closure of π(V) is the whole parameter space C^t. Thus, we have that ⟨f⟩ ∩ C[y] = ⟨0⟩. Since ⟨f⟩ ∩ C[y] = (⟨f⟩ ∩ C[y, xi]) ∩ C[y] = ⟨0⟩ while ⟨f⟩ ∩ C[y, xi] ≠ ⟨0⟩, there exists for every 1 ≤ i ≤ n a polynomial pi ∈ ⟨f⟩ ∩ C[y, xi] whose degree with respect to xi is non-zero. Clearly, pi is an element of the ideal ⟨f⟩K. Thus, there exists di such that xi^{di} is a leading term in ⟨f⟩K. Hence, ⟨f⟩K is a zero-dimensional ideal.

Lemma 4 allows us to apply the construction of Hermite matrices described in [4, Chap. 4] to parametric systems as follows. Since the ideal ⟨f⟩K is zero-dimensional by Lemma 4, its associated quotient ring AK = K[x]/⟨f⟩K is a finite-dimensional K-vector space. Let δ denote the dimension of AK as a K-vector space. We consider a basis B = {b1, . . . , bδ} of AK, where the bi's are taken as monomials in the variables x. Such a basis can be derived from Gröbner bases as follows. We fix an admissible monomial ordering ≻ over the set of monomials in the variables x and compute a Gröbner basis G of the ideal ⟨f⟩K with respect to the ordering ≻. Then, the monomials that are not divisible by any leading monomial of an element of G form a basis of AK. Recall that, for an element p ∈ K[x], we denote by p̄ the class of p in the quotient ring AK. A representative of p̄ can be derived by computing the normal form of p by the Gröbner basis G, which results in a linear combination of elements of B with coefficients in Q(y). Assume now that the basis B of AK is fixed.
For any p ∈ K[x], the multiplication map Lp is an endomorphism of AK. Therefore, it admits a matrix representation with respect to B, whose entries are elements of Q(y). The trace of Lp can be computed as the trace of the matrix representing it. Similarly, the Hermite quadratic form of the ideal ⟨f⟩K can be represented by a matrix with respect to B. This leads to the following definition.

Definition 5. Let f = (f1, . . . , fm) ⊂ Q[y][x] be a parametric polynomial system satisfying Assumption (A), and fix a basis B = {b1, . . . , bδ} of the vector space K[x]/⟨f⟩K. The parametric Hermite matrix associated to f with respect to the basis B is the symmetric matrix H = (h_{i,j})_{1≤i,j≤δ} where h_{i,j} = trace(L_{bi·bj}).

It is important to note that the definition of parametric Hermite matrices depends both on the input system f and on the choice of the monomial basis B.

4.2. Gröbner bases and parametric Hermite matrices

In the previous subsection, we defined parametric Hermite matrices assuming one knows a Gröbner basis G, with respect to some monomial ordering, of the ideal ⟨f⟩K, where K = Q(y) and ⟨f⟩K is the ideal of K[x] generated by f. Computing such a Gröbner basis may be costly, as this would require performing arithmetic operations over the field Q(y) (or over Z/pZ(y), where p is a prime, when tackling this computational task through modular computations). In this paragraph, we show that one can obtain parametric Hermite matrices by considering some Gröbner bases of the ideal ⟨f⟩ ⊂ Q[y, x] (hence enabling the use of efficient implementations of Gröbner bases such as the F4/F5 algorithms [17, 18]). Since the graded reverse lexicographical ordering (grevlex for short) is known to yield Gröbner bases of relatively small degree compared to other orderings, we prefer this ordering to construct our parametric Hermite matrices.
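To make Definition 5 concrete, here is a minimal self-contained sketch (plain Python, not the paper's implementation) for the toy system f = x² − y with n = 1 variable and t = 1 parameter: the Gröbner basis is G = {x² − y}, the monomial basis is B = {1, x}, and the parametric Hermite matrix comes out as H = diag(2, 2y). Its signature at a parameter value η then counts the distinct real roots of x² = η, in line with Corollary 11 below.

```python
from fractions import Fraction

# Classes in Q(y)[x]/(x^2 - y) are reduced via the rule x^2 -> y, so the
# class of x^k is y^(k//2) * x^(k % 2). Elements are stored as
# {x_exponent (0 or 1): {y_exponent: coefficient}}.

def mult(p, q):
    """Multiply two reduced classes and reduce again modulo x^2 - y."""
    out = {0: {}, 1: {}}
    for xp, cp in p.items():
        for xq, cq in q.items():
            xe = xp + xq
            xr, ylift = xe % 2, xe // 2  # x^xe = y^(xe//2) * x^(xe%2)
            for ya, ca in cp.items():
                for yb, cb in cq.items():
                    ye = ya + yb + ylift
                    out[xr][ye] = out[xr].get(ye, 0) + ca * cb
    return {x: {y: c for y, c in ys.items() if c} for x, ys in out.items()}

B = [{0: {0: 1}, 1: {}},   # class of 1
     {0: {}, 1: {0: 1}}]   # class of x

def trace_of_mult(p):
    """Trace of L_p on B: sum of the coefficient of b_k in p * b_k."""
    tr = {}
    for k, b in enumerate(B):
        for ye, c in mult(p, b)[k].items():
            tr[ye] = tr.get(ye, 0) + c
    return tr

# Parametric Hermite matrix H = (trace(L_{b_i b_j})), entries as y-polynomials.
H = [[trace_of_mult(mult(bi, bj)) for bj in B] for bi in B]
assert H[0][0] == {0: 2} and H[1][1] == {1: 2}   # H = diag(2, 2y)
assert H[0][1] == {} and H[1][0] == {}

def eval_poly(p, y):
    return sum(c * y**e for e, c in p.items())

def real_count(eta):
    """Signature of H(eta); H is diagonal here, so count pivot signs."""
    return sum((1 if eval_poly(H[i][i], eta) > 0 else
                -1 if eval_poly(H[i][i], eta) < 0 else 0) for i in range(2))

assert real_count(Fraction(4)) == 2    # x^2 = 4 has two distinct real roots
assert real_count(Fraction(-1)) == 0   # x^2 = -1 has none
```

Here the leading coefficient of the single Gröbner basis element is 1, so Assumption (B) below holds and the entries of H are indeed polynomials in Q[y].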
Further, we will use the notation grevlex(x) for the grevlex ordering among the variables x (with x1 ≻ · · · ≻ xn) and grevlex(x) ≻ grevlex(y) (with y1 ≻ · · · ≻ yt) for the elimination ordering. We denote respectively by lmx(p) and lcx(p) the leading monomial and the leading coefficient of p ∈ K[x] with respect to the ordering grevlex(x).

Lemma 6. Let G be the reduced Gröbner basis of ⟨f⟩ with respect to the elimination ordering grevlex(x) ≻ grevlex(y). Then G is also a Gröbner basis of ⟨f⟩K with respect to the ordering grevlex(x).

Proof. Since G is a Gröbner basis of the ideal ⟨f⟩, every polynomial fi of f can be written as fi = Σ_{g∈G} cg · g where cg ∈ Q[x, y]. Therefore, any element of ⟨f⟩K can also be written as a combination of elements of G with coefficients in Q(y)[x]. In other words, G is a set of generators of ⟨f⟩K. Let p be a polynomial in K[x]; p is contained in ⟨f⟩K if and only if there exists a polynomial q ∈ Q[y] such that q · p ∈ ⟨f⟩. Thus, the leading monomial of p as an element of K[x] with respect to the ordering grevlex(x) is contained in the ideal ⟨lmx(g) | g ∈ G⟩. Therefore, G is a Gröbner basis of ⟨f⟩K.

Hereafter, we denote by G the reduced Gröbner basis of ⟨f⟩ with respect to the elimination ordering grevlex(x) ≻ grevlex(y). Let B be the set of all monomials in x that are not reducible by G, which is finite by Lemmas 4 and 6. The set B actually forms a basis of the K-vector space K[x]/⟨f⟩K. Then, we denote by H the parametric Hermite matrix associated to f with respect to this basis B. We consider the following assumption on the input system f.

Assumption (B). For every g ∈ G, the leading coefficient lcx(g) does not depend on the parameters y.

As the computations in the quotient ring AK are done through normal form reductions by G, the lemma below is straightforward.

Lemma 7. Under Assumption (B), the entries of the parametric Hermite matrix H are elements of Q[y].

Proof.
Since Assumption (B) holds, the leading coefficients lcx(g) do not depend on the parameters y for g ∈ G. Hence, the normal form reduction by G of any polynomial in Q[y][x] returns a polynomial in Q[y][x]. Thus, each normal form can be written as a linear combination of elements of B whose coefficients lie in Q[y]. Hence, the multiplication maps L_{bi·bj} for 1 ≤ i, j ≤ δ can be represented, with respect to the basis B, by matrices with entries in Q[y]. As an immediate consequence, the entries of H, being the traces of those multiplication maps, are polynomials in Q[y].

The next proposition states that Assumption (B) is satisfied by a generic system f. It implies that the entries of the parametric Hermite matrix of a generic system with respect to the basis B derived from G lie entirely in Q[y]. We postpone the proof of Proposition 8 to Subsection 6.1, where we prove a more general result (see Proposition 20).

Proposition 8. Let C[x, y]_d be the set of polynomials in C[x, y] having total degree bounded by d. There exists a non-empty Zariski open subset F_C of C[x, y]_d^n such that Assumption (B) is satisfied by any f ∈ F_C ∩ Q[x, y]^n.

4.3. Specialization property of parametric Hermite matrices

Recall that G is the reduced Gröbner basis of ⟨f⟩ with respect to the ordering grevlex(x) ≻ grevlex(y) and B is the basis of K[x]/⟨f⟩K derived from G as discussed in the previous subsection. Then, H is the parametric Hermite matrix associated to f with respect to the basis B. Let η ∈ C^t and let φ_η : C(y)[x] → C[x], p(y, x) ↦ p(η, x), be the specialization map that evaluates the parameters y at η. Then f(η, ·) = (φ_η(f1), . . . , φ_η(fm)). We denote by H(η) the specialization (φ_η(h_{i,j}))_{1≤i,j≤δ} of H at η. Recall that, for a polynomial p ∈ C(y)[x], the leading coefficient of p, considered as a polynomial in the variables x with respect to the ordering grevlex(x), is denoted by lcx(p). In this subsection, for p ∈ C[x], we use lm(p) to denote the leading monomial of p with respect to the ordering grevlex(x).
Let W∞ ⊂ C^t denote the algebraic set ∪_{g∈G} V(lcx(g)). In Proposition 10, we prove that, outside W∞, the specialization H(η) coincides with the classical Hermite matrix of the zero-dimensional ideal ⟨f(η, ·)⟩ ⊂ C[x]. This is the main result of this subsection. Since the operations over the K-vector space AK rely on normal form reductions by the Gröbner basis G of ⟨f⟩K, the specialization property of H depends on the specialization property of G. Lemma 9 below, which is a direct consequence of [32, Theorem 3.1], provides the specialization property of G. We give here a more elementary proof for this lemma than the one in .

Lemma 9. Let η ∈ C^t \ W∞. Then the specialization G(η, ·) ≔ {φ_η(g) | g ∈ G} is a Gröbner basis of the ideal ⟨f(η, ·)⟩ ⊂ C[x] generated by f(η, ·) with respect to the ordering grevlex(x).

Proof. Since η ∈ C^t \ W∞, the leading coefficient lcx(g) does not vanish at η for any g ∈ G. Thus, lmx(g) = lm(φ_η(g)). We denote by M the set of all monomials in the variables x and set
M_G ≔ {m ∈ M | ∃g ∈ G : lmx(g) divides m} = {m ∈ M | ∃g ∈ G : lm(φ_η(g)) divides m}.
For any p ∈ ⟨f⟩ ⊂ Q[x, y], we prove that lm(φ_η(p)) ∈ M_G. If p is identically zero, there is nothing to prove. So, we assume that p ≠ 0; p is then expanded in the form below:
p = Σ_{m∈M_G} cm · m + Σ_{m∈M\M_G} cm · m,
where the cm's are elements of Q[y]. Since p is not identically zero, there exists m ∈ M_G such that cm ≠ 0. Since G is a Gröbner basis of ⟨f⟩K, any monomial in M_G can be reduced by G to a unique normal form in K[x]. These divisions involve denominators, which are products of powers of the leading coefficients of G with respect to the variables x. We write
NF_G(p) = Σ_{m∈M_G} cm · NF_G(m) + Σ_{m∈M\M_G} cm · m.
As p ∈ ⟨f⟩K, we have NF_G(p) = 0, which implies
Σ_{m∈M\M_G} cm · m = −Σ_{m∈M_G} cm · NF_G(m).
Therefore, we have the identity
p = Σ_{m∈M_G} cm · (m − NF_G(m)).
Since η does not cancel any denominator appearing in NF_G(m), we can specialize the identity above without any problem:
φ_η(p) = Σ_{m∈M_G} φ_η(cm) · (m − φ_η(NF_G(m))).
If at least one of the φη(cm) does not vanish, then the leading monomial of φη(p) lies in MG. Otherwise, if all the φη(cm) are canceled, then φη(p) is identically zero and no new leading monomial appears either. Since every element of ⟨f(η, ·)⟩ is of the form φη(p) for some p ∈ ⟨f⟩, the leading monomial of any non-zero element of ⟨f(η, ·)⟩ is contained in MG, which means that G(η, ·) is a Gröbner basis of ⟨f(η, ·)⟩ with respect to grevlex(x).
Proposition 10. For any η ∈ C^t \ W∞, the specialization H(η) coincides with the classic Hermite matrix of the zero-dimensional ideal ⟨f(η, ·)⟩ ⊂ C[x].
Proof. As a consequence of Lemma 9, each computation in AK derives a corresponding one in C[x]/⟨f(η, ·)⟩ by evaluating y at η in every normal form reduction by G. This evaluation is allowed since η does not cancel any denominator appearing during the computation. Therefore, we immediately deduce the specialization property of the Hermite matrix.
Using Proposition 10 and [4, Theorem 4.102], we obtain immediately the following corollary, which allows us to use parametric Hermite matrices to count the roots of a specialization of a parametric system.
Corollary 11. Let η ∈ C^t \ W∞. Then the rank of H(η) is the number of distinct complex roots of f(η, ·). When η ∈ R^t \ W∞, the signature of H(η) is the number of distinct real roots of f(η, ·).
Proof. By Proposition 10, H(η) is a Hermite matrix of the zero-dimensional ideal ⟨f(η, ·)⟩. Then, [4, Theorem 4.102] implies that the rank (resp. the signature) of H(η) equals the number of distinct complex (resp. real) solutions of f(η, ·).
We finish this subsection by explaining what happens above W∞, where our parametric Hermite matrix H loses this specialization property.
Lemma 12. Let W∞ be defined as above. Then W∞ contains all of the following sets:
• The set of non-proper points of the restriction of π to V (see Section 2 for this definition).
• The set of points η ∈ C^t such that the fiber π^{−1}(η) ∩ V is infinite.
• The image by π of the irreducible components of V whose dimensions are smaller than t.
Proof.
The claim for the set of non-properness of the restriction of π to V is already proven in [35, Theorem 2]. We focus on the two remaining sets.
Using the Hermite matrix, we know that for η ∈ C^t \ W∞, the system f(η, ·) admits a non-empty finite set of complex solutions. On the other hand, for any η ∈ C^t such that π^{−1}(η) ∩ V is infinite, f(η, ·) has infinitely many complex solutions. Therefore, the set of such points η is contained in W∞.
Let V>t be the union of the irreducible components of V of dimension greater than t. By the fiber dimension theorem [45, Theorem 1.25], the fibers of the restriction of π to V>t have dimension at least one. Similarly, the components of dimension t whose images by π are contained in a proper Zariski closed subset of C^t also yield infinite fibers. Therefore, as proven above, all of these components are contained in π^{−1}(W∞).
We now consider the irreducible components of dimension smaller than t. Let V≥t and V<t be respectively the union of the irreducible components of V of dimension at least t and at most t − 1. We have that V = V≥t ∪ V<t. Let I ⊂ Q[x, y] denote the ideal generated by f. Using the primary decomposition of I (see e.g. [12, Sec. 4.8]), we have that I is the intersection of two ideals I≥t and I<t such that V(I≥t) = V≥t and V(I<t) = V<t. We write I = I≥t ∩ I<t. We denote by R the polynomial ring Q(y)[x]. Then, the above identity transfers to R:
I · R = (I≥t · R) ∩ (I<t · R).
Since dim(π(V<t)) ≤ t − 1, there exists a non-zero polynomial p ∈ I<t ∩ Q[y]. As p is a unit in Q(y), the ideal I<t · R is exactly R. So, I · R = I≥t · R.
Note that, by Lemma 6, G is a Gröbner basis of I · R, so it is also a Gröbner basis of I≥t · R. Therefore, the Hermite matrices associated to I and I≥t (with respect to the basis derived from G) coincide. So, for η ∉ W∞, the ranks of those matrices are equal and so are the numbers of complex points in π^{−1}(η) ∩ V and π^{−1}(η) ∩ V≥t. As π^{−1}(η) ∩ V≥t ⊂ π^{−1}(η) ∩ V, we have that π^{−1}(η) ∩ V = π^{−1}(η) ∩ V≥t.
This leads to π^{−1}(C^t \ W∞) ∩ V≥t = π^{−1}(C^t \ W∞) ∩ V. Then, π^{−1}(C^t \ W∞) ∩ V<t = ∅ or, equivalently, V<t ⊂ π^{−1}(W∞), which concludes the proof.
4.4. Computing parametric Hermite matrices
Let f = (f1, . . . , fm) ∈ Q[y][x] be a sequence satisfying Assumption (A). We keep denoting K = Q(y). Let G be the reduced Gröbner basis of ⟨f⟩ with respect to the ordering grevlex(x) ≻ grevlex(y) and B be the set of all monomials in the variables x which are not reducible by G. The set B then forms a basis of the K-vector space K[x]/⟨f⟩K. In this subsection, we focus on the computation of the parametric Hermite matrix associated to f with respect to the basis B.
Note that one can design an algorithm using only the definition of parametric Hermite matrices given in Subsection 4.1. More precisely, for each product bi · bj (1 ≤ i, j ≤ δ), one computes the matrix representing Lbi·bj in the basis B by computing the normal form of every bi · bj · bk for 1 ≤ k ≤ δ. Therefore, in total, this direct algorithm requires O(δ³) normal form reductions of polynomials in K[x].
In Algorithm 1 below, we present another algorithm for computing H. We call the following subroutines successively:
• GrobnerBasis, which takes as input the system f and computes the reduced Gröbner basis G of ⟨f⟩ with respect to the ordering grevlex(x) ≻ grevlex(y) and the basis B = {b1, . . . , bδ} ⊂ Q[x] derived from G. Such a subroutine can be obtained from any general algorithm for computing Gröbner bases, e.g., the F4/F5 algorithms [17, 18].
• ReduceGB, which takes as input the Gröbner basis G and outputs a subset G′ of G which is still a Gröbner basis of ⟨f⟩K with respect to the ordering grevlex(x). This subroutine aims to remove the elements of G that we do not need. Even though G is reduced as a Gröbner basis of ⟨f⟩ with respect to grevlex(x) ≻ grevlex(y), it is not necessarily the reduced Gröbner basis of ⟨f⟩K with respect to grevlex(x). Using [12, Lemma 3, Sec.
2.7], we can design ReduceGB to remove all the elements of G which have duplicate leading monomials (in x). We obtain as output a subset G′ of G which is also a Gröbner basis of ⟨f⟩K with respect to grevlex(x). Note that this tweak reduces not only the cardinality of the Gröbner basis in use but also the size of the set W∞ introduced in Subsection 4.3 (as we have fewer leading coefficients).
• XMatrices, which takes as input (G′, B) and computes the matrix representations of the multiplication maps Lxi (1 ≤ i ≤ n) with respect to B. This computation is done directly by reducing every xi · bj (1 ≤ i ≤ n, 1 ≤ j ≤ δ) to its normal form in K[x]/⟨f⟩K using G′.
• BMatrices, which takes as input the matrices representing (Lx1, . . . , Lxn) and B and computes the matrices representing the Lbi's (1 ≤ i ≤ δ) in the basis B. We design BMatrices so that it constructs the matrices of the Lbi's inductively on the degree of the bi's, as follows. At the beginning, we have the multiplication matrices of 1 and of the xi's; those are the matrices of the elements of degree zero and one. At the step of computing the matrix of an element b ∈ B, we remark that there exist a variable xi and a monomial b′ ∈ B such that b = xi · b′ and the matrix of b′ is already computed (as deg(b′) < deg(b)). Therefore, we simply multiply the matrices of Lxi and Lb′ to obtain the matrix of Lb.
• TraceComputing, which takes as input the multiplication matrices Lb1, . . . , Lbδ and computes the matrix (trace(Lbi·bj))_{1≤i,j≤δ}. This matrix is in fact the parametric Hermite matrix H associated to f with respect to the basis B. To design this subroutine, we use the following remark given in . Let p, q ∈ K[x]. The normal form NFG(p) of p by G can be written as NFG(p) = ∑_{i=1}^δ ci · bi, where the ci's lie in K. Then, we have the identity
trace(Lp·q) = ∑_{i=1}^δ ci · trace(Lq·bi).
Hence, by choosing p = bi · bj and q = 1, we can compute hi,j using the normal form NFG(bi · bj) and trace(Lb1), . . . , trace(Lbδ).
Note that trace(Lbi) is easily computed from the matrix of the map Lbi. On the other hand, the normal form of bi · bj can be read off from the j-th row of the matrix representing Lbi, which is already computed at this point. It is also important to notice that there are many duplicated entries in H; thus, we should avoid all unnecessary re-computation. This is done easily by keeping a list that tracks the distinct entries of H.
The pseudo-code of Algorithm 1 is presented below. Its correctness follows directly from our definition of parametric Hermite matrices. Besides the parametric Hermite matrix H, we return a polynomial w∞ which is the square-free part of lcm_{g∈G}(lcx(g)) for further usage. Note that V(w∞) = W∞.
Algorithm 1: DRL-Matrix
Input: A parametric polynomial system f = (f1, . . . , fm)
Output: A parametric Hermite matrix H associated to f with respect to the basis B
1 G, B ← GröbnerBasis(f, grevlex(x) ≻ grevlex(y))
2 G′ ← ReduceGB(G)
3 w∞ ← sqfree(lcm_{g∈G}(lcx(g)))
4 (Lx1, . . . , Lxn) ← XMatrices(G′, B)
5 (Lb1, . . . , Lbδ) ← BMatrices((Lx1, . . . , Lxn), B)
6 H ← TraceComputing(Lb1, . . . , Lbδ)
7 return [H, w∞]
Removing denominators. Note that, through the computation in the quotient ring AK, the entries of our parametric Hermite matrix possibly contain denominators that lie in Q[y]. As the algorithm that we introduce in Section 5 will require us to manipulate the parametric Hermite matrix that we compute, these denominators can become a bottleneck when handling the matrix. Therefore, we introduce an extra subroutine RemoveDenominator that returns a parametric Hermite matrix H′ of f without denominators.
• RemoveDenominator, which takes as input the matrix H computed by DRL-Matrix and outputs a matrix H′ which is the parametric Hermite matrix associated to f with respect to a basis B′ that will be made explicit below.
As we can freely choose any basis of the form {ci · bi | 1 ≤ i ≤ δ} where the ci's are elements of Q[y], we should use a basis that leads to a denominator-free matrix. To do this, we choose ci as the denominator of trace(Lbi) (which lies in the first row of the matrix H computed by TraceComputing). Then, we multiply the entry of H that corresponds to bi and bj by ci · cj. The output matrix H′ is the parametric Hermite matrix associated to f with respect to the basis {ci · bi | 1 ≤ i ≤ δ}. We observe in many examples that this subroutine returns either a denominator-free matrix or a matrix whose denominators have smaller degrees. Thus, it facilitates further computations on the output matrix.
Evaluation & interpolation scheme for generic systems. Here we assume that the input system f satisfies Assumption (B). By Lemma 7, the entries of H are polynomials in Q[y]. Suppose that we know beforehand a value Λ that is larger than the degree of any entry of H. Then we can compute H by an evaluation & interpolation scheme as follows. We start by choosing randomly a set E of binomial(t+Λ, t) distinct points in Q^t. Then, for each η ∈ E, we use DRL-Matrix (Algorithm 1) on the input f(η, ·) to compute the classic Hermite matrix associated to f(η, ·) with respect to the ordering grevlex(x). These computations involve only polynomials in Q[x] and not in Q(y)[x]. Finally, we interpolate the parametric Hermite matrix H from its specialized images H(η) computed previously.
Since Assumption (B) holds, W∞ is empty. By Proposition 10, the Hermite matrix of f(η, ·) with respect to grevlex(x) is the image H(η) of H. Therefore, the above scheme computes the parametric Hermite matrix H correctly. We also remark that, in the computation of the specializations H(η), we can replace the subroutine XMatrices in DRL-Matrix by a linear-algebra-based algorithm described in .
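The evaluation & interpolation scheme above can be sketched on a hypothetical toy system (not one of the paper's benchmarks): for f = x² + y·x + 1 with t = 1 parameter, the entries of H have degree at most Λ = 2 in y, so binomial(t+Λ, t) = 3 sample points suffice.

```python
import sympy as sp

y = sp.Symbol('y')

# Toy illustration of the evaluation & interpolation scheme on the
# hypothetical system f = x^2 + y*x + 1 (t = 1 parameter, Lambda = 2).
def hermite_at(eta):
    # Classic Hermite matrix of x^2 + eta*x + 1, via traces of powers of
    # the companion matrix; only rational arithmetic is involved.
    C = sp.Matrix([[0, -1], [1, -eta]])
    return sp.Matrix(2, 2, lambda i, j: (C**(i + j)).trace())

samples = [0, 1, 2]                       # binomial(1 + 2, 1) = 3 points
images = {eta: hermite_at(eta) for eta in samples}

# Interpolate every entry of H from its specialized images H(eta).
H = sp.Matrix(2, 2, lambda i, j: sp.expand(
    sp.interpolate([(eta, images[eta][i, j]) for eta in samples], y)))

assert H == sp.Matrix([[2, -y], [-y, y**2 - 2]])
```

Each specialized matrix is computed over Q only, and the parametric matrix is recovered entry by entry by univariate Lagrange interpolation; with t > 1 parameters one would interpolate multivariate polynomials instead.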
That algorithm constructs the Macaulay matrix and carries out matrix reductions to obtain simultaneously the normal forms that XMatrices requires. In Section 6, we estimate the complexity of this evaluation & interpolation scheme when the input system f satisfies some genericity assumptions.
5. Algorithms for real root classification
We present in this section two algorithms targeting the real root classification problem through parametric Hermite matrices. The one described in Subsection 5.1 solves the weak version of Problem (1). The second algorithm, given in Subsection 5.2, outputs the semi-algebraic formulas of the cells Si that solve Problem (1). Further, in Section 6, we will see that, for a generic sequence f, the semi-algebraic formulas computed by this algorithm consist of polynomials of degree bounded by n(d − 1)d^n. To our knowledge, this improves all previously known bounds.
Throughout this section, our input is a parametric polynomial system f = (f1, . . . , fm) ⊂ Q[y][x]. We require that f satisfies Assumption (A) and that the ideal ⟨f⟩ is radical. Let G be the reduced Gröbner basis of the ideal ⟨f⟩ ⊂ Q[x, y] with respect to the ordering grevlex(x) ≻ grevlex(y). Let K denote the rational function field Q(y). We recall that B ⊂ Q[x] is the basis of K[x]/⟨f⟩K derived from G and H is the parametric Hermite matrix associated to f with respect to the basis B.
5.1. Algorithm for the weak version of Problem (1)
From Subsection 4.3, we know that, outside the algebraic set W∞ ≔ ∪_{g∈G} V(lcx(g)), the parametric matrix H possesses a good specialization property (see Proposition 10). We denote by w∞ the square-free part of lcm_{g∈G} lcx(g). This polynomial w∞ is returned as an output of Algorithm 1. Note that V(w∞) = W∞.
Lemma 13. When Assumption (A) holds and the ideal ⟨f⟩ is radical, the determinant of H is not identically zero.
Proof. Recall that K denotes the rational function field Q(y). We prove that the ideal ⟨f⟩K ⊂ K[x] is radical.
Let p ∈ K[x] be such that p^N ∈ ⟨f⟩K for some positive integer N. Then there exists a polynomial q ∈ Q[y] such that q · p^N ∈ ⟨f⟩, and hence (q · p)^N ∈ ⟨f⟩. As ⟨f⟩ is radical, we have that q · p ∈ ⟨f⟩. Thus, p ∈ ⟨f⟩K, which proves that ⟨f⟩K is radical. By Lemma 4, ⟨f⟩K is therefore a radical zero-dimensional ideal of K[x]. Since H is also a Hermite matrix (in the classic sense) of ⟨f⟩K, H has full rank. Therefore, det(H) is not identically zero.
Let wH ≔ w / gcd(w, w∞), where w denotes the square-free part of the numerator of det(H). We denote by WH the vanishing set of wH. By Lemma 13, WH is a proper Zariski closed subset of C^t. Our algorithm relies on the following proposition.
Proposition 14. Assume that Assumption (A) holds and the ideal ⟨f⟩ is radical. Then, for each connected component S of the semi-algebraic set R^t \ (W∞ ∪ WH), the number of real solutions of f(η, ·) is invariant when η varies over S.
Proof. By Lemma 12, W∞ contains the following sets:
• The set of non-proper points of the restriction of π to V.
• The set of points η ∈ C^t such that the fiber π^{−1}(η) ∩ V is infinite.
• The image by π of the irreducible components of V whose dimensions are smaller than t.
Now we consider the set K(π, V) ≔ sing(V) ∪ crit(π, V). Let ∆ ≔ jac(f, x) be the Jacobian matrix of f with respect to the variables x. The ideal generated by the n × n minors of ∆ is denoted by I∆. Note that, since ⟨f⟩ is radical, K(π, V) is the algebraic set defined by the ideal ⟨f⟩ + I∆.
By Proposition 10, for η ∈ C^t \ W∞, ⟨f(η, ·)⟩ is a zero-dimensional ideal and the quotient ring C[x]/⟨f(η, ·)⟩ has dimension δ. Moreover, if η ∈ C^t \ (W∞ ∪ WH), the system f(η, ·) has δ distinct complex solutions, as the rank of H(η) is δ. Therefore, every complex root of f(η, ·) has multiplicity one (we use the definition of multiplicity given in [4, Sec. 4.5]). Now we prove that, for such a point η, the fiber π^{−1}(η) does not intersect K(π, V). Assume by contradiction that there exists a point (η, χ) ∈ C^{t+n} lying in π^{−1}(η) ∩ K(π, V).
Note that χ is a solution of f(η, ·), i.e., f(η, χ) = 0. As (η, χ) ∈ K(π, V), it is contained in V(I∆). Hence, as the derivation in ∆ does not involve y, χ cancels all the n × n minors of the Jacobian matrix jac(f(η, ·), x). [4, Proposition 4.16] then implies that χ has multiplicity greater than one. This contradicts the fact that f(η, ·) admits only complex solutions of multiplicity one. Therefore, we conclude that, for η ∈ C^t \ (W∞ ∪ WH), π^{−1}(η) does not intersect K(π, V).
So, using what we proved above and Lemma 12, we deduce that, for η ∈ R^t \ (W∞ ∪ WH), there exists an open neighborhood Oη of η for the Euclidean topology such that π^{−1}(Oη) does not intersect K(π, V) ∪ π^{−1}(W∞). Therefore, by Thom's isotopy lemma , the projection π realizes a locally trivial fibration over R^t \ (W∞ ∪ WH). So, for any connected component C of R^t \ (W∞ ∪ WH) and any η ∈ C, we have that π^{−1}(C) ∩ V ∩ R^{t+n} is homeomorphic to C × (π^{−1}(η) ∩ V ∩ R^{t+n}). As a consequence, the number of distinct real solutions of f(η, ·) is invariant when η varies over each connected component of R^t \ (W∞ ∪ WH).
To describe Algorithm 2, we need to introduce the following subroutines:
• CleanFactors, which takes as input a polynomial p ∈ Q[y, x] and the polynomial w∞. It computes the square-free part of p with all the factors common with w∞ removed.
• Signature, which takes as input a symmetric matrix with entries in Q and evaluates its signature.
• SamplePoints, which takes as input a set of polynomials g1, . . . , gs ∈ Q[y] and computes a finite subset R of Q^t that intersects every connected component of the semi-algebraic set defined by ∧_{i=1}^s gi ≠ 0. An explicit description of SamplePoints is given in the proof of Theorem II in Section 3.
The pseudo-code of Algorithm 2 is below. Its correctness follows immediately from Proposition 14 and Corollary 11.
Algorithm 2: Weak-RRC-Hermite
Input: A polynomial sequence f ∈ Q[y][x] such that ⟨f⟩ is radical and Assumption (A) holds.
Output: A set of sample points and the corresponding numbers of real solutions, solving the weak version of Problem (1)
1 [H, w∞] ← DRL-Matrix(f)
2 wH ← CleanFactors(numer(det(H)), w∞)
3 L ← SamplePoints(wH ≠ 0 ∧ w∞ ≠ 0)
4 for η ∈ L do
5   rη ← Signature(H(η))
6 end
7 return {(η, rη) | η ∈ L}
Remark 15. As we have seen, Algorithm 2 obtains, by computing the determinant of a parametric Hermite matrix, a polynomial which serves similarly to discriminant varieties or border polynomials. By contrast, the two latter strategies rely on algebraic elimination based on Gröbner bases to compute the projection of crit(π, V) onto the y-space. Since it is well known that the computation of such a Gröbner basis can be heavy, our algorithm has a chance to be more practical. In Section 7, we provide experimental results to support this claim.
Remark 16. It is worth noticing that, even though the design of Algorithm 2 employs the grevlex monomial ordering where x1 ≻ · · · ≻ xn, we can replace it by any grevlex ordering with another lexicographical order among the x's. For instance, we can use the monomial ordering grevlex(xn ≻ · · · ≻ x1). While every theoretical claim still holds for this ordering, the practical behavior could be different.
5.2. Computing semi-algebraic formulas
By Corollary 11, the number of real roots of the system f(η, ·) for a given point η ∈ R^t \ W∞ can be obtained by evaluating the signature of the parametric Hermite matrix H. We recall that the signature of a matrix can be deduced from the sign pattern of its leading principal minors. More precisely, we recall the following criterion, introduced by and (see for a summary of these works).
Lemma 17. [23, Theorem 2.3.6] Let S be a symmetric matrix in R^{δ×δ} and, for 1 ≤ i ≤ δ, let S_i be the i-th leading principal minor of S, i.e., the determinant of the sub-matrix formed by the first i rows and i columns of S. By convention, we set S_0 = 1. We assume that S_i ≠ 0 for 0 ≤ i ≤ δ.
Let k be the number of sign variations between consecutive minors S_i and S_{i+1} for 0 ≤ i ≤ δ − 1. Then, the numbers of positive and negative eigenvalues of S are respectively δ − k and k. Thus, the signature of S is δ − 2k.
This criterion leads us to the following idea. Assume that none of the leading principal minors of H is identically zero. We consider the semi-algebraic subset of R^t defined by the non-vanishing of those leading principal minors. Over a connected component S′ of this semi-algebraic set, each leading principal minor is non-zero and its sign is invariant. As a consequence, by Lemma 17 and Corollary 11, the number of distinct real roots of f(η, ·) is invariant when η varies over S′ \ W∞.
However, this approach does not apply directly if one of the leading principal minors of H is identically zero. We bypass this obstacle by picking randomly an invertible matrix A ∈ GLδ(Q) and working with the matrix HA ≔ A^T · H · A. The lemma below states that, for a generic matrix A, none of the leading principal minors of HA is identically zero.
Lemma 18. There exists a Zariski dense subset A of GLδ(Q) such that, for A ∈ A, none of the leading principal minors of HA ≔ A^T · H · A is identically zero.
Proof. For 1 ≤ r ≤ δ, we denote by Mr the set of all r × r minors of H. Let η ∈ Q^t \ (W∞ ∪ WH). We have that H(η) is a full-rank matrix in Q^{δ×δ} and, for A ∈ GLδ(Q), HA(η) = A^T · H(η) · A. We prove that there exists a Zariski dense subset A of GLδ(Q) such that, for A ∈ A, none of the leading principal minors of HA(η) is zero. Then, as an immediate consequence, none of the leading principal minors of HA is identically zero.
We consider the matrix A = (ai,j)_{1≤i,j≤δ} where a = (ai,j) are new variables. Then, the r-th leading principal minor Mr(a) of A^T · H(η) · A can be written as
Mr(a) = ∑_{m∈Mr} am · m(η),
where the am's are elements of Q[a]. As H(η) is a full-rank symmetric matrix, there exists a matrix Q ∈ GLδ(R) such that Q^T · H(η) · Q is a diagonal matrix with no zero on its diagonal.
Hence, evaluating a at the entries of Q gives Mr(a) a non-zero value. As a consequence, Mr(a) is not identically zero. Let Ar be the non-empty Zariski open subset of GLδ(Q) defined by Mr(a) ≠ 0; for every A ∈ Ar, the r-th leading principal minor of A^T · H(η) · A is not zero. Taking A as the intersection of the Ar for 1 ≤ r ≤ δ, we obtain that, for A ∈ A, none of the leading principal minors of A^T · H(η) · A equals zero. Consequently, no leading principal minor of A^T · H · A is identically zero.
Our algorithm (Algorithm 3) for solving Problem (1) through parametric Hermite matrices is described below. As it depends on the random choice of the matrix A, Algorithm 3 is probabilistic. One can easily modify it into a Las Vegas algorithm by detecting the cancellation of the leading principal minors for each choice of A.
Algorithm 3: RRC-Hermite
Input: A polynomial sequence f ⊂ Q[y][x] such that the ideal ⟨f⟩ is radical and f satisfies Assumption (A)
Output: The descriptions of a collection of semi-algebraic sets Si solving Problem (1)
1 H, w∞ ← DRL-Matrix(f)
2 Choose randomly a matrix A in Q^{δ×δ}
3 HA ← A^T · H · A
4 (M1, . . . , Mδ) ← LeadingPrincipalMinors(HA)
5 L ← SamplePoints(w∞ ≠ 0 ∧ (∧_{i=1}^δ Mi ≠ 0))
6 for η ∈ L do
7   rη ← Signature(H(η))
8 end
9 return {(sign(M1(η), . . . , Mδ(η)), η, rη) | η ∈ L}
Proposition 19. Assume that f satisfies Assumption (A) and that the ideal ⟨f⟩ is radical. Let A be a matrix in GLδ(Q) such that none of the leading principal minors M1, . . . , Mδ of HA ≔ A^T · H · A is identically zero. Then, Algorithm 3 correctly computes a solution for Problem (1).
Proof. Note that, for η ∈ R^t \ W∞, we have HA(η) = A^T · H(η) · A. Therefore, the signature of H(η) equals the signature of HA(η). Let M1, . . . , Mδ be the leading principal minors of HA and S be the semi-algebraic set defined by ∧_{i=1}^δ Mi ≠ 0. Over each connected component S′ of S, the sign of each Mi is invariant and non-zero.
Therefore, by Lemma 17, the signature of HA(η), and hence of H(η), is invariant when η varies over S′ \ W∞. As a consequence, by Corollary 11, the number of distinct real roots of f(η, ·) is also invariant when η varies over S′ \ W∞. This finishes the proof of correctness of Algorithm 3.
6. Complexity analysis
6.1. Degree bound of parametric Hermite matrices on generic input
In this subsection, we consider a sequence f = (f1, . . . , fn) ⊂ Q[y][x] that is an affine regular sequence with respect to the variables x, i.e., the homogeneous components of largest degree in x of the fi's form a homogeneous regular sequence (see Section 2). Additionally, we require that f satisfies Assumptions (A) and (B). Let d be the largest total degree of the fi's. Since homogeneous regular sequences are generic among homogeneous polynomial sequences (see, e.g., [2, Proposition 1.7.4] or ), the same genericity property holds for affine regular sequences (thanks to the definition we use).
As in the previous sections, G denotes the reduced Gröbner basis of ⟨f⟩ with respect to the ordering grevlex(x) ≻ grevlex(y). Let δ be the dimension of the K-vector space K[x]/⟨f⟩K where K = Q(y). By Bézout's inequality, δ ≤ d^n. We derive from G a basis B = {b1, . . . , bδ} of K[x]/⟨f⟩K consisting of monomials in the variables x. Finally, the parametric Hermite matrix of f with respect to B is denoted by H = (hi,j)_{1≤i,j≤δ}.
For a polynomial p ∈ Q[y, x], we denote by deg(p) the total degree of p in (y, x) and by degx(p) the partial degree of p in the variables x. As Assumption (B) holds, by Lemma 7, the entries of the parametric Hermite matrix H associated to f with respect to the basis B are elements of Q[y]. To establish a degree bound on the entries of H, we need to introduce the following assumption.
Assumption (C). For any g ∈ G, we have that deg(g) = degx(g).
Proposition 20 below states that Assumption (C) is generic. A direct consequence is a proof of Proposition 8.
Proposition 20.
Let C[x, y]d be the set of polynomials in C[x, y] having total degree bounded by d. There exists a non-empty Zariski open subset FD of C[x, y]^n_d such that Assumption (C) holds for f ∈ FD ∩ Q[x, y]^n. Consequently, for f ∈ FD ∩ Q[x, y]^n, f satisfies Assumption (B).
Proof. Let y_{t+1} be a new indeterminate. For any polynomial p ∈ Q[x, y], we consider the homogenized polynomial p^h ∈ Q[x, y, y_{t+1}] of p defined as follows:
p^h = y_{t+1}^{deg(p)} · p(x1/y_{t+1}, . . . , xn/y_{t+1}, y1/y_{t+1}, . . . , yt/y_{t+1}).
Let C[x, y, y_{t+1}]^h_d be the set of homogeneous polynomials in C[x, y, y_{t+1}] whose degrees are exactly d. By [47, Corollary 1.85], there exists a non-empty Zariski open subset F^h_D of (C[x, y, y_{t+1}]^h_d)^n such that the variables x are in Noether position with respect to f^h for every f^h ∈ F^h_D. For f^h ∈ F^h_D, let G^h be the reduced Gröbner basis of f^h with respect to the grevlex ordering grevlex(x ≻ y ≻ y_{t+1}). By [3, Proposition 7], if the variables x are in Noether position with respect to f^h, then the leading monomials appearing in G^h depend only on x.
Let f and G be the images of f^h and G^h under the substitution y_{t+1} = 1. We show that G is a Gröbner basis of f with respect to the ordering grevlex(x ≻ y). Since G^h generates ⟨f^h⟩, G is a generating set of ⟨f⟩. As the leading monomials of the elements of G^h do not depend on y_{t+1}, the substitution y_{t+1} = 1 does not affect these leading monomials. A polynomial p ∈ ⟨f⟩ ⊂ Q[x, y] can be written as p = ∑_{i=1}^n ci · fi, where the ci's lie in Q[x, y]. We homogenize the polynomials ci · fi on the right-hand side to obtain a homogeneous polynomial P^h ∈ ⟨f^h⟩. Note that P^h is not necessarily the homogenization p^h of p but only the product of p^h with a power of y_{t+1}. Then, there exists a polynomial g^h ∈ G^h such that the leading monomial of g^h divides the leading monomial of P^h. Since the leading monomial of g^h depends only on x, it also divides the leading monomial of p^h, which is the leading monomial of p.
So, the leading monomial of the image of g^h in G divides the leading monomial of p. We conclude that G is a Gröbner basis of f with respect to the ordering grevlex(x ≻ y) and that the set of leading monomials of G depends only on the variables x.
Let FD be the subset of C[x, y]^n_d such that, for every f ∈ FD, its homogenization f^h is contained in F^h_D. Since the two spaces (C[x, y, y_{t+1}]^h_d)^n and C[x, y]^n_d are both exactly C^{binomial(d+n+t, n+t) × n} (by considering each monomial coefficient as a coordinate), FD is also a non-empty Zariski open subset of C[x, y]^n_d.
Assume now that the polynomial sequence f belongs to FD. We consider the following two monomial orderings over Q[x, y]:
• The elimination ordering grevlex(x) ≻ grevlex(y), abbreviated O1. The leading monomial of p ∈ Q[x, y] with respect to O1 is denoted by lm1(p). The reduced Gröbner basis of f with respect to O1 is G.
• The grevlex ordering grevlex(x ≻ y), abbreviated O2. The leading monomial of p ∈ Q[x, y] with respect to O2 is denoted by lm2(p). The reduced Gröbner basis of f with respect to O2 is denoted by G2. As proven above, the set {lm2(g2) | g2 ∈ G2} does not depend on y.
With this property, we will show that, for any g2 ∈ G2, there exists a polynomial g ∈ G such that lm1(g) divides lm2(g2). By definition, lm2(g2) is greater than any other monomial of g2 with respect to the ordering O2. Since lm2(g2) depends only on the variables x, it is then greater than any monomial of g2 with respect to the ordering O1. Hence, lm2(g2) is also lm1(g2). Consequently, since G is a Gröbner basis of f with respect to O1, there exists a polynomial g ∈ G such that lm1(g) divides lm1(g2) = lm2(g2).
Next, we prove that, for every g ∈ G, lm1(g) is also lm2(g). For this, we rely on the fact that G is reduced. Assume by contradiction that there exists a polynomial g ∈ G such that lm1(g) ≠ lm2(g). Thus, lm2(g) must involve both x and y. Let tx be the part of lm2(g) involving only the variables x. Note that lm1(g) is greater than tx with respect to O1.
There exists an element g2 ∈ G2 such that lm2(g2) divides lm2(g). Since lm2(g2) depends only on the variables x, we have that lm2(g2) divides tx. Then, by what we proved above, there exists g′ ∈ G such that lm1(g′) divides lm2(g2), so lm1(g′) divides tx. This implies that G is not reduced, which contradicts the definition of G. So, lm1(g) = lm2(g) for every g ∈ G and, consequently, deg(g) = degx(g).
We conclude that there exists a non-empty Zariski open subset FD (as above) of C[x, y]^n_d such that Assumption (C) holds for every f ∈ FD ∩ Q[x, y]^n. Additionally, one easily notices that Assumption (C) implies Assumption (B). As a consequence, any f ∈ FD ∩ Q[x, y]^n also satisfies Assumption (B).
Recall that, when Assumption (B) holds, by Lemma 7, the trace of any multiplication map Lp with p ∈ Q[y][x] is a polynomial in Q[y]. We now estimate the degree of trace(Lp). Since the map p ↦ trace(Lp) is linear, it is sufficient to consider p a monomial in the variables x.
Proposition 21. Assume that Assumption (C) holds. Then, for any monomial m in the variables x, the degree in y of trace(Lm) is bounded by deg(m). As a consequence, the total degree of the entry hi,j = trace(Lbi·bj) of H is at most the sum of the total degrees of bi and bj, i.e., deg(hi,j) ≤ deg(bi) + deg(bj).
Proof. Let m be a monomial in Q[x]. The multiplication matrix Lm is built as follows. For 1 ≤ i ≤ δ, the normal form of bi · m as a polynomial in Q(y)[x] writes
NFG(bi · m) = ∑_{j=1}^δ ci,j · bj.
Note that this normal form is the remainder of the successive divisions of bi · m by the polynomials of G. As Assumption (C) holds, Assumption (B) also holds; therefore, those divisions do not introduce any denominator. So, every term appearing during these normal form reductions is a polynomial in Q[y][x].
Let p ∈ Q[y][x]. For any g ∈ G, by Assumption (C), the total degree in (y, x) of every term of g is at most the degree of lmx(g). Thus, a division of p by g involves only terms of total degree at most deg(p).
Thus, during the polynomial division of p by G, only terms of degree at most deg(p) appear. Hence, the degree of NFG(p) is bounded by deg(p). Note that trace(Lm) = ∑_{i=1}^δ ci,i. As the degree of ci,i · bi is bounded by deg(bi) + deg(m), the degree of ci,i is at most deg(m). Then, we obtain that deg(trace(Lm)) ≤ deg(m). Finally, the degree bound on hi,j follows immediately:
deg(hi,j) = deg(trace(Lbi·bj)) ≤ deg(bi · bj) = deg(bi) + deg(bj).
Lemma 22. Assume that f satisfies Assumption (C). Then the degree of a minor M of H consisting of the rows (r1, . . . , rℓ) and the columns (c1, . . . , cℓ) is bounded by ∑_{i=1}^ℓ (deg(bri) + deg(bci)). In particular, the degree of det(H) is bounded by 2 ∑_{i=1}^δ deg(bi).
Proof. We expand the minor M into terms of the form sign(σ) · h_{r1,σ(c1)} · · · h_{rℓ,σ(cℓ)}, where σ is a permutation of {c1, . . . , cℓ} and sign(σ) ∈ {±1} is its signature. Using Proposition 21, we bound the degree of each of those terms as follows:
deg(∏_{i=1}^ℓ h_{ri,σ(ci)}) = ∑_{i=1}^ℓ deg(h_{ri,σ(ci)}) ≤ ∑_{i=1}^ℓ (deg(bri) + deg(bσ(ci))) = ∑_{i=1}^ℓ (deg(bri) + deg(bci)).
Hence, taking the sum of all those terms, we obtain the inequality deg(M) ≤ ∑_{i=1}^ℓ (deg(bri) + deg(bci)). When M is taken as the determinant of H, this gives deg(det(H)) ≤ 2 ∑_{i=1}^δ deg(bi).
Proposition 21 implies that, when Assumption (C) holds, the degree pattern of H depends only on the degrees of the elements of B = {b1, . . . , bδ}. We rearrange B in increasing order of degree, i.e., deg(bi) ≤ deg(bj) for 1 ≤ i < j ≤ δ. So, b1 = 1 and deg(b1) = 0. The degree bounds on the entries of H are then given by the matrix
[ 0         deg(b2)             . . .   deg(bδ)
  deg(b2)   2 deg(b2)           . . .   deg(bδ) + deg(b2)
  . . .     . . .               . . .   . . .
  deg(bδ)   deg(bδ) + deg(b2)   . . .   2 deg(bδ) ].
Moreover, using the regularity of f, we are able to establish explicit degree bounds for the elements of B and hence for the minors of H.
Lemma 23.
Assume that f is an affine regular sequence and let B be the basis defined as above. Then the highest degree among the elements of B is bounded by n(d − 1), and

  2 \sum_{i=1}^{δ} deg(b_i) ≤ n(d − 1) d^n.

Proof. For p ∈ K[x], let p^h ∈ K[x_1, …, x_{n+1}] be the homogenization of p with respect to the variable x_{n+1}, i.e.,

  p^h = x_{n+1}^{deg_x(p)} · p( x_1/x_{n+1}, …, x_n/x_{n+1} ).

The dehomogenization map α is defined as

  α : K[x_1, …, x_{n+1}] → K[x_1, …, x_n],   p(x_1, …, x_{n+1}) ↦ p(x_1, …, x_n, 1).

Also, the homogeneous component of largest degree of p with respect to the variables x is denoted by H p. Throughout this proof, we use the following notations:

• I = ⟨f⟩_K and G is the reduced Gröbner basis of I w.r.t. grevlex(x_1 ≻ ⋯ ≻ x_n).
• I^h = ⟨p^h | p ∈ f⟩_K and G^h is the reduced Gröbner basis of I^h w.r.t. grevlex(x_1 ≻ ⋯ ≻ x_{n+1}).

The Hilbert series of the homogeneous ideal I^h can be written as

  HS_{I^h}(z) = \sum_{r=0}^{∞} ( dim_K K[x]_r − dim_K(I^h ∩ K[x]_r) ) · z^r,

where K[x]_r = { p ∈ K[x] : deg_x(p) = r }. Since f is an affine regular sequence, by definition (see Section 2), H f = (H f_1, …, H f_n) forms a homogeneous regular sequence. Equivalently, by [47, Proposition 1.44], the homogeneous polynomial sequence ((f_1)^h, …, (f_n)^h, x_{n+1}) is regular. In particular, ((f_1)^h, …, (f_n)^h) is a homogeneous regular sequence and, by [36, Theorem 1.5], we obtain

  HS_{I^h}(z) = \prod_{i=1}^{n} (1 − z^{deg(f_i)}) / (1 − z)^{n+1} = \prod_{i=1}^{n} (1 + ⋯ + z^{deg(f_i)−1}) / (1 − z).

On the other hand, as ((f_1)^h, …, (f_n)^h, x_{n+1}) is a homogeneous regular sequence, by [3, Proposition 7], the leading terms of G^h w.r.t. grevlex(x_1 ≻ ⋯ ≻ x_{n+1}) do not depend on the variable x_{n+1}. Thus, the dehomogenization map α does not affect the set of leading terms of G^h. Besides, α(G^h) is a Gröbner basis of I with respect to grevlex(x) (see, e.g., the proof of [20, Lemma 27]). Hence, the leading terms of G^h coincide with the leading terms of G. As a consequence, the set of monomials in (x_1, …
, x_{n+1}) which are not contained in the initial ideal of I^h with respect to grevlex(x_1 ≻ ⋯ ≻ x_{n+1}) is exactly { b · x_{n+1}^j | b ∈ B, j ∈ N }. As a consequence,

  dim_K K[x]_r − dim_K(I^h ∩ K[x]_r) = \sum_{j=0}^{r} |B ∩ K[x]_j|.

Let H(z) = \sum_{r=0}^{∞} |B ∩ K[x]_r| · z^r. We have that

  (1 − z) · HS_{I^h}(z) = (1 − z) \sum_{r=0}^{∞} \sum_{j=0}^{r} |B ∩ K[x]_j| · z^r = \sum_{r=0}^{∞} |B ∩ K[x]_r| · z^r = H(z).

Then,

  H(z) = \prod_{i=1}^{n} (1 + ⋯ + z^{deg(f_i)−1}).

As a direct consequence, max_{1≤i≤δ} deg(b_i) is bounded by \sum_{i=1}^{n} deg(f_i) − n ≤ n(d − 1).

Let G_1 and G_2 be two polynomials in Z[z]. We write G_1 ≤ G_2 if and only if, for any r ≥ 0, the coefficient of z^r in G_2 is greater than or equal to the one in G_1. Since deg(f_i) ≤ d for every 1 ≤ i ≤ n, we have

  H(z) = \prod_{i=1}^{n} (1 + ⋯ + z^{deg(f_i)−1}) ≤ \prod_{i=1}^{n} (1 + ⋯ + z^{d−1}).

As a consequence, H′(z) = \sum_{r=1}^{∞} ( r |B ∩ K[x]_r| ) · z^{r−1} ≤ ( \prod_{i=1}^{n} (1 + ⋯ + z^{d−1}) )′. Expanding this derivative, we obtain

  H′(z) ≤ n ( \sum_{i=0}^{d−1} z^i )^{n−1} · ( \sum_{i=0}^{d−1} z^i − d z^{d−1} ) / (1 − z) = n ( \sum_{i=0}^{d−1} z^i )^{n−1} \sum_{i=0}^{d−2} z^i (1 + ⋯ + z^{d−i−2}).

By substituting z = 1 in the above inequality, we obtain

  H′(1) ≤ n d^{n−1} \sum_{i=0}^{d−2} (d − i − 1) = n(d − 1) d^n / 2.

Thus, we have that \sum_{i=1}^{δ} deg(b_i) = \sum_{r=0}^{∞} r |B ∩ K[x]_r| = H′(1) ≤ n(d − 1) d^n / 2.

Corollary 24 below follows immediately from Lemmas 22 and 23.

Corollary 24. Assume that f is a regular sequence that satisfies Assumption (C). Then the degree of any minor of H is bounded by n(d − 1) d^n.

Remark 25. Note that Assumption (C) requires a condition on the degrees of the polynomials in the Gröbner basis G of ⟨f⟩. We remark that it is possible to establish similar bounds for the degrees of the entries of our parametric Hermite matrix and its minors when the system f satisfies a weaker property than Assumption (C) (we still keep the regularity assumption). Indeed, we only need to assume that, for any g ∈ G, the homogeneous component of highest degree in x of g does not depend on the parameters y. Let d_y be an upper bound on the partial degrees in y of the elements of G.
Under the change of variables x_i ↦ x_i^{d_y}, f is mapped to a new polynomial sequence that satisfies Assumption (C). Therefore, we easily deduce the two following bounds, which are similar to the ones of Proposition 21 and Corollary 24:

• deg(h_{i,j}) ≤ d_y ( deg(b_i) + deg(b_j) );
• the degree of any minor of H is bounded by d_y n(d − 1) d^n.

Even though these bounds are no longer sharp, they still allow us to compute the parametric Hermite matrices using the evaluation & interpolation scheme and to control the complexity of this computation in the instances where Assumption (C) does not hold.

6.2. Complexity analysis of our algorithms

In this subsection, we analyze the complexity of our algorithms on generic systems. Let f = (f_1, …, f_n) ⊂ Q[x, y] be a regular sequence, where y = (y_1, …, y_t) and x = (x_1, …, x_n), satisfying Assumptions (A) and (C). To simplify the asymptotic complexity, we assume that n, t and d are greater than or equal to 2.

We denote by G the reduced Gröbner basis of f with respect to the ordering grevlex(x) ≻ grevlex(y). The basis B is taken as all the monomials in x that are irreducible by G. Then, H is the parametric Hermite matrix associated to f with respect to B.

We start by estimating the arithmetic complexity for computing the parametric Hermite matrix H and its minors. We denote λ := n(d − 1) and D := n(d − 1) d^n.

Proposition 26. Assume that f = (f_1, …, f_n) ⊂ Q[y][x] is a regular sequence that satisfies Assumptions (A) and (C). Let δ be the dimension of the K-vector space K[x]/⟨f⟩_K where K = Q(y). Let H be the parametric Hermite matrix associated to f constructed using the grevlex(x) ordering. Then, by Lemma 7, the entries of the parametric Hermite matrix H lie in Q[y]. Using the evaluation & interpolation scheme, one can compute H within

  Õ( \binom{t+2λ}{t} ( n \binom{d+n+t}{n+t} + n^{ω+1} d^{ωn+1} + d^{(ω+1)n} ) )

arithmetic operations in Q, where, by Bézout's bound, δ is bounded by d^n.
Moreover, each minor (including the determinant) of H can be computed using

  Õ( \binom{t+D}{t} ( d^{2n} \binom{t+2λ}{t} + d^{ωn} ) )

arithmetic operations in Q.

Proof. By Lemma 23 and Proposition 21, the highest degree among the entries of H is bounded by 2λ = 2n(d − 1). The evaluation & interpolation scheme of Subsection 4.4 requires computing \binom{t+2λ}{t} specialized Hermite matrices. We first analyze the complexity of computing each of those specialized Hermite matrices.

The evaluation of f at each point η ∈ Q^t costs O( n \binom{d+n+t}{n+t} ) arithmetic operations in Q. As the highest degree in the Gröbner basis of f(η, ·) w.r.t. the grevlex(x) ordering is bounded by n(d − 1) + 1, the computation of this Gröbner basis can be done within O( n d^{ωn} ) arithmetic operations in Q (see [16, Theorem 5.1]).

Next, we compute the matrices representing the L_{x_i}'s. Using [16, Algo. 4], we obtain an arithmetic complexity of O( d n^{ω+1} δ^ω ) ([16, Prop. 5]) for computing such n matrices, where ω is the exponent of matrix multiplication. Using δ ≤ d^n, we obtain the bound O( n^{ω+1} d^{ωn+1} ).

The traces of these matrices are then computed using nδ additions in Q. The subroutine BMatrices consists essentially of δ multiplications of δ × δ matrices (with entries in Q). This leads to an arithmetic complexity of O(δ^{ω+1}), which is then bounded by O( d^{(ω+1)n} ). Next, the computation of each entry h_{i,j} is simply a vector multiplication of length δ, whose complexity is O(δ). Doing so for δ^2 entries, TraceComputing takes overall O(δ^3) arithmetic operations in Q. Thus, as δ ≤ d^n, the complexity of the evaluation step lies in

  O( \binom{t+2λ}{t} ( n \binom{d+n+t}{n+t} + n^{ω+1} d^{ωn+1} + d^{(ω+1)n} ) ).

Finally, we interpolate δ^2 entries which are polynomials in Q[y] of degree at most 2λ. Using the multivariate interpolation algorithm of , the complexity of this step lies in

  O( δ^2 \binom{t+2λ}{t} log^2 \binom{t+2λ}{t} log log \binom{t+2λ}{t} ).

Summing up both steps, we conclude that the parametric Hermite matrix H can be obtained within

  Õ( \binom{t+2λ}{t}
( n \binom{d+n+t}{n+t} + n^{ω+1} d^{ωn+1} + d^{(ω+1)n} ) )

arithmetic operations in Q.

Similarly, the minors of H can be computed using the evaluation & interpolation technique. By Corollary 24, the degree of every minor of H is bounded by D. We specialize H at \binom{t+D}{t} points in Q^t and compute the corresponding minor of each specialized Hermite matrix. This step takes

  O( \binom{t+D}{t} ( δ^2 \binom{t+2λ}{t} + δ^ω ) )

arithmetic operations in Q. Finally, using the multivariate interpolation algorithm of , it requires

  O( \binom{t+D}{t} log^2 \binom{t+D}{t} log log \binom{t+D}{t} )

arithmetic operations in Q to interpolate the final minor. Therefore, using δ ≤ d^n, the whole complexity for computing each minor of H lies within

  Õ( \binom{t+D}{t} ( d^{2n} \binom{t+2λ}{t} + d^{ωn} ) ).

We note that the complexity of computing the matrix H in Proposition 26 is also bounded by the complexity of computing its minors. Indeed, we have that

  \binom{d+n+t}{n+t} = (d+n+t) ⋯ (d+n+1) (d+n) ⋯ (d+1) / (n+t)!
                    ≤ [ (d+n+t) ⋯ (d+n+1) / t! ] · [ (d+n) ⋯ (d+1) / n! ]
                    ≤ [ (D+t) ⋯ (D+1) / t! ] · (2 d^n) = \binom{D+t}{t} (2 d^n).

Asymptotically, n^{ω+1} d^{ωn+1} is bounded by Õ( d^{(ω+1)n} ). For t ≥ 2, \binom{t+D}{t} ≥ D^2/2 ≥ d^{(ω−1)n}. Hence, we obtain

  \binom{t+2λ}{t} ( n \binom{d+n+t}{n+t} + n^{ω+1} d^{ωn+1} + d^{(ω+1)n} ) ∈ Õ( \binom{t+2λ}{t} \binom{t+D}{t} d^{2n} ),

which proves our claim above.

Finally, we state our main result, Theorem I below. It estimates the arithmetic complexity of Algorithms 2 and 3.

Theorem I. Let f ⊂ Q[x, y] be a regular sequence such that the ideal ⟨f⟩ is radical and f satisfies Assumptions (A) and (C). Recall that D denotes n(d − 1) d^n. Then, we have the following statements:

i) The arithmetic complexity of Algorithm 2 lies in

  Õ( \binom{t+D}{t} 2^{3t} n^{2t+1} d^{2nt+n+2t+1} ).

ii) Algorithm 3, which is probabilistic, computes a set of semi-algebraic descriptions solving Problem (1) within

  Õ( \binom{t+D}{t} 2^{3t} n^{2t+1} d^{3nt+2(n+t)+1} )

arithmetic operations in Q in case of success.
iii) The semi-algebraic descriptions output by Algorithm 3 consist of polynomials in Q[y] of degree bounded by D.

Proof. As Assumption (C) holds, we have that w_∞ = 1 and w_H is the square-free part of det(H). Therefore, after computing the parametric Hermite matrix H and its determinant, whose complexity is given by Proposition 26, Algorithm 2 essentially consists of computing sample points of the connected components of the algebraic set R^t \ V(det(H)). By Corollary 24, the degree of det(H) is bounded by D. Applying Corollary 3, we obtain the following arithmetic complexity for this computation of sample points:

  Õ( \binom{t+D}{t} 2^{3t} D^{2t+1} ) ≃ Õ( \binom{t+D}{t} 2^{3t} n^{2t+1} d^{2nt+n+2t+1} ).

Also by Corollary 3, the finite subset of Q^t output by SamplePoints has cardinality bounded by 2^t D^t. Thus, evaluating the specializations of H at those points and their signatures costs in total O( 2^t D^t ( δ^2 \binom{2λ+t}{t} + δ^{ω+1} ) ) arithmetic operations in Q using [4, Algorithm 8.43]. Therefore, the complexity of SamplePoints dominates the whole complexity of the algorithm. We conclude that Algorithm 2 runs within

  Õ( \binom{t+D}{t} 2^{3t} n^{2t+1} d^{2nt+n+2t+1} )

arithmetic operations in Q.

For Algorithm 3, we start by choosing randomly a matrix A and compute the matrix H_A = A^T · H · A. Then, we compute the leading principal minors M_1, …, M_δ of H_A. Using Proposition 26, this step admits the arithmetic complexity bound

  Õ( δ \binom{t+D}{t} ( d^{2n} \binom{t+2λ}{t} + d^{ωn} ) ).

Next, Algorithm 3 computes sample points for the connected components of the semi-algebraic set defined by ∧_{i=1}^{δ} M_i ≠ 0. Since the degree of each M_i is bounded by D, Corollary 3 gives the arithmetic complexity

  Õ( \binom{t+D}{t} d^{nt+n} 2^{3t} D^{2t+1} ) ≃ Õ( \binom{t+D}{t} 2^{3t} n^{2t+1} d^{3nt+2(n+t)+1} ).

It returns a finite subset of Q^t whose cardinality is bounded by (2δD)^t. The evaluation of the leading principal minors' sign patterns at those points has an arithmetic complexity lying in O( 2^t δ^{t+1} D^{2t} ) ≃ O( 2^t n^{2t} d^{3nt+n+2t} ).
Again, the complexity of SamplePoints dominates the whole complexity of Algorithm 3. The proof of Theorem I is then finished.

Probabilistic aspects. The main probabilistic source of our Algorithms 2 and 3 comes from the use of the geometric resolution in the computation of sample points per connected component described in Section 3. Since the geometric resolution depends on specialization and lifting procedures, it makes use of various random choices. As explained in , the bad choices are enclosed in strict algebraic subsets of certain affine spaces, which implies that almost any random choice leads to a correct computation. In general, even though one can check whether the points output by the geometric resolution are solutions of the input system, some solutions can be missing. Thus, the geometric resolution is not Las Vegas. Besides, Algorithm 3 also depends on the choice of the matrix Q. By Lemma 18, any choice of Q from a prescribed dense Zariski open subset of GL(n, C) will work. The purpose of choosing Q is to ensure that none of the leading principal minors of Q^T · H · Q is identically zero, and one can easily check whether a good matrix Q has been found.

7. Practical implementation & Experimental results

7.1. Remark on the implementation of Algorithm 3

Recall that Algorithm 3 leads us to compute sample points per connected component of the non-vanishing set of the leading principal minors (M_1, …, M_δ). Compared to Algorithm 2, in which we only compute sample points for R^t \ V(M_δ), the complexity of Algorithm 3 contains an extra factor of d^{nt} due to the larger number of polynomials given as input to the subroutine SamplePoints. Even though the complexity bounds of these two algorithms both lie in d^{O(nt)}, the extra factor d^{nt} mentioned above sometimes becomes the bottleneck of Algorithm 3 on practical problems. Therefore, we introduce the following optimization in our implementation of Algorithm 3.
We start by following exactly the steps (1–4) of Algorithm 3 to obtain the leading principal minors (M_1, …, M_δ) and the polynomial w_∞. Then, by calling the subroutine SamplePoints on the input M_δ ≠ 0 ∧ w_∞ ≠ 0, we compute a set of sample points (and their corresponding numbers of real roots) {(η_1, r_1), …, (η_ℓ, r_ℓ)} that solves the weak version of Problem (1). From this output, we obtain all the possible numbers of real roots that the input system can admit.

For each value 0 ≤ r ≤ δ, we define

  Φ_r = { σ = (σ_1, …, σ_δ) ∈ {−1, 1}^δ | the sign variation of σ is (δ − r)/2 }.

If r ≢ δ (mod 2), then Φ_r = ∅. For σ ∈ Φ_r and η ∈ R^t \ V(w_∞) such that sign(M_i(η)) = σ_i for every 1 ≤ i ≤ δ, the signature of H(η) is r. As a consequence, for any η in the semi-algebraic set S_r defined by

  (w_∞ ≠ 0) ∧ ( ∨_{σ∈Φ_r} ( ∧_{i=1}^{δ} sign(M_i) = σ_i ) ),

the system f(η, ·) has exactly r distinct real solutions. Therefore, (S_{r_i})_{1≤i≤ℓ} is a collection of semi-algebraic sets solving Problem (1). Then, we can simply return {(Φ_{r_i}, η_i, r_i) | 1 ≤ i ≤ ℓ} as the output of Algorithm 3 without any further computation. Note that, by doing so, we may return sign conditions which are not realizable.

We now discuss the complexity aspect of the steps described above. For r ≡ δ (mod 2), the cardinality of Φ_r is \binom{δ}{(δ−r−2)/2}. In theory, the total cardinality of all the Φ_{r_i}'s (1 ≤ i ≤ ℓ) can go up to 2^{δ−1}, which is doubly exponential in the number of variables n. However, in the instances that are actually tractable by the current state of the art, 2^δ is still smaller than δ^{3t}; when this is the case, following this approach performs better than computing the sample points of the semi-algebraic set defined by ∧_{i=1}^{δ} M_i ≠ 0. Otherwise, when 2^δ exceeds δ^{3t}, we switch back to the computation of sample points. This implementation of Algorithm 3 does not change the complexity bound given in Theorem I.

7.2.
Implementation infrastructure

To implement our algorithm, we need three main ingredients: (i) Gröbner basis computations, in order to obtain the monomial bases of the quotient algebras that we use to compute our parametric Hermite matrices, (ii) an implementation of an algorithm computing sample points per connected component of semi-algebraic sets, (iii) a computer algebra system to manipulate polynomials and matrices.

In our implementation, we use the Maple computer algebra system and its programming language to implement the overall algorithm. We use J.-C. Faugère's FGb library , implemented in C, for computing Gröbner bases. In order to compute sample points per connected component of semi-algebraic sets, we use the RAGlib (Real Algebraic Library) package , which is implemented using the Maple programming language and the FGb library. The algorithm implemented therein is the one of , and its complexity remains to be established. Even if they share similar ingredients, it is not the same as the one of Section 3, which provides the state-of-the-art complexity result for this problem. Hence, our implementation might not attain the best complexity promised by the theoretical results. Still, we see in the experiments below that it can already tackle problems which are out of reach of the current software state of the art.

7.3. Experiments

This subsection provides numerical results of several algorithms related to real root classification. We report on the performance of each algorithm on different test instances. The computations were carried out on a computer with an Intel(R) Xeon(R) CPU E7-4820 at 2GHz and 1.5 TB of RAM. The timings are given in seconds (s.), minutes (m.) and hours (h.). The symbol ∞ means that the computation did not finish within 120 hours. Throughout this subsection, the column hermite reports on the computational data of our algorithms based on parametric Hermite matrices described in Section 5.
It uses the notations below:
- mat: the timing for computing the parametric Hermite matrix H.
- det: the runtime for computing the determinant of H.
- min: the timing for computing the leading principal minors of H.
- sp: the runtime for computing at least one point per connected component of the semi-algebraic set R^t \ V(det(H)).
- deg: the highest degree among the leading principal minors of H.

Generic systems. In this paragraph, we report on the results obtained on generic inputs, i.e., randomly chosen dense polynomials (f_1, …, f_n) ⊂ Q[y_1, …, y_t][x_1, …, x_n]. The total degrees of the input polynomials are given as a list d = [deg(f_1), …, deg(f_n)].

We first compare the algorithms using Hermite matrices (Section 5) with the folklore Sturm-based algorithm sketched in the introduction for solving Problem (1). The column sturm of Fig. (1) shows the experimental results of the Sturm-based algorithm. It contains the following sub-columns:
- elim: the timing for computing the eliminating polynomial.
- sres: the timing for computing the subresultant coefficients in the Sturm-based algorithm.
- sp-s: the timing for computing sample points per connected component of the non-vanishing set of the last subresultant coefficient.
- deg-s: the highest degree among the subresultant coefficients.

We observe that the sum of mat and min is smaller than the sum of elim and sres. Hence, obtaining the input for the sample-point computation is easier with the hermite strategy than with the sturm strategy. We also remark that the degree deg is much smaller than deg-s, which explains why the computation of sample points using Hermite matrices is faster than using the subresultant coefficients. We conclude that the parametric Hermite matrix approach outperforms the Sturm-based one, both in timings and in the degrees of the polynomials in the output formulas.
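For intuition, the counting step underlying the sturm strategy can be sketched in a few lines: once the eliminating polynomial is specialized at a parameter value, Sturm's theorem counts its distinct real roots in an interval from the sign variations of the Sturm sequence. The snippet below is a toy Python sketch with exact rational arithmetic (not the implementation benchmarked here, which works with parametric subresultant coefficients):

```python
from fractions import Fraction

def poly_rem(a, b):
    """Remainder of the division of a by b (coefficient lists, low to high degree)."""
    a = a[:]
    while len(a) >= len(b):
        c, k = a[-1] / b[-1], len(a) - len(b)
        for i, bi in enumerate(b):
            a[i + k] -= c * bi
        while a and a[-1] == 0:   # drop the (now zero) leading coefficients
            a.pop()
    return a

def sturm_sequence(p):
    """Sturm sequence p, p', -rem(p, p'), ... of a squarefree polynomial p."""
    seq = [p, [i * c for i, c in enumerate(p)][1:]]
    while True:
        r = poly_rem(seq[-2], seq[-1])
        if not r:
            return seq
        seq.append([-c for c in r])

def evaluate(p, x):
    v = Fraction(0)
    for c in reversed(p):
        v = v * x + c
    return v

def sign_variations(values):
    signs = [v for v in values if v != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if (a > 0) != (b > 0))

def count_real_roots(p, lo, hi):
    """Number of distinct real roots of p in (lo, hi], by Sturm's theorem."""
    seq = sturm_sequence([Fraction(c) for c in p])
    return (sign_variations([evaluate(q, lo) for q in seq])
            - sign_variations([evaluate(q, hi) for q in seq]))
```

For instance, x^3 − x (coefficient list [0, −1, 0, 1]) has three distinct real roots in (−2, 2], while x^2 + 1 has none.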
 t   d          |              hermite               |                sturm
                | mat    min    sp     total   deg   | elim   sres   sp-s   total   deg-s
 2   [2, 2]     | .07 s  .01 s  .3 s   .4 s    8     | .01 s  .1 s   2 s    2.2 s   12
 2   [3, 2]     | .1 s   .12 s  4.8 s  5 s     18    | .05 s  .5 s   15 s   16 s    30
 2   [2, 2, 2]  | .3 s   .3 s   33 s   34 s    24    | .08 s  2 s    8 m    8 m     56
 2   [3, 3]     | .3 s   .8 s   3 m    3 m     36    | .1 s   3 s    20 m   20 m    72
 3   [2, 2]     | .1 s   .02 s  26 s   27 s    8     | .07 s  .1 s   40 s   40 s    12
 3   [3, 2]     | .2 s   .2 s   3 h    3 h     18    | .1 s   1 s    ∞      ∞       30
 3   [2, 2, 2]  | .5 s   7 s    32 h   32 h    24    | .15 s  10 m   ∞      ∞       56
 3   [4, 2]     | .6 s   12 s   90 h   90 h    32    | .2 s   12 m   ∞      ∞       56
 3   [3, 3]     | 1 s    27 s   ∞      ∞      36     | .2 s   15 m   ∞      ∞       72

Figure 1: Generic random dense systems

In Fig. (2), we compare our algorithm using parametric Hermite matrices with two Maple packages for solving parametric polynomial systems: RootFinding[Parametric] and RegularChains[ParametricSystemTools] . The new notations used in Fig. (2) are explained below.

• The column rf stands for the RootFinding[Parametric] package. To solve a parametric polynomial system, it computes a discriminant variety D and then an open CAD of R^t \ D. This package does not return explicit semi-algebraic formulas but an encoding based on the real roots of some polynomials. This column contains:
- dv: the runtime of the command DiscriminantVariety, which computes a set of polynomials defining a discriminant variety D associated to the input system.
- cad: the runtime of the command CellDecomposition, which outputs semi-algebraic formulas by computing an open CAD of the semi-algebraic set R^t \ D.

• The column rc stands for the RegularChains[ParametricSystemTools] package of Maple. The algorithm implemented in this package is given in . It also contains two sub-columns:
- bp: the runtime of the command BorderPolynomial, which returns a set of polynomials.
- rrc: the runtime of the command RealRootClassification. We call this command with the option output=‘samples’ to compute at least one point per connected component of the complement of the real algebraic set defined by the border polynomials.
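To illustrate what a discriminant variety or border polynomial describes in the simplest possible setting, consider the toy parametric family f = x^2 + y_1 x + y_2 (a hypothetical example, not one of the benchmark systems): the resultant of f and ∂f/∂x, computed from a Sylvester matrix, defines the curve y_1^2 − 4 y_2 = 0, and the number of real roots is constant on each connected component of its complement. A minimal Python sketch:

```python
from fractions import Fraction

def det(A):
    """Determinant by Laplace expansion (fine for the tiny matrices used here)."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

def resultant(p, q):
    """Sylvester resultant of two univariate polynomials (coeffs high to low)."""
    m, n = len(p) - 1, len(q) - 1
    S = [[Fraction(0)] * (m + n) for _ in range(m + n)]
    for i in range(n):                  # n shifted copies of p
        for j, c in enumerate(p):
            S[i][i + j] = Fraction(c)
    for i in range(m):                  # m shifted copies of q
        for j, c in enumerate(q):
            S[n + i][i + j] = Fraction(c)
    return det(S)

def real_root_count(y1, y2):
    """Root count of x^2 + y1*x + y2, constant away from the discriminant curve."""
    r = resultant([1, y1, y2], [2, y1])   # equals -(y1^2 - 4*y2)
    assert r != 0, "parameter lies on the discriminant variety"
    return 2 if r < 0 else 0
```

For example, real_root_count(0, -1) returns 2 (roots ±1 of x^2 − 1) and real_root_count(0, 1) returns 0.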
Note that, in a strategy for solving the weak version of Problem (1), DiscriminantVariety and BorderPolynomial can be completely replaced by parametric Hermite matrices. On generic systems, the determinant of our parametric Hermite matrix coincides with the output of DiscriminantVariety, which we denote by w; whereas, because of the eliminations it performs, BorderPolynomial returns several polynomials, one of which is w. In Fig. (2), the timing for computing a parametric Hermite matrix is negligible. Comparing the columns det, dv and bp, we remark that the time taken to obtain w through the determinant of parametric Hermite matrices is much smaller than using DiscriminantVariety or BorderPolynomial.

For computing the polynomial w, using parametric Hermite matrices allows us to reach instances that are out of reach of DiscriminantVariety, for example the instances {t = 3, d = [2, 2, 2]}, {t = 3, d = [4, 2]}, {t = 3, d = [3, 3]} and {t = 4, d = [2, 2]} in Fig. (2) below. Moreover, we succeeded in computing the semi-algebraic formulas for {t = 3, d = [2, 2, 2]}, {t = 3, d = [4, 2]} and {t = 4, d = [2, 2]}. Using the implementation of Subsection 7.1, we obtain semi-algebraic formulas of degrees bounded by deg(w). Therefore, for these generic systems, our algorithm based on parametric Hermite matrices outperforms DiscriminantVariety and BorderPolynomial for obtaining a polynomial that defines the boundary of the semi-algebraic sets over which the number of real solutions is invariant. Moreover, using the minors of parametric Hermite matrices, we can compute semi-algebraic formulas for problems that are out of reach of CellDecomposition and RealRootClassification.
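As a concrete (toy) instance of the objects compared above, take the system x^2 − y, with one variable and one parameter. Its quotient basis is B = {1, x}, the multiplication matrices are tiny, and the entries h_{i,j} = trace(L_{b_i · b_j}) of the parametric Hermite matrix are polynomials in y; here H(y) = [[2, 0], [0, 2y]], so det(H) = 4y vanishes exactly on the discriminant variety y = 0, matching the claim that the determinant plays the role of w. The sketch below (Python with exact arithmetic; not the Maple/FGb implementation) specializes H at a parameter value and reads off the number of distinct real roots from the signature, computed through the signs of the leading principal minors:

```python
from fractions import Fraction

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

def hermite_signature(y):
    """Signature of the Hermite matrix of x^2 - y at a parameter value y,
    i.e. the number of distinct real roots (valid when no minor vanishes)."""
    y = Fraction(y)
    I2 = [[Fraction(1), Fraction(0)], [Fraction(0), Fraction(1)]]
    Lx = [[Fraction(0), y], [Fraction(1), Fraction(0)]]   # x*1 = x, x*x = y*1
    maps = [I2, Lx]                                       # L_1 and L_x
    H = [[trace(mat_mul(a, b)) for b in maps] for a in maps]
    minors = [Fraction(1)] + [det([row[:k] for row in H[:k]]) for k in (1, 2)]
    variations = sum(1 for a, b in zip(minors, minors[1:]) if a * b < 0)
    return 2 - 2 * variations     # signature = delta - 2 * (sign variations)
```

For instance, hermite_signature(4) returns 2 (the roots ±2 of x^2 − 4) and hermite_signature(-1) returns 0, in agreement with the sign of det(H) = 4y on each side of the discriminant variety.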
 t   d          |              hermite               |          rf          |          rc
                | mat    det    sp     total   deg   | dv    cad    total   | bp     rrc    total
 2   [2, 2]     | .07 s  .01 s  .3 s   .4 s    8     | .1 s  .3 s   .4 s    | .1 s   1 s    1.1 s
 2   [3, 2]     | .1 s   .2 s   4.8 s  5 s     18    | 1 m   5 s    1 m     | .3 s   12 s   12 s
 2   [2, 2, 2]  | .3 s   .3 s   33 s   34 s    24    | 17 m  32 s   17 m    | 23 s   2 m    2 m
 2   [3, 3]     | .3 s   .8 s   3 m    3 m     36    | 2 h   4 m    2 h     | 8 s    4 m    4 m
 3   [2, 2]     | .1 s   .02 s  26 s   27 s    8     | 1 s   35 s   36 s    | .2 s   12 m   12 m
 3   [3, 2]     | .2 s   .2 s   3 h    3 h     18    | 2 h   84 h   86 h    | 3 s    37 h   37 h
 3   [2, 2, 2]  | .5 s   7 s    32 h   32 h    24    | ∞     ∞      ∞       | 20 m   ∞      ∞
 3   [4, 2]     | .6 s   12 s   90 h   90 h    32    | ∞     ∞      ∞       | 12 m   ∞      ∞
 3   [3, 3]     | .7 s   27 s   ∞      ∞      36     | ∞     ∞      ∞       | 15 m   ∞      ∞
 4   [2, 2]     | .2 s   .1 s   8 m    8 m     8     | 4 s   ∞      ∞       | 1 s    ∞      ∞

Figure 2: Generic random dense systems

In what follows, we consider systems coming from applications as test instances. These examples allow us to observe the behavior of our algorithms on non-generic systems.

Kuramoto model. This application is introduced in ; it is a dynamical system used to model synchronization among coupled oscillators. Here we consider only the model constituted by 4 oscillators. The maximum number of real solutions of the steady-state equations of this model was an open problem before it was solved in using numerical homotopy continuation methods. However, to the best of our knowledge, there is no exact algorithm that is able to solve this problem. We present in what follows the first solution using symbolic computation. Moreover, our algorithm can return the semi-algebraic formulas defining the regions over which the number of real solutions is invariant.

As explained in , we consider the system f given by the equations

  y_i − \sum_{j=1}^{4} ( s_i c_j − s_j c_i ) = 0,   s_i^2 + c_i^2 = 1,   for 1 ≤ i ≤ 3,

where (s_1, s_2, s_3) and (c_1, c_2, c_3) are variables and (y_1, y_2, y_3) are parameters. We are asked to compute the maximum number of real solutions of f(η, ·) when η varies over R^3. This leads us to solve the weak version of Problem (1) for this parametric system. We first construct the parametric Hermite matrix H associated to this system.
This matrix is of size 14 × 14. The polynomial w_∞ has the factors y_1 + y_2, y_2 + y_3, y_3 + y_1 and y_1 + y_2 + y_3. The polynomial w_H has degree 48 (c.f. ). We denote by w the polynomial w_∞ · w_H.

Note that the polynomial system has real roots only if |y_i| ≤ 3 (c.f. ). So we only need to consider the compact connected components of R^3 \ V(w). Since the polynomial w is invariant under any permutation acting on (y_1, y_2, y_3), we exploit this symmetry to accelerate the computation of sample points. Following the critical point method, we compute the critical points of the map (y_1, y_2, y_3) ↦ y_1 + y_2 + y_3 restricted to R^3 \ V(w); this map is also symmetric. We apply the change of variables (y_1, y_2, y_3) ↦ (e_1, e_2, e_3), where e_1 = y_1 + y_2 + y_3, e_2 = y_1 y_2 + y_2 y_3 + y_3 y_1 and e_3 = y_1 y_2 y_3 are the elementary symmetric polynomials of (y_1, y_2, y_3). This change of variables reduces the number of distinct solutions of the zero-dimensional systems involved in the computation and, therefore, reduces the computation time.

From the sample points obtained by this computation, we derive the possible numbers of real solutions and conclude that the system f has at most 10 distinct real solutions when (y_1, y_2, y_3) varies over R^3 \ V(w). This agrees with the result given in . We show below a list of parameter values such that the system has respectively 2, 4, 6, 8 and 10 distinct real solutions.

 Number of solutions | (y_1, y_2, y_3)
 2 solutions         | [−2, −0.03, 0.22]
 4 solutions         | [1, −0.09, 0.16]
 6 solutions         | [0, −0.7, −0.48]
 8 solutions         | [0.08, −0.03, 0.22]
 10 solutions        | [274945023031/2199023255552, −68723139707/549755813888, −549808278091/4398046511104]

Fig. (3) reports the timings for computing the parametric Hermite matrix (mat), for computing its determinant (det) and for computing the sample points (sp). We stopped both of the commands DiscriminantVariety and BorderPolynomial after 240 hours without obtaining the polynomial w.
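The symmetrization step above can be sanity-checked in a few lines: the map (y_1, y_2, y_3) ↦ (e_1, e_2, e_3) sends all 6 permutations of a parameter point to the same point in symmetric coordinates, which is why symmetric data such as w only need to be handled once per orbit. A quick toy check in Python (the point below is arbitrary, not one of the sample points above):

```python
from fractions import Fraction
from itertools import permutations

def elementary_symmetric(y1, y2, y3):
    """The change of variables (y1, y2, y3) -> (e1, e2, e3) used above."""
    return (y1 + y2 + y3,
            y1 * y2 + y2 * y3 + y3 * y1,
            y1 * y2 * y3)

point = (Fraction(1, 3), Fraction(-2, 7), Fraction(5, 11))
# all permutations of the point collapse to a single symmetric image
images = {elementary_symmetric(*p) for p in permutations(point)}
```

Here len(images) == 1; conversely, (y_1, y_2, y_3) can be recovered up to permutation as the roots of z^3 − e_1 z^2 + e_2 z − e_3.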
            hermite             | dv | bp
 mat    det    sp     total     |    |
 2 m    1 h    85 h   86 h      | ∞  | ∞

Figure 3: Kuramoto model for 4 oscillators

Static output feedback. The second non-generic example comes from the problem of static output feedback . Given the matrices A ∈ R^{ℓ×ℓ}, B ∈ R^{ℓ×2}, C ∈ R^{1×ℓ} and a parameter vector P = [y_1, y_2]^T ∈ R^2, the characteristic polynomial of A + BPC can be written as

  f(s, y) = det( s I_ℓ − A − BPC ) = f_0(s) + y_1 f_1(s) + y_2 f_2(s),

where s is a complex variable. We want to find a matrix P such that all the roots of f(s, y) lie in the open left half-plane. By substituting s by x_1 + i x_2, we obtain the following system in the real variables (x_1, x_2) and parameters (y_1, y_2):

  ℜ( f(x_1 + i x_2, y) ) = 0,   ℑ( f(x_1 + i x_2, y) ) = 0,   x_1 < 0.

Note that the total degree of these equations equals ℓ. We are now interested in solving the weak version of Problem (1) on the system ℜ(f) = ℑ(f) = 0. We observe that this system satisfies Assumptions (A) and (B). Let H be the parametric Hermite matrix of this system with respect to the usual basis we consider in this paper.

This matrix H behaves very differently from the generic case. Computing the determinant of H (which is an element of Q[y]) and taking its square-free part allows us to obtain the same output w as DiscriminantVariety. However, this direct approach appears to be very inefficient, as the determinant is a large power of the output polynomial. For example, for a given value of ℓ, the system consists of two polynomials of degree ℓ; the determinant of H is then w^{2ℓ}, where w has degree 2(ℓ − 1). The bound we establish on the degree of this determinant is 2(ℓ − 1)ℓ^2, which is much larger than what happens in this case. Therefore, we introduce the optimization below to adapt our implementation of Algorithm 2 to this problem.

We observe that, on these examples, the polynomial w can be extracted from a smaller minor instead of computing the determinant of H.
To identify such a minor, we reduce H to a matrix whose entries are univariate polynomials with coefficients lying in a finite field Z/pZ, as follows. Let u be a new variable. We substitute each y_i in H by a random linear form in Q[u] and then compute H mod p. The matrix H is thus turned into a matrix H_u whose entries are elements of Z/pZ[u]. The computation of the leading principal minors of H_u is much easier than that of H, since it involves only univariate polynomials and does not suffer from the growth of bit-sizes as for rational numbers.

Next, we compute the sequence of the leading principal minors of H_u in decreasing order, starting from the determinant. Once we obtain a minor, of some size r, that is not divisible by w_u, we stop and take the index r + 1. Then, we compute the square-free part of the (r+1) × (r+1) leading principal minor of H, which can be done through the evaluation & interpolation method. This yields a Monte Carlo implementation whose correctness depends on the choice of the random linear forms in Q[u] and of the finite field used to compute the polynomial w.

In Fig. (4), we report some computational data for the static output feedback problem. Here we choose the prime p to be 65521, so that the elements of the finite field Z/pZ can be represented by a machine word of 32 bits. We consider different values of ℓ, and the matrices A, B, C are chosen randomly. On these examples, our algorithm returns the same output as DiscriminantVariety, whereas BorderPolynomial (bp) returns a list of polynomials which contains our output along with other polynomials of higher degree. The timings of our algorithm are given in the two following columns:

• The column mat shows the timings for computing the parametric Hermite matrices H.
• The column comp-w shows the timings for computing the polynomials w from H using the strategy described above.

We observe that our algorithm (mat + comp-w) wins a constant factor compared to DiscriminantVariety (dv).
On the other hand, BorderPolynomial (bp) performs less efficiently than the other two algorithms on these examples. The degrees of the polynomials w here (given as deg-w) are small compared with the bounds of the generic case. Hence, unlike in the generic case, the computation of the sample points for these problems is negligible, as reported in the column sp.

 ℓ  |        hermite          | dv     | bp     | sp    | deg-w
    | mat    comp-w   total   |        |        |       |
 5  | 2 s    1 s      3 s     | 30 s   | 1.5 m  | .2 s  | 8
 6  | 12 s   5 s      17 s    | 90 s   | 30 m   | .4 s  | 10
 7  | 1 m    6 m      7 m     | 16 m   | 4 h    | 1 s   | 12
 8  | 4 m    50 m     1 h     | 1.5 h  | 34 h   | 3 s   | 14

Figure 4: Static output feedback

Acknowledgments. We thank the anonymous reviewers for their comments, which helped to significantly improve the initial submission.

References

Alman, J., Williams, V. V., 2021. A refined laser method and faster matrix multiplication. In: Proceedings of the Thirty-Second Annual ACM-SIAM Symposium on Discrete Algorithms. SODA '21. Society for Industrial and Applied Mathematics, USA, pp. 522–539.

Bardet, M., Dec. 2004. Étude des systèmes algébriques surdéterminés. Applications aux codes correcteurs et à la cryptographie. Theses, Université Pierre et Marie Curie - Paris VI.

Bardet, M., Faugère, J.-C., Salvy, B., 2015. On the complexity of the F5 Gröbner basis algorithm. Journal of Symbolic Computation 70, 49–70.

Basu, S., Pollack, R., Roy, M.-F., 2006. Algorithms in Real Algebraic Geometry (Algorithms and Computation in Mathematics). Springer-Verlag, Berlin, Heidelberg.

Bayer, D., Stillman, M., Jun. 1987. A theorem on refining division orders by the reverse lexicographic order. Duke Math. J. 55 (2), 321–328.

Bonnard, B., Faugère, J.-C., Jacquemard, A., Safey El Din, M., Verron, T., 2016. Determinantal sets, singularities and application to optimal control in medical imagery. In: Proceedings of the ACM on International Symposium on Symbolic and Algebraic Computation. pp. 103–110.

Brown, C. W., Davenport, J. H., 2007. The complexity of quantifier elimination and cylindrical algebraic decomposition.
In: Proceedings of the 2007 International Symposium on Symbolic and Algebraic Computation. ISSAC '07. Association for Computing Machinery, New York, NY, USA, pp. 54–60.
Canny, J. F., Kaltofen, E., Yagati, L., 1989. Solving systems of nonlinear polynomial equations faster. In: Proceedings of the ACM-SIGSAM 1989 International Symposium on Symbolic and Algebraic Computation. ISSAC '89. Association for Computing Machinery, New York, NY, USA, pp. 121–128.
Collins, G. E., 1976. Quantifier elimination for real closed fields by cylindrical algebraic decomposition: a synopsis. ACM SIGSAM Bulletin 10 (1), 10–12.
Corvez, S., Rouillier, F., 2002. Using computer algebra tools to classify serial manipulators. In: International Workshop on Automated Deduction in Geometry. Springer, pp. 31–43.
Coste, M., Shiota, M., Dec. 1992. Thom's first isotopy lemma: a semialgebraic version, with uniform bounds (real singularities and real algebraic geometry). RIMS Kokyuroku 815, 176–189.
Cox, D. A., Little, J., O'Shea, D., 2007. Ideals, Varieties, and Algorithms: An Introduction to Computational Algebraic Geometry and Commutative Algebra, 3/e (Undergraduate Texts in Mathematics). Springer-Verlag, Berlin, Heidelberg.
Dahan, X., Schost, É., 2004. Sharp estimates for triangular sets. In: Gutierrez, J. (Ed.), Symbolic and Algebraic Computation, International Symposium ISSAC 2004, Santander, Spain, July 4-7, 2004, Proceedings. ACM, pp. 103–110.
Davenport, J. H., Heintz, J., Feb. 1988. Real quantifier elimination is doubly exponential. J. Symb. Comput. 5 (1–2), 29–35.
Elliott, J., Giesbrecht, M., Schost, É., 2020. On the bit complexity of finding points in connected components of a smooth real hypersurface. In: Emiris, I. Z., Zhi, L. (Eds.), ISSAC '20: International Symposium on Symbolic and Algebraic Computation, Kalamata, Greece, July 20-23, 2020. ACM, pp. 170–177.
Faugère, J., Gaudry, P., Huot, L., Renault, G., 2013. Polynomial systems solving by fast linear algebra. CoRR abs/1304.6039.
Faugère, J.-C., 1999. A new efficient algorithm for computing Gröbner bases (F4). Journal of Pure and Applied Algebra 139 (1-3), 61–88.
Faugère, J.-C., 2002. A new efficient algorithm for computing Gröbner bases without reduction to zero (F5). In: Proceedings of the 2002 International Symposium on Symbolic and Algebraic Computation. pp. 75–83.
Faugère, J.-C., Moroz, G., Rouillier, F., Safey El Din, M., 2008. Classification of the perspective-three-point problem, discriminant variety and real solving polynomial systems of inequalities. In: Proceedings of the Twenty-First International Symposium on Symbolic and Algebraic Computation. pp. 79–86.
Faugère, J.-C., Safey El Din, M., Spaenlehauer, P.-J., 2013. On the complexity of the generalized minrank problem. Journal of Symbolic Computation 55, 30–58.
Faugère, J.-C., September 2010. FGb: A Library for Computing Gröbner Bases. In: Fukuda, K., Hoeven, J., Joswig, M., Takayama, N. (Eds.), Mathematical Software - ICMS 2010. Vol. 6327 of Lecture Notes in Computer Science. Springer Berlin / Heidelberg, Berlin, Heidelberg, pp. 84–87.
Gerhard, J., Jeffrey, D. J., Moroz, G., Jun. 2010. A package for solving parametric polynomial systems. ACM Commun. Comput. Algebra 43 (3/4), 61–72.
Ghys, É., Ranicki, A., 2016. Signatures in algebra, topology and dynamics. Ensaios Matemáticos 30, 1–173.
Gianni, P. M., Mora, T., 1987. Algebraic solution of systems of polynomial equations using Gröbner bases. In: Applied Algebra, Algebraic Algorithms and Error-Correcting Codes, 5th International Conference, AAECC-5, Menorca, Spain, June 15-19, 1987, Proceedings. pp. 247–257.
Giusti, M., Heintz, J., Morais, J. E., Pardo, L. M., 1995. When polynomial equation systems can be "solved" fast? In: Applied Algebra, Algebraic Algorithms and Error-Correcting Codes, 11th International Symposium, AAECC-11, Paris, France, July 17-22, 1995, Proceedings. pp. 205–231.
Giusti, M., Lecerf, G., Salvy, B., 2001.
A Gröbner free alternative for polynomial system solving. Journal of Complexity 17 (1), 154–211.
Hardt, R. M., 1980. Semi-algebraic local-triviality in semi-algebraic mappings. American Journal of Mathematics 102 (2), 291–302.
Harris, K., Hauenstein, J. D., Szanto, A., 2020. Smooth points on semi-algebraic sets.
Henrion, D., Sebek, M., 2008. Plane geometry and convexity of polynomial stability regions. In: Proceedings of the Twenty-First International Symposium on Symbolic and Algebraic Computation. ISSAC '08. Association for Computing Machinery, New York, NY, USA, pp. 111–116.
Hermite, C., 1856. Sur le nombre des racines d'une équation algébrique comprises entre des limites données. Extrait d'une lettre à M. Borchardt. J. Reine Angew. Math. 52, 39–51.
Jacobi, C. G., 1857. Über eine elementare Transformation eines in Bezug auf jedes von zwei Variablen-Systemen linearen und homogenen Ausdrucks. Journal für die reine und angewandte Mathematik 53, 265–270.
Kalkbrener, M., 1997. On the stability of Gröbner bases under specializations. Journal of Symbolic Computation 24 (1), 51–58.
Kronecker, L., 1882. Grundzüge einer arithmetischen Theorie der algebraischen Grössen. Journal für die reine und angewandte Mathematik 92, 1–122.
Kuramoto, Y., 1975. Self-entrainment of a population of coupled non-linear oscillators. In: Araki, H. (Ed.), International Symposium on Mathematical Problems in Theoretical Physics. Springer Berlin Heidelberg, Berlin, Heidelberg, pp. 420–422.
Lazard, D., Rouillier, F., 2007. Solving parametric polynomial systems. Journal of Symbolic Computation 42 (6), 636–667.
Moreno-Socías, G., 2003. Degrevlex Gröbner bases of generic complete intersections. Journal of Pure and Applied Algebra 180 (3), 263–283.
Pardue, K., 2010. Generic sequences of polynomials. Journal of Algebra 324 (4), 579–590.
Pedersen, P., Roy, M.-F., Szpirglas, A., 1993. Counting real zeros in the multivariate case. In: Eyssette, F., Galligo, A.
(Eds.), Computational Algebraic Geometry. Birkhäuser Boston, Boston, MA, pp. 203–224.
Rouillier, F., 1999. Solving zero-dimensional systems through the rational univariate representation. Appl. Algebra Eng. Commun. Comput. 9 (5), 433–461.
Safey El Din, M., 2017. Real algebraic geometry library, RAGlib (version 3.4). URL
Safey El Din, M., Schost, É., 2003. Polar varieties and computation of one point in each connected component of a smooth real algebraic set. In: Proc. of the 2003 Int. Symp. on Symb. and Alg. Comp. ISSAC '03. ACM, NY, USA, pp. 224–231.
Safey El Din, M., Schost, É., Jan. 2017. A nearly optimal algorithm for deciding connectivity queries in smooth and bounded real algebraic sets. J. ACM 63 (6), 48:1–48:37.
Safey El Din, M., Schost, É., 2018. Bit complexity for multi-homogeneous polynomial system solving—application to polynomial minimization. Journal of Symbolic Computation 87, 176–206.
Schost, É., 2003. Computing parametric geometric resolutions. Applicable Algebra in Engineering, Communication and Computing 13 (5), 349–393.
Shafarevich, I. R., 2013. Basic Algebraic Geometry 1: Varieties in Projective Space. Springer Berlin Heidelberg, Berlin, Heidelberg.
Sylvester, J. J., 1852. A demonstration of the theorem that every homogeneous quadratic polynomial is reducible by real orthogonal substitution to the form of a sum of positive and negative squares. Philosophical Magazine IV, 138–142.
Verron, T., Sep. 2016. Regularisation of Gröbner basis computations for weighted and determinantal systems, and application to medical imagery. Thesis, Université Pierre et Marie Curie - Paris VI.
Yang, L., Hou, X., Xia, B., 2001. A complete algorithm for automated discovering of a class of inequality-type theorems. Science in China Series F Information Sciences 44 (1), 33–49.
Yang, L., Xia, B., 2005. Real solution classification for parametric semi-algebraic systems. In: Dolzmann, A., Seidl, A., Sturm, T. (Eds.), Algorithmic Algebra and Logic.
Proceedings of the A3L 2005, April 3-6, Passau, Germany; Conference in Honor of the 60th Birthday of Volker Weispfenning. pp. 281–289.
Yang, L., Zeng, Z., 2000. Equi-cevaline points on triangles. In: Computer Mathematics: Proceedings of the Fourth Asian Symposium (ASCM 2000). World Scientific Publishing Company Incorporated, p. 130.
Yang, L., Zeng, Z., 2005. An open problem on metric invariants of tetrahedra. In: Proceedings of the 2005 International Symposium on Symbolic and Algebraic Computation. ISSAC '05. Association for Computing Machinery, New York, NY, USA, pp. 362–364.
12249
https://www.sciencedirect.com/science/article/abs/pii/S0016508525058573
Gastroenterology, Volume 169, Issue 5, October 2025, Pages 828-861
Guidelines
AGA Clinical Practice Guideline on Management of Gastroparesis
Background & Aims
Gastroparesis is a complex gastric motility disorder characterized by nausea, vomiting, and other symptoms associated with a delay in gastric emptying in the absence of mechanical obstruction. Variations in diagnostic testing and limited effective treatments make caring for this patient population challenging. The American Gastroenterological Association developed this guideline to provide recommendations for ensuring an accurate diagnosis and identifying evidence-based, effective treatments among the available pharmacologic and procedural interventions for patients with idiopathic gastroparesis or gastroparesis related to diabetes.
Methods
The Grading of Recommendations Assessment, Development and Evaluation framework was used to assess evidence and develop this guideline. The Guideline Panel prioritized clinical questions and outcomes, conducted an evidence review, and used the Evidence to Decision Framework to develop recommendations.
Results
The Guideline Panel agreed on 12 recommendations. A conditional recommendation was issued against using 2-hour gastric emptying testing and in favor of 4-hour testing in patients with suspected gastroparesis. There are conditional recommendations for the use of metoclopramide and erythromycin in patients with gastroparesis. Conditional recommendations were issued against the use of domperidone, prucalopride, aprepitant, nortriptyline, buspirone, and cannabidiol as first-line therapies.
In addition, conditional recommendations were issued against the routine initial use of gastric per-oral endoscopic pyloromyotomy or gastric electrical stimulation in patients with gastroparesis, reserving these treatments for select patients with symptoms refractory to medical therapies. No recommendation was given regarding the use of surgical pyloromyotomy and surgical pyloroplasty, which were identified as procedures with knowledge gaps in their use for the treatment of gastroparesis.
Conclusions
The diagnosis of gastroparesis requires the use of 4-hour gastric emptying tests. Metoclopramide or erythromycin is appropriate for initial pharmacologic treatment. Other treatment recommendations require shared patient-physician decision making. There are still considerable unmet needs in the treatment of gastroparesis.
Keywords: Prokinetic; Antiemetic; Neuromodulator; Endoscopic Myotomy; Pyloroplasty
Abbreviations used in this paper: AGA, American Gastroenterological Association; BTI, botulinum toxin injection; CBD, cannabidiol; CIC, chronic idiopathic constipation; CoE, certainty of evidence; ECG, electrocardiogram; EndoFLIP, endoluminal functional lumen imaging probe; FDA, US Food and Drug Administration; GCSI-DD, Gastroparesis Cardinal Symptom Index-Daily Diary; GES, gastric electrical stimulation; GI, gastrointestinal; G-POEM, gastric per oral endoscopic myotomy; GRADE, Grading of Recommendations Assessment, Development and Evaluation; IND, investigational new drug; MID, minimally important difference; PICO, population, intervention, comparator, and outcome; RCT, randomized controlled trial; RR, relative risk; SMD, standardized mean difference; THC, tetrahydrocannabinol.
Correspondence: Address correspondence to: Chair, Clinical Guidelines Committee, American Gastroenterological Association, National Office, 4930 Del Ray Avenue, Bethesda, Maryland 20814. e-mail: [email protected].
Conflicts of interest: Each Guideline Panel nominee underwent a vetting process that required disclosure of all conflicts of interest. Reported conflicts were reviewed by the Chair of the AGA Clinical Guidelines Committee and adjudicated against rules and criteria in the Clinical Guidelines Committee Conflict of Interest Policy. Only nominees whose conflict of interest status (Supplementary Table 2) complied with this policy were appointed.
Funding: AGA provided all financial support for the development of this guideline. No funding from industry was offered or accepted to support the writing effort. Henry P. Parkman is supported by National Institutes of Health grant U01-DK073975-17. Michael Camilleri is supported by National Institutes of Health grants R01-DK122280 and R01-DK142696-01.
∗ Authors share co-first authorship.
§ Authors share co-senior authorship.
© 2025 by the AGA Institute.
12250
https://www.cengage.com/c/principles-of-instrumental-analysis-7e-skoog-holler-crouch/9781305577213/
Principles of Instrumental Analysis, 7th Edition - 9781305577213 - Cengage
Principles of Instrumental Analysis | 7th Edition
Douglas A. Skoog / F. James Holler / Stanley R. Crouch
Copyright 2018
eBook ISBN: 9781337670074; Textbook (Hardback) ISBN: 9781305577213
About This Product
PRINCIPLES OF INSTRUMENTAL ANALYSIS is the standard for courses on the principles and applications of modern analytical instruments. In the 7th edition, authors Skoog, Holler, and Crouch infuse their popular text with updated techniques and new Instrumental Analysis in Action case studies. Updated material enhances the book's proven approach, which places an emphasis on the fundamental principles of operation for each type of instrument, its optimal area of application, its sensitivity, its precision, and its limitations. The text also introduces students to elementary analog and digital electronics, computers, and the treatment of analytical data. Digital Object Identifiers (DOIs) are provided for most references to the primary literature.
12251
https://pubmed.ncbi.nlm.nih.gov/32731900/
Complications of peritonsillar abscess - PubMed
Review
Ann Clin Microbiol Antimicrob. 2020 Jul 30;19(1):32. doi: 10.1186/s12941-020-00375-x.
Complications of peritonsillar abscess
Tejs Ehlers Klug 1, Thomas Greve 2, Malene Hentze 3
Affiliations:
1 Department of Otorhinolaryngology, Head and Neck Surgery, Aarhus University Hospital, Palle Juul-Jensens Boulevard 99, 8200 Aarhus N, Denmark. tejsehlersklug@hotmail.com.
2 Department of Clinical Microbiology, Aarhus University Hospital, Aarhus, Denmark.
3 Department of Otorhinolaryngology, Head and Neck Surgery, Aarhus University Hospital, Palle Juul-Jensens Boulevard 99, 8200 Aarhus N, Denmark.
PMID: 32731900 PMCID: PMC7391705 DOI: 10.1186/s12941-020-00375-x
Abstract
Background: The vast majority of patients with peritonsillar abscess (PTA) recover uneventfully on abscess drainage and antibiotic therapy. However, occasionally the patient's condition deteriorates as the infection spreads in the upper airway mucosa, through cervical tissues, or hematogenously. The bacterial etiology of PTA is unclarified and the preferred antimicrobial regimen remains controversial. The current narrative review was carried out with an aim to (1) describe the spectrum of complications previously recognized in patients with peritonsillar abscess (PTA), (2) describe the bacterial findings in PTA-associated complications, and (3) describe the time relation between PTA and complications.
Methods: Systematic searches in the Medline and EMBASE databases were conducted and data on cases with PTA and one or more complications were elicited.
Results: Seventeen different complications of PTA were reported. The most frequently described complications were descending mediastinitis (n = 113), para- and retropharyngeal abscess (n = 96), necrotizing fasciitis (n = 38), and Lemierre's syndrome (n = 35). Males constituted 70% of cases and 49% of patients were > 40 years of age. The overall mortality rate was 10%. The most prevalent bacteria were viridans group streptococci (n = 41, 25%), beta-hemolytic streptococci (n = 32, 20%), F.
necrophorum (n = 21, 13%), S. aureus (n = 18, 11%), Prevotella species (n = 17, 10%), and Bacteroides species (n = 14, 9%). Simultaneous diagnosis of PTA and complication was more common (59%) than development of a complication after PTA treatment (36%) or recognition of a complication prior to PTA (6%).
Conclusion: Clinicians involved in the management of PTA patients should be aware of the wide range of complications which may arise in association with PTA development. Especially males and patients > 40 years of age seem to be at an increased risk of complicated disease. In addition to Group A streptococci and F. necrophorum, the current findings suggest that viridans group streptococci, S. aureus, Prevotella, and Bacteroides may also play occasional roles in the development of PTA as well as spread of infection. Complications occasionally develop in PTA patients who are treated with antibiotics and surgical drainage.
Keywords: Bacteria; Complications; Microbiology; Peritonsillar abscess.
Conflict of interest statement: None.
Fig. 1: PRISMA flow diagram of the literature searches.
Fig. 2: Diagram of the prevalent findings in 162 patients with peritonsillar abscess and complications.
Publication types: Review
Grants and funding: R185-2014-2482/Lundbeckfonden
12252
https://www.youtube.com/watch?v=5NM5zSn3t3c
ALEKS: Graphing a line given its equation in slope-intercept form: Fractional slope Roxi Hulet 32400 subscribers 41 likes Description 2784 views Posted: 1 Apr 2023 2 comments Transcript: in this video I'm going to show you how to solve the ALEKS problem called graphing a line given its equation in slope-intercept form, fractional slope. so this problem is giving us an equation of a line and it's asking us to draw that line in this graph space. I'm going to rewrite the equation of my line over here: y equals negative 1/3 x plus 3. this equation of the line is in the y equals mx plus b form. in this format the b term, which for my equation is plus 3, is the y-intercept, the place where the line crosses the y-axis. the y-axis is this one right here, this is our x-axis, and because of the plus 3 term in this position of my equation I know that my line crosses the y-axis at positive 3, and so I know that that is one of the points of my line right there. to get the other point of my line what I'm going to do is use the slope of my line. so in this equation, the y equals mx plus b format, m represents the slope of the line, which is the rise over the run, the trick that we use to remember it, where rise is referring to the movement up or down on the y-axis, and run is referring to the movement along the x-axis, either to the right or to the left. so my slope, my rise over run, is negative one over three. that means the slope of my line is such that every time I go one space down on the y-axis I'm also going to go three spaces to the right on the x-axis. how do I know down versus up versus right versus left? a negative rise means going down the y-axis; if it was positive I'd be going up the y-axis. a positive run means I'm moving to the right on the x-axis; a negative would mean that I'm moving to the left on the x-axis. it corresponds to the signs of the numbers on x and y. so
to find my second data point, or any second data point of this line, what I'm going to do is use the slope. I know that if I'm starting right here, my slope is such that if I move one space down on the y-axis, I'm also going to be moving three spaces to the right on the x-axis, and that's this position right here. so that is my second data point. now to actually draw this line in ALEKS I'm going to be using this line drawing tool. I'm going to select it, and this tool allows me to drop two points on the graph. so I'm going to click on this tool, then I will click on this spot, and I will click on this spot next, and ALEKS is going to automatically draw a straight line between those two points for me
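The two points used in the video can be double-checked with a small Python sketch (hypothetical helper, written just for this check; the line is the one from the problem, y = -1/3 x + 3):

```python
from fractions import Fraction

m = Fraction(-1, 3)   # slope: rise over run, "down 1, right 3"
b = 3                 # y-intercept

def y(x):
    """Evaluate y = mx + b at x."""
    return m * x + b

p1 = (0, y(0))        # the y-intercept point: (0, 3)
p2 = (3, y(3))        # move 3 right, 1 down from p1: (3, 2)
```

Starting from the y-intercept (0, 3) and applying the slope once lands on (3, 2), matching the second point plotted in the video.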
12253
https://byjus.com/maths/cos-30-degrees/
In trigonometry, the cosine function is defined as the ratio of the adjacent side to the hypotenuse. If an angle of a right triangle is equal to 30 degrees, then the value of the cosine at this angle, i.e., the value of Cos 30 degrees, is the fraction √3/2. Note that the value of Sec 30 is the reciprocal of the Cos 30 value. Cos 30 Value Cos 30 degrees is written as cos 30° and has the value √3/2 in fraction form. Cos 30° = √3/2 √3/2 is an irrational number and equals 0.8660254037… in decimal form. Therefore, the exact value of cos 30 degrees is commonly written as 0.8660 approximately. √3/2 is the value of Cos 30°, which is a trigonometric ratio (trigonometric function) of a particular angle. The same angle can also be written as π/6 in the circular system or as 33⅓ᵍ in the centesimal system, so alternative forms of Cos 30° are cos π/6 and cos 33⅓ᵍ:
| Form | Formula | Value |
| --- | --- | --- |
| Trigonometric ratio | √3/2 | 0.8660254037 |
| Circular system | cos π/6 | 0.8660254037 |
| Centesimal system | cos 33⅓ᵍ | 0.8660254037 |
Cos 30 Degrees Proof Now that you know the value of Cos 30 degrees, let's explore how to derive this value. We will study two approaches to derive it: Theoretical approach Practical approach Theoretical approach We use the right-triangle property that, for an angle equal to 30 degrees, the length of the opposite side is half the length of the hypotenuse. We therefore know two sides, the opposite side and the hypotenuse, but the third side, the adjacent side, is unknown. We need to find this side to find the value of Cos 30 degrees.
We will use the Pythagorean theorem to find the value of this third side. In triangle AOB, right-angled at B, with hypotenuse AO = d and opposite side OB = d/2:
AO² = AB² + OB²
d² = AB² + (d/2)²
AB² = d² − d²/4 = 3d²/4
AB = (√3/2) d
AB/d = √3/2
As AO = d is the hypotenuse and AB is the adjacent side, length of adjacent side / length of hypotenuse = √3/2. The value of angle BOA in the given right triangle is π/6, and this ratio, as per the definition of the cosine ratio, represents the value of Cos 30 degrees: Cos 30° = length of adjacent side / length of hypotenuse = √3/2 = 0.8660254037 Practical Approach Another way to calculate the value of Cos 30 degrees is through a practical approach. If we construct a suitable right triangle, Cos π/6 can be measured. Step 1: Mark a point P on the plane and draw a horizontal line through it. Step 2: With the help of a protractor, make an angle of 30° with centre P, taking the line drawn in Step 1 as the base. Step 3: Use a ruler to draw the line making this 30° angle. Step 4: Using a compass, draw an arc of any length on the 30° line and mark the point of intersection as Q. Step 5: From Q, draw a line perpendicular to the base (the horizontal line) and mark the foot of the perpendicular as R. Now, having drawn the right-angled triangle PRQ, we can calculate the value of Cos 30 degrees: Cos 30° = length of the adjacent side / length of the hypotenuse = PR/PQ. In the above construction, the length of the hypotenuse PQ is taken as 7.5 cm, and the length of the adjacent side PR is initially unknown; measuring it with a ruler gives about 6.5 cm. Cos 30° = PR/PQ = 6.5/7.5 = 0.86666…, close to the exact value. Stay tuned to BYJU'S to learn math in the most fun and engaging way.
Related Links
| Cos 60 Degree | Cos 0 Degree |
| Cosine Function | Cosine Rule |
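As a quick numerical check of the derivation above, Python's standard math module can compare cos(30°) against the exact ratio √3/2 (a minimal sketch, not from the article):

```python
import math

# cos(30°) computed directly, versus the exact value √3/2 from the derivation
cos30 = math.cos(math.radians(30))
exact = math.sqrt(3) / 2

print(round(cos30, 10))              # 0.8660254038
print(math.isclose(cos30, exact))    # True
```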
12254
https://www.osti.gov/servlets/purl/1376405
M&C 2017 - International Conference on Mathematics & Computational Methods Applied to Nuclear Science & Engineering, Jeju, Korea, April 16-20, 2017, on USB (2017)
1This manuscript has been authored by UT-Battelle, LLC, under contract DE-AC05-00OR22725 with the U.S. Department of Energy. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes. The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan ( -public -access -plan).
Optimizing HFIR Isotope Production through the Development of a Sensitivity-Informed Target Design Process1
Christopher Perfetti, Susan Hogle, Seth Johnson, Bradley Rearden, and Thomas Evans
Oak Ridge National Laboratory, P.O. Box 2008, Bldg. 5700, Oak Ridge, TN 37831-6170, USA
perfetticm@ornl.gov
Abstract – This paper summarizes efforts to improve the efficiency of 252Cf production at Oak Ridge National Laboratory's High Flux Isotope Reactor by using sensitivity analysis to identify potential 252Cf isotope production target design optimizations. The Generalized Perturbation Theory sensitivity coefficient capabilities of the TSUNAMI-3D code within the SCALE Code Package were integrated into the high-performance computing Shift Monte Carlo code to obtain sufficiently resolved sensitivity estimates for models containing small concentrations of heavy actinide isotopes. The TSUNAMI-3D sensitivity algorithms were adapted for use in a parallel environment, resulting in a 79% parallel efficiency for simulations using up to 1,000 processors.
The potential of several design changes was investigated using the improved TSUNAMI-3D sensitivity analysis tool, including potential changes to the isotope production target density and geometry, and the potential addition of a thin neutron filter material. Several isotope production target design improvements were identified, including a design that featured a lower density target with an indium filter material, resulting in an approximately 1,300% increase in 252Cf production efficiency.
I. INTRODUCTION
The High Flux Isotope Reactor (HFIR) at Oak Ridge National Laboratory (ORNL) is a valuable national resource for materials irradiation studies and radioisotope production. Scientists designing 252Cf isotope production targets in HFIR facilities must consider multiple design objectives simultaneously, including making efficient use of a limited number of irradiation locations, limiting heat generation in targets, and making efficient use of valuable heavy isotope feedstock. The heavy curium feedstock that is currently used for 252Cf production was produced at the Savannah River National Laboratory nearly 40 years ago, and about 99% of heavy curium isotopes are lost to fission reactions before they can absorb a sufficient number of neutrons to transmute into 252Cf, as shown in Fig. 1 below.
Fig. 1. Heavy actinide loss during 252Cf production.
The efficiency of 252Cf production at ORNL can be improved. This paper discusses the research and development activities to optimize 252Cf production using sensitivity analysis methods. This paper begins with an introduction to sensitivity analysis methods, then discusses their implementation in the massively parallel Shift Monte Carlo code, and then summarizes potential improvements to 252Cf production efficiency that were identified using the sensitivity methods. II.
GENERALIZED PERTURBATION THEORY (GPT) SENSITIVITY ANALYSIS
The Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) capabilities within the SCALE code system make use of sensitivity coefficients for an extensive number of criticality safety applications, such as quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different systems, quantifying computational biases, and guiding nuclear data adjustment studies. The continuous-energy (CE) TSUNAMI-3D code is a new tool included in SCALE 6.2 that allows for eigenvalue and generalized response sensitivity calculations using high-fidelity CE Monte Carlo methods [2,3]. As shown in Eq. (1), sensitivity coefficients describe the relative change that occurs in a system response, R, due to perturbations or uncertainty in nuclear data parameters (typically cross sections, Σx):
S_{R,Σx} = (δR / R) / (δΣx / Σx). (1)
CE TSUNAMI-3D contains the Generalized Adjoint Responses in Monte Carlo (GEAR-MC) method, a first-of-its-kind capability for calculating sensitivity coefficients for generalized responses using Generalized Perturbation Theory (GPT) and CE Monte Carlo methods. Rather than computing sensitivity coefficients for the eigenvalue of a system, GEAR-MC calculations compute sensitivity coefficients for the ratio of two reaction rates, R, where
R = (Σ1 φ) / (Σ2 φ). (2)
GPT sensitivity analysis has the potential to improve the efficiency of 252Cf production by calculating sensitivity coefficients for ratios of transmutation reaction rates (typically capture-to-fission ratios).
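The definition in Eq. (1) can be illustrated with a simple finite-difference stand-in. This is a hedged sketch only: the paper computes these coefficients with adjoint-based Monte Carlo methods, not direct perturbation, and the toy response function below is invented for illustration.

```python
def sensitivity(response, sigma, rel_step=1e-6):
    """Finite-difference estimate of S = (dR/R) / (dSigma/Sigma), per Eq. (1).

    `response` is any function R(sigma); a small relative perturbation
    of sigma gives the relative change in R per relative change in sigma.
    """
    d_sigma = sigma * rel_step
    r0 = response(sigma)
    r1 = response(sigma + d_sigma)
    return ((r1 - r0) / r0) / (d_sigma / sigma)

# Toy response R(sigma) = sigma**2 has an exact sensitivity of 2
# (a 1% change in sigma gives roughly a 2% change in R)
print(sensitivity(lambda s: s**2, sigma=3.0))
```

Direct perturbation like this becomes impractical when thousands of cross sections are involved, which is exactly why adjoint-based methods such as GEAR-MC compute all sensitivities from a single simulation.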
These rates offer insight into the potential design changes that can be made to maximize desirable capture reactions and limit heavy actinide destruction through fission reactions.
III. PARALLEL COMPUTING AND GPT SENSITIVITY ANALYSIS
The original CE TSUNAMI-3D GPT sensitivity implementation was shipped with the Beta 4 version of SCALE 6.2 and was completed with the goal of obtaining proof of principle for the new sensitivity capability. The version of this tool that was shipped with the SCALE 6.2 official release includes a number of algorithmic improvements, including significant (typically 60% or more) reductions to the computational memory footprint and simulation runtime, as well as the ability to compute sensitivity coefficients for multiple reaction rate ratios within a single simulation. This improvement comes at the cost of a typically 1–3% increase in memory footprint and runtime per additional response. Sensitivity analysis of systems containing 252Cf production targets may require lengthy simulation runtimes because of the potentially small concentrations of heavy actinides in isotope production targets. To obtain sufficiently converged sensitivity tallies in reasonable turnaround times, the sensitivity analysis methods in SCALE 6.2 were extended to parallel Monte Carlo simulations. These sensitivity algorithms were parallelized by implementing them in the Shift Monte Carlo code, which was designed for efficient calculations in a parallel environment. The sensitivity algorithms require tracking a substantial amount of data to determine the importance of events that occur during a particle's lifetime, and these algorithms were modified significantly so that they could function efficiently in a parallel environment. The Iterated Fission Probability methodology used by GPT sensitivity methods requires saving reaction rate information for particles in chains of fission events over several generations.
This information consists of two types of data: (1) progenitor tallies, a relatively large number of reaction rate tallies, and (2) progenitor importances, a relatively small number of tallies that describe the importance of the progenitor tallies. Previously, both progenitor tallies and progenitor importances were tied to a particle in a chain of fission events and were communicated along with the Monte Carlo fission source through several generations of a simulation. The amount of information stored in these tallies often exceeds multiple gigabytes, so parallel simulations (which cannot take advantage of pointers) would require reading, communicating, and writing many gigabytes of information. These algorithms have been rewritten to minimize communication and to enable their use in a high-performance computing environment. The large progenitor tallies are no longer tied to a given fission chain but are instead stored locally on the processor where they originate. Each particle history now only communicates its progenitor importances, along with a unique identifier describing which particle history and node created the corresponding importances. The creation of progenitor importances and tallies is illustrated in Fig. 2.
Fig. 2. Progenitor tallies are stored locally, while progenitor importance and unique identifier information are communicated to the master node.
After several generations, the final asymptotic progenitor importance is obtained for the progenitor tallies in a set of fission chains and is returned to their progenitor's original processor core by being passed in a so-called bucket brigade between neighboring cores. The asymptotic progenitor importance is used to weight the stored progenitor tallies to produce sensitivity tally estimates. Fig. 3 shows the results of a weak-scaling study for examining the efficiency of the sensitivity algorithm implementation in Shift.
In this study, each slave node simulated 500 particles per generation, a value that used the maximum number of particle histories per central processing unit (CPU) core given the memory requirements of the GPT sensitivity algorithm's iterated fission probability tallies. The processors that ran this simulation are automatically boosted from their ordinary 2.5 GHz speed up to 3.6 GHz when not using all CPUs on a node, resulting in a greater than 100% efficiency for simulations that used fewer than 32 CPU cores. The parallel efficiency steadily drops for simulations using more than 32 CPU cores, reaching a minimum efficiency of 79% using 1,000 CPU cores. Fig. 4 shows the fraction of compute time used for various processes during the parallel GPT sensitivity simulations. A large majority of the compute time is spent transporting particle histories and tallying sensitivity coefficient estimates, and a small (but growing) fraction of compute time is used for global sensitivity tally reduction. A very small fraction of time is used for response communication, in which the progenitor importance and unique identifier information is communicated for each particle history, as shown in Figs. 2 and 3. Although the sensitivity analysis algorithms did not achieve linear scaling, their parallel efficiency was sufficient for the optimization discussed below. Because the batch statistics used by the sensitivity tallies require accumulating first and second moments at each generation, global message passing interface (MPI) reductions on large amounts of data are being performed frequently during the simulation. This accounts for the significant, increasing fraction of the compute time as the number of CPU cores increases.
Parallel scaling of these methods can be improved even more by performing batch statistics global sums less frequently (perhaps once every 10 generations instead of after every generation), by moving away from batch statistics entirely, or by the "poor man's parallelism" approach, which involves separating the parallel simulation into 30 or more repeated simulations, each with a different random seed.
Fig. 3. Efficiency of parallel GPT sensitivity calculations (>100% efficiency occurs for simulations with <32 CPU cores due to automatic CPU overclocking).
Fig. 4. Compute time used for various processes during parallel GPT sensitivity simulations.
IV. RESULTS OF 252Cf ISOTOPE PRODUCTION TARGET OPTIMIZATION
Having obtained a tool for calculating GPT sensitivity coefficients using parallel computing, the Shift sensitivity analysis tool was used to examine the impact of several potential changes to the design of 252Cf isotope production targets. Each simulation used a high-fidelity model of HFIR and required about one full day of runtime. The potential design changes allowed for modifications to the geometry of the 252Cf production targets and the use of a neutron-absorbing filter material for removing neutrons that are likely to cause fission in heavy actinide isotopes.
Optimizing the Geometry of 252Cf Production Targets
Although few changes can be made to the HFIR central flux trap in which the 252Cf production targets are placed, the geometry of the irradiation targets can be modified to use either an annular or thin target design. To determine which of these design changes would be optimal, the targets were equally divided into three layers (inner, middle, and outer). Sensitivity coefficients were calculated for capture-to-fission ratios in the middle layer with respect to the material density in all three layers.
Sensitivity coefficients that are larger (or smaller) for a layer indicate that it is more (or less) important to the transmutation of 252Cf. For example, large positive sensitivity coefficients in the outer layer would suggest an annular target design. Table I presents the sensitivity coefficients computed for capture-to-fission ratios in the middle layer of the 252Cf targets with respect to the density of the inner, middle, and outer layers. These sensitivity coefficients are unitless and are presented such that a -15% sensitivity implies that a 1% increase in the density of a region would cause a 0.15% decrease in the corresponding response. The sensitivity coefficients in Table I are consistently negative, implying that the capture-to-fission ratios can be increased, and the efficiency of 252Cf production can be improved, by lowering the heavy actinide number density in any of the three geometry regions. These sensitivity coefficients are larger in magnitude for the outer and middle layers than for the inner layer, which suggests that removing material from the outer and/or middle layers and fabricating a thinner irradiation target would more effectively improve the efficiency of 252Cf isotope production.
Table I. Sensitivity of Heavy Actinide Capture-to-Fission Ratios to the Density of 252Cf Production Targets
Sensitivity of the capture-to-fission ratio in the middle layer to:
| Isotope | Inner Layer Density | Middle Layer Density | Outer Layer Density |
| --- | --- | --- | --- |
| 244Cm | -4.21% | -10.79% | -12.42% |
| 245Cm | -0.06% | -0.06% | -0.06% |
| 246Cm | -6.18% | -12.40% | -10.44% |
| 247Cm | -0.18% | -0.24% | -0.19% |
| 248Cm | -7.92% | -12.55% | -10.58% |
| 249Bk | -0.58% | -0.66% | -0.57% |
| 250Cf | -8.44% | -9.63% | -8.53% |
| 251Cf | -0.10% | -0.11% | -0.11% |
A possible explanation for the consistently negative sensitivity coefficients is that (1) the heavy actinides in the outermost regions of the targets are over-self-shielding the flux at energies corresponding to neutron capture resonances in the targets, and (2) the neutron flux that causes fissions is not over-shielded (presumably because fission is induced at predominantly faster neutron energies). Lowering the heavy actinide density should decrease the neutron flux depression at the energies corresponding to the location of neutron capture resonances, thereby increasing the capture-to-fission ratios. Although the calculated sensitivity coefficients suggest that decreasing the amount of heavy actinides in the isotope production targets will increase the heavy actinide capture-to-fission ratios, placing less feedstock material into the isotope production targets will likely lower the overall yield of 252Cf from the targets, although the transmutation will be more efficient. This effect can be counteracted by placing additional 252Cf production targets within the HFIR flux trap, but these targets will occupy space that might otherwise be used for materials irradiations or other isotope production campaigns.
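The interpretation given above (a -15% sensitivity means a 1% density increase produces a 0.15% response decrease) is a first-order use of Eq. (1); a minimal sketch, with the function name invented for illustration:

```python
def response_change(sensitivity, relative_param_change):
    """First-order prediction: dR/R ~= S * (dSigma/Sigma), per Eq. (1)."""
    return sensitivity * relative_param_change

# A -15% (i.e., -0.15) sensitivity with a +1% density change predicts
# a 0.15% decrease in the response (capture-to-fission ratio)
print(response_change(-0.15, 0.01))
```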
Thus, HFIR scientists may be required to decide whether to have less efficient 252Cf targets, more efficient targets with less overall 252Cf production, or more efficient 252Cf production that requires additional irradiation locations in the central flux trap. Any design changes must therefore be weighed against the priorities of the 252Cf production program, which may place more value on producing a certain amount of 252Cf, conserving limited heavy curium feedstock, or using a limited number of irradiation locations in the HFIR flux trap for 252Cf production. Additional flux trap irradiation locations are available to the 252Cf production program, so moving to thin, annular, or lower density irradiation targets is a feasible design change.
Selecting an Optimal Neutron Filter
The second potential design change to 252Cf production targets is the placement of a thin filter material around the targets to absorb neutrons that are likely to cause fission in heavy actinides. To explore the viability of different filter materials, an artificial filter was modeled containing a mixture of several potential filter materials, and the sensitivity of transmutation reaction rate ratios was determined with respect to the number density of the filter materials. The most promising filter materials would produce positive sensitivity coefficients for desirable reaction rate ratios, which implies that including a full-density filter foil of that material would improve the efficiency of 252Cf production. Table II presents the sensitivity coefficients that were calculated for several key reaction rate ratios to the presence of several potential filter materials. Rather than simply examining the capture-to-fission ratios (C/F) of all isotopes, this analysis examined several ratios of capture (cap.) reaction rates that strongly influence the equilibrium concentration of 252Cf.
Each ratio has a positive impact on 252Cf production, and an ideal filter will produce positive sensitivity coefficients for each ratio. Identifying an ideal filter material is not simple because a material may (and often does) increase one reaction rate ratio at the expense of another ratio. Therefore, the reaction rate ratio sensitivity coefficients must be weighted by the importance of each ratio to the overall 252Cf production to determine the net sensitivity of 252Cf production to that material. Fortunately, HFIR scientists have enough experience with 252Cf production to have reasonable estimates for the importance of different reaction rate ratios, as given in Table II.
Table II. Sensitivity of 252Cf Transmutation Reaction Rate Ratios to Candidate Filter Materials
| Reaction Rate Ratio | Relative Imp. | Sens. to 176Lu | Sens. to Rh | Sens. to In | Sens. to 149Sm |
| --- | --- | --- | --- | --- | --- |
| 247Cm C/F | 6.76% | -0.11% | -1.06% | -1.16% | -0.23% |
| 248Cm C/F | 1.33% | -0.76% | -0.99% | -1.83% | -1.46% |
| 251Cf C/F | 9.12% | -0.75% | -0.62% | -1.08% | -5.31% |
| 244Cm cap. / 252Cf cap. | 1.70% | 2.18% | 3.10% | 3.45% | 2.59% |
| 246Cm cap. / 252Cf cap. | 24.49% | 3.74% | 5.91% | 7.51% | 5.19% |
| 247Cm cap. / 252Cf cap. | 11.27% | 0.35% | -6.60% | -7.93% | 0.36% |
| 248Cm cap. / 252Cf cap. | 29.29% | 3.99% | 6.00% | 7.89% | 5.20% |
| 251Cf cap. / 252Cf cap. | 14.84% | -6.19% | -11.25% | -16.51% | -26.92% |
| Net Sensitivity | | 1.10% | 0.61% | 0.47% | -1.88% |
Of the four potential filter materials, rhodium, indium, and 176Lu produced a positive net sensitivity, indicating that they would likely improve the efficiency of 252Cf production. The 149Sm filter produced a negative net sensitivity coefficient, primarily because of its negative impact on the 251Cf capture-to-fission ratio and the 251Cf/252Cf capture-to-capture ratio.
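The importance weighting described above can be sketched as a weighted sum over the 176Lu column of Table II. Note this is an assumption: the paper does not spell out its exact weighting scheme, and this simple importance-weighted sum only lands near (not exactly on) the reported 1.10% net sensitivity.

```python
# Relative importances and 176Lu sensitivities transcribed from Table II (percent)
importances = [6.76, 1.33, 9.12, 1.70, 24.49, 11.27, 29.29, 14.84]
sens_lu176  = [-0.11, -0.76, -0.75, 2.18, 3.74, 0.35, 3.99, -6.19]

# Importance-weighted sum of the ratio sensitivities (assumed scheme)
net = sum(w / 100 * s for w, s in zip(importances, sens_lu176))
print(round(net, 2))  # roughly 1.16, near the reported 1.10% net sensitivity
```

The positive sign is what matters for the design decision: it predicts that a full-density 176Lu foil would improve 252Cf production efficiency.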
Effectiveness of Sensitivity-Informed Design Changes
The effectiveness of the potential design optimizations was evaluated by performing TRITON-3D depletion simulations with the modified 252Cf production targets in the central flux trap of HFIR for three full-power 30-day irradiation cycles. Each design change was evaluated based on four factors, as shown in Tables IV through VII:
Table IV: overall yield of 252Cf
Table V: potential 252Cf that was created
Table VI: potential 252Cf that was destroyed
Table VII: efficiency of 252Cf production
Measuring the potential 252Cf created or destroyed gives credit for producing heavy actinides, which, although they are not 252Cf, can be transmuted into 252Cf in future irradiations. Different heavy actinides contribute different amounts of potential 252Cf; for example, 251Cf provides more potential 252Cf than 248Cm. The potential 252Cf present in a sample was determined using conversion factors for each heavy actinide. The conversion factors describe the fraction of each isotope that would be expected to transmute into 252Cf. These conversion factors have been estimated based on historical yields from previous HFIR 252Cf production campaigns and are given in Table III below.
Table III. Heavy Actinide Potential 252Cf Conversion Factors
| Isotope | Potential Californium Factor |
| --- | --- |
| 244Cm | 0.0010 |
| 245Cm | 0.0033 |
| 246Cm | 0.0141 |
| 247Cm | 0.0850 |
| 248Cm | 0.1800 |
| 249Bk | 0.3500 |
| 250Cf | 0.3500 |
| 251Cf | 0.3500 |
The efficiency of the 252Cf production was defined as the ratio of the 252Cf yield and the potential 252Cf that was destroyed:
252Cf Efficiency ≡ (252Cf Yield) / (Potential 252Cf Destroyed). (3)
The annular and thin target designs each used half the overall mass of heavy actinide feedstock in their targets due to their geometry reductions, and their heavy actinide production results were scaled up by a factor of two for ease of comparison.
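The potential-252Cf bookkeeping described above can be sketched directly from Table III's conversion factors. The conversion factors below are from the paper, but the sample inventory is invented purely for illustration.

```python
# Potential-252Cf conversion factors from Table III (fraction of each
# isotope expected to eventually transmute into Cf-252)
CONVERSION = {
    "Cm-244": 0.0010, "Cm-245": 0.0033, "Cm-246": 0.0141, "Cm-247": 0.0850,
    "Cm-248": 0.1800, "Bk-249": 0.3500, "Cf-250": 0.3500, "Cf-251": 0.3500,
}

def potential_cf252(inventory):
    """Weight each heavy-actinide amount by its Table III conversion
    factor and sum, giving the 'potential Cf-252' in the sample."""
    return sum(CONVERSION[iso] * amount for iso, amount in inventory.items())

# Hypothetical inventory (grams), not from the paper:
sample = {"Cm-246": 10.0, "Cm-248": 5.0, "Cf-251": 0.2}
print(potential_cf252(sample))  # 0.141 + 0.9 + 0.07 = 1.111 g of potential Cf-252
```

The efficiency metric of Eq. (3) is then just the actual 252Cf yield divided by the potential 252Cf destroyed during the irradiation.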
The lower density design that was investigated used the standard target geometry with 50% of the nominal heavy actinide atom density, and its results were also scaled up by a factor of two. All of the filtered designs produced lower 252Cf yields (Table IV), but the potential 252Cf produced by these designs (Table V) resulted in much smaller changes: in most cases, the potential 252Cf increased slightly. These results indicate that the filter materials are slowing down the transmutation of 252Cf because they block some neutrons that would have been captured in the targets. The fact that the potential 252Cf destroyed by these designs (Table VI) drops even more significantly than the yields indicates that these filtered designs block more harmful neutrons (i.e., the neutrons likely to cause fission) than helpful neutrons (i.e., the neutrons likely to be captured). This observation is reflected in Table VII, in which the filtered designs significantly improve the efficiency metric for 252Cf production. The results shown in Table VII are not likely accurate due to their use of approximate potential 252Cf conversion factors from Table III, which means that the potential 252Cf estimates are themselves approximate. Furthermore, these efficiency measurements can be skewed because the efficiency metrics inflate rapidly as the potential 252Cf that is destroyed decreases to almost zero. Nonetheless, these results indicate that significant potential exists to improve the efficiency of 252Cf production.
Table IV. Yield of 252Cf for Potential Design Changes
| Filter Material | Standard Geometry | Annular Target | Thin Target | Low (50%) Density Target |
| --- | --- | --- | --- | --- |
| Unfiltered Target | Baseline | 3.8% | 13.0% | 2.3% |
| 176Lu Filter | -25.2% | -23.7% | -24.4% | -15.3% |
| Rh Filter | -39.7% | -37.4% | -56.5% | -30.5% |
| In Filter | -45.8% | -43.5% | -44.3% | -37.4% |
| 149Sm Filter | -58.0% | -56.5% | -58.0% | -51.1% |
Table V. Potential Yield of 252Cf for Potential Design Changes
| Filter Material | Standard Geometry | Annular Target | Thin Target | Low (50%) Density Target |
| --- | --- | --- | --- | --- |
| Unfiltered Target | Baseline | 0.0% | 0.0% | -0.9% |
| 176Lu Filter | 0.7% | 0.6% | -0.2% | 0.6% |
| Rh Filter | 1.5% | 1.5% | 1.7% | 1.7% |
| In Filter | 1.8% | 1.7% | 0.9% | 1.9% |
| 149Sm Filter | 1.7% | 1.7% | 0.9% | 1.8% |
Table VI. Potential 252Cf Destroyed for Potential Design Changes
| Filter Material | Standard Geometry | Annular Target | Thin Target | Low (50%) Density Target |
| --- | --- | --- | --- | --- |
| Unfiltered Target | Baseline | 2.1% | 0.0% | 0.0% |
| 176Lu Filter | -34.0% | -31.9% | -34.0% | -31.9% |
| Rh Filter | -76.6% | -74.5% | -85.1% | -83.0% |
| In Filter | -89.4% | -85.1% | -87.2% | -95.7% |
| 149Sm Filter | -87.2% | -85.1% | -87.2% | -93.6% |
Table VII. Production Efficiency of 252Cf for Potential Design Changes
| Filter Material | Standard Geometry | Annular Target | Thin Target | Low (50%) Density Target |
| --- | --- | --- | --- | --- |
| Unfiltered Target | Baseline | 0.9% | 10.5% | 1.3% |
| 176Lu Filter | 11.3% | 10.5% | 11.5% | 24.5% |
| Rh Filter | 157.2% | 147.4% | 181.1% | 328.8% |
| In Filter | 375.9% | 273.2% | 319.4% | 1312.5% |
| 149Sm Filter | 208.0% | 181.1% | 229.1% | 570.1% |
As summarized in Table VII, all suggested design changes improved the efficiency of 252Cf production. However, the filter materials that most effectively improved 252Cf production efficiency were not the ones predicted in Table II. Furthermore, the addition of 149Sm was expected to reduce the efficiency of 252Cf production, but it resulted in significant efficiency gains in Table VII; however, 149Sm did result in the greatest drop in 252Cf yield in Table IV.
Possible explanations for the poorly predicted effects of filter materials include (1) imperfect relative importances in Table II, (2) imperfect potential 252Cf conversion factors in Table III, (3) the infeasibility of using steady-state sensitivity coefficients to optimize a time-dependent design, or (4) the complexity of the 252Cf transmutation chain. The filter material predicted to have the greatest positive impact on 252Cf production (176Lu, from Table II) resulted in the highest yield of 252Cf among the filter materials in Table IV. Likewise, the isotopes with the second, third, and fourth largest sensitivities in Table II also produced the second, third, and fourth highest 252Cf yields in Table IV, respectively. This correlation may be coincidental, but it may also suggest that the optimization efforts presented in Table II have optimized the overall 252Cf yield rather than the 252Cf production efficiency. Many factors may be influencing the predictive capabilities of these sensitivity coefficients, and it is difficult to attribute the gap in predictive capabilities to any one factor. At this stage, these sensitivity methods appear to be more useful for identifying qualitative design changes to isotope production campaigns. These methods may see improved predictability for isotope production campaigns that are less complex than the 252Cf campaign, which can require as many as 8 neutron capture events to transmute curium feedstock into 252Cf. The 238Pu production campaign, which requires only one neutron capture, may be a more suitable application for these sensitivity methods. Overall, the design with half of the nominal actinide number density and an indium filter produced 252Cf most efficiently, resulting in an increase in efficiency of more than 1,300% compared to the standard design. However, this efficiency metric can be deceptive because of the small amount of potential 252Cf that was destroyed.
This design may have been more efficient than the standard target, but it also produced a lower overall yield of 252Cf, which highlights a weakness of the efficiency metric and of the reaction rate ratio sensitivity analysis. A design change that decreases both the rate of fission and the rate of capture in the 252Cf targets can produce a positive sensitivity coefficient and a higher 252Cf efficiency if it decreases the fission rate more than it decreases the capture rate. The slowed transmutation that occurs in filtered designs can be avoided by using thin target geometries to most effectively improve the yield of 252Cf and the efficiency of 252Cf production. Analysts must prioritize increasing the overall 252Cf yield or conserving limited heavy curium feedstock material.
V. CONCLUSIONS
This paper documents ongoing research and development activities for using sensitivity analysis to identify potential design optimizations in 252Cf isotope production targets. This sensitivity analysis applied the TSUNAMI-3D GPT reaction rate ratio sensitivity capability to predict how changes to the 252Cf target design would impact ratios of reaction rates (typically capture-to-fission ratios) that are significant to the production of 252Cf. Before performing optimization analysis, the sensitivity methods were implemented in the Shift Monte Carlo code to enable parallel simulations, achieving a parallel efficiency of 79% for a simulation that used 1,000 CPU cores. Next, the sensitivity analysis capability was used to detect the sensitivity of 252Cf production to the geometry of irradiation targets and identified that either an annular, thin, or lower density target would improve the efficiency of 252Cf production.
Finally, the sensitivity capability identified that adding a 176Lu, rhodium, indium, or 149Sm foil filter around the 252Cf production targets would improve production efficiency. When combined, the geometry and filter design changes were found to increase the efficiency of 252Cf production by more than 1,300%. Depletion simulations were used to confirm the sensitivity-suggested design changes, and it was observed that the reaction rate ratio sensitivity coefficients are more effective at predicting qualitative design improvements rather than quantitative improvements. There is potential to improve the predictive capability of this sensitivity analysis by calculating sensitivity coefficients for the overall 252Cf yield rather than individual reaction rate ratios.

ACKNOWLEDGMENTS

This research was sponsored by the Laboratory Directed Research and Development Program of Oak Ridge National Laboratory, managed by UT-Battelle, LLC, for the U.S. Department of Energy.

REFERENCES

1. B. T. REARDEN and M. A. JESSEE, Eds., SCALE Code System, ORNL/TM-2005/39, Version 6.2, Oak Ridge National Laboratory, Oak Ridge, Tennessee (2016). Available from Radiation Safety Information Computational Center as CCC-834.
2. C. M. PERFETTI and B. T. REARDEN, "SCALE 6.2 Continuous-Energy TSUNAMI-3D Capabilities," Proc. ICNC 2015, Charlotte, North Carolina, USA, September 13-17, 2015.
3. C. M. PERFETTI and B. T. REARDEN, "Continuous-Energy Monte Carlo Methods for Calculating Generalized Response Sensitivities using TSUNAMI-3D," Proc. PHYSOR 2014, Kyoto, Japan, September 28-October 3, 2014, American Nuclear Society (2014).
4. C. M. PERFETTI and B. T. REARDEN, "CE TSUNAMI-3D Algorithm Improvements in SCALE 6.2," Trans. Am. Nucl. Soc., 114, 948-951 (2016).
5. T. M. PANDYA, S. R. JOHNSON, G. G. DAVIDSON, T. M. EVANS, and S. P. HAMILTON, "Shift: A Massively Parallel Monte Carlo Radiation Transport Package," Proc. M&C 2015, Nashville, Tennessee, USA, April 19-23, 2015.
6. R. T. PRIMM, III and N. XOUBI, Modeling of the High Flux Isotope Reactor Cycle 400, ORNL/TM-2004/251, Oak Ridge National Laboratory (2005).
https://physics.nist.gov/cgi-bin/ASD/ie.pl?spectra=aluminum&units=1&e_out=0&unc_out=1&at_num_out=1&el_name_out=1&ion_charge_out=1&biblio=1
NIST Atomic Ionization Energies Output
NIST Atomic Spectra Database: Ionization Energies Data

Al (all spectra): 13 Data Rows Found

Example of how to reference these results: Kramida, A., Ralchenko, Yu., Reader, J., and NIST ASD Team (2024). NIST Atomic Spectra Database (ver. 5.12), [Online]. Available: [2025, September 28]. National Institute of Standards and Technology, Gaithersburg, MD.

| At. Num. | Ion Charge | El. name | Ionization Energy (eV) | Uncertainty (eV) | References |
| --- | --- | --- | --- | --- | --- |
| 13 | 0 | Aluminum | 5.985769 | 0.000003 | L7215,L10321 |
| 13 | +1 | Aluminum | 18.82855 | 0.00005 | L7215 |
| 13 | +2 | Aluminum | 28.447642 | 0.000025 | L4714,L277 |
| 13 | +3 | Aluminum | 119.9924 | 0.0019 | L4714,L2829 |
| 13 | +4 | Aluminum | 153.8252 | 0.0025 | L4714,L3565 |
| 13 | +5 | Aluminum | 190.49 | 0.05 | L7215 |
| 13 | +6 | Aluminum | 241.76 | 0.09 | L7215 |
| 13 | +7 | Aluminum | 284.64 | 0.07 | L11770 |
| 13 | +8 | Aluminum | 330.21 | 0.04 | L11770 |
| 13 | +9 | Aluminum | 398.65 | 0.06 | L11770 |
| 13 | +10 | Aluminum | 442.005 | 0.007 | L16264c99 |
| 13 | +11 | Aluminum | 2085.97693 | 0.00016 | L21139 |
| 13 | +12 | Aluminum | 2304.140359 | 0.000012 | L19200 |

Values that appeared in brackets or parentheses in the original output were determined by interpolation, extrapolation, semiempirical calculation, or theory rather than by direct measurement.

If you did not find the data you need, please inform the ASD Team.
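As a quick consistency exercise on the tabulated data (a rough sketch; uncertainties are ignored), the 13 successive ionization energies can be summed to estimate the total energy required to fully strip a neutral aluminum atom of its electrons:

```python
# Sum the 13 successive ionization energies of aluminum from the table
# above; the total is the energy needed to remove all 13 electrons.
ionization_energies_eV = [
    5.985769, 18.82855, 28.447642, 119.9924, 153.8252,
    190.49, 241.76, 284.64, 330.21, 398.65,
    442.005, 2085.97693, 2304.140359,
]
total_eV = sum(ionization_energies_eV)
print(round(total_eV, 3))  # ≈ 6604.952
```

Note how the last two entries (the helium-like and hydrogen-like ions) dominate the total, as expected for the innermost electrons.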
https://digilent.com/reference/_media/learn/courses/real-analog-chapter-10/real-analog-chapter-10.pdf?srsltid=AfmBOoqNLuvWS9RHUFgRnVBR45tWkBB5zVbjS_wKA-TtHmUSLKRaFy-G
1300 Henley Court, Pullman, WA 99163 | 509.334.6306 | www.store.digilent.com

Real Analog Chapter 10: Steady-state Sinusoidal Analysis
Copyright Digilent, Inc. All rights reserved. Other product and company names mentioned may be trademarks of their respective owners. Page 1 of 85

10 Introduction and Chapter Objectives

We will now study dynamic systems which are subjected to sinusoidal forcing functions. Previously, in our analysis of dynamic systems, we determined both the unforced response (or homogeneous solution) and the forced response (or particular solution) to the given forcing function. In the next several chapters, however, we will restrict our attention to only the system's forced response to a sinusoidal input; this response is commonly called the sinusoidal steady-state system response. This analysis approach is useful if we are concerned primarily with the system's response after any initial conditions have died out, since we are ignoring any transient effects due to the system's natural response. Restricting our attention to the steady-state sinusoidal response allows a considerable simplification in the system analysis: we can solve algebraic equations rather than differential equations. This advantage often more than compensates for the loss of information relative to the system's natural response. For example, it is often the case that a sinusoidal input is applied for a very long time relative to the time required for the natural response to die out, so that the overall effects of the initial conditions are negligible. Steady-state sinusoidal analysis methods are important for several reasons:

• Sinusoidal inputs are an extremely important category of forcing functions. In electrical engineering, for example, sinusoids are the dominant signal in the electrical power industry.
The alternating current (or AC) signals used in power transmission are, in fact, so pervasive that many electrical engineers commonly refer to any sinusoidal signal as "AC". Carrier signals used in communications systems are also sinusoidal in nature.
• The simplification associated with steady-state sinusoidal analysis is often so desirable that system responses to non-sinusoidal inputs are interpreted in terms of their sinusoidal steady-state response. This approach will be developed when we study Fourier series.
• System design requirements are often specified in terms of the desired steady-state sinusoidal response of the system.

In section 10.1 of this chapter, we qualitatively introduce the basic concepts relative to sinusoidal steady-state analyses so that readers can get the "general idea" behind the analysis approach before addressing the mathematical details in later sections. Since we will be dealing exclusively with sinusoidal signals for the next few chapters, section 10.2 provides review material relative to sinusoidal signals and complex exponentials. Recall from chapter 8 that complex exponentials are a mathematically convenient way to represent sinusoidal signals. Most of the material in section 10.2 should be review, but the reader is strongly encouraged to study section 10.2 carefully -- we will be using sinusoids and complex exponentials extensively throughout the remainder of this text, and a complete understanding of the concepts and terminology is crucial. In section 10.3, we examine the forced response of electrical circuits to sinusoidal inputs; in this section, we analyze our circuits using differential equations and come to the important conclusion that the steady-state response of a circuit to sinusoidal inputs is
governed by algebraic equations. Section 10.4 takes advantage of this conclusion to perform steady-state sinusoidal analyses of electrical circuits without writing the governing differential equation for the circuit! Finally, in section 10.5, we characterize a system's response purely by its effect on a sinusoidal input. This concept will be used extensively throughout the remainder of this textbook.

After completing this chapter, you should be able to:
• State the relationship between the sinusoidal steady-state system response and the forced response of a system
• For sinusoidal steady-state conditions, state the relationship between the frequencies of the input and output signals for a linear, time-invariant system
• State the two parameters used to characterize the sinusoidal steady-state response of a linear, time-invariant system
• Define periodic signals
• Define the amplitude, frequency, radian frequency, and phase of a sinusoidal signal
• Express sinusoidal signals in phasor form
• Perform frequency-domain analyses of electrical circuits
• Sketch phasor diagrams of a circuit's input and output
• State the definition of impedance and admittance
• State, from memory, the impedance relations for resistors, capacitors, and inductors
• Calculate impedances for resistors, capacitors, and inductors
• State how to use the following analysis approaches in the frequency domain:
  o KVL and KCL
  o Voltage and current dividers
  o Circuit reduction techniques
  o Nodal and mesh analysis
  o Superposition, especially when multiple frequencies are present
  o Thévenin's and Norton's theorems
• Determine the load impedance necessary to deliver maximum power to a load
• Define the frequency response of a system
• Define the magnitude response and phase response of a system
• Determine the magnitude and phase responses of a circuit

10.1 Introduction to Steady-state Sinusoidal Analysis

In this chapter, we will be almost exclusively concerned with sinusoidal
signals, which can be written in the form:

f(t) = A cos(ωt + θ)    Eq. 10.1

where A is the amplitude of the sinusoid, ω is the angular frequency (in radians/second) of the signal, and θ is the phase angle (expressed in radians or degrees) of the signal. A provides the peak value of the sinusoid, ω governs the rate of oscillation of the signal, and θ affects the translation of the sinusoid in time. A typical sinusoidal signal is shown in Fig. 10.1.

Figure 10.1. Sinusoidal signal.

If the sinusoidal signal of Fig. 10.1 is applied to a linear time-invariant system, the response of the system will consist of the system's natural response (due to the initial conditions on the system) superimposed on the system's forced response (the response due to the forcing function). As we have seen in previous chapters, the forced response has the same form as the forcing function. Thus, if the input is a constant value, the forced response is constant, as we have seen in the case of the step response of a system. In the case of a sinusoidal input to a system, the forced response will consist of a sinusoid of the same frequency as the input sinusoid. Since the natural response of the system decays with time, the steady-state response of a linear time-invariant system to a sinusoidal input is a sinusoid, as shown in Fig. 10.2. The amplitude and phase of the output may be different than the input amplitude and phase, but both the input and output signals have the same frequency. It is common to characterize a system by the ratio of the magnitudes of the input and output signals (B/A in Fig. 10.2) and the difference in phases between the input and output signals (ϕ − θ in Fig. 10.2) at a particular frequency.
It is important to note that the ratio of magnitudes and difference in phases is dependent upon the frequency of the applied sinusoidal signal.

Figure 10.2. Sinusoidal steady-state input-output relation for a linear time-invariant system: input u(t) = A cos(ωt + θ), output y(t) = B cos(ωt + ϕ).

Example 10.1: Series RLC Circuit Response

Consider the series RLC circuit shown in Fig. 10.3 below. The input voltage to the circuit is given by:

vs(t) = 0 for t < 0; vs(t) = cos(5t) for t ≥ 0

Thus, the input is zero prior to t = 0, and the sinusoidal input is suddenly "switched on" at time t = 0. The input forcing function is shown in Fig. 10.4(a). The circuit is "relaxed" before the sinusoidal input is applied, so the circuit initial conditions are:

y(0−) = dy/dt|t=0− = 0

Figure 10.3. Series RLC circuit (1 Ω resistor, 1 H inductor, 0.004 F capacitor); output is voltage across capacitor.

This circuit has been analyzed previously in Chapter 8, and the derivation of the governing differential equation will not be repeated here. The full output response of the circuit is shown in Fig. 10.4(b). The natural response of the circuit is readily apparent in the initial portion of the response, but these transients die out quickly, leaving only the sinusoidal steady-state response of the circuit. It is only this steady-state response in which we will be interested for the next several modules. With knowledge of the frequency of the signals, we can define both the input and (steady-state) output by their amplitude and phase, and characterize the circuit by the ratio of the output-to-input amplitude and the difference in the phases of the output and input.

Figure 10.4. Input and output signals for circuit of Figure 10.3: (a) input signal; (b) output signal, showing the transient giving way to the steady-state response.
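The claim in Example 10.1, that the transients decay and leave a pure sinusoid at the input frequency, can be checked numerically. The sketch below assumes the component values read from Fig. 10.3 (R = 1 Ω, L = 1 H, C = 0.004 F) and uses the standard series-RLC capacitor-voltage equation LC·y″ + RC·y′ + y = vs(t), which is not quoted in the text here:

```python
import math

# Integrate the assumed governing equation L*C*y'' + R*C*y' + y = cos(5t)
# from rest, then compare the late-time amplitude with the phasor prediction.
R, L, C = 1.0, 1.0, 0.004
w = 5.0

def accel(t, y, dy):
    """Return y'' from the governing differential equation."""
    return (math.cos(w * t) - y - R * C * dy) / (L * C)

# Fixed-step RK4 integration; keep only late-time (steady-state) samples.
dt, t, y, dy = 1e-3, 0.0, 0.0, 0.0
late = []
while t < 40.0:
    k1y, k1v = dy, accel(t, y, dy)
    k2y, k2v = dy + 0.5 * dt * k1v, accel(t + 0.5 * dt, y + 0.5 * dt * k1y, dy + 0.5 * dt * k1v)
    k3y, k3v = dy + 0.5 * dt * k2v, accel(t + 0.5 * dt, y + 0.5 * dt * k2y, dy + 0.5 * dt * k2v)
    k4y, k4v = dy + dt * k3v, accel(t + dt, y + dt * k3y, dy + dt * k3v)
    y += dt * (k1y + 2 * k2y + 2 * k3y + k4y) / 6
    dy += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
    t += dt
    if t > 35.0:
        late.append(y)

# Transients have decayed by t = 35 s; the surviving oscillation amplitude
# matches the frequency-domain prediction B = |1 / (1 - w^2*L*C + j*w*R*C)|.
B_sim = max(abs(s) for s in late)
B_phasor = abs(1 / complex(1 - w * w * L * C, w * R * C))
print(round(B_sim, 3), round(B_phasor, 3))  # both ≈ 1.111
```

The agreement between the simulated amplitude and the algebraic (phasor) prediction previews the central point of this chapter: once transients die out, no differential equation needs to be solved.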
Section Summary

• Sinusoidal signals can be expressed mathematically in the form:
  f(t) = A cos(ωt + θ)
  o In the above, A is the amplitude of the sinusoid; it describes the maximum and minimum values of the signal.
  o In the above, θ is the phase angle of the sinusoid; it describes the time shift of the sinusoid relative to a pure cosine.
  o In the above, ω is the radian frequency of the sinusoid. The sinusoid repeats itself at time intervals of 2π/ω seconds.
• A sinusoidal signal is completely described by its frequency, its amplitude, and its phase angle.
• The steady-state response of a linear, time-invariant system to a sinusoidal input is a sinusoid with the same frequency.
• Since the frequencies of the input and output are the same, the relationship between the input and output sinusoids is completely characterized by the relationships between:
  o The input and output amplitudes.
  o The input and output phase angles.

10.1 Exercises

1. In the circuit below, all circuit elements are linear and time-invariant. The input voltage is Vin(t) = 10 cos(2t + 40°). What is the radian frequency of the output voltage Vout(t)?

2. In the circuit below, all circuit elements are linear and time-invariant. The input voltage is Vin(t) = 10 cos(2t + 40°). The output voltage is of the form Vout(t) = A cos(ωt + ϕ). If the magnitude ratio between the output and input is |Vout/Vin| = 0.5 and the phase difference between the input and output is 20°, what are:
   a. The radian frequency of the output, ω?
   b. The amplitude of the output, A?
   c. The phase angle of the output, ϕ?
10.2 Sinusoidal Signals, Complex Exponentials, and Phasors

In this section, we will review properties of sinusoidal functions and complex exponentials. We will also introduce phasor notation, which will significantly simplify the sinusoidal steady-state analysis of systems, and provide terminology which will be used in subsequent sinusoidal steady-state related modules. Much of the material presented here has been provided previously in Chapter 8; this material is, however, important enough to bear repetition. Likewise, a brief overview of complex arithmetic, which will be essential in using complex exponentials effectively, is provided at the end of this section. Readers who need to review complex arithmetic may find it useful to peruse this overview before reading the material in this section relating to complex exponentials and phasors.

10.2.1 Sinusoidal Signals

The sinusoidal signal shown in Fig. 10.5 is represented mathematically by:

f(t) = VP cos(ωt)    Eq. 10.2

The amplitude or peak value of the function is VP. VP is the maximum value achieved by the function; the function itself is bounded by +VP and −VP, so that −VP ≤ f(t) ≤ VP. The radian frequency or angular frequency of the function is ω; the units of ω are radians/second. The function is said to be periodic; periodic functions repeat themselves at regular intervals, so that:

f(t + nT) = f(t)    Eq. 10.3

where n is any integer and T is the period of the signal. The sinusoidal waveform shown in Fig. 10.5 goes through one complete cycle or period in T seconds. Since the sinusoid of equation (10.2) repeats itself every 2π radians, the period is related to the radian frequency of the sinusoid by:

ω = 2π/T    Eq. 10.4

It is common to define the frequency of the sinusoid in terms of the number of cycles of the waveform which occur in one second.
In these terms, the frequency f of the function is:

f = 1/T    Eq. 10.5

The units of f are cycles/second or Hertz (abbreviated Hz). The frequency and radian frequency are related by:

f = ω/2π    Eq. 10.6

or equivalently:

ω = 2πf    Eq. 10.7

Regardless of whether the sinusoid's rate of oscillation is expressed as frequency or radian frequency, it is important to realize that the argument of the sinusoid in equation (10.2) must be expressed in radians. Thus, equation (10.2) can be expressed in terms of frequency in Hz as:

f(t) = VP cos(2πft)    Eq. 10.8

To avoid confusion in our mathematics, we will almost invariably write sinusoidal functions in terms of radian frequency as shown in equation (10.2), although Hz is generally taken as the standard unit for frequency (experimental apparatus, for example, commonly express frequency in Hz).

Figure 10.5. Pure cosine waveform.

A more general expression of a sinusoidal signal is:

v(t) = VP cos(ωt + θ)    Eq. 10.9

where θ is the phase angle or phase of the sinusoid. The phase angle simply translates the sinusoid along the time axis, as shown in Fig. 10.6. A positive phase angle shifts the signal left in time, while a negative phase angle shifts the signal right – this is consistent with our discussion of step functions in section 6.1, where it was noted that subtracting a value from the unit step argument resulted in a time delay of the function. Thus, as shown in Figure 10.6, a positive phase angle causes the sinusoid to be shifted left by θ/ω seconds. The units of phase angle should be radians, to be consistent with the units of ωt in the argument of the cosine. It is typical, however, to express phase angle in degrees, with 180° corresponding to π radians.
Thus, the conversion between radians and degrees can be expressed as:

Number of degrees = (180/π) × Number of radians

For example, we will consider the two expressions below to be equivalent, though the expression on the right-hand side of the equal sign contains a mathematical inconsistency (a degree measure is added to a radian measure):

VP cos(ωt + π/2) = VP cos(ωt + 90°)

Figure 10.6. Cosine waveform with non-zero phase angle.

For convenience, we introduce the terms leading and lagging when referring to the sign on the phase angle, θ. A sinusoidal signal v1(t) is said to lead another sinusoid v2(t) of the same frequency if the phase difference between the two is such that v1(t) is shifted left in time relative to v2(t). Likewise, v1(t) is said to lag another sinusoid v2(t) of the same frequency if the phase difference between the two is such that v1(t) is shifted right in time relative to v2(t). This terminology is described graphically in Fig. 10.7.

Figure 10.7. Leading and lagging sinusoids: cos(ωt + θ) with θ > 0 leads cos(ωt); cos(ωt + θ) with θ < 0 lags cos(ωt).

Finally, we note that the representation of sinusoidal signals as a phase-shifted cosine function, as provided by equation (10.9), is completely general. If we are given a sinusoidal function in terms of a sine function, it can be readily converted to the form of equation (10.9) by subtracting a phase of π/2 (or 90°) from the argument, since:

sin(ωt) = cos(ωt − π/2)

Likewise, sign changes can be accounted for by a ±π radian phase shift, since:

−cos(ωt) = cos(ωt ± π)

Obviously, we could have chosen either a cosine or sine representation of a sinusoidal signal. We prefer the cosine representation, since a cosine is the real part of a complex exponential.
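The identities above are easy to spot-check numerically. The short sketch below (not from the text; the frequency is arbitrary) samples both conversions over a grid of time points:

```python
import math

# Numeric spot-check of sin(wt) = cos(wt - pi/2) and -cos(wt) = cos(wt + pi),
# plus the radians-to-degrees conversion factor 180/pi.
w = 5.0  # arbitrary radian frequency
for k in range(1000):
    t = 0.01 * k
    assert math.isclose(math.sin(w * t), math.cos(w * t - math.pi / 2), abs_tol=1e-12)
    assert math.isclose(-math.cos(w * t), math.cos(w * t + math.pi), abs_tol=1e-12)

# 180/pi converts radians to degrees: pi/2 radians is 90 degrees.
deg = (180.0 / math.pi) * (math.pi / 2)
print(round(deg, 9))  # 90.0
```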
In the next module, we will see that sinusoidal steady-state circuit analysis is simplified significantly by using complex exponentials to represent the sinusoidal functions. The cosine is the real part of a complex exponential (as we saw previously in chapter 8). Since all measurable signals are real-valued, we take the real part of our complex exponential-based result as our physical response; this results in a solution of the form of equation (10.9).

Since representation of sinusoidal waveforms as complex exponentials will become important to us in circuit analysis, we devote the following subsection to a review of complex exponentials and their interpretation as sinusoidal signals.

10.2.2 Complex Exponentials and Phasors

Euler's identity can be used to represent complex numbers as complex exponentials:

e^(±jθ) = cos θ ± j sin θ    Eq. 10.10

If we generalize equation (10.10) to time-varying signals of arbitrary magnitude, we can write:

VP e^(j(ωt+θ)) = VP cos(ωt + θ) + j VP sin(ωt + θ)    Eq. 10.11

so that:

VP cos(ωt + θ) = Re{VP e^(j(ωt+θ))}    Eq. 10.12

and:

VP sin(ωt + θ) = Im{VP e^(j(ωt+θ))}    Eq. 10.13

where Re{·} and Im{·} denote the real part and the imaginary part of the argument, respectively. The complex exponential of equation (10.11) can also be written as:

VP e^(j(ωt+θ)) = VP e^(jθ) e^(jωt)    Eq. 10.14

The term VP e^(jθ) on the right-hand side of equation (10.14) is simply a complex number which provides the magnitude and phase information of the complex exponential of equation (10.11). From equation (10.12), this magnitude and phase can be used to express the magnitude and phase angle of a sinusoidal signal of the form given in equation (10.9).
The complex number in polar coordinates which provides the magnitude and phase angle of a time-varying complex exponential, as given in equation (10.14), is called a phasor. The phasor representing VP cos(ωt + θ) is defined as:

V = VP e^(jθ) = VP∠θ    Eq. 10.15

We will use a capital letter with an underscore to denote a phasor. Using bold typeface to represent phasors is more common; our notation is simply for consistency between lecture material and written material – boldface type is difficult to create on a whiteboard during lecture!

Note: The phasor representing a sinusoid does not provide information about the frequency of the sinusoid – frequency information must be kept track of separately.

10.2.3 Complex Arithmetic Review

Much of the material in this section has been provided previously in section 8.3. It is repeated here to emphasize its importance and to expand slightly upon some crucial topics. In our presentation of complex exponentials, we first provide a brief review of complex numbers. A complex number contains both real and imaginary parts. Thus, we may write a complex number A as:

A = a + jb    Eq. 10.16
Conversely, we can determine the rectangular coordinates from the polar coordinates from: 𝑎 = 𝑅𝑒 {𝐴 } = |𝐴 | cos (𝜃 𝐴 ) Eq. 10.20 𝑏 = 𝐼𝑚 {𝐴 } = |𝐴 |sin (𝜃 𝐴 ) Eq. 10.21 Where the notation 𝑅𝑒 {𝐴 } and 𝐼𝑚 {𝐴 } denote the real part of 𝐴 and the imaginary part of 𝐴 , respectively. The polar coordinates of a complex number of 𝐴 are often represented in the form: 𝐴 |𝐴 |∠𝜃 𝐴 Eq. 10.22 A A Re Im ab)cos( AA )sin( AA  Figure 10.8. Representation of a complex number in rectangular and polar coordinates. An alternate method of representing complex numbers in polar coordinates employs complex ex ponential notation. Without proof, we claim that: 𝑒 𝑗𝜃 = 1∠ 𝜃 Eq. 10.23 Thus, 𝑒 𝑗𝜃 is a complex number with magnitude 1 and phase angle θ. From Fig. 10.8, it is easy to see that this definition of the complex exponential agrees with Euler’s equation: 𝑒 ±𝑗𝜃 = 𝑐𝑜𝑠𝜃 ± 𝑗 sin 𝜃 Eq. 10.24 With the definition of equation (10.23), we can define any arbitrary complex number in terms of complex numbers. For example, our previous complex number 𝐴 can be represented as: 𝐴 = |𝐴 |𝑒 𝑗 𝜃 𝐴 Eq. 10.25 Real Analog Chapter 10: Steady -state Sinusoidal Analysis Copyright Digilent, Inc. All rights reserved. Other product and company names mentioned may be trademarks of their respective owners. Page 11 of 85 We can generalize our definition of the complex exponential to time -varying signals. If we define a time varying signal 𝑒 𝑗𝜔𝑡 , we can use equation (10.24) to write: 𝑒 𝑗𝜔𝑡 = cos 𝜔𝑡 ± 𝑗 sin 𝜔𝑡 Eq. 10.26 The signal 𝑒 𝑗𝜔𝑡 can be visualized as a unit vector rotating around the origin in the complex plane; the tip of the vector scribes a unit circle with its center at the origin of the complex plane. This is illustrated in Fig. 10.9 . The vector rotates at a rate defined by the quantity ω– the vector makes one complete revolution every 2𝜋 𝜔 seconds. The projection of this rotating vector on the real axis traces out the signal cos 𝜔𝑡 , as shown in Fig. 
10.7, while the projection of the rotating vector on the imaginary axis traces out the signal sin 𝜔𝑡 , also shown in Fig. 10.9. Thus, we interpret the complex exponential function 𝑒 𝑗𝜔𝑡 as an alternate “type” of sinusoidal signal. The real part of this function is cos 𝜔𝑡 while the imaginary part of this function is sin 𝜔𝑡 .Im Re t cos tsin ttime time tt Figure 10.9. Illustration of tj e . Addition and subtraction of complex numbers is most easily performed in rectangular coordinates. Given two complex numbers 𝐴 and 𝐵 , defined as: 𝐴 = 𝑎 + 𝑗𝑏 𝐵 = 𝑐 + 𝑗𝑑 The sum and difference of the complex number can be determined by: 𝐴 + 𝐵 = (𝑎 + 𝑐 ) + 𝑗 (𝑏 + 𝑑 ) And: 𝐴 − 𝐵 = (𝑎 − 𝑐 ) + 𝑗 (𝑏 − 𝑑 )Real Analog Chapter 10: Steady -state Sinusoidal Analysis Copyright Digilent, Inc. All rights reserved. Other product and company names mentioned may be trademarks of their respective owners. Page 12 of 85 Multiplication and division, on the other hand, are probably most easily performed using polar coordinates. If we define two complex numbers as: 𝐴 = |𝐴 |𝑒 𝑗 𝜃 𝐴 = |𝐴 |∠𝜃 𝐴 𝐵 = |𝐵 |𝑒 𝑗 𝜃 𝐵 = |𝐵 |∠𝜃 𝐵 The product and quotient can be determined by: 𝐴 ⋅ 𝐵 = |𝐴 |𝑒 𝑗 𝜃 𝐴 ⋅ |𝐵 |𝑒 𝑗 𝜃 𝐵 = |𝐴 | ⋅ |𝐵 |𝑒 𝑗 (𝜃 𝐴 +𝜃 𝐵 ) = |𝐴 | ⋅ |𝐵 |∠(𝜃 𝐴 + 𝜃 𝐵 ) And: 𝐴 𝐵 = |𝐴 |𝑒 𝑗 𝜃 𝐴 |𝐵 |𝑒 𝑗 𝜃 𝐵 = 𝐴 𝐵 ∠(𝜃 𝐴 − 𝜃 𝐵 ) Th e conjugate of a complex number, denoted by a , is obtained by changing the sign on the imaginary part of the number. For example , if 𝐴 = 𝑎 + 𝑗𝑏 = |𝐴 |𝑒 𝑗𝜃 , then: 𝐴 ∗ = 𝑎 − 𝑗𝑏 = |𝐴 |𝑒 −𝑗𝜃 Conjugation does not affect the magnitude of the complex number, but it changes the sign on the phase angle. It is easy to show that: 𝐴 ⋅ 𝐴 ∗ = |𝐴 |2 Several useful relationships between polar and rectangular coordinate representations of complex numbers are provided below. The reader is encouraged to prove any that are not self -evident. 𝑗 = 1∠90 ° −𝑗 = 1∠ − 90° 1 𝑗 = −𝑗 = 1∠ − 90° 1 = 1∠0 ° −1 = 1∠180 ° Section Summary • Periodic signals repeat themselves at a specific time interval. 
• Sinusoidal signals are a special case of periodic signals.
• A sinusoidal signal can always be written in the form v(t) = V_P·cos(ωt + θ).
• It is often convenient, when analyzing a system's steady-state response to sinusoidal inputs, to express sinusoidal signals in terms of complex exponentials. This is possible because of Euler's formula:

e^(±jωt) = cos ωt ± j·sin ωt

• From Euler's formula, a sinusoidal signal can be expressed as the real part of a complex exponential:

v(t) = V_P·cos(ωt + θ) = Re{V_P·e^(j(ωt + θ))}

• The magnitude and phase angle of a complex exponential signal are conveniently expressed as a phasor:

V = V_P·e^(jθ)

• Using phasor notation, the above complex exponential signal can be written as:

V_P·e^(j(ωt + θ)) = V·e^(jωt)

• Phasors can then be operated on arithmetically in the same way as any other complex number. However, when operating on phasors, keep in mind that you are dealing with the amplitude and phase angle of a sinusoidal signal.

10.2 Exercises

Express the following complex numbers in rectangular form:
1.1. 3e^(j45°)
1.2. 5√2·e^(j135°)
1.3. 2.5e^(j90°)
1.4. 6e^(jπ)

Express the following complex numbers in complex exponential form:
2.1. 2 − j2
2.2. −j3
2.3. 6
2.4. 3 + j

Evaluate the following expressions. Express your results in complex exponential form.
3.1. −j2(j + 1)
3.2. (2 − j2) / (4 + j4)
3.3. 2e^(j45°) · 2/(j + 1)
3.4. j + 2/j

Represent the following sinusoids in phasor form:
4.1. 3cos(5t − 60°)
4.2. −2cos(300t + 45°)
4.3. sin(6t)
4.4. 7cos(3t)

Write the signal representing the real part of the following complex exponentials:
5.1. 5√2·e^(j(100t − 45°))
5.2. 3e^(jπ)·e^(j3t)
5.3. 2e^(j(πt − 30°)) + 4e^(j(4t + 20°))

10.3 Sinusoidal Steady-state System Response

In this section, the concepts presented in sections 10.1 and 10.2 are used to determine the sinusoidal steady-state response of electrical circuits. We will develop sinusoidal steady-state circuit analysis in terms of examples, rather than attempting to develop a generalized approach a priori. The approach is straightforward, so that a general analysis approach can be inferred from the application of the method to several simple circuits. The overall approach to introducing sinusoidal steady-state analysis techniques used in this section is as follows:

• We first determine the sinusoidal steady-state response of a simple RC circuit by solving the differential equation governing the system. This results directly in a solution which is a function of time; it is a time domain analysis technique. The approach is mathematically tedious, even for the simple circuit being analyzed.
• We then re-analyze the same RC circuit using complex exponentials and phasors. This approach results in the transformation of the governing time domain differential equation into an algebraic equation which is a function of frequency. It is said to describe the circuit behavior in the frequency domain. The frequency domain equation governing the system is then solved using phasor techniques and the result transformed back to the time domain. This approach tends to be mathematically simpler than the direct solution of the differential equation in the time domain, though in later sections we will simplify the approach even further.
• Several other examples of sinusoidal steady-state circuit analysis are then performed using frequency domain techniques in order to demonstrate application of the approach to more complex circuits.
It will be seen that, unlike time-domain analysis, the difficulty of the frequency domain analysis does not increase drastically as the circuit being analyzed becomes more complex.

Example 10.2: RC Circuit Sinusoidal Steady-state Response via Time-domain Analysis

In the circuit below, the input voltage is u(t) = V_P·cos(ωt) volts and the circuit response (or output) is the capacitor voltage, y(t). We want to find the steady-state response (as t → ∞).

[Circuit: voltage source u(t) = V_P·cos(ωt) in series with a resistor R; output y(t) taken across the capacitor C.]

The differential equation governing the circuit is:

dy(t)/dt + (1/RC)·y(t) = (V_P/RC)·cos(ωt)   Eq. 10.27

Since we are concerned only with the steady-state response, there is no need to determine the homogeneous solution of the differential equation (or, equivalently, the natural response of the system), so we will not be concerned with the initial conditions on the system – their effect will have died out by the time we are interested in the response. Thus, we only need to determine the particular solution of the above differential equation (the forced response of the system). Since the input function is a sinusoid, the forced response must be sinusoidal, so we assume that the forced response y_f(t) has the form:

y_f(t) = A·cos(ωt) + B·sin(ωt)   Eq. 10.28

Substituting equation (10.28) into equation (10.27) results in:

−Aω·sin(ωt) + Bω·cos(ωt) + (1/RC)·[A·cos(ωt) + B·sin(ωt)] = (V_P/RC)·cos(ωt)   Eq. 10.29

Equating coefficients on the sine and cosine terms results in two equations in two unknowns:

−Aω + B/RC = 0
Bω + A/RC = V_P/RC   Eq. 10.30

Solving equations (10.30) results in:

A = V_P / (1 + (ωRC)²)
B = V_P·ωRC / (1 + (ωRC)²)   Eq. 10.31

Substituting equations (10.31) into equation (10.28) and using the trigonometric identity A·cos(ωt) + B·sin(ωt) = √(A² + B²)·cos[ωt − tan⁻¹(B/A)] results in (after some fairly tedious algebra):

y_f(t) = (V_P / √(1 + (ωRC)²))·cos[ωt − tan⁻¹(ωRC)]   Eq. 10.32

Note: In all steps of the above analysis, the functions being used are functions of time. That is, for a particular value of ω, the functions vary with time. The above analysis is being performed in the time domain.

Example 10.3: RC Circuit Sinusoidal Steady-state Response via Frequency-domain Analysis

We now repeat Example 10.2, using phasor-based analysis techniques. The circuit being analyzed is the same as in Example 10.2; the input voltage is u(t) = V_P·cos(ωt) volts and the circuit response (or output) is the capacitor voltage, y(t). We still want to find the steady-state response (as t → ∞). In this example, we replace the physical input, u(t) = V_P·cos(ωt), with a conceptual input based on a complex exponential, u(t) = V_P·e^(jωt). The complex exponential input is chosen such that the real part of the complex input is equivalent to the physical input applied to the circuit. We will analyze the conceptual circuit with the complex valued input.

The differential equation governing the circuit is the same as in Example 10.2, but with the complex input:

dy(t)/dt + (1/RC)·y(t) = (V_P/RC)·e^(jωt)   Eq. 10.33

As in Example 10.2, we now assume a form of the forced response. In this case, however, our solution will be assumed to be a complex exponential:

y(t) = |Y|·e^(j(ωt + θ))   Eq. 10.34

which can be written in phasor form as:

y(t) = Y·e^(jωt)   Eq. 10.35

where the phasor Y is a complex number which can be expressed in either exponential or polar form:

Y = |Y|·e^(jθ) = |Y|∠θ   Eq. 10.36

Substituting (10.35) into equation (10.33) and taking the appropriate derivative results in:

jωY·e^(jωt) + (1/RC)·Y·e^(jωt) = (V_P/RC)·e^(jωt)   Eq. 10.37

We can divide equation (10.37) by e^(jωt) to obtain:

jωY + (1/RC)·Y = V_P/RC   Eq. 10.38

Equation (10.38) can be solved for Y:

(jω + 1/RC)·Y = V_P/RC  ⇒  Y = (V_P/RC) / (jω + 1/RC)   Eq. 10.39

so that:

Y = V_P / (1 + jωRC)   Eq. 10.40

The magnitude and phase of the output response can be determined from the phasor Y:

|Y| = V_P / √(1 + (ωRC)²)
∠Y = −tan⁻¹(ωRC)   Eq. 10.41

The complex exponential form of the system response is then, from equation (10.35):

y(t) = (V_P / √(1 + (ωRC)²))·e^(j(ωt − tan⁻¹(ωRC)))   Eq. 10.42

Since our physical input is the real part of the conceptual input, and since all circuit parameters are real valued, our physical output is the real part of equation (10.42) and the forced response is:

y_f(t) = (V_P / √(1 + (ωRC)²))·cos[ωt − tan⁻¹(ωRC)]   Eq. 10.43

which agrees with our result from the time-domain analysis of Example 10.2.

Notes:
• The transition from equation (10.37) to equation (10.38) removed the time-dependence of our solution. The solution is now no longer a function of time! The solution includes the phasor representations of the input and output, as well as (generally) frequency. Thus, equation (10.38) is said to be in the phasor domain or, somewhat more commonly, the frequency domain. The analysis remains in the frequency domain until we reintroduce time in equation (10.43).
• Equations in the frequency domain are algebraic equations rather than differential equations. This is a significant advantage mathematically, especially for higher-order systems.
• Circuit components must have purely real values for the above process to work.
We do not prove this, but merely make the claim that the process of taking the real part of the complex exponential form of the system response is not valid if circuit components (or any coefficients in the differential equation governing the system) are complex valued. Fortunately, this is not a strong restriction – complex values do not exist in the physical world.
• The complex exponential we use for our "conceptual" input, V_P·e^(jωt), is not physically realizable. That is, we cannot create this signal in the real world. It is a purely mathematical entity which we introduce solely for the purpose of simplifying the analysis. The complex form of the output response given by equation (10.42) is likewise not physically realizable.

Example 10.4: Numerical Example and Phasor Diagrams

We now examine the circuit of Example 10.3 with R = 1 kΩ, C = 1 µF, V_P = 5 V, and ω = 1000 rad/second, so that the input is u(t) = 5·cos(1000t).

In phasor form, the input is u(t) = U·e^(j1000t), so that the phasor U is U = 5e^(j0°) = 5∠0°. The phasor form of the output is given by equations (10.41):

|Y| = V_P / √(1 + (ωRC)²) = 5 / √(1 + (1000 · 1000 · 1×10⁻⁶)²) = 5/√2
∠Y = −tan⁻¹(ωRC) = −tan⁻¹(1000 · 1000 · 1×10⁻⁶) = −π/4 = −45°

and the phasor Y can be written as:

Y = (5/√2)·e^(−j45°) = (5/√2)∠−45°

We can create a phasor diagram of the input phasor U and the output phasor Y.

[Phasor diagram: U, length 5, along the real axis; Y, length 5/√2, at −45° relative to U.]

The phasor diagram shows the input and output phasors in the complex plane. The magnitudes of the phasors are typically labeled on the diagram, as is the phase difference between the two phasors. Note that since the phase difference between Y and U is negative, the output y(t) lags the input u(t).
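The phasor arithmetic in Example 10.4 is easy to verify numerically. The short Python sketch below (our own illustration, not part of the original text) evaluates Eq. (10.40) with the example's component values and recovers the magnitude 5/√2 and the phase of −45°:

```python
import cmath
import math

# Component values from Example 10.4
R = 1e3      # resistance, ohms
C = 1e-6     # capacitance, farads
Vp = 5.0     # input amplitude, volts
w = 1000.0   # frequency, rad/s

# Output phasor, Eq. (10.40): Y = Vp / (1 + j*w*R*C)
Y = Vp / (1 + 1j * w * R * C)

magnitude = abs(Y)                        # |Y| = 5/sqrt(2) ≈ 3.536
phase_deg = math.degrees(cmath.phase(Y))  # angle(Y) = -45.0 degrees
print(magnitude, phase_deg)
```

Because Python's built-in `complex` type handles the division directly, no separate magnitude/phase bookkeeping is needed.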
The time-domain form of the output is:

y(t) = (5/√2)·cos(1000t − 45°)

A time-domain plot of the input and output emphasizes that the output lags the input, as indicated by our phasor diagram; such a plot replicates what would be seen from a measurement of the input and output voltages.

[Figure: time-domain plot of the input u(t) and output y(t); the output lags the input by 45°.]

Example 10.5: RL Circuit Sinusoidal Steady-state Response

In the circuit below, the input voltage is V_P·cos(ωt + 30°) volts and the circuit response (or output) is the inductor current, i_L(t). We want to find the steady-state response i_L(t → ∞).

[Circuit: voltage source V_P·cos(ωt + 30°) in series with a resistor R and an inductor L; output is the inductor current i_L(t).]

The differential equation governing the circuit can be determined by applying KVL around the single loop:

L·di_L(t)/dt + R·i_L(t) = u(t)   Eq. 10.44

We apply the conceptual input, u(t) = V_P·e^(j(ωt + 30°)), to this equation. We can represent this input in phasor form as:

u(t) = U·e^(jωt)   Eq. 10.45

where the phasor U = V_P∠30°. Likewise, we represent the output in phasor form:

i_L(t) = I_L·e^(jωt)   Eq. 10.46

where the phasor I_L = |I_L|∠θ. Substituting our assumed input and output in phasor form into equation (10.44) results in:

jωL·I_L·e^(jωt) + R·I_L·e^(jωt) = U·e^(jωt)   Eq. 10.47

As in Example 10.3, we divide through by e^(jωt) to obtain the frequency domain governing equation:

jωL·I_L + R·I_L = U   Eq. 10.48

so that:

I_L = U / (R + jωL) = V_P∠30° / (R + jωL)   Eq. 10.49

The phasor I_L therefore has magnitude and phase:

|I_L| = V_P / √(R² + (ωL)²)
θ = 30° − tan⁻¹(ωL/R)   Eq. 10.50

The exponential form of the inductor current is therefore:

i_L(t) = (V_P / √(R² + (ωL)²))·e^(j[ωt + 30° − tan⁻¹(ωL/R)])   Eq. 10.51

and the actual physical inductor current is:

i_L(t) = (V_P / √(R² + (ωL)²))·cos[ωt + 30° − tan⁻¹(ωL/R)]   Eq. 10.52

Example 10.6: Series RLC Circuit Sinusoidal Steady-state Response

Consider the series RLC circuit below. The input to the circuit is v_s(t) = 2·cos(ωt) volts. Find the output v(t).

[Circuit: voltage source v_s(t) in series with R, L, and C; output v(t) taken across the capacitor.]

In section 8.1, it was determined that the differential equation governing the system is:

d²v(t)/dt² + (R/L)·dv(t)/dt + (1/LC)·v(t) = (1/LC)·v_S(t)   Eq. 10.53

Assuming that the input is a complex exponential whose real part is the given v_S(t) provides:

v_S(t) = 2e^(jωt)   Eq. 10.54

The output is assumed to have the phasor form:

v(t) = V·e^(jωt)   Eq. 10.55

where V contains the (unknown) magnitude and phase of the output voltage. Substituting equations (10.54) and (10.55) into equation (10.53) results in:

(jω)²·V·e^(jωt) + (R/L)·(jω)·V·e^(jωt) + (1/LC)·V·e^(jωt) = (1/LC)·2e^(jωt)   Eq. 10.56

Dividing through by e^(jωt) and noting that j² = −1 results in:

[1/LC − ω² + j(R/L)ω]·V = 2/LC

so that:

V = (2/LC) / (1/LC − ω² + j(R/L)ω)   Eq. 10.57

The magnitude and phase of V are:

|V| = (2/LC) / √((1/LC − ω²)² + ((R/L)ω)²)
∠V = −tan⁻¹[ (Rω/L) / (1/LC − ω²) ]

and the capacitor voltage is:

v(t) = (2/LC) / √((1/LC − ω²)² + ((R/L)ω)²) · cos{ωt − tan⁻¹[ (Rω/L) / (1/LC − ω²) ]}   Eq. 10.58

The complex arithmetic in this case becomes a bit tedious, but the complexity of the frequency-domain approach is nowhere near that of the time-domain solution of the second-order differential equation.
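Equation (10.57) can also be evaluated directly as a single complex number, which sidesteps the tedious magnitude/phase algebra. The Python sketch below does this; Example 10.6 leaves R, L, C, and ω symbolic, so the numerical values here are purely illustrative assumptions:

```python
import cmath
import math

# Assumed component values -- Example 10.6 leaves these symbolic
R = 100.0   # ohms
L = 0.1     # henries
C = 1e-4    # farads
w = 200.0   # rad/s

# Output phasor, Eq. (10.57): V = (2/LC) / (1/LC - w^2 + j(R/L)w)
V = (2 / (L * C)) / (1 / (L * C) - w**2 + 1j * (R / L) * w)

# |V| and angle(V) give v(t) = |V| cos(wt + angle(V)), matching Eq. (10.58)
print(abs(V), math.degrees(cmath.phase(V)))
```

One complex division replaces the square-root and arctangent manipulations of the closed-form expressions.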
Section Summary

• The steady-state response of a linear time invariant system to a sinusoidal input is a sinusoid with the same frequency as the input sinusoid. Only the amplitude and phase angle of the output sinusoid can be different from the input sinusoid, so the solution is entirely characterized by the magnitude and phase angle of the output sinusoid.
• The steady-state response of a system to a sinusoidal input can be determined by assuming a form of the solution, substituting the input signal and the output signal into the governing differential equation, and solving for the amplitude and phase angle of the output sinusoid.
• The solution approach is simplified if the sinusoidal signals are represented as complex exponentials. The approach is further simplified if these complex exponentials are represented in phasor form – the phasor is a complex number which provides the amplitude and phase angle of the complex exponential.
• The above solution approaches convert the governing differential equation into an algebraic equation. If complex exponentials in phasor form are used to represent the signals of interest, the governing algebraic equation can have complex coefficients.
• The relationships between the steady-state sinusoidal inputs and outputs are described by a relationship between the amplitudes (generally a ratio between the output amplitude and the input amplitude) and the phase angles (generally a difference between the output and input phase angles).
  o These relationships are often displayed graphically in a phasor diagram.

10.3 Exercises

1. The differential equation governing a circuit is:

2·dy(t)/dt + 6·y(t) = u(t)

where u(t) is the input and y(t) is the output. Determine the steady-state response of the circuit to an input u(t) = 2·cos(3t).

2. For the circuit shown below, u(t) is the input and y(t) is the output.
a. Write the differential equation relating u(t) and y(t).
b. Determine y(t), t → ∞, if u(t) = 3·cos(2t).

[Circuit: source u(t), a 1 Ω resistor, and a 0.5 F capacitor; output y(t).]

10.4 Phasor Representations of Circuit Elements

In section 10.3, we determined the sinusoidal steady-state response of an electrical circuit by transforming the circuit's governing differential equation into the frequency domain or phasor domain. This transformation converted the differential equation into an algebraic equation. This conversion significantly simplified the subsequent circuit analysis, at the relatively minor expense of performing some complex arithmetic. In this module, we will further simplify this analysis by transforming the circuit itself directly into the frequency domain and writing the governing algebraic equations directly. This approach eliminates the necessity of ever writing the differential equation governing the circuit (as long as we are only interested in the circuit's sinusoidal steady-state response). This approach also allows us to apply analysis techniques previously used only for purely resistive circuits to circuits containing energy storage elements.

10.4.1 Phasor Domain Voltage-current Relationships

In section 10.2, we introduced phasors as a method for representing sinusoidal signals. Phasors provide the magnitude and phase of the sinusoid. For example, the signal v(t) = V_P·cos(ωt + θ) has amplitude V_P and phase angle θ. This information can be represented in phasor form as:

V = V_P·e^(jθ)

in which a complex exponential is used to represent the phase. Equivalently, the phase can be represented as an angle, and the phasor form of the signal can be written as:

V = V_P∠θ

Note that the phasor does not provide the frequency of the signal, ω.
To include frequency information, the signal is typically written in complex exponential form as:

v(t) = V·e^(jωt)

In section 10.3, we used phasor representations to determine the steady-state sinusoidal response of electrical circuits by representing the signals of interest as complex exponentials in phasor form. When signals in the governing differential equation are represented in this form, the differential equation becomes an algebraic equation, resulting in a significant mathematical simplification. In section 10.3, it was also noted that the mathematics could be simplified further by representing the circuit itself directly in the phasor domain. In this section, we present the phasor form of voltage-current relations for our basic circuit elements: resistors, inductors, and capacitors. The voltage-current relations for these elements are presented individually in the following sub-sections.

10.4.2 Resistors

The voltage-current relationship for resistors is provided by Ohm's Law:

v(t) = R·i(t)   Eq. 10.59

If the voltage and current are represented in phasor form as:

v(t) = V·e^(jωt)   Eq. 10.60

and:

i(t) = I·e^(jωt)   Eq. 10.61

equation (10.59) can be written:

V·e^(jωt) = R·I·e^(jωt)   Eq. 10.62

Cancelling the e^(jωt) term from both sides results in:

V = R·I   Eq. 10.63

The voltage-current relationship for resistors (Ohm's Law) is thus identical in the time and frequency domains. Schematically, the time- and frequency-domain representations of a resistor are as shown in Fig. 10.10.

Figure 10.10. Voltage-current relations for a resistor: (a) time domain; (b) frequency domain.

Equation (10.63) shows that, in the frequency domain, the voltage and current in a resistor are related by a purely real, constant multiplicative factor.
Thus, the sinusoidal voltage and current for a resistor are simply scaled versions of one another – there is no phase difference in the voltage and current for a resistor. This is shown graphically in Fig. 10.11.

Figure 10.11. Voltage and current waveforms for a resistor (current in phase with voltage).

A representative phasor diagram of the resistor's voltage and current will appear as shown in Fig. 10.12 – the phasors representing voltage and current will always be in the same direction, though their lengths will typically be different.

Figure 10.12. Voltage-current phasor diagram for a resistor.

10.4.3 Inductors

The voltage-current relationship for inductors is:

v(t) = L·di(t)/dt   Eq. 10.64

As with the resistive case presented above, we assume that the voltage and current are represented in phasor form as v(t) = V·e^(jωt) and i(t) = I·e^(jωt), respectively. Substituting these expressions into equation (10.64) results in:

V·e^(jωt) = L·(d/dt)[I·e^(jωt)] = L·(jω)·I·e^(jωt)   Eq. 10.65

Dividing equation (10.65) by e^(jωt) and re-arranging terms slightly results in the phasor domain or frequency domain representation of the inductor's voltage-current relationship:

V = jωL·I   Eq. 10.66

In the frequency domain, therefore, the inductor's phasor voltage is proportional to its phasor current. The constant of proportionality is, unlike the case of the resistor, an imaginary number and is a function of the frequency, ω. It is important to note that the differential relationship of equation (10.64) has been replaced with an algebraic voltage-current relationship. Schematically, the time- and frequency-domain representations of an inductor are as shown in Fig. 10.13.
Figure 10.13. Inductor voltage-current relations: (a) time domain; (b) frequency domain.

The factor of j in the voltage-current relationship of equation (10.66) introduces a 90° phase shift between inductor voltage and current. Since j = e^(j90°), the voltage across an inductor leads the current by 90° (or, equivalently, the current lags the voltage by 90°). The relative phase difference between inductor voltage and current is shown graphically in the time domain in Fig. 10.14. A representative phasor diagram of the inductor's voltage and current will appear as shown in Fig. 10.15 – the voltage phasor will always lead the current phasor by 90°, and the length of the voltage phasor will be a factor of ωL times the length of the current phasor.

Figure 10.14. Voltage and current waveforms for an inductor (current lags voltage by 90°).

Figure 10.15. Voltage-current phasor diagram for an inductor.

10.4.4 Capacitors

The voltage-current relationship for capacitors is:

i(t) = C·dv(t)/dt   Eq. 10.67

As with the previous cases, we assume that the voltage and current are represented in phasor form as v(t) = V·e^(jωt) and i(t) = I·e^(jωt), respectively. Substituting these expressions into equation (10.67) results in:

I·e^(jωt) = C·(d/dt)[V·e^(jωt)] = C·(jω)·V·e^(jωt)   Eq. 10.68

Dividing the above by e^(jωt) results in the phasor domain or frequency domain representation of the capacitor's voltage-current relationship:

I = jωC·V   Eq. 10.69

To be consistent with our voltage-current relationships for resistors and inductors, we write the voltage in terms of the current. Thus:

V = (1/jωC)·I   Eq. 10.70

In the frequency domain, therefore, the capacitor's phasor voltage is proportional to its phasor current. The constant of proportionality is an imaginary number and is a function of the frequency, ω. As with inductors, the differential voltage-current relationship has been replaced with an algebraic relationship. Schematically, the time- and frequency-domain representations of a capacitor are as shown in Fig. 10.16.

Figure 10.16. Capacitor voltage-current relations: (a) time domain; (b) frequency domain.

The factor of 1/j in the voltage-current relationship of equation (10.70) introduces a 90° phase shift between capacitor voltage and current. Since 1/j = e^(−j90°) = 1∠−90°, the voltage across a capacitor lags the current by 90° (or, equivalently, the current leads the voltage by 90°). The relative phase difference between capacitor voltage and current is shown graphically in the time domain in Fig. 10.17. A representative phasor diagram of the capacitor's voltage and current will appear as shown in Fig. 10.18 – the voltage phasor will always lag the current phasor by 90°, and the length of the voltage phasor will be a factor of 1/ωC times the length of the current phasor.

Figure 10.17. Voltage and current waveforms for a capacitor (current leads voltage by 90°).

Figure 10.18. Voltage-current phasor diagram for a capacitor.
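The ±90° relationships in equations (10.66) and (10.70) can be checked numerically. In the Python sketch below, the frequency, element values, and current phasor are all assumed values chosen only for illustration:

```python
import cmath
import math

w = 100.0   # assumed frequency, rad/s
L = 0.5     # assumed inductance, H
C = 2e-3    # assumed capacitance, F

# Assumed current phasor: 2 A at a 30-degree phase angle
I = 2 * cmath.exp(1j * math.radians(30))

V_L = 1j * w * L * I    # inductor:  V = jwL * I    (Eq. 10.66)
V_C = I / (1j * w * C)  # capacitor: V = I / (jwC)  (Eq. 10.70)

# Phase of each voltage relative to the current, in degrees
lead = math.degrees(cmath.phase(V_L) - cmath.phase(I))  # +90: inductor voltage leads
lag = math.degrees(cmath.phase(V_C) - cmath.phase(I))   # -90: capacitor voltage lags
print(lead, lag)
```

The magnitudes also come out as stated in the text: |V_L| = ωL·|I| and |V_C| = |I|/(ωC), regardless of the phase angle assumed for I.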
10.4.5 Impedance and Admittance

The frequency domain voltage-current characteristics presented in the previous subsections indicate that the voltage difference across a circuit element can be written in terms of a multiplicative factor (which can be a complex number) times the current through the element. In order to generalize and formalize this concept, we define impedance as the ratio of phasor voltage to phasor current. Impedance is typically denoted as Z and is defined mathematically as:

Z = V/I   Eq. 10.71

Therefore, if the phasor voltage and current for a circuit element are given by:

V = V_P·e^(jθ)

and:

I = I_P·e^(jφ)

then the impedance is:

Z = V/I = (V_P/I_P)·e^(jθ_Z)   Eq. 10.72

or, alternatively:

Z = (V_P/I_P)∠θ_Z   Eq. 10.73

where θ_Z is the angle of Z. The magnitude of the impedance is the ratio of the magnitude of the voltage to the magnitude of the current:

|Z| = V_P/I_P = |V|/|I|   Eq. 10.74

and the angle of the impedance is the difference between the voltage phase angle and the current phase angle:

θ_Z = ∠Z = ∠V − ∠I = θ − φ   Eq. 10.75

The impedance can also be represented in rectangular coordinates as:

Z = R + jX   Eq. 10.76

where R is the real part of the impedance (called the resistance or the resistive component of the impedance) and X is the imaginary part of the impedance (called the reactance or the reactive part of the impedance). R and X are related to |Z| and ∠Z by the usual rules relating rectangular and polar coordinates, so that:

|Z| = √(R² + X²)
∠Z = tan⁻¹(X/R)

and:

R = Re{Z} = |Z|·cos θ_Z
X = Im{Z} = |Z|·sin θ_Z

Impedance is an extremely useful concept, in that it can be used to represent the voltage-current relations for any two-terminal electrical circuit element in the frequency domain, as indicated in Fig. 10.19.
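Equations (10.71) through (10.76) translate directly into complex arithmetic. The Python sketch below computes the impedance, and its resistive and reactive parts, from a pair of assumed (purely illustrative) voltage and current phasors:

```python
import cmath
import math

# Assumed phasors for a two-terminal element (illustrative values only)
V = 10 * cmath.exp(1j * math.radians(40))  # voltage phasor: 10 V at 40 degrees
I = 2 * cmath.exp(1j * math.radians(-5))   # current phasor: 2 A at -5 degrees

Z = V / I                             # impedance, Eq. (10.71)
mag_Z = abs(Z)                        # |Z| = |V|/|I| = 5,         Eq. (10.74)
ang_Z = math.degrees(cmath.phase(Z))  # angle(Z) = 40 - (-5) = 45, Eq. (10.75)
R, X = Z.real, Z.imag                 # resistance and reactance,  Eq. (10.76)
print(mag_Z, ang_Z, R, X)
```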
Figure 10.19. Impedance representation of a two-terminal electric circuit.

The admittance, Y, is defined as the reciprocal of impedance:

Y = 1/Z   Eq. 10.77

Admittance is also a complex number, and is written in rectangular coordinates as:

Y = G + jB   Eq. 10.78

where G is called the conductance and B is the susceptance. Impedances and admittances for the three electrical circuit elements presented previously in this section are provided in Table 10.1 below. These results are readily obtained from the previously presented phasor domain voltage-current relationships and the definitions of impedance and admittance. The relations provided in Table 10.1 should be committed to memory.

Element   | Impedance | Admittance
Resistor  | R         | 1/R
Inductor  | jωL       | 1/(jωL)
Capacitor | 1/(jωC)   | jωC

Table 10.1. Impedances and admittances for passive circuit elements.

Example 10.7: Provide the Phasor-domain Representation of the Circuit Below

[Circuit: voltage source 20·cos(10t + 30°) with a 2 Ω resistor, a 0.1 H inductor, and a 1/30 F capacitor.]

The input amplitude is 20 volts, and the input phase is 30°, so the phasor representation of the input voltage is 20∠30°. The frequency of the input voltage is ω = 10 rad/sec. Thus, the impedances of the passive circuit elements are as follows:

• Resistor: Z = R = 2 Ω
• Inductor: Z = jωL = j·(10 rad/sec)·(0.1 H) = j1 Ω
• Capacitor: Z = 1/(jωC) = 1 / [j·(10 rad/sec)·(1/30 F)] = 3/j Ω = −j3 Ω

The phasor-domain circuit is then drawn with the source 20∠30° and the impedances 2 Ω, j1 Ω, and −j3 Ω.

Section Summary

• Voltage-current relations for our passive circuit elements in the frequency domain are:
  o Resistor: V = R·I
  o Inductor: V = jωL·I
  o Capacitor: V = (1/jωC)·I
• The impedance of a circuit element is the ratio of the phasor voltage to the phasor current in that element:
  o Resistor: Z = R
  o Inductor: Z = jωL
  o Capacitor: Z = 1/(jωC)
• Impedance is, in general, a complex number. Units of impedance are ohms (Ω). The real part of impedance is the resistance. The imaginary part of impedance is the reactance. Impedance, for general circuit elements, plays the same role as resistance does for resistive circuit elements. In fact, for purely resistive circuit elements, impedance is simply the resistance of the element.
• Admittance is the inverse of impedance.
• Admittance is, in general, a complex number. The real part of admittance is conductance. The imaginary part of admittance is susceptance. For purely resistive circuits, admittance is the same as conductance.
• Impedance and admittance are, in general, functions of frequency.
• Impedance and admittance are not phasors. They are complex numbers – there is no sinusoidal time domain function corresponding to impedance or admittance. (Phasors, by definition, are a way to describe a time-domain sinusoidal function.)

10.4 Exercises

1. For the circuit shown below, u(t) is the input and y(t) is the output. Determine y(t), t → ∞, if u(t) = 3·cos(2t).

[Circuit: source u(t), a 1 Ω resistor, and a 0.5 F capacitor; output y(t).]

2. Sketch a diagram of the input and output phasors for exercise 1 above.

3. Determine the impedance of the circuit elements shown below if Vin(t) = 2·cos(4t): a 1/8 F capacitor, a 2 mH inductor, and a 5 kΩ resistor.

4. Determine the impedance of the circuit elements in exercise 3 if Vin(t) = 3·cos(8t).

10.5 Direct Frequency Domain Circuit Analysis

In section 10.3, we determined the steady-state response of electrical circuits to sinusoidal signals using phasor representations of the signals involved, and time-domain representations of the circuit element voltage-current relations.
Applying KVL and KCL in this manner resulted in governing equations in which the time dependence had been removed, which converted the governing equations from differential equations to algebraic equations. Unknowns in the resulting algebraic equations were the phasor representations of the signals. These equations could then be solved to determine the desired signals in phasor form; these results could then be used to determine the time-domain representations of the signals.

In section 10.4, we replaced the time-domain voltage-current relations for passive electrical circuit elements with impedances, which provide voltage-current relations for the circuit elements directly in the frequency domain. At the end of section 10.4, we used these impedances to schematically represent a circuit directly in the frequency domain.

In this section, we will use this frequency-domain circuit representation to perform circuit analysis directly in the frequency domain, using phasor representations of the signals and impedance representations of the circuit elements. This allows us to write the algebraic equations governing the phasor representation of the circuit directly, without any reference to the time-domain behavior of the circuit. As in section 10.3, these equations can be solved to determine the behavior of the circuit in terms of phasors, and the results transformed to the time domain.

Performing the circuit analysis directly in the frequency domain using impedances to represent the circuit elements can result in a significant simplification of the analysis. In addition, many circuit analysis techniques which were previously applied to resistive circuits (e.g.
circuit reduction, nodal analysis, mesh analysis, superposition, Thévenin's and Norton's Theorems) are directly applicable in the frequency domain. Since these analysis techniques have been presented earlier for resistive circuits, in this section we will simply:
• Provide examples of applying these analysis methods to frequency-domain circuits, and
• Note any generalizations relative to using phasors in these analysis methods.

Throughout this section, the reader should firmly keep in mind that we are dealing only with the steady-state responses of circuits to sinusoidal forcing functions. It is sometimes easy to lose track of this fact, since the sinusoidal nature of the signal is often not explicitly stated, but any time we deal with impedances and phasors, we are working with sinusoidal signals.

10.5.1 Kirchhoff's Voltage Law

Kirchhoff's Voltage Law states that the sum of the voltage differences around any closed loop is zero. Therefore, if v1(t), v2(t), …, vN(t) are the voltages around some closed loop, KVL provides:

Σ_{k=1}^{N} v_k(t) = 0    Eq. 10.79

Substituting the phasor representation of the voltages results in:

Σ_{k=1}^{N} V_k·e^{jωt} = 0    Eq. 10.80

Dividing equation (10.80) by e^{jωt} results in:

Σ_{k=1}^{N} V_k = 0    Eq. 10.81

So KVL states that the sum of the phasor voltages around any closed loop is zero.

10.5.2 Kirchhoff's Current Law

Kirchhoff's Current Law states that the sum of the currents entering any node is zero. Therefore, if i1(t), i2(t), …, iN(t) are the currents entering a node, KCL provides:

Σ_{k=1}^{N} i_k(t) = 0    Eq. 10.82

Substituting the phasor representation of the currents results in:

Σ_{k=1}^{N} I_k·e^{jωt} = 0    Eq. 10.83

Dividing equation (10.83) by e^{jωt} results in:

Σ_{k=1}^{N} I_k = 0    Eq.
10.84

So KCL states that the sum of the phasor currents entering (or leaving) a node is zero.

Important Result: KVL and KCL apply directly in the frequency domain.

Example 10.8: RC Circuit Steady-state Sinusoidal Response

In this example, we will revisit Example 10.3. In that example, we determined the capacitor voltage in the circuit below using phasor analysis techniques applied to the circuit's time-domain governing equation. In this example, we will represent the circuit itself directly in the frequency domain, using impedance representations of the circuit elements.

[Time-domain circuit: a VP·cos(ωt) source in series with a resistor R and a capacitor C; y(t) is the capacitor voltage. Frequency-domain circuit: a VP∠0° source in series with impedances R and 1/(jωC); the capacitor voltage phasor is Y.]

By the definition of impedance, we can determine the current through the capacitor to be:

I = Y/Z_C = Y/[1/(jωC)] = jωC·Y

The voltage across the resistor can now, by the definition of impedance, be written as V_R = R·I = R(jωC·Y). We now apply KVL for phasors to the frequency-domain circuit, which leads to:

VP∠0° = R(jωC·Y) + Y

Solving for Y in this equation provides:

Y = VP∠0° / (1 + jωRC)

By the rules of complex arithmetic, we can determine the magnitude and phase angle of Y to be:

|Y| = VP / √(1 + (ωRC)²)
∠Y = −tan⁻¹(ωRC)

And the time-domain solution for y(t) is thus:

y(t) = [VP / √(1 + (ωRC)²)]·cos[ωt − tan⁻¹(ωRC)]

10.5.3 Parallel and Series Impedances & Circuit Reduction

Consider the case of N impedances connected in series, as shown in Fig. 10.20. Since the elements are in series, and since we have seen that KCL applies to phasors, the phasor current I flows through each of the impedances. Applying KVL for phasors around the single loop, and incorporating the definition of impedance, we obtain:

V = I(Z1 + Z2 + ⋯ + ZN)    Eq.
10.85

[Figure 10.20. Series combination of impedances: voltage phasors V1, V2, …, VN across impedances Z1, Z2, …, ZN, carrying the common current phasor I, with total voltage V.]

If we define Z_eq as the equivalent impedance of the series combination, we have V = I·Z_eq, where:

Z_eq = Z1 + Z2 + ⋯ + ZN    Eq. 10.86

So impedances in series sum directly. Thus, impedances in series can be combined in the same way as resistances in series. By extension of the above result, we can develop a voltage divider formula for phasors. Without derivation, we state that the phasor voltage across the kth impedance in a series combination of N impedances as shown in Fig. 10.20 can be determined as:

V_k = V·Z_k / (Z1 + Z2 + ⋯ + ZN)    Eq. 10.87

Now consider N impedances connected in parallel, as shown in Fig. 10.21. The phasor voltage V appears across each element, so applying KCL for phasors, together with the definition of admittance, provides:

I = V(Y1 + Y2 + ⋯ + YN)    Eq. 10.88

[Figure 10.21. Parallel combination of impedances: current phasors I1, I2, …, IN through impedances Z1, Z2, …, ZN, with total current I and common voltage V.]

If we define Y_eq as the equivalent admittance of the parallel combination, we have:

I = V·Y_eq    Eq. 10.89

where:

Y_eq = Y1 + Y2 + ⋯ + YN    Eq. 10.90

So admittances in parallel sum directly. Converting our admittances to impedances indicates that the equivalent impedance of a parallel combination of N impedances as shown in Fig. 10.21 is:

Z_eq = 1 / (1/Z1 + 1/Z2 + ⋯ + 1/ZN)    Eq. 10.91

Thus, impedances in parallel can be combined in the same way as resistances in parallel. By extension of the above result, we can develop a current divider formula for phasors. Without derivation, we state that the phasor current through the kth impedance in a parallel combination of N impedances as shown in Fig. 10.21 can be determined as:

I_k = I·(1/Z_k) / (1/Z1 + 1/Z2 + ⋯ + 1/ZN)    Eq. 10.92

So our current division relationships for resistors in parallel apply directly in the frequency domain for impedances in parallel.

Important Result: All circuit reduction techniques for resistances apply directly in the frequency domain for impedances.
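As a numerical illustration of these combination rules (a sketch added here, not part of the original text; the helper names `series`, `parallel`, and `polar` are our own), Python's built-in complex type handles the arithmetic directly. The element values below are the impedances found in Example 10.7:

```python
import cmath

def series(*Z):
    # Impedances in series sum directly (Eq. 10.86)
    return sum(Z)

def parallel(*Z):
    # Impedances in parallel combine via reciprocals (Eq. 10.91)
    return 1 / sum(1 / z for z in Z)

def polar(Z):
    # Magnitude and phase angle (in degrees) of a complex impedance
    return abs(Z), cmath.phase(Z) * 180 / cmath.pi

# Impedances from Example 10.7: R = 2 ohms, inductor j1 ohm, capacitor -j3 ohms
ZR, ZL, ZC = 2, 1j, -3j

# Capacitor in parallel with the series R-L combination
Zeq = parallel(ZC, series(ZR, ZL))
mag, ang = polar(Zeq)
print(round(mag, 2), round(ang, 1))   # 2.37 -18.4
```

The printed value matches the equivalent impedance computed by hand in Example 10.9 (2.37∠−18°, to the precision carried in the text).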
Likewise, voltage and current divider relationships apply to phasor circuits in the frequency domain exactly as they apply to resistive circuits in the time domain.

Example 10.9

Use circuit reduction techniques to determine the phasor current I leaving the source in the circuit below. (Note: the circuit below is the frequency-domain circuit we obtained in Example 10.7.)

[Circuit: a 20∠30° source driving a −j3 Ω capacitor in parallel with the series combination of a 2 Ω resistor and a j1 Ω inductor; I is the source current.]

Since impedances in series add directly, the inductor and resistor can be combined into a single equivalent impedance of (2 + j1) Ω. The capacitor is then in parallel with this equivalent impedance. Since impedances in parallel combine in the same way as resistors in parallel, the equivalent impedance of this parallel combination can be calculated by dividing the product of the impedances by their sum, so:

Z_eq = (−j3)(2 + j1) / [(−j3) + (2 + j1)] Ω = (3 − j6)/(2 − j2) Ω

Converting this impedance to polar form results in Z_eq = 2.37∠−18° Ω. Using the reduced circuit and the definition of impedance, we can see that:

I = 20∠30° / (2.37∠−18° Ω) = (20/2.37)∠[30° − (−18°)] A

So:

I = 8.44∠48° A

Example 10.10

Use circuit reduction techniques to determine the current i(t) through the inductor in the circuit below.

[Circuit: a 5cos(2t) V source in series with a 2 Ω resistor, driving a 2 H inductor in parallel with the series combination of a 0.125 F capacitor and a 4 Ω resistor; i(t) is the inductor current.]

With ω = 2 rad/sec, the frequency-domain representation of the circuit is as shown below; in that figure, we have also defined the current phasor I_S leaving the source.

[Frequency-domain circuit: a 5∠0° source in series with a 2 Ω resistor, driving a j4 Ω inductor in parallel with the series combination of a −j4 Ω capacitor and a 4 Ω resistor; I is the inductor current.]

We now employ circuit reduction techniques to determine the phasor I.
To do this, we first determine the circuit impedance seen by the source; this impedance allows us to determine the source current I_S. The current I can then be determined from a current divider relation and I_S.

The impedance of the series combination of the capacitor and the 4 Ω resistor is readily obtained by adding their individual impedances, giving (4 − j4) Ω. This equivalent impedance is then in parallel with the inductor's impedance; the equivalent impedance of this parallel combination is (j4)(4 − j4)/[(j4) + (4 − j4)] Ω = (4 + j4) Ω.

The source current is then, by the definition of impedance:

I_S = 5∠0° / [(4 + j4) Ω + 2 Ω] = 0.69∠−33.7° A

The circuit containing the inductor in parallel with the (4 − j4) Ω branch, along with our current divider formula, provides:

I = [(4 − j4) Ω / ((4 − j4) Ω + j4 Ω)]·I_S = (1 − j1)·0.69∠−33.7° = 0.98∠−78.7° A

And the current i(t) = 0.98·cos(2t − 78.7°) A.

10.5.4 Nodal and Mesh Analysis

Nodal analysis and mesh analysis techniques have been previously applied to resistive circuits in the time domain. In nodal analysis, we applied KCL at independent nodes and used Ohm's Law to write the resulting equations in terms of the node voltages. In mesh analysis, we applied KVL and used Ohm's Law to write the resulting equations in terms of the mesh currents. In the frequency domain, as we have seen in previous sub-sections, KVL and KCL apply directly to the phasor representations of voltages and currents.
Also, in the frequency domain, impedances can be used to represent voltage-current relations for circuit elements in the same way that Ohm's Law applied to resistors in the time domain (the relation V = I·Z in the frequency domain corresponds exactly to the relation v(t) = R·i(t) in the time domain). Thus, nodal analysis and mesh analysis apply to frequency-domain circuits in exactly the same way as to time-domain resistive circuits, with the following modifications:
• The circuit excitations and responses are represented by phasors
• Phasor representations of node voltages and mesh currents are used
• Impedances are used in place of resistances

Application of nodal and mesh analysis to frequency-domain circuit analysis is illustrated in the following examples.

Example 10.11

Use nodal analysis to determine the current i(t) in the circuit of Example 10.10.

The desired frequency-domain circuit was previously determined in Example 10.10. Nodal analysis of the frequency-domain circuit proceeds exactly as was done in the case of resistive circuits. The reference voltage, V_R = 0, and our single node voltage, V_A, for this circuit are defined on the circuit below.

[Circuit: the 5∠0° source and 2 Ω resistor connect to node A; the j4 Ω inductor and the series (4 − j4) Ω branch connect node A to the reference node.]

Applying KCL in phasor form at node A provides:

(5∠0° − V_A)/(2 Ω) − V_A/[(4 − j4) Ω] − V_A/(j4 Ω) = 0

Solving for V_A gives V_A = 3.92∠11.31° V. By the definition of impedance, the desired current phasor is:

I = V_A/(j4 Ω) = 3.92∠11.31° / 4∠90° = 0.98∠−78.7° A

so i(t) = 0.98·cos(2t − 78.7°), which is consistent with our result obtained via circuit reduction in Example 10.10.

Example 10.12

Use mesh analysis to determine the current i(t) in the circuit of Examples 10.10 and 10.11.
The desired frequency-domain circuit was previously determined in Example 10.10. Mesh analysis of the frequency-domain circuit proceeds exactly as for resistive circuits. The figure below shows our choice of mesh loops; the series resistor-capacitor combination has been combined into a single equivalent impedance of (4 − j4) Ω in the figure, for clarity.

[Circuit: the 5∠0° source and 2 Ω resistor in mesh loop I1; the (4 − j4) Ω branch shared between loops I1 and I2; the j4 Ω inductor in loop I2; I is the inductor current.]

KVL around the mesh loop I1 provides:

5∠0° − 2·I1 − (4 − j4)(I1 − I2) = 0

KVL around the mesh loop I2 provides:

(4 − j4)(I2 − I1) + j4·I2 = 0

The second equation above can be simplified to provide I2 = (1 − j)·I1. Using this result to eliminate I1 in the mesh equation for loop I1, and simplifying, provides:

5∠0° = [(6 − j4)/(1 − j) + (j4 − 4)]·I2

So I2 = 0.98∠−78.7°. The mesh current I2 is simply the desired current I, so in the time domain:

i(t) = 0.98·cos(2t − 78.7°)

which is consistent with our results from Examples 10.10 and 10.11.

Important Result: Nodal and mesh analysis methods apply to phasor circuits exactly as they apply to resistive circuits in the time domain. Impedances simply replace resistances, and quantities of interest become complex valued.

10.5.5 Superposition

The extension of superposition to the frequency domain is an extremely important topic. Several common analysis techniques you will encounter later in this course and in future courses (frequency response, Fourier Series, and Fourier Transforms, for example) will depend heavily upon the superposition of sinusoidal signals. In this sub-section, we introduce the basic concepts involved.

In all of our steady-state sinusoidal analyses, we have required that the circuit is linear.
(The statement that the steady-state response to a sinusoidal input is a sinusoid at the same frequency requires the system to be linear. Nonlinear systems do not necessarily have this characteristic.) Thus, all phasor circuits are linear and superposition must apply. Thus, if a phasor circuit has multiple inputs, we can calculate the response of the circuit to each input individually and sum the results to obtain the overall response.

It is important to realize, however, that the final step of summing the individual contributions to obtain the overall response can, in general, only be done in the time domain. Since the phasor representation of the circuit response implicitly assumes a particular frequency, the phasor representations cannot be summed directly. The time-domain circuit response, however, explicitly provides frequency information, allowing those responses to be summed. In fact, because the frequency-domain representation of the circuit depends upon the frequency of the input (in general, the impedances will be a function of frequency), the frequency-domain representation of the circuit itself is, in general, different for different inputs. Thus, the only way in which circuits with multiple inputs at different frequencies can be analyzed in the frequency domain is with superposition.

In the special case in which all inputs share a common frequency, the circuit response can be determined by any of our previous analysis techniques (circuit reduction, nodal analysis, mesh analysis, superposition, etc.). In this case, if superposition is used, the circuit response to individual inputs can be summed directly in the frequency domain if desired.

Examples of the application of superposition to analysis of frequency-domain circuits are provided below.

Important Result: In the case of multiple frequencies existing in the circuit, superposition is the only valid frequency-domain analysis approach.
Superposition applies directly in the frequency domain, insofar as contributions from individual sources can be determined by killing all other sources and analyzing the resulting circuit. In general, however, superimposing (summing) the contributions from the individual sources must be done in the time domain. Summing the responses to individual sources directly in the frequency domain (e.g. addition of the phasors representing the individual responses) is only appropriate if all sources have the same frequency. In this case (all sources having the same frequency), any of our other analysis approaches are also valid.

Example 10.13

Determine the voltage v(t) across the inductor in the circuit below.

[Circuit: a 6cos(9t) A current source, a 3 Ω resistor, a 1 Ω resistor, a 4cos(3t + 30°) V voltage source, and a 1/3 H inductor; v(t) is the inductor voltage.]

Since two different input frequencies are applied to the circuit, we must use superposition to determine the response. The circuit to the left below will provide the phasor response V1 to the current source; the frequency is ω = 9 rad/sec and the voltage source is killed. The circuit to the right below will provide the phasor response V2 to the voltage source; the frequency is ω = 3 rad/sec and the current source is killed.

[Left circuit: the 6∠0° current source with the resistors and a j3 Ω inductor impedance; the voltage source is killed. Right circuit: the 4∠30° voltage source with the resistors and a j1 Ω inductor impedance; the current source is killed.]

To determine the voltage phasor resulting from the current source (V1 in the circuit to the left above), we note that the inductor and the 3 Ω resistor form a current divider. Thus, the current through the inductor resulting from the current source is:

I1 = [3 Ω / (3 + j3) Ω]·6∠0° = (3∠0° · 6∠0°) / (3√2∠45°) = (6/√2)∠−45°
The voltage phasor V1 can then be determined by multiplying this current by the inductor's impedance:

V1 = j3 Ω · (6/√2)∠−45° = 3∠90° · (6/√2)∠−45° = 9√2∠45° V

And the time-domain voltage across the inductor due to the current source is:

v1(t) = 9√2·cos(9t + 45°) V

To determine the voltage phasor resulting from the voltage source (V2 in the circuit to the right above), we note that the inductor and the 3 Ω resistor now form a voltage divider. Thus, the voltage V2 can be readily determined by:

V2 = [j1 Ω / (3 + j1) Ω]·4∠30° = (1∠90° · 4∠30°) / (√10∠18.4°) = (4/√10)∠101.6°

So the time-domain voltage across the inductor due to the voltage source is:

v2(t) = (4/√10)·cos(3t + 101.6°) V

The overall voltage is then the sum of the contributions from the two sources, in the time domain, so v(t) = v1(t) + v2(t), and:

v(t) = 9√2·cos(9t + 45°) + (4/√10)·cos(3t + 101.6°) V

Example 10.14

Determine the voltage v(t) across the inductor in the circuit below.

[Circuit: a 6cos(9t) A current source, a 3 Ω resistor, a 1 Ω resistor, a 4cos(9t + 30°) V voltage source, and a 1/3 H inductor; v(t) is the inductor voltage.]

This circuit is essentially the same as the circuit of Example 10.13, with the important difference that the frequency of the voltage input has changed – the voltage source and current source both provide the same frequency input to the circuit, 9 rad/sec. We will first do this problem using superposition techniques. We will then use nodal analysis to solve the problem, to illustrate that multiple inputs at the same frequency do not require the use of superposition. Individually killing each source in the circuit above results in the two circuits shown below. Note that the impedance of the inductor is now the same in both of these circuits.
3W 1W +- j3 W06 3W 1W +- j3 W30 4 +-1V2V The two circuits shown above will now be analyzed to determine the individual contributions to the inductor voltage; these results will then be summed to determine the overall inductor voltage. The circuit to the left above has been a nalyzed in Example 6. Therefore, the voltage phasor 𝑉 1 is the same as determined in Example 10.13: 𝑉 1 = 9√2∠45 °𝑉 The voltage 𝑉 2 in the circuit to the right above can be determined from application of the voltage divider formula for phasors: 𝑉 2 = 𝑗 3Ω (3 + 𝑗 3)Ω ⋅ 4∠30 ° = 3∠90 ° ⋅ 4∠30 ° 3√2∠45 ° = 2√2∠75 ° Since both inputs have the same frequency, we can superimpose the phasor results directly (we could, of course, also determine the individual time domain responses and superimpose those responses if we chose): 𝑉 = 𝑉 1 + 𝑉 2 = 9√2∠45 ° + 2√2∠75 ° = 15 .24∠50 .3° 𝑉 So that the time domain inductor voltage is 𝑣 (𝑡 ) = 15 .24 cos (9𝑡 + 50 .3° )𝑉 . Notice that the circuit response has only a single frequency component, since both inputs have the same frequency. The superposition approach provided above is entirely valid. However, since both sources have the same input, we can choose any of our other analysis approaches to perform this problem. To emphasize this fact, we choose to do this problem using nodal analysis. Real Analog Chapter 10: Steady -state Sinusoidal Analysis Copyright Digilent, Inc. All rights reserved. Other product and company names mentioned may be trademarks of their respective owners. Page 40 of 85 The frequency -domain circuit, with our definition of reference voltage and independent node, is shown in the figure below. 3W 1W +- +-0630 4V VA VR=0 j3 W KCL at node A provides: 6∠0 ° = 𝑉 𝐴 − 0 𝑗 3Ω + 𝑉 𝐴 − 4∠30 ° 3Ω Solving the above equation for 𝑉 𝐴 provides 𝑉 𝐴 = 15 .24∠50 .3𝑉 so that the inductor voltag e as a function of time is: 𝑣 (𝑡 ) = 15 .24 cos (9𝑡 + 50 .3° )𝑉 Which is consistent with our result using superposition. 
10.5.6 Thévenin's & Norton's Theorems, Source Transformations, and Maximum Power Transfer

Application of Thévenin's and Norton's Theorems to frequency-domain circuits is identical to their application to time-domain resistive circuits. The only differences are:
• The open-circuit voltage (v_OC) and short-circuit current (i_SC) determined for resistive circuits are replaced by their phasor representations, V_OC and I_SC.
• The Thévenin resistance, R_TH, is replaced by a Thévenin impedance, Z_TH.

Thus, the Thévenin and Norton equivalent circuits in the frequency domain are as shown in Fig. 10.22.

[Figure 10.22. Thévenin and Norton equivalent circuits: (a) the Thévenin circuit, a V_OC source in series with Z_TH; (b) the Norton circuit, an I_SC source in parallel with Z_TH.]

Since Thévenin's and Norton's Theorems both apply in the frequency domain, the approaches we used for source transformations in the time domain for resistive circuits translate directly to the frequency domain, with impedances substituted for resistances and phasors used for voltage and current terms.

In order to determine the load necessary to draw the maximum power from a Thévenin equivalent circuit, we must re-derive the maximum power result obtained previously for resistive circuits, substituting impedances for resistances and using phasors for source terms. We will not derive the governing relationship, but will simply state that, in order to transfer the maximum power to a load, the load impedance must be the complex conjugate of the Thévenin impedance of the circuit being loaded. Thus, if a Thévenin equivalent circuit has some impedance Z_TH with a resistance R_TH and a reactance X_TH, the load which will draw the maximum power from this circuit must have resistance R_TH and reactance −X_TH. The appropriate loaded circuit is shown in Fig.
10.23 below.

[Figure 10.23. Load impedance to draw maximum power from a Thévenin circuit: a V_OC source in series with Z_TH = R_TH + jX_TH, loaded by Z_L = R_TH − jX_TH.]

Example 10.15

Determine the Thévenin equivalent circuit seen by the load in the circuit below.

[Circuit: a 2cos(2t) V source in series with a 2 Ω resistor; a second 2 Ω resistor in parallel; a 0.5 H inductor connecting to the load terminals.]

In the circuit below, we have used the input frequency, ω = 2 rad/sec, to convert the circuit to the frequency domain.

[Frequency-domain circuit: a 2∠0° source, the two 2 Ω resistors, a j1 Ω inductor impedance, and the load Z_L.]

Removing the load and killing the source allows us to determine the Thévenin impedance of the circuit. The parallel combination of the two 2 Ω resistors has an equivalent resistance of 1 Ω. This impedance, in series with the j1 Ω impedance, results in a Thévenin impedance Z_TH = (1 + j1) Ω.

Replacing the source, but leaving the load terminals open-circuited, allows us to determine the open-circuit voltage V_OC. Since there is no current through the inductor, due to the open-circuit condition, V_OC is determined from a simple resistive voltage divider formed by the two 2 Ω resistors. Thus, the open-circuit voltage is:

V_OC = [2 Ω / (2 Ω + 2 Ω)]·2∠0° = 1∠0°

The resulting Thévenin equivalent circuit is a 1∠0° source in series with a (1 + j1) Ω impedance.

Example 10.16

Determine the Norton equivalent circuit of the circuit of Example 10.15. Since we determined the Thévenin equivalent circuit in Example 10.15, a source transformation can be used to determine the Norton equivalent circuit.
Consistent with our previous source transformation rules, the short-circuit current, I_SC, is equal to the open-circuit voltage divided by the Thévenin impedance:

I_SC = V_OC / Z_TH = 1∠0° / [(1 + j1) Ω] = 1∠0° / (√2∠45°) = (1/√2)∠−45°

Since the impedance doesn't change during a source transformation, the Norton equivalent circuit is therefore a (1/√2)∠−45° current source in parallel with a (1 + j1) Ω impedance.

Example 10.17

Determine the load impedance for the circuit of Example 10.15 which will provide the maximum amount of power to be delivered to the load. Provide a physical realization (a circuit) which will provide this impedance.

The maximum power is delivered to the load when the load impedance is the complex conjugate of the Thévenin impedance. Thus, the load impedance for maximum power transfer is:

Z_L = (1 − j1) Ω

and the loaded Thévenin circuit is the 1∠0° source in series with (1 + j1) Ω, terminated by Z_L = (1 − j1) Ω.

To implement this load, let us look at a parallel RC combination. With the frequency ω = 2 rad/sec, the frequency-domain load is a resistance R in parallel with the capacitive impedance −j/(2C). Combining the parallel impedances results in:

Z_L = [R·(−j/(2C))] / [R − j/(2C)] Ω = [R/(4C²) − jR²/(2C)] / [R² + 1/(4C²)] Ω

Setting R = 2 Ω and C = 0.25 F makes Z_L = (1 − j1) Ω, as desired, so the physical implementation of our load is as shown below:
[Load circuit: a 2 Ω resistor in parallel with a 0.25 F capacitor.]

Section Summary
• The following analysis methods apply in the frequency domain exactly as they do in the time domain for purely resistive circuits:
o KVL and KCL
o Voltage and current dividers
o Circuit reduction techniques
o Nodal and mesh analysis
o Superposition, especially when multiple frequencies are present
o Thévenin's and Norton's theorems
One simply uses phasor representations for the voltages and/or currents in the circuit and impedances to represent the circuit element voltage-current relationships. The analysis techniques presented in Chapters 1 through 4 are then applied exactly as they were for resistive circuits.
• One minor exception to the above statement is that, in order to draw maximum power from a circuit, the load impedance should be the complex conjugate of the impedance of the circuit's Thévenin equivalent.

10.5 Exercises

1. Determine the impedance seen by the source for the circuit below if u(t) = 4cos(t + 30°). [Circuit: source u(t) with a 2 Ω resistor, a 1/4 F capacitor, and a 4 Ω resistor.]
2. Determine the impedance seen by the source for the circuit below if u(t) = 2cos(4000t). [Circuit: source u(t) with two 8 Ω resistors and a 1 mH inductor.]
3. For the circuit of exercise 1, determine the current delivered by the source.
4. For the circuit of exercise 2, determine the voltage across the 8 Ω resistors.

10.6 Frequency Domain System Characterization

In Chapters 7 and 8, we wrote the differential equation governing the relationship between a circuit's input and output (the input-output equation) and used this differential equation to determine the response of a circuit to some input. We also characterized the time-domain behavior of the system by examining the circuit's natural and step responses.
We saw that the behavior of a first order circuit can be characterized by its time constant and DC gain, while the response of a second order circuit is characterized by its natural frequency, damping ratio, and DC gain. It is important to recognize that these characterizations were independent of specific input parameters; they depended upon the type of response (e.g. a step function or a natural response), but were independent of detailed information such as the amplitude of the step input or the actual values of the initial conditions.

We will now use the steady-state sinusoidal response to characterize a circuit's behavior. As in the case of our time-domain characterization, this characterization will allow the system's behavior to be defined in terms of its response to sinusoidal inputs, but the characterization will be independent of details such as the input sinusoid's amplitude or phase angle. (The input sinusoid's frequency will, however, still be of prime importance.)

When a sinusoidal input is applied to a linear system, the system's forced response consists of a sinusoid with the same frequency as the input sinusoid, but in general having a different amplitude and phase from the input sinusoid. Figure 10.24 shows the general behavior, in block diagram form. Changes in the amplitude and phase angle between the input and output signals are often used to characterize the circuit's input-output relationship at the input frequency, ω. In this chapter, we will demonstrate how this characterization is performed for inputs with discrete frequencies (as in the case of circuits with one or several inputs, each with a single frequency component). Later chapters will extend these concepts to the case in which frequency is considered to be a continuous variable.

[Figure 10.24. Sinusoidal steady-state input-output relation for a linear time invariant system: input u(t) = A·cos(ωt + θ), output y(t) = B·cos(ωt + φ).]
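As a quick sanity check of the behavior shown in Fig. 10.24 (an illustrative sketch, not part of the original text), we can numerically integrate a first order RC low-pass circuit, dy/dt = (u − y)/RC, and compare its steady-state output against the amplitude scaling and phase shift predicted by phasor analysis; the values R = 2 Ω and C = 0.25 F are assumed for concreteness:

```python
import math

# Forward-Euler simulation of the RC low-pass circuit dy/dt = (u - y)/(RC)
R, C = 2.0, 0.25
w, A, theta = 2.0, 3.0, math.radians(20)   # input u(t) = A*cos(w*t + theta)

dt, t, y = 1e-4, 0.0, 0.0
while t < 30.0:                    # run well past the transient (RC = 0.5 s)
    u = A * math.cos(w * t + theta)
    y += dt * (u - y) / (R * C)
    t += dt

# Phasor prediction: amplitude gain 1/sqrt(1+(wRC)^2), phase shift -atan(wRC)
B = A / math.sqrt(1 + (w * R * C) ** 2)
phi = theta - math.atan(w * R * C)
y_pred = B * math.cos(w * t + phi)

print(abs(y - y_pred) < 1e-2)      # True: the output is the predicted sinusoid
```

The simulated steady-state output is, to within the integration error, a sinusoid at the input frequency with the predicted amplitude and phase.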
Previously in this chapter, we have (bit-by-bit) simplified the analysis of a system's steady-state sinusoidal response significantly. We first represented the sinusoidal signals as complex exponentials in order to facilitate our analysis. We subsequently used phasors to represent our complex exponential signals, as shown in Fig. 10.25; this allowed us to represent and analyze the circuit's steady-state sinusoidal response directly in the frequency domain.

[Figure 10.25. Phasor representation of sinusoidal inputs and outputs: input phasor U = A∠θ, output phasor Y = B∠φ.]

In the frequency-domain analyses performed to date, we have generally determined the system's response to a specific input signal with a given frequency, amplitude, and phase angle. We now wish to characterize the system response to an input signal with a given frequency, but an arbitrary amplitude and phase angle. As indicated previously in section 10.1, we will see that the input-output relationship governing the system reduces to a relationship between the output and input signal amplitudes and the output and input signal phases. The circuit can thus be represented in phasor form as shown in Fig. 10.26. The system's effect on a sinusoidal input consists of an amplitude gain between the output and input signals (B/A in Fig. 10.26) and a phase difference between the output and input signals (φ − θ in Fig. 10.26).

[Figure 10.26. Frequency-domain representation of circuit input-output relationship: the input phasor U = A∠θ is scaled by the gain B/A and shifted by the phase difference (φ − θ) to give the output phasor Y = B∠φ.]

Rather than perform a rigorous demonstration of this property at this time, we will simply provide some simple examples to illustrate the basic concept.

Example 10.18

A sinusoidal voltage, v_in(t), is applied to the circuit below.
Determine the frequency-domain relationship between the phasor representing vin(t) and the phasor representing the output voltage vout(t).

[Circuit: source vin(t) in series with R, with vout(t) taken across C (left); frequency-domain equivalent with Vin, R, and impedance 1/(jωC) (right)]

Since the frequency is unspecified, we leave frequency as an independent variable, ω, in our analysis. In the frequency domain, therefore, the circuit can be represented as shown to the right above. The frequency domain circuit is a simple voltage divider, so the relation between input and output is:

Vout = [ (1/(jωC)) / (R + 1/(jωC)) ] · Vin = [ 1 / (1 + jωRC) ] · Vin

The factor 1/(1 + jωRC) is a complex number, for given values of ω, R, and C. It constitutes a multiplicative factor which, when applied to the input, results in the output. This multiplicative factor is often used to characterize the system’s response at some frequency, ω. We will call this multiplicative factor the frequency response function, and denote it as H(jω). For a particular frequency, H(jω) is a complex number, with some amplitude, |H(jω)|, and phase angle, ∠H(jω). For our example, the magnitude and phase of our frequency response function are:

|H(jω)| = 1 / √(1 + (ωRC)²)

∠H(jω) = −tan⁻¹(ωRC)

According to the rules of multiplication of complex numbers, when two complex numbers are multiplied, the magnitude of the result is the product of the magnitudes of the individual numbers, and the phase angle of the result is the sum of the individual phase angles. Thus, if the input voltage is represented in phasor form as Vin = |Vin|∠θ and the output voltage is Vout = |Vout|∠ϕ, it is easy to obtain the output voltage from the input voltage and the frequency response function:

|Vout| = |Vin| · |H(jω)|

∠Vout = ∠Vin + ∠H(jω)
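These formulas can be checked numerically by evaluating H(jω) directly as a complex number. A minimal sketch (the component values R = 2 Ω and C = 0.25 F are taken from Example 10.19):

```python
import cmath
import math

def H(omega, R, C):
    """Frequency response of the RC voltage divider: Vout/Vin = 1/(1 + jωRC)."""
    return 1.0 / (1.0 + 1j * omega * R * C)

# Evaluate at ω = 2 rad/s with R = 2 Ω, C = 0.25 F.
omega = 2.0
R, C = 2.0, 0.25
h = H(omega, R, C)

# The closed-form magnitude and phase derived above:
mag = 1.0 / math.sqrt(1.0 + (omega * R * C) ** 2)
phase = -math.atan(omega * R * C)

print(abs(h), mag)            # both equal 1/√2 ≈ 0.7071
print(cmath.phase(h), phase)  # both equal −π/4 rad, i.e. −45°
```

The complex-number evaluation and the closed-form magnitude/phase expressions agree, which is a convenient sanity check when deriving frequency response functions for other circuits.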
Example 10.19

Use the frequency response function determined in Example 10.18 above to determine the response vout(t) of the circuit shown below to the following input voltages:

• vin(t) = 3cos(2t + 20°)
• vin(t) = 7cos(4t − 60°)

[Circuit: source vin(t) in series with a 2 Ω resistor, with vout(t) taken across a 0.25 F capacitor]

When vin(t) = 3cos(2t + 20°), ω = 2 rad/sec, |Vin| = 3 and ∠Vin = 20°. For this value of ω, and the given values of R and C, the magnitude and phase of the frequency response function are:

|H(j2)| = 1/√(1 + (ωRC)²) = 1/√(1 + (2 · 2Ω · 0.25F)²) = 1/√(1 + 1²) = 1/√2

∠H(j2) = −tan⁻¹(ωRC) = −tan⁻¹(2 · 2Ω · 0.25F) = −tan⁻¹(1) = −45°

The output amplitude is then the product of |Vin| and |H(j2)| and the output phase is the sum of ∠Vin and ∠H(j2), so that:

|Vout| = |Vin| · |H(j2)| = 3 · (1/√2) = 3/√2

∠Vout = ∠Vin + ∠H(j2) = 20° + (−45°) = −25°

And the time-domain output voltage is:

vout(t) = (3/√2)cos(2t − 25°)

When vin(t) = 7cos(4t − 60°), ω = 4 rad/sec, |Vin| = 7 and ∠Vin = −60°. For this value of ω, and the given values of R and C, the magnitude and phase of the frequency response function are:

|H(j4)| = 1/√(1 + (ωRC)²) = 1/√5

And:

∠H(j4) = −tan⁻¹(ωRC) = −63.4°

The output amplitude is then the product of |Vin| and |H(j4)| and the output phase is the sum of ∠Vin and ∠H(j4), so that the time-domain output voltage in this case is:

vout(t) = (7/√5)cos(4t − 123.4°)

From the above examples we can see that, once the frequency response function is calculated for a circuit as a function of frequency, we can determine the circuit’s steady-state response to any input sinusoid directly from the frequency response function, without re-analyzing the circuit itself.
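The phasor arithmetic in Example 10.19 can be reproduced in a few lines: represent the input as a complex number, multiply by H(jω), and read off the output amplitude and phase. A minimal sketch:

```python
import cmath
import math

def H(omega, R=2.0, C=0.25):
    """Frequency response of the RC divider from Example 10.18: 1/(1 + jωRC)."""
    return 1.0 / (1.0 + 1j * omega * R * C)

def steady_state_output(amp, omega, phase_deg):
    """Apply the frequency response to an input phasor amp∠phase_deg.

    Returns the output amplitude and phase angle (degrees)."""
    vin = cmath.rect(amp, math.radians(phase_deg))  # input phasor
    vout = H(omega) * vin                           # output phasor
    return abs(vout), math.degrees(cmath.phase(vout))

# vin(t) = 3cos(2t + 20°): expect (3/√2)∠−25°
print(steady_state_output(3, 2, 20))
# vin(t) = 7cos(4t − 60°): expect (7/√5)∠−123.4°
print(steady_state_output(7, 4, -60))
```

Multiplying phasors this way performs the magnitude product and phase sum automatically, so the hand calculation and the complex-number calculation must agree.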
We conclude this section with one additional example, to illustrate the use of the frequency response function and superposition to determine a circuit’s response to multiple inputs of different frequencies.

Example 10.20

Use the results of examples 10.18 and 10.19 above to determine the response vout(t) of the circuit shown below if the input voltage is vin(t) = 3cos(2t + 20°) + 7cos(4t − 60°). Plot the input and output waveforms.

[Circuit: source vin(t) in series with a 2 Ω resistor, with vout(t) taken across a 0.25 F capacitor]

Recall, from section 10.5, that superposition is the only valid approach for performing frequency domain analysis of circuits with inputs at multiple frequencies. Also recall that each frequency can be analyzed separately in the frequency domain, but that the superposition process (the summation of the individual contributions) must be done in the time domain. For this problem, we have contributions at two different frequencies: 2 rad/sec and 4 rad/sec. Luckily, we have determined the individual responses of the circuit to these two inputs in Example 10.19. Therefore, in the time domain, the two contributions to our output will be:

v1(t) = (3/√2)cos(2t − 25°)

And:

v2(t) = (7/√5)cos(4t − 123.4°)

The overall response is then:

vout(t) = v1(t) + v2(t) = (3/√2)cos(2t − 25°) + (7/√5)cos(4t − 123.4°)

A plot of the input and output waveforms is shown below:

[Plot: input voltage vin(t) and output voltage vout(t) vs. time]

Section Summary

• The frequency response function or frequency response describes a circuit’s input-output relationship directly in the frequency domain, as a function of frequency.
• The frequency response is a complex function of frequency H(jω) (that is, it is a complex number which depends upon the frequency).
This complex function is generally expressed as a magnitude and phase, |H(jω)| and ∠H(jω), respectively. |H(jω)| is called the magnitude response of the circuit, and ∠H(jω) is called the phase response of the circuit. The overall idea is illustrated in the block diagram below:

[Block diagram: input phasor U = A∠θ → H(jω) = |H(jω)|∠H(jω) → output phasor Y = B∠ϕ]

• The magnitude response of the circuit is the ratio of the output amplitude to the input amplitude. This is also called the gain of the system. Thus, in the figure above, the output amplitude B = |H(jω)| · A. Note that the magnitude response or gain of the system is a function of frequency, so that inputs of different frequencies will have different gains.
• The phase response of the circuit is the difference between the output phase angle and the input phase angle. Thus, in the figure above, the output phase ϕ = ∠H(jω) + θ. Like the gain, the phase response is a function of frequency – inputs at different frequencies will, in general, have different phase shifts.
• Use of the frequency response to perform circuit analyses can be particularly helpful when the input signal contains a number of sinusoidal components at different frequencies. In this case, the response of the circuit to each individual component can be determined in the frequency domain using the frequency response and the resulting contributions summed in the time domain to obtain the overall response.

10.5 Exercises

Determine the voltage across the capacitor in the circuit below if u(t) = 4cos(t + 30°) + 2cos(2t − 45°). (Hint: this may be easier if you find the response to the input as a function of frequency, evaluate the response for each of the above frequency components, and superimpose the results.)

[Circuit: source u(t), 2 Ω resistor, ¼ F capacitor, and 4 Ω resistor]

Determine the voltage across the resistors in the circuit below if u1(t) = 4cos(2t) and if u2(t) = cos(4t).
(Hint: this may be easier if you find the response to the input as a function of frequency, evaluate the response for each of the above frequency components, and superimpose the results.)

[Circuit: sources u1(t) and u2(t), two 8 Ω resistors, and a 2 H inductor]

Real Analog Chapter 10: Lab Projects

10.4.1: Impedance

In this lab assignment, we measure impedances of resistors, capacitors, and inductors. The measured values will be compared with our expectations based on analyses.

Before beginning this lab, you should be able to:
 Represent sinusoidal signals in phasor form

After completing this lab, you should be able to:
 Measure impedances of passive circuit elements

This lab exercise requires:
 Analog Discovery module
 Digilent Analog Parts Kit
 Digital multimeter (optional)

Symbol Key:
Demonstrate circuit operation to teaching assistant; teaching assistant should initial lab notebook and grade sheet, indicating that circuit operation is acceptable.
Analysis; include principal results of analysis in laboratory report.
Numerical simulation (using PSPICE or MATLAB as indicated); include results of MATLAB numerical analysis and/or simulation in laboratory report.
Record data in your lab notebook.

General Discussion:

The concept of impedance is only appropriate in terms of the steady-state response of a circuit to a sinusoidal input. Impedance is a complex number which provides the relationship between voltage and current phasors in the circuit. Specifically, the impedance Z is the ratio of the voltage phasor to the current phasor:

Z = V̄/Ī = (V·e^jθ)/(I·e^jφ) = (V/I)·e^j(θ−φ)   Eq.
1

where the voltage and current of interest, v(t) and i(t), are assumed to be complex exponentials of the form:

v(t) = V·e^j(ωt+θ) = V̄·e^jωt   Eq. 2

i(t) = I·e^j(ωt+φ) = Ī·e^jωt   Eq. 3

Ī and V̄ are phasors representing the magnitude and phase of the current and voltage, respectively.

Impedance is a very general concept which can be applied to any combination of voltage and current in a circuit. In this lab project, however, we will be interested only in the impedance of specific circuit elements: resistors, capacitors, and inductors.

In order to experimentally determine impedance, we must determine both voltage and current. Since oscilloscopes do not measure current, we will use the measured voltage across a known resistance in order to infer the current through the circuit element of interest. The appropriate circuit schematic is as shown in Fig. 1.

[Figure 1. Circuit used for impedance measurements: source vIN(t) drives a sense resistor R in series with the circuit element under test; vR(t) is measured across R, v(t) across the element, and i(t) is the loop current.]

In the circuit of Fig. 1, we can measure the voltages vR(t) and v(t). The current through the circuit element of interest can be estimated from Ohm’s law as:

i(t) = vR(t)/R   Eq. 4

By measuring the voltage v(t) and estimating the current i(t) for the circuit element in Fig. 1, we can determine the circuit element’s impedance from equation (1).

Pre-lab:

Assume that the voltages vR(t) and v(t) in Fig. 2 below are of the form:

vR(t) = VR·cos(ωt + θ)

v(t) = V·cos(ωt + φ)

Determine the impedances of the resistor R, the inductor L, and the capacitor C in Fig. 2 below in terms of the phasor representations of the voltages vR(t) and v(t).
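The measurement idea behind this pre-lab (infer the current phasor from the sense-resistor voltage via Eq. (4), then form the ratio of Eq. (1)) can be sketched numerically. The measurement values below are hypothetical, chosen only to illustrate the calculation:

```python
import cmath
import math

def element_impedance(V, phi_v_deg, VR, phi_r_deg, R):
    """Impedance of the element in Fig. 1 from two measured voltage phasors.

    The current phasor is inferred from the sense resistor, I = VR_phasor/R
    (Eq. 4), and the impedance is Z = V_phasor/I (Eq. 1)."""
    v_ph = cmath.rect(V, math.radians(phi_v_deg))       # element voltage phasor
    i_ph = cmath.rect(VR, math.radians(phi_r_deg)) / R  # inferred current phasor
    return v_ph / i_ph

# Hypothetical measurement: element voltage 1.0 V at -45 deg, sense-resistor
# voltage 1.0 V at +45 deg, 47 ohm sense resistor.
Z = element_impedance(1.0, -45.0, 1.0, 45.0, 47.0)
print(abs(Z), math.degrees(cmath.phase(Z)))  # 47 ohms at -90 deg (capacitive)
```

Note that only the ratio of the two voltage amplitudes and the difference of their phase angles enter the result, which is exactly what the pre-lab asks you to express.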
Express your results in terms of the magnitudes and phase angles of vR(t) and v(t).

[Figure 2. Circuits used in this lab project: in each, vIN(t) drives a 47 Ω sense resistor in series with the element under test — (a) a resistor R, (b) an inductor L, (c) a capacitor C. vR(t) is measured across the 47 Ω resistor and v(t) across the element.]

Lab Procedures:

a. Construct the circuit of Fig. 2(a) with R = 100 Ω.

i. Use your function generator to apply a sinusoidal input voltage vIN(t) with an amplitude of 2V and a 0V offset. Use your oscilloscope to measure the voltages vR(t) and v(t). Set up a math channel to display the current i(t), according to equation (4). Record an image of the oscilloscope window, showing the signals vR(t), v(t), and i(t) for input signals with the following frequencies: 1kHz, 5kHz, 10kHz

ii. For each of the above three frequencies, tabulate:
 the amplitudes of v(t) and i(t), and
 the time difference between v(t) and i(t).

iii. Calculate the impedance of the resistor at the above three frequencies. Compare your results to your expectations from the pre-lab analyses. Include a percent difference between your expectations and your measured impedances. Note: Appendix A of this lab assignment provides tips relative to gain and phase measurement of sinusoidal signals.

iv. Demonstrate operation of your circuit to the TA and have them initial the appropriate pages of your lab notebook and the lab worksheet.

b. Construct the circuit of Fig. 2(b) with L = 1mH.

i. Use your function generator to apply a sinusoidal input voltage vIN(t) with an amplitude of 2V and a 0V offset. Use your oscilloscope to measure the voltages vR(t) and v(t). Set up a math channel to display the current i(t), according to equation (4).
Record an image of the oscilloscope window, showing the signals vR(t), v(t), and i(t) for input signals with the following frequencies: 1kHz, 5kHz, 10kHz

ii. For each of the above three frequencies, tabulate:
 the amplitudes of v(t) and i(t), and
 the time difference between v(t) and i(t).

iii. Calculate the impedance of the inductor at the above three frequencies. Compare your results to your expectations from the pre-lab analyses. Include a percent difference between your expectations and your measured impedances. Note: Appendix A of this lab assignment provides tips relative to gain and phase measurement of sinusoidal signals.

iv. Demonstrate operation of your circuit to the TA and have them initial the appropriate pages of your lab notebook and the lab worksheet.

c. Construct the circuit of Figure 2(c) with C = 100nF.

i. Use your function generator to apply a sinusoidal input voltage vIN(t) with an amplitude of 2V and a 0V offset. Use your oscilloscope to measure the voltages vR(t) and v(t). Set up a math channel to display the current i(t), according to equation (4). Record an image of the oscilloscope window, showing the signals vR(t), v(t), and i(t) for input signals with the following frequencies: 1kHz, 5kHz, 10kHz

ii. For each of the above three frequencies, tabulate:
 the amplitudes of v(t) and i(t), and
 the time difference between v(t) and i(t).

iii. Calculate the impedance of the capacitor at the above three frequencies. Compare your results to your expectations from the pre-lab analyses. Include a percent difference between your expectations and your measured impedances. Note: Appendix A of this lab assignment provides tips relative to gain and phase measurement of sinusoidal signals.

iv.
Demonstrate operation of your circuit to the TA and have them initial the appropriate pages of your lab notebook and the lab worksheet.

Appendix A: Measuring Gain and Phase

The gain of a system at a particular frequency is the ratio of the magnitude of the output voltage to the magnitude of the input voltage at that frequency, so that:

Gain = ΔVout/ΔVin

where ΔVout and ΔVin can be measured from the sinusoidal input and output voltages as shown in the figure below.

[Plot: input voltage Vin and output voltage Vout vs. time, showing the amplitudes ΔVin and ΔVout]

The phase of a system at a particular frequency is a measure of the time shift between the output and input voltage at that frequency, so that:

Phase = (ΔT/T) × 360°

where ΔT and T can be measured from the sinusoidal input and output voltages as shown in the figure below.

[Plot: input voltage Vin and output voltage Vout vs. time, showing the period T and the time shift ΔT]

Real Analog Chapter 10: Lab Worksheets

10.4.1: Impedance (75 points total)

a. Resistor (25 points)

In the space below, provide the impedances for the resistor in Fig. 2(a), in terms of the magnitudes and phase angles of vR(t) and v(t). (4 pts)

Attach to this worksheet images of the oscilloscope main window, showing the signals vR(t), v(t), and i(t) of Fig. 2(a) for input signals with the following frequencies: 1kHz, 5kHz, and 10kHz. (9 pts, 3pts per image)

In the space below, provide a table which gives the amplitude difference and time shift between v(t) and i(t) for the circuit of Fig. 2(a) at frequencies of 1kHz, 5kHz, and 10kHz.
(3 pts)

In the space below, provide the measured impedance of the resistor at the three frequencies of interest. Compare your results to your expectations from the pre-lab, including percent differences between measured and expected impedances. (6 pts)

DEMO: Have a teaching assistant initial this sheet, indicating that they have observed your system’s operation. (3 pts total) TA Initials: _

b. Inductor (25 points)

In the space below, provide the impedances for the inductor in Fig. 2(b), in terms of the magnitudes and phase angles of vR(t) and v(t). (4 pts)

Attach to this worksheet images of the oscilloscope main window, showing the signals vR(t), v(t), and i(t) of Fig. 2(b) for input signals with the following frequencies: 1kHz, 5kHz, and 10kHz. (9 pts, 3pts per image)

In the space below, provide a table which gives the amplitude difference and time shift between v(t) and i(t) for the circuit of Fig. 2(b) at frequencies of 1kHz, 5kHz, and 10kHz. (3 pts)

In the space below, provide the measured impedance of the inductor at the three frequencies of interest. Compare your results to your expectations from the pre-lab, including percent differences between measured and expected impedances. (6 pts)

DEMO: Have a teaching assistant initial this sheet, indicating that they have observed your system’s operation. (3 pts total) TA Initials: _

c. Capacitor (25 points)

In the space below, provide the impedances for the capacitor in Fig. 2(c), in terms of the magnitudes and phase angles of vR(t) and v(t).
(4 pts)

Attach to this worksheet images of the oscilloscope main window, showing the signals vR(t), v(t), and i(t) of Fig. 2(c) for input signals with the following frequencies: 1kHz, 5kHz, and 10kHz. (9 pts, 3pts per image)

In the space below, provide a table which gives the amplitude difference and time shift between v(t) and i(t) for the circuit of Fig. 2(c) at frequencies of 1kHz, 5kHz, and 10kHz. (3 pts)

In the space below, provide the measured impedance of the capacitor at the three frequencies of interest. Compare your results to your expectations from the pre-lab, including percent differences between measured and expected impedances. (6 pts)

DEMO: Have a teaching assistant initial this sheet, indicating that they have observed your system’s operation. (3 pts total) TA Initials: _

Real Analog Chapter 10: Lab Projects

10.6.1: Passive RL Circuit Response

In this lab assignment, we will be concerned with the steady-state response of electrical circuits to sinusoidal inputs. Figure 1(a) shows a block-diagram representation of the system. The input and output signals both have the same frequency, but the two signals can have different amplitudes and phase angles. The analysis of the circuit of Fig. 1(a) can be simplified by representing the sinusoidal signals as phasors. The phasors provide the amplitude and phase information of sinusoidal signals. By comparing the phasors representing the input and output signals, the effect of the circuit can be represented as an amplitude gain between the output and input signals and a phase difference between the output and input signals, as shown in Fig. 1(b).
[Figure 1. Steady-state sinusoidal circuit analysis. (a) Physical circuit: input u(t) = A·cos(ωt + θ) → Circuit → output y(t) = B·cos(ωt + ϕ). (b) Phasor representation of circuit input-output relationship: input U = A∠θ → gain B/A, phase shift ϕ − θ → output Y = B∠ϕ.]

In this lab assignment, we will measure the gain and phase responses of a passive RL circuit and compare these measurements with expectations based on analysis.

Before beginning this lab, you should be able to:
 Represent sinusoidal signals in phasor form
 Determine electrical circuit steady-state sinusoidal responses in phasor form

After completing this lab, you should be able to:
 Measure phasor form of circuit steady-state sinusoidal response
 Measure input impedance of electrical circuit

This lab exercise requires:
 Analog Discovery module
 Digilent Analog Parts Kit
 Digital multimeter (optional)

Symbol Key:
Demonstrate circuit operation to teaching assistant; teaching assistant should initial lab notebook and grade sheet, indicating that circuit operation is acceptable.
Analysis; include principal results of analysis in laboratory report.
Numerical simulation (using PSPICE or MATLAB as indicated); include results of MATLAB numerical analysis and/or simulation in laboratory report.
Record data in your lab notebook.

General Discussion:

Consider the RL circuit shown in Fig. 2 below. The input to the circuit is an applied voltage and we choose the current supplied by the source to be the system output. The differential equation relating the applied voltage vIN(t) to the input current iIN(t) can be obtained by applying KVL around the single loop:

vIN(t) = R·iIN(t) + L·(diIN(t)/dt)

If we assume that the input voltage and current are complex exponentials of the form:

vIN(t) = V·e^j(ωt+θ)   Eq. 1

iIN(t) = I·e^j(ωt+φ)   Eq.
2

We can write the circuit’s input-output relation as a ratio between the current and the voltage:

Ī/V̄ = (I·e^jφ)/(V·e^jθ) = 1/(R + jωL)   Eq. 3

where Ī and V̄ are phasors representing the magnitude and phase of the input current and input voltage to the circuit, respectively. This input-output relation can be written in terms of an amplitude gain and a phase shift:

|Ī/V̄| = |1/(R + jωL)| = 1/√(R² + ω²L²)   Eq. 4

φ − θ = −tan⁻¹(ωL/R)   Eq. 5

[Figure 2. RL circuit: source vIN(t) drives R and L in series; vR(t) is measured across R and vL(t) across L, with iIN(t) the loop current.]

Pre-lab:

a. Show that the amplitude gain and phase difference between the input voltage and the input current are as shown in equations (4) and (5).

b. The cutoff frequency for the circuit of Fig. 2 is given to be ωc = R/L. Calculate the cutoff frequency for the circuit of Fig. 2 if L = 1mH and R = 47 Ω.

c. Determine the gain and phase difference for the RL circuit for frequencies ω ≈ 0, ω → ∞, and ω = ωc if L = 1mH and R = 47 Ω.

d. Do your low and high frequency gain results in part (c) agree with your expectations based on the inductor’s low and high frequency behavior? (e.g. calculate the inductor impedance at low and high frequencies, substitute these impedances into the circuit of Fig. 2, calculate the response of the resulting resistive circuit, and compare to the results of part (c).)

Notes:

In this lab assignment, we will measure vIN(t) and vL(t). These measurements will be used to estimate the gain and phase difference between vIN(t) and iIN(t) and the gain and phase difference between vL(t) and iIN(t). These results will be compared with our expectations based on the pre-lab analyses. We do not have the ability to directly measure a time-varying current, so we will infer iIN(t) by measuring vIN(t) − vL(t) and determining iIN(t) by:
iIN(t) = (vIN(t) − vL(t))/R   Eq. 6

All signals we will be dealing with are sinusoidal. Appendix A of this lab assignment provides tips relative to gain and phase measurement of sinusoidal signals.

Lab Procedures:

Construct the circuit of Fig. 2 with L = 1mH and R = 47 Ω.

a. Use your function generator to apply a sinusoidal input at vIN(t). Use your oscilloscope to display both vIN(t) and vL(t). Use the oscilloscope’s math operation to display the input current, iIN(t), as provided by equation (6). Record the amplitude of vIN(t) and iIN(t) and the time delay between vIN(t) and iIN(t) for the following input voltage frequencies:

• ω ≈ ωc/10 (low frequency input)
• ω ≈ 10ωc (high frequency input)
• ω ≈ ωc (corner frequency input)

b. Demonstrate operation of your circuit to the TA and have them initial the appropriate page(s) of your lab notebook and the lab worksheet.

c. Calculate the measured gains and phase differences between iIN(t) and vIN(t) for the three frequencies listed in part (a) above. Compare your measured results with your expectations from the pre-lab. Comment on your results.

Appendix A: Measuring Gain and Phase

The gain of a system at a particular frequency is the ratio of the magnitude of the output voltage to the magnitude of the input voltage at that frequency, so that:

Gain = ΔVout/ΔVin

where ΔVout and ΔVin can be measured from the sinusoidal input and output voltages as shown in the figure below.

[Plot: input voltage Vin and output voltage Vout vs. time, showing the amplitudes ΔVin and ΔVout]

The phase of a system at a particular frequency is a measure of the time shift between the output and input voltage at that frequency, so that:
Phase = (ΔT/T) × 360°

where ΔT and T can be measured from the sinusoidal input and output voltages as shown in the figure below.

[Plot: input voltage Vin and output voltage Vout vs. time, showing the period T and the time shift ΔT]

Real Analog Chapter 10: Lab Worksheets

10.6.1: Passive RL Circuit Response (40 points total)

Attach, to this worksheet, your derivation of the gain and phase expressions (equations (4) and (5)). (3 pts)

In the space below, provide the cutoff frequency calculated in part (b) of the pre-lab. (3 pts)

In the space below, provide the gain (|Ī/V̄|) and phase (∠Ī − ∠V̄) for the RL circuit at low, high, and corner frequencies as determined from part (c) of your pre-lab analysis. (9 pts)

Comment below on the inductor physical behavior at low and high frequencies vs. the expressions provided in (2) above. (2 pts)

In the space below, provide a table listing the amplitudes of vIN(t) and iIN(t) and the time delays between vIN(t) and iIN(t) for the three frequencies of interest in part (a) of the lab procedures. (10 pts)

In the space below, provide a table listing the measured gains and phase differences between iIN(t) and vIN(t) and between iIN(t) and vL(t) for the three frequencies of interest. (8 pts)

DEMO: Have a teaching assistant initial this sheet, indicating that they have observed your system’s operation. (5 pts total) TA Initials: _
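The pre-lab gain and phase values requested by this worksheet can be checked numerically. A minimal sketch, evaluating Eq. (4) and Eq. (5) of lab 10.6.1 with L = 1 mH and R = 47 Ω at the three frequencies of interest:

```python
import math

# RL circuit parameters from the pre-lab of lab 10.6.1.
R = 47.0    # ohms
L = 1.0e-3  # henries
wc = R / L  # cutoff frequency, rad/s

def gain(omega):
    """|I/V| = 1/sqrt(R^2 + (omega*L)^2), from Eq. (4). Units: A/V."""
    return 1.0 / math.sqrt(R**2 + (omega * L)**2)

def phase_deg(omega):
    """phi - theta = -atan(omega*L/R), from Eq. (5), in degrees."""
    return -math.degrees(math.atan(omega * L / R))

for w in (wc / 10, wc, 10 * wc):
    print(f"w = {w:9.0f} rad/s: gain = {gain(w):.6f} A/V, phase = {phase_deg(w):.1f} deg")
```

At the corner frequency the phase difference is exactly −45°, and the gain is reduced by a factor of √2 relative to its low-frequency value of 1/R, which is a useful check on your measured data.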
Real Analog Chapter 10: Lab Projects

10.6.2: Passive RC Circuit Response

In this lab assignment, we will be concerned with the steady-state response of electrical circuits to sinusoidal inputs. Figure 1(a) shows a block-diagram representation of the system. The input and output signals both have the same frequency, but the two signals can have different amplitudes and phase angles. The analysis of the circuit of Fig. 1(a) can be simplified by representing the sinusoidal signals as phasors. The phasors provide the amplitude and phase information of sinusoidal signals. By comparing the phasors representing the input and output signals, the effect of the circuit can be represented as an amplitude gain between the output and input signals and a phase difference between the output and input signals, as shown in Fig. 1(b).

[Block diagrams: (a) input u(t) = A·cos(ωt + θ) → Circuit → output y(t) = B·cos(ωt + ϕ); (b) input U = A∠θ → gain B/A, phase shift ϕ − θ → output Y = B∠ϕ]

In this lab assignment, we will measure the gain and phase responses of a passive RC circuit and compare these measurements with expectations based on analysis. These measurements will be used to estimate the impedance of the overall RC circuit.

Before beginning this lab, you should be able to:
 Represent sinusoidal signals in phasor form
 Determine electrical circuit steady-state sinusoidal responses in phasor form

After completing this lab, you should be able to:
 Measure phasor form of circuit steady-state sinusoidal response
 Measure input impedance of electrical circuit

This lab exercise requires:
 Analog Discovery module
 Digilent Analog Parts Kit
 Digital multimeter (optional)
Symbol Key:
Demonstrate circuit operation to teaching assistant; teaching assistant should initial lab notebook and grade sheet, indicating that circuit operation is acceptable.
Analysis; include principal results of analysis in laboratory report.
Numerical simulation (using PSPICE or MATLAB as indicated); include results of MATLAB numerical analysis and/or simulation in laboratory report.
Record data in your lab notebook.

General Discussion:

In this lab assignment, we will determine the input impedance of the passive RC circuit shown in Fig. 1. The input impedance of a circuit is defined as the ratio of input voltage to input current. Thus, for the circuit of Fig. 1, the input impedance is represented in phasor form as:

ZIN = V̄IN/ĪIN   Eq. 1

where V̄IN is the phasor representation of the circuit input voltage and ĪIN is the phasor representation of the input current to the circuit. The cutoff frequency for the circuit of Fig. 1 is:

ωc = 1/(RC)   Eq. 2

[Figure 1. Passive RC circuit: source vIN(t) drives R and C, with iIN(t) the input current.]

Pre-lab:

a. Determine an expression for the input impedance of the circuit of Fig. 1 in terms of R, C, and ω.

b. If R = 100 Ω and C = 1 µF, determine the cutoff frequency for the circuit. Also determine the input impedance for frequencies of:

• ω = ωc/10 (low frequency input)
• ω = 10ωc (high frequency input)
• ω = ωc (corner frequency input)

c. Check your low and high frequency results in part (b) relative to your expectations based on the capacitor’s low and high frequency behavior.

Lab Procedures:

Construct the circuit of Fig. 1, using R = 100 Ω and C = 1 µF.

a.
Measure the input voltage amplitude, the input current amplitude, and the time delay between the input voltage and the input current for the following frequencies:
- ω ≈ ω_c/10 (low frequency input)
- ω ≈ 10·ω_c (high frequency input)
- ω ≈ ω_c (corner frequency input)

Use your data to calculate the input impedance (magnitude and phase) of the circuit for the above frequencies. Create a table providing the measured data and the calculated input impedances at the above frequencies.

b. Compare your measured results with your expectations based on the analysis you did in the pre-lab.
c. Demonstrate operation of your circuit to the TA and have them initial the appropriate page(s) of your lab notebook and the lab worksheet.

Hint: The process to perform the above lab procedures is comparable to the process performed in lab assignment 10.6.1. Be sure to record all necessary data and any calculations you perform to obtain your results in your lab notebook. Appendix A of this lab assignment provides tips relative to gain and phase measurement of sinusoidal signals.

Appendix A: Measuring Gain and Phase

The gain of a system at a particular frequency is the ratio of the magnitude of the output voltage to the magnitude of the input voltage at that frequency, so that:

Gain = ΔV_out / ΔV_in

where ΔV_out and ΔV_in can be measured from the sinusoidal input and output voltages as shown in the figure below.

[Figure: Input voltage V_in and output voltage V_out vs. time, with the amplitudes ΔV_in and ΔV_out marked.]

The phase of a system at a particular frequency is a measure of the time shift between the output and input voltage at that frequency, so that:

Phase = (ΔT / T) × 360°

where ΔT and T can be measured from the sinusoidal input and output voltages as shown in the figure below.
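As a quick sketch of how the cursor measurements convert to gain and phase with the two Appendix A formulas (the numeric readings below are hypothetical, not lab data):

```python
def gain_and_phase(dV_in, dV_out, dT, T):
    """Convert oscilloscope cursor measurements to gain and phase.

    dV_in, dV_out : measured input/output voltage amplitudes
    dT            : time shift between output and input (s)
    T             : signal period (s)
    """
    gain = dV_out / dV_in            # Gain = dV_out / dV_in
    phase_deg = (dT / T) * 360.0     # Phase = (dT / T) * 360 degrees
    return gain, phase_deg

# Hypothetical readings: 2.0 V in, 1.42 V out, 0.125 ms shift at 1 kHz
g, p = gain_and_phase(2.0, 1.42, 0.125e-3, 1.0e-3)
print(f"gain = {g:.3f}, phase = {p:.1f} deg")  # gain = 0.710, phase = 45.0 deg
```

The same two-line conversion applies to every frequency row in the worksheet table.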
[Figure: Input voltage V_in and output voltage V_out vs. time, with the period T and the time shift ΔT between the waveforms marked.]

Real Analog Chapter 10: Lab Worksheets

10.6.2: Passive RC Circuit Response (30 points total)

In the space below, provide your expression for the input impedance of the circuit. (3 pts)

In the space below, provide the cutoff frequency calculated in part (b) of the pre-lab and the circuit's input impedance at the three specified frequencies. (3 pts)

In the space below, comment on your results in part 2 above, relative to your expectations based on the capacitor's behavior at low and high frequencies. (2 pts)

In the space below, provide a table listing the measured input and output voltage amplitudes, the time difference between the input voltage and input current, and the calculated input impedances at the three frequencies of interest. (10 pts)

In the space below, provide a brief comparison between your measured results and your expectations from the pre-lab. Include a percent difference between the expected and measured impedances at the three frequencies of interest. (7 pts)

DEMO: Have a teaching assistant initial this sheet, indicating that they have observed your system's operation. (5 pts total)

TA Initials: _
Real Analog Chapter 10: Lab Projects

10.6.3: Inverting Voltage Amplifier

In this lab assignment, we will be concerned with the steady-state response of electrical circuits to sinusoidal inputs. Figure 1(a) shows a block-diagram representation of the system. The input and output signals both have the same frequency, but the two signals can have different amplitudes and phase angles. The analysis of the circuit of Fig. 1(a) can be simplified by representing the sinusoidal signals as phasors. The phasors provide the amplitude and phase information of sinusoidal signals. By comparing the phasors representing the input and output signals, the effect of the circuit can be represented as an amplitude gain between the output and input signals and a phase difference between the output and input signals, as shown in Fig. 1(b).

[Figure 1: (a) Physical circuit: input u(t) = A·cos(ωt + θ), output y(t) = B·cos(ωt + φ). (b) Phasor representation of circuit input-output relationship: input U = A∠θ, output Y = B∠φ; gain B/A, phase difference φ − θ.]

In this lab assignment, we will measure the gain and phase responses of an inverting voltage amplifier circuit and compare these measurements with expectations based on analysis.

Before beginning this lab, you should be able to:
- Represent sinusoidal signals in phasor form
- Represent electrical circuit steady-state sinusoidal responses in phasor form
- Analyze operational amplifier-based circuits

After completing this lab, you should be able to:
- Measure phasor form of circuit steady-state sinusoidal response
- Measure input impedance of electrical circuit

This lab exercise requires:
- Analog Discovery module
- Digilent Analog Parts Kit
- Digital multimeter (optional)

Symbol Key:

- Demonstrate circuit operation to teaching assistant; teaching assistant should initial lab notebook and grade sheet, indicating that circuit operation is acceptable.
- Analysis; include principal results of analysis in laboratory report.
- Numerical simulation (using PSPICE or MATLAB as indicated); include results of MATLAB numerical analysis and/or simulation in laboratory report.
- Record data in your lab notebook.

General Discussion:

In this lab assignment, we will measure the frequency domain input-output relation governing the inverting voltage amplifier shown in Fig. 1. The frequency domain input-output relation for the circuit of Fig. 1 is:

V_OUT / V_IN = −1 / (jωRC + 1)    (Eq. 1)

so that the amplitude gain between the output and input is:

|V_OUT / V_IN| = 1 / √((ωRC)² + 1)    (Eq. 2)

and the phase difference between the output and input is:

∠V_OUT − ∠V_IN = 180° − tan⁻¹(ωRC)    (Eq. 3)

[Figure 1: Inverting voltage amplifier circuit.]

Pre-lab:

a. Show that equation (1) is the input-output relation for the circuit of Fig. 1. Also verify equations (2) and (3) above.
b. Determine the cutoff frequency of the circuit if R = 10 kΩ and C = 10 nF. Also determine the amplitude gain and the phase difference between the circuit's input and output voltages for the circuit.[1] Also determine the input impedance for frequencies of:
   - ω = ω_c/10 (low frequency input)
   - ω = 10·ω_c (high frequency input)
   - ω = ω_c (corner frequency input)
c. Check your low and high frequency results in part (b) relative to your expectations based on the capacitor's low and high frequency behavior.

Lab Procedures: Construct the circuit of Fig. 2, using R = 10 kΩ and C = 10 nF.

a. Use the waveform generator to apply a sinusoidal signal with 2 V amplitude and 0 V offset to the circuit. Set up the oscilloscope to measure both the input and output voltages. Measure the

[1] Be sure to use units of radians/second for ω in equations (2) and (3)!
amplitudes of the input and output voltage signal, and the time delay between the input and output signal for inputs with the following frequencies:
- 100 Hz
- 1 kHz
- 5 kHz

b. Record an image of the oscilloscope window, showing the signals V_IN(t) and V_OUT(t), for each of the above frequencies.
c. Use your measurements to calculate the amplitude gain and phase difference of the circuit for the above three frequencies. Compare your measured results with your expectations based on the analysis you did in the pre-lab.
d. Demonstrate operation of your circuit to the TA and have them initial the appropriate page(s) of your lab notebook and the lab worksheet.

Hint: Be sure to record all necessary data and any calculations you perform to obtain your results in your lab notebook. Appendix A of this lab assignment provides tips relative to gain and phase measurement of sinusoidal signals.

Appendix A: Measuring Gain and Phase

The gain of a system at a particular frequency is the ratio of the magnitude of the output voltage to the magnitude of the input voltage at that frequency, so that:

Gain = ΔV_out / ΔV_in

where ΔV_out and ΔV_in can be measured from the sinusoidal input and output voltages as shown in the figure below.

[Figure: Input voltage V_in and output voltage V_out vs. time, with the amplitudes ΔV_in and ΔV_out marked.]

The phase of a system at a particular frequency is a measure of the time shift between the output and input voltage at that frequency, so that:

Phase = (ΔT / T) × 360°

where ΔT and T can be measured from the sinusoidal input and output voltages as shown in the figure below.
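The pre-lab predictions for the three test frequencies can be sketched by evaluating the lab's transfer function H(jω) = −1/(1 + jωRC) directly, with R = 10 kΩ and C = 10 nF as in the pre-lab:

```python
import cmath, math

def H_inverting(f, R=10e3, C=10e-9):
    """Inverting amplifier response: H(jw) = -1 / (1 + jwRC), f in Hz."""
    w = 2 * math.pi * f
    return -1.0 / (1 + 1j * w * R * C)

for f in (100.0, 1e3, 5e3):
    H = H_inverting(f)
    print(f"f = {f:6.0f} Hz: gain = {abs(H):.3f}, "
          f"phase = {math.degrees(cmath.phase(H)):6.1f} deg")
```

For instance, at 1 kHz this gives a gain of about 0.85 and a phase near 148°, values the measured ΔV and ΔT readings should approximate.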
[Figure: Input voltage V_in and output voltage V_out vs. time, with the period T and the time shift ΔT marked.]

Real Analog Chapter 10: Lab Worksheets

10.6.3: Inverting Voltage Amplifier (45 points total)

Attach, to this worksheet, your derivation of equations (1), (2), and (3). (3 pts)

In the space below, provide the cutoff frequency calculated in part (b) of the pre-lab. (3 pts)

In the space below, provide the gain, phase, and impedance for the circuit at low, high, and corner frequencies as determined from part (c) of your pre-lab analysis. (6 pts)

Comment below on the capacitor physical behavior at low and high frequencies vs. expressions provided in (3) above. (2 pts)

Attach to this worksheet images of the oscilloscope window, showing the input and output voltages as functions of time for each of the three specified frequencies. (6 pts, 2 pts each image)

In the space below, tabulate the amplitudes of the input and output voltages, the time difference between the input and output voltages, the gain and phase of the circuit, and the circuit's input impedance, for each of the three frequencies of interest in part (a) of the lab procedures. (12 pts)

In the space below, comment on the differences between the measured and expected gain and phase of the circuit at each of the frequencies in part 6 above (e.g. compare your expressions in part 3 above with the measured data). (8 pts)

DEMO: Have a teaching assistant initial this sheet, indicating that they have observed your system's operation.
(5 pts total)

TA Initials: _

Real Analog Chapter 10: Lab Projects

10.6.4: Non-inverting Voltage Amplifier

In this lab assignment, we will be concerned with the steady-state response of electrical circuits to sinusoidal inputs. Figure 1(a) shows a block-diagram representation of the system. The input and output signals both have the same frequency, but the two signals can have different amplitudes and phase angles. The analysis of the circuit of Fig. 1(a) can be simplified by representing the sinusoidal signals as phasors. The phasors provide the amplitude and phase information of sinusoidal signals. By comparing the phasors representing the input and output signals, the effect of the circuit can be represented as an amplitude gain between the output and input signals and a phase difference between the output and input signals, as shown in Fig. 1(b).

[Figure 1: (a) Physical circuit: input u(t) = A·cos(ωt + θ), output y(t) = B·cos(ωt + φ). (b) Phasor representation of circuit input-output relationship: input U = A∠θ, output Y = B∠φ; gain B/A, phase difference φ − θ.]

In this lab assignment, we will measure the gain and phase responses of a non-inverting voltage amplifier circuit and compare these measurements with expectations based on analysis.
Before beginning this lab, you should be able to:
- Represent sinusoidal signals in phasor form
- Represent electrical circuit steady-state sinusoidal responses in phasor form
- Analyze operational amplifier-based circuits

After completing this lab, you should be able to:
- Measure phasor form of circuit steady-state sinusoidal response
- Measure input impedance of electrical circuit

This lab exercise requires:
- Analog Discovery module
- Digilent Analog Parts Kit
- Digital multimeter (optional)

Symbol Key:

- Demonstrate circuit operation to teaching assistant; teaching assistant should initial lab notebook and grade sheet, indicating that circuit operation is acceptable.
- Analysis; include principal results of analysis in laboratory report.
- Numerical simulation (using PSPICE or MATLAB as indicated); include results of MATLAB numerical analysis and/or simulation in laboratory report.
- Record data in your lab notebook.

General Discussion:

In this lab assignment, we will measure the frequency domain input-output relation governing the voltage amplifier shown in Fig. 1. The frequency domain input-output relation for the circuit of Fig. 1 is:

V_OUT / V_IN = [(R1 + R2) / R1] · [(1/(R3·C)) / (jω + 1/(R3·C))]    (Eq. 1)

so that the amplitude gain and phase difference between the output and input are:

|V_OUT / V_IN| = 2·(1/(RC)) / √(ω² + (1/(RC))²)    (Eq. 2)

∠V_OUT − ∠V_IN = −tan⁻¹(ωRC)    (Eq. 3)

[Figure 1: Non-inverting voltage amplifier circuit.]

Pre-lab:

a. Show that equation (1) is the input-output relation for the circuit of Fig. 1. Also verify equations (2) and (3) above.
b. If R = 10 kΩ and C = 10 nF, determine the amplitude gain and the phase difference between the circuit's input and output voltages for input frequencies of 100 Hz, 5 kHz, and 10 kHz.[2]
c.
Check your low and high frequency results in part (b) relative to your expectations based on the capacitor's low and high frequency behavior.

Lab Procedures: Construct the circuit of Fig. 2, using R = 10 kΩ and C = 10 nF.

a. Use the waveform generator to apply a sinusoidal signal with 1 V amplitude and 0 V offset to the circuit. Set up the oscilloscope to measure both the input and output voltages. Measure the amplitudes of the input and output voltage signal, and the time delay between the input and output signal for inputs with the following frequencies:
- 100 Hz
- 5 kHz
- 10 kHz

[2] Be sure to use units of radians/second for ω when evaluating equations (2) and (3)!

b. Record an image of the oscilloscope window, showing the signals V_IN(t) and V_OUT(t), for each of the above frequencies.
c. Use your measurements to calculate the amplitude gain and phase difference of the circuit for the above three frequencies. Compare your measured results with your expectations based on the analysis you did in the pre-lab.
d. Demonstrate operation of your circuit to the TA and have them initial the appropriate page(s) of your lab notebook and the lab worksheet.

Hint: Be sure to record all necessary data and any calculations you perform to obtain your results in your lab notebook. Appendix A of this lab assignment provides tips relative to gain and phase measurement of sinusoidal signals.

Appendix A: Measuring Gain and Phase

The gain of a system at a particular frequency is the ratio of the magnitude of the output voltage to the magnitude of the input voltage at that frequency, so that:

Gain = ΔV_out / ΔV_in

where ΔV_out and ΔV_in can be measured from the sinusoidal input and output voltages as shown in the figure below.
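The pre-lab gains and phases of part (b) can be sketched numerically from Eq. 1 with equal resistors, an assumption consistent with Eqs. 2 and 3, which use a single R. That reduces the response to H(jω) = 2/(1 + jωRC), with R = 10 kΩ and C = 10 nF:

```python
import cmath, math

def H_noninverting(f, R=10e3, C=10e-9):
    """Non-inverting amplifier response with equal resistors:
    H(jw) = 2 / (1 + jwRC), f in Hz."""
    w = 2 * math.pi * f
    return 2.0 / (1 + 1j * w * R * C)

for f in (100.0, 5e3, 10e3):
    H = H_noninverting(f)
    print(f"f = {f:7.0f} Hz: gain = {abs(H):.3f}, "
          f"phase = {math.degrees(cmath.phase(H)):6.1f} deg")
```

At 5 kHz, for example, ωRC = π, so the gain is about 0.61 and the phase about −72°.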
[Figure: Input voltage V_in and output voltage V_out vs. time, with the amplitudes ΔV_in and ΔV_out marked.]

The phase of a system at a particular frequency is a measure of the time shift between the output and input voltage at that frequency, so that:

Phase = (ΔT / T) × 360°

where ΔT and T can be measured from the sinusoidal input and output voltages as shown in the figure below.

[Figure: Input voltage V_in and output voltage V_out vs. time, with the period T and the time shift ΔT marked.]

Real Analog Chapter 10: Lab Worksheets

10.6.4: Non-inverting Voltage Amplifier (40 points total)

Attach, to this worksheet, your derivation of equations (1), (2), and (3). (3 pts)

In the space below, provide the calculated gain and phase for the circuit at frequencies of 100 Hz, 5 kHz, and 10 kHz. (Part (b) of the pre-lab.) (4 pts)

Comment below on the capacitor physical behavior at low and high frequencies vs. expressions provided in (2) above. (2 pts)

Attach to this worksheet images of the oscilloscope window, showing the input and output voltages as functions of time for each of the three specified frequencies. (6 pts, 2 pts each image)

In the space below, tabulate the amplitudes of the input and output voltages, the time difference between the input and output voltages, and the gain and phase of the circuit for each of the three frequencies of interest in part (a) of the lab procedures.
(12 pts)

In the space below, comment on the differences between the measured and expected gain and phase of the circuit at each of the frequencies in part 6 above (e.g. compare your expressions in part 3 above with the measured data). (8 pts)

DEMO: Have a teaching assistant initial this sheet, indicating that they have observed your system's operation. (5 pts total)

TA Initials: _

Real Analog Chapter 10: Homework

10.1 A circuit is described by the differential equation:

2·di(t)/dt + 10·i(t) = 10·v(t)

If v(t) = 3cos(5t), determine the steady-state response of i(t).

10.2 The differential equation governing a circuit is:

3·di(t)/dt + 6·i(t) = v_s(t)

where v_s(t) is the input and i(t) is the output. Determine the steady-state response of the circuit to an input v_s(t) = 5cos(4t + 30°).

10.3 For the circuit below, find:
a. The equivalent impedance seen by the source.
b. i_C(t), t → ∞.

[Circuit figure: 4cos(3t + 30°) source, 1/3 H inductor, 4 Ω resistor, 1/6 F capacitor; i_C(t) is the capacitor current.]

10.4 For the circuit below, find v(t), t → ∞.

[Circuit figure: 10cos(4t) source, two 1/2 H inductors, two 4 Ω resistors, 1/8 F capacitor; v(t) as marked.]

10.5 For the circuit shown, find:
a. The equivalent impedance seen by the source.
b. The steady-state response of the voltage across the resistor, v_R(t).

[Circuit figure: 20cos(10t) source, 0.1 H inductor, 2 Ω resistor, 1/30 F capacitor; v_R(t) across the resistor.]

10.6 For the circuit shown, find:
a. The equivalent impedance seen by the source.
b. The steady-state current delivered by the source.

[Circuit figure: 2cos(3t) source, 1/3 H inductor, 1/9 F capacitor, 3 Ω resistor, 1 H inductor.]

10.7 For the circuit shown, determine:
a. The equivalent impedance seen by the source.
b. The steady-state current out of the source, i_s(t → ∞).

[Circuit figure: 3cos(2t) V source, 6 Ω resistor, 1 H inductor, 3 Ω resistor, 1/6 F capacitor; i_s(t) is the source current.]

10.8 For the circuit shown, find:
a.
The equivalent impedance seen by the source.
b. i_S(t), t → ∞.

[Circuit figure: 5cos(4t + 30°) source, 2 Ω resistor, 4 Ω resistor, 1/16 F capacitor, 1 H inductor; i_s(t) is the source current.]

10.9 For the circuit shown, find:
a. The equivalent impedance seen by the source.
b. v_R(t → ∞).

[Circuit figure: 2cos(4t + 25°) source, 4 Ω resistor, 2 Ω resistor, 0.5 H inductor, 1/8 F capacitor; v_R(t) across a resistor.]

10.10 For the circuit shown, find i_s(t), t > 0.

[Circuit figure: 5cos(2t − 30°) V source, 2 Ω resistor, 1 H inductor, 0.5 F capacitor, 2 Ω resistor; i_s(t) is the source current.]

10.11 For the circuit shown, find:
a. The equivalent impedance seen by the source.
b. The steady-state response of the voltage across the resistor, v_R(t).

[Circuit figure: 20cos(10t) source, 0.1 H inductor, 2 Ω resistor, 1/30 F capacitor; v_R(t) across the resistor.]
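As an illustration of the phasor method these problems call for, problem 10.1 can be checked numerically: substituting d/dt → jω (ω = 5 rad/s) turns 2·di/dt + 10·i = 10·v into (2jω + 10)·I = 10·V.

```python
import cmath, math

# HW 10.1: 2 di/dt + 10 i(t) = 10 v(t), with v(t) = 3cos(5t).
w = 5.0
V = 3.0 + 0j                    # phasor of 3cos(5t): amplitude 3, angle 0
I = 10 * V / (2j * w + 10)      # solve (2jw + 10) I = 10 V
amp = abs(I)
ang = math.degrees(cmath.phase(I))
print(f"i_ss(t) = {amp:.3f} cos(5t {ang:+.1f} deg)")  # 2.121 cos(5t -45.0 deg)
```

Converting the phasor back to the time domain gives the steady-state response i_ss(t) ≈ 2.12·cos(5t − 45°); the same substitution handles the other differential-equation problems.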
https://www.superprof.co.uk/resources/academic/maths/linear-algebra/determinants/3x3-determinant.html
3x3 Determinant

Chapters: Introduction; Determinant of a 3x3 Matrix; Example 1; Example 2; Example 3; Example 4; Rule of Sarrus
Introduction

We can calculate a special number, known as the determinant, from any square matrix. A square matrix is a matrix that has an equal number of rows and columns. It can be of any order: for instance, a square matrix of order 2x2 has two rows and two columns, and a square matrix of order 3x3 has three rows and three columns. There are also square matrices of higher orders, for example 4x4, 5x5 and so on.

We use vertical bars | | to denote the determinant: if we are given a matrix B, then its determinant is denoted as |B|. The determinant of a matrix which has an unequal number of rows and columns is not defined. The determinants of matrices are useful in finding the inverse of a matrix and in solving systems of linear equations. In this article, we will discuss how to find the determinant of a 3x3 matrix.

Determinant of a 3x3 Matrix

To find the determinant of a 3x3 matrix, we break it down into smaller components, namely the determinants of 2x2 matrices, so that it is easier to calculate. Consider the following 3x3 matrix:

The determinant of the matrix B will be calculated as:

You can observe that:

The elements in the first row, a, b and c, are each multiplied by the determinant of a corresponding 2x2 matrix. The element a is multiplied by the determinant of the 2x2 matrix obtained after eliminating the row and column in which the element a is present. The same goes for the other elements b and c.
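Since the matrices above appear as images, here is a sketch of the first-row expansion just described, applied to a hypothetical example matrix:

```python
def det2(m):
    """Determinant of a 2x2 matrix [[a, b], [c, d]] = a*d - b*c."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def det3(m):
    """Expansion along the first row: each first-row element times the
    2x2 determinant left after deleting its row and column, with
    alternating signs: a(ei - fh) - b(di - fg) + c(dh - eg)."""
    a, b, c = m[0]
    return (a * det2([m[1][1:], m[2][1:]])
            - b * det2([[m[1][0], m[1][2]], [m[2][0], m[2][2]]])
            + c * det2([m[1][:2], m[2][:2]]))

# Hypothetical example matrix (not one of the article's examples):
M = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
print(det3(M))  # -3
```

Each `det2` call is one of the three 2x2 minors the worked examples compute by hand.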
Now, we will solve some examples in which we compute the determinant of a 3x3 matrix by hand.

Example 1

Calculate the determinant of the following matrix.

Solution

We will use the following formula to calculate the determinant of the above matrix:

To decompose this matrix into smaller 2x2 matrices, we look at the first row and multiply each element by the determinant of the corresponding 2x2 matrix. We know that the formula for finding the determinant of the following 2x2 matrix is:

We will employ this formula to calculate the determinants of the smaller 2x2 matrices.

Hence, the determinant of the matrix A is -67.

Example 2

Calculate the determinant of the following matrix.

Solution

The following formula will be employed to calculate the determinant of the 3x3 matrix:

To break this matrix into smaller 2x2 matrices, we look at the first row and multiply each element by the determinant of the corresponding 2x2 matrix. Each 2x2 matrix is obtained after eliminating the row and column in which the element is present. We know that the formula for finding the determinant of the following 2x2 matrix is:

We will employ this formula to calculate the determinants of the smaller 2x2 matrices.

Hence, the determinant of the matrix B is 57.

Example 3

Calculate the determinant of the following matrix.

Solution

The following formula will be employed to calculate the determinant of the above 3x3 matrix:

We will break this matrix into smaller 2x2 matrices by looking at the first row and multiplying each element by the determinant of the corresponding 2x2 matrix. Each 2x2 matrix is obtained after eliminating the row and column in which the elements 9, 4 and 7 are present. We know that the formula for finding the determinant of the following 2x2 matrix is:

We will employ this formula to calculate the determinants of the smaller 2x2 matrices.

Hence, the determinant of the matrix C is -151.

Example 4

Calculate the determinant of the following matrix.
Solution

We will put the values from the above matrix into the following formula to compute the determinant:

We will break this matrix into smaller 2x2 matrices by looking at the first row and multiplying each element by the determinant of the corresponding 2x2 matrix. Each 2x2 matrix is obtained after eliminating the row and column in which the elements 3, 4 and 0 are present. We know that the formula for finding the determinant of the following 2x2 matrix is:

We will employ this formula to calculate the determinants of the smaller 2x2 matrices.

Hence, the determinant of the matrix D is 144.

Rule of Sarrus

Sarrus' rule is also known as the basketweave method. It is another method to calculate the determinant of a 3x3 matrix. The terms with a + sign are formed by the elements of the principal diagonal and those of the parallel diagonals with their corresponding opposite vertex. The terms with a − sign are formed by the elements of the secondary diagonal and those of the parallel diagonals with their corresponding opposite vertex.

Example

Calculate the determinant of the following 3x3 matrix using Sarrus' rule.

Solution

Write the first two columns outside the determinant of the matrix and draw diagonal lines like this:

Now, we will multiply all the elements in the diagonals with each other. You can see that we have constructed lines to make it more clear. The numbers obtained by multiplying the diagonals of the orange lines will be added, and the numbers obtained by multiplying the diagonals represented by the blue lines will be subtracted.
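A compact sketch of Sarrus' rule on a hypothetical matrix (the article's own example matrix is an image): for [[a, b, c], [d, e, f], [g, h, i]], the three downward-diagonal products are added and the three upward-diagonal products are subtracted:

```python
def det3_sarrus(m):
    """Rule of Sarrus: sum of the three 'downward' diagonal products
    minus the sum of the three 'upward' diagonal products."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return (a*e*i + b*f*g + c*d*h) - (c*e*g + a*f*h + b*d*i)

# Hypothetical example matrix (not the article's):
M = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
print(det3_sarrus(M))  # -3
```

The result agrees with expansion along the first row, as it must; Sarrus' rule is simply a reorganization of the same six products.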
https://www.youtube.com/watch?v=nYoOoLf-rUo
Venn Diagrams (2 of 2: Constructing from group sizes)
Eddie Woo
Posted: 18 Oct 2019
More resources available at www.misterwootube.com

Transcript:

i'd like to consider with you two different groups of people and uh what the information we know about them tells us about those groups of people so here's my first group of people eyes up have a look at these numbers with me right i want you to imagine a hypothetical group of 28 students okay and in that hypothetical group of 28 students 21 of them have black hair six of them have brown hair and it just so happens that exactly one of them has red hair okay

now if you were to represent this hypothetical group of students in a venn diagram what would the venn diagram look like hmm what do you think hearing mutually exclusive circles i'm gonna have some mutually exclusive circles we've already established these are circles that don't overlap how many um how many circles am i going to need how many hair colors do i have i have i have three of them right like one of them the outside that's not that so if you wanted i could say just one on the outside which is neither these but i've kind of like named this color so i want to put it i want to place it somewhere so i'm going to say for this one here i'm going to have my three okay i've got my brown i've got black and i've got red okay

now the important part that i was trying to get at is these are mutually exclusive and one of the ways you can see is 6 plus 21 plus 1 gives you the 28 it gives you the total does that make sense it exactly lines up

now this is the contrast i want to draw have a look at this group of people okay i tell you from the outset just like i did here i tell you how many people there are in the group there are 30 of them and then i tell you they went shopping and these were the things that they bought either milk or bread or neither i want you to look closely at the numbers this time what do
you notice about them that's different jessica [Music] yeah so you're 20 to 13 plus you one which gives you what 34 which is more than the number of customers i'm supposed to have what does this imply it means that luis some people bought both milk so some people must have got both right because you don't have enough people to buy just one of each one you've run out of people so there have to be some people who did both and that's how we account for the numbers so in your book now maybe leave space for you to complete that question i'd like you to draw for me a new venn diagram i'm drawing it with an overlap because you told me there have to be some people who bought bro both milk and bread milk and bread and we're going to work out how many okay here's my venn diagram you can see my universal box on the outside there's a single person who bought neither right and i'm going to start with them because that's the easiest number to place on the diagram where does that one person go on my diagram on the outside of the circles but inside the box yeah because they're in this group of 30 that i'm considering okay now stay with me how many people will be inside the circles in total now that you know that one's on the outside they're going to be 29 very good so i'm like counting down in my head this is how many people i've got left to put into the circles okay now this point see the number 20 and the number 13 i can't write them directly anywhere right because for example if i wrote 20 here right then what should be the number that goes here think carefully if i've got 20 here then how many leftover people milk to put in here the answer is i don't have any left right i've accounted for all 20 of them and we know that can't be the case you guys told me someone had to buy both right so i can't do this here's what i'm going to do instead maybe if you've got like another color or a pencil or something like that instead i'm going to draw like a little curly brace down here this 
is everyone who's bought milk there's 20 of them right so if i take those 20 you told me i had 29 people left who i could put inside the circles so i know 20 of them will be over here so if i take that 29 remove that 20 how many do i have left i have 9 right 9 have to be here does that make sense you following i took the 29 that had to be inside the circles i know 20 going to be over here so therefore i can infer whoopsie daisy that there must be 29 take away that 20 over there don't move we can fill this in now can't we how many people are left in the middle i've counted nine i have to add four to get to 13 and now i can finish by saying 16 plus 4 gives me the 20. does that make sense
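The counting argument in the lesson is inclusion-exclusion for two sets. It can be sketched in a few lines of Python (the function name and layout are mine; the numbers are the hypothetical ones from the lesson):

```python
# Inclusion-exclusion for a two-set Venn diagram, as in the milk/bread example:
# 30 customers, 20 bought milk, 13 bought bread, 1 bought neither.
def venn_counts(total, a, b, neither):
    """Return (only_a, both, only_b) for a two-set Venn diagram."""
    inside = total - neither   # people who must be inside the two circles
    both = a + b - inside      # the overlap forced by the over-count
    return a - both, both, b - both

only_milk, both, only_bread = venn_counts(total=30, a=20, b=13, neither=1)
print(only_milk, both, only_bread)  # 16 4 9
```

The over-count (20 + 13 + 1 = 34 versus 30 people) is exactly the size of the overlap, which is why the function can recover all three regions.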
12259
https://uu.diva-portal.org/smash/get/diva2:1950432/FULLTEXT01.pdf
Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology 2535 Finding the features of water and oxygen on metal oxide surfaces Structure, stability and spectroscopic signatures ANDREAS RÖCKERT ACTA UNIVERSITATIS UPSALIENSIS 2025 ISSN 1651-6214 ISBN 978-91-513-2471-5 urn:nbn:se:uu:diva-554013 Dissertation presented at Uppsala University to be publicly examined in room 2001, Ångströmslaboratoriet, Regementsvägen 10, Uppsala, Thursday, 5 June 2025 at 13:15 for the degree of Doctor of Philosophy. The examination will be conducted in English. Faculty examiner: Professor Christof Wöll (Karlsruher Institut für Technologie, Germany). Abstract Röckert, A. 2025. Finding the features of water and oxygen on metal oxide surfaces. Structure, stability and spectroscopic signatures. Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology 2535. 60 pp. Uppsala: Acta Universitatis Upsaliensis. ISBN 978-91-513-2471-5. Metal oxides are utilized in a broad spectrum of surface-chemical processes due to the material group’s valuable surface chemistry. Interactions between molecules and metal-oxide surfaces are crucial in natural phenomena, such as cloud formation, as well as in diverse technological applications, ranging from chemical synthesis to pollutant degradation. Cerium oxide (CeO2, “ceria”) shows particular strength in both high- and low-temperature oxidative catalysis, under gas-phase and aqueous-phase conditions. Experimental probes of ceria surfaces, such as infrared spectroscopy and atomic force microscopy, provide insights into adsorbate structure and bonding but lack atomic-scale resolution; thus, they often require complementary simulations for a comprehensive interpretation. In this thesis, density-functional theory (DFT) calculations bridge that gap by connecting spectroscopic signatures of water and dioxygen on ceria to specific atomic configurations.
The water–ceria interaction was explored through a series of studies. Thermodynamic analysis indicates that water preferentially adsorbs as hydrogen-bonded chains on the ceria surface, which are allowed to form due to partial dissociation into surface hydroxides that allow for strong water–hydroxide hydrogen bonds. The computed vibrational spectra are correlated with distinct hydrogen-bond motifs among the surface-adsorbed water, showing that certain motifs produce notably red-shifted frequencies. Compared to bulk water, these surface-mediated hydrogen bonds adopt more tilted geometries while maintaining strong interactions and exhibiting pronounced vibrational shifts. I further analyze how vibrational frequencies depend on structural descriptors, such as bond lengths, local electric fields, and high-dimensional embeddings of the surrounding atomic environment, to quantify the amount of structural detail encoded in each spectral feature, which provides insight into the cause of the vibrational shift. Next, DFT simulations of O2 adsorption, coupled with infrared reflection–absorption spectroscopy (IRRAS) simulation, reveal that charged O2X– species lying flat on CeO2 exhibit large transition dipoles, contrary to existing interpretations. I demonstrate that these dipoles arise from surface-to-molecule charge transfer perpendicular to the O–O bond, induced by the molecular vibration of O2X–. This insight suggests new heuristics for interpreting oxide-surface spectra and underscores the necessity of explicit atomistic modeling for accurate spectral assignments. Overall, this thesis presents a systematic framework that integrates DFT calculations with thermodynamic, vibrational, and statistical analyses to bridge the gap between atomic structure and experimental observables.
The simulation strategies, analysis software, and spectroscopic “rules of thumb” developed here advance the understanding of water and oxygen interactions on ceria and, by extension, on other metal-oxide surfaces, offering generally applicable methods for studying molecule–metal oxide interfaces. Keywords: density functional theory, hydrogen bonding, surface structure, infrared spectroscopy, infrared reflection absorption spectroscopy, molecule adsorption, surface, ceria Andreas Röckert, Department of Chemistry - Ångström, Structural Chemistry, Box 538, Uppsala University, SE-751 21 Uppsala, Sweden. © Andreas Röckert 2025 ISSN 1651-6214 ISBN 978-91-513-2471-5 URN urn:nbn:se:uu:diva-554013 I dedicate this thesis to Rebecka, Tage, and all the incredible people in my life who have been an invaluable source of support, empowering me to persevere and successfully complete this academic endeavor. List of papers This thesis is based on the following papers, which are referred to in the text by their Roman numerals. I The water/ceria(111) interface: Computational overview and new structures A. Röckert, J. Kullgren, P. Broqvist, S. Alwan, and K. Hermansson, The Journal of Chemical Physics, 152, 104709 (2020) II Water on ceria{111}: Comparison between 23 experimental vibrational studies in the literature and new modeling A. Röckert, J. Kullgren, D. Sethio, L. Agosta and K. Hermansson, The Journal of Chemical Physics, 159, 044705 (2023) III Predicting Frequency from the External Chemical Environment: OH Vibrations on Hydrated and Hydroxylated Surfaces A. Röckert, J. Kullgren, and K. Hermansson, Journal of Chemical Theory and Computation, 18, 7683-7694 (2022) IV Water in Crystals: A Database for ML and a Knowledge Base for Vibrational Prediction J. Kullgren, A. Röckert, and K. Hermansson, The Journal of Physical Chemistry C, 127, 13740-13750 (2023) V Standing or lying down?
Simulated IRRAS spectra yield strong intensity for flat adsorption geometries of O2 species on CeO2 A. Röckert, M. J. Wolf, K. Hermansson, and J. Kullgren, In manuscript Reprints were made with permission from the publishers. Disclaimer: Parts of chapters 2 and 3 in this thesis are based on my licentiate thesis titled Characterization of metal oxide - water interfaces: Deciphering structures and simulating spectra (Uppsala University, 2021)

The author’s contribution to the papers
My contributions to the referenced papers included scientific discussion, experiment design, manuscript preparation, and, primarily, executing simulations, computer code implementation, and data analysis.

List of publications not included in the thesis
VI. Electronic structure of organic–inorganic lanthanide iodide perovskite solar cell materials, M. Pazoki, A. Röckert, M. J. Wolf, R. Imani, T. Edvinsson, J. Kullgren, Journal of Materials Chemistry A, 44, 23131-23138 (2017)
VII. Electronic structure of 2D hybrid perovskites: Rashba spin–orbit coupling and impact of interlayer spacing, M. Pazoki, R. Imani, A. Röckert, T. Edvinsson, Journal of Materials Chemistry A, 39, 20896-20904 (2022)

Contents
The author’s contribution to the papers
List of publications not included in the thesis
1 Introduction
1.1 Background
1.2 Scope of this thesis
2 Computational methods
2.1 Simulated Spectroscopies and Computed Properties
2.2 Density functional theory
3 Paper I: Water layer stability on ceria(111)
3.1 Motivation
3.2 Factors that impact stability at 1.0 ML water coverage
3.3 Stability at different coverages
3.4 Conclusions
4 Paper II: Vibrational frequency shifts for surface-bound water
4.1 Experiments for validation and DFT for new insights
4.2 H-bond definitions: geometry or frequency?
4.3 Vibrational frequencies for different H-bond motifs
4.4 Conclusions
5 Papers III and IV: Information content of structure descriptors relating to the vibrational frequency
5.1 Motivation
5.2 Descriptors and means of assessing their quality
5.3 One-dimensional descriptor correlations
5.4 Multidimensional descriptors
5.5 Insight vs accuracy
5.6 Conclusions
6 Paper V: Finding heuristics for dioxygen adsorption on ceria
6.1 Motivation
6.2 Adsorbed reactive oxygen species on ceria
6.3 Simulated IRRAS
6.4 Relating transition dipole moment to molecular adsorption angle
6.5 Assignments and conclusions
7 Concluding remarks and outlook
8 Sammanfattning på svenska
9 Acknowledgments

1. Introduction 1.1 Background Metal oxides are materials composed of metal cations (MeX+) and oxide anions (O2–), which are primarily held together by ionic bonds. This bonding gives metal oxides their characteristic hardness, thermal stability, and electronic insulation properties. These materials exhibit a wide range of surface chemistries, allowing them to serve various functions. For example, they can act as supports in organic chemistry catalysis, function as gas sensors through surface adsorbates, and provide tunable properties such as wettability, surface acidity, and ultraviolet (UV) filtering capabilities [1, 2]. The interaction between small molecules and metal oxide surfaces is a topic of considerable scientific interest due to its implications for the materials’ stability, catalytic activity, surface acidity, wettability, and optical properties. These effects, whether beneficial or unwanted, also make molecule-metal oxide interfaces actors of considerable interest in a range of technological applications, as well as in processes of natural importance. Water (H2O) and oxygen (O2, "dioxygen") are the molecular adsorbates in focus in this thesis. The water molecules in the layer closest to the surface can play a decisive role in the adhesion of water films onto a particle.
The adhesion is often driven by hydroxylation, where dissociative adsorption results in surface-bound hydroxyl (OH–) groups, which in turn modify surface adsorption dynamics and catalytic properties. As a result, the degree of dissociative adsorption can govern the overall hydrophobic and hydrophilic properties of metal oxide surfaces. Water in the first layer also affects the ability of other molecules to adsorb onto the metal oxide surface. For example, the composition of water and hydroxides in the first layer of metal oxides regulates the adsorption of biological adsorbates. Careful design and consideration of these factors are important in applications related to nanotoxicology and nanomedicine. As for water-metal oxide interfaces in natural processes, a prime example is cloud formation, which is strongly influenced by the ability of water molecules to adhere to dust particles. Many of these consist of metal oxides, such as minerals. This adsorption process triggers a series of physicochemical transformations that contribute to the nucleation and growth of cloud droplets, ultimately leading to cloud coverage, precipitation, and broader climate effects. Dioxygen molecules are also ubiquitous; they primarily participate in reduction-oxidation (redox) processes with metal oxides. They can adsorb to metal oxide surfaces as intact molecules or dissociate into two highly reactive oxygen species. These species can then react with the substrate, leading to the oxidation of the surface, or with other surface-adsorbed molecules, resulting in heterogeneous catalytic reactions. A typical example of heterogeneous oxidative catalysis is the combustion of hydrocarbons in automotive catalytic converters. In this process, unburnt fuel from internal combustion engines is oxidized and converted into simple, harmless molecules, thereby reducing the environmental impact of vehicles.
In these oxidative catalytic systems, reducible metal oxide supports such as cerium oxide (ceria, CeO2) are heavily utilized. Ceria is quite unique among metal oxides, as it has one of the highest oxygen storage capacities (OSC), allowing for unusually high reducibility, i.e., an unusually high number of oxygen vacancies. In ceria, a dynamic equilibrium exists between surface-adsorbed oxygen and oxygen in the form of oxide ions (O2–) within the lattice. O2– is reactive and can participate in chemical reactions on the surface, leaving a vacancy in the material. Gaseous oxygen can adsorb onto the surface, dissociate, and fill oxygen vacancies, thereby replenishing the lattice oxygen. This property renders ceria a standalone catalyst for CO oxidation, the water-gas shift reaction, and soot removal. Additionally, surface-bound molecular oxygen (O2), superoxide (O2–), and peroxide (O22–) have been associated with ceria and are of particular interest in the field of reactive medicine and enzyme mimetics. For instance, ceria nanoparticles can facilitate oxidation reactions in aqueous environments at ambient temperature [6–9]. In summary, the distinct properties of ceria arise mainly from its interactions with O2, and to some extent with H2O, which is why it is a subject of investigation in this thesis. To gain insight into surface processes such as those mentioned, detailed atomistic and molecular-level structural insights into both adsorbates and substrate surfaces are essential. Numerous experimental imaging, diffraction, and spectroscopic techniques have been developed to provide such surface-sensitive information, complemented by theoretical and computational methods that enhance their interpretation. Among computational methods, density functional theory (DFT) is the most widely used tool for conducting such studies.
The need for computational modeling arises from the fundamental and practical limitations of experimental methods in observing these systems at sufficiently high resolution. Since atomic dimensions are on the order of a few Ångströms, probing these entities directly poses significant experimental challenges. Among experimental methods, Atomic Force Microscopy (AFM) and Scanning Tunneling Microscopy (STM) have the highest resolution, down to about 10 Å [10–12]; Fourier Transform Infrared spectroscopy (FTIR), Infrared Reflection Absorption Spectroscopy (IRRAS), and Raman spectroscopy are limited by the wavelength of IR light of about 6000 Å; and Temperature Programmed Desorption (TPD) investigates average adsorption energies of adsorbates over an entire sample, which can be on the scale of millimeters or centimeters. The spectroscopic techniques presented here, be it IR spectroscopy, TPD, or others, yield data that is essentially one- or two-dimensional. The collected spectrograms, characterized by peak positions and widths, offer indirect clues about the molecular environment. However, mapping these limited dimensions of spectral data to form a comprehensive picture of a surface composed of many atoms, often irregularly positioned, presents an intractable problem. This dimensional reduction means that, despite the detailed information embedded within the spectra, reconstructing the full atomic-scale structure and surface heterogeneity remains a significant challenge. Molecular, atomic, and quantum mechanical simulations have been incredibly fruitful in providing likely structures to interpret spectral peaks. Yet, much work remains to continually interpret spectra, develop more potent models, and provide heuristics that yield easily interpretable structural features directly from spectra. 1.2 Scope of this thesis This thesis focuses on DFT simulations of H2O and O2X– on CeO2.
The constituent papers also investigate interactions with the surfaces of CaO and MgO, as well as bulk crystalline hydrates and hydroxides. The primary aim is to investigate the atomic arrangement of surface adsorbates and explore the molecule-surface interactions, which govern many surface processes. By simulating Fourier-transform infrared spectroscopy (FTIR), infrared reflection-absorption spectroscopy (IRRAS), and temperature-programmed desorption (TPD), I aim to establish relationships between these interactions, the atomic arrangements of surface adsorbates, and their corresponding spectroscopic signals. Additionally, the simulations facilitate the analysis of structure-frequency relationships, yielding quantitative methods for inferring atomic arrangements from vibrational signals and vice versa. The simulations also reveal generalized behaviors of the adsorbed atoms that can serve as useful heuristics for interpreting spectrograms. These themes are consistently reflected throughout the five papers presented in this thesis, which is structured as follows: Chapter 2 provides an overview of density functional theory and the simulated spectroscopic methods employed throughout the papers. (Using vibrational spectroscopy as an example of the dimensionality of spectral data, the vibrational frequency is the first dimension and the absorption intensity the second.) Chapter 3 presents Paper I, in which "how and where" water prefers to adsorb on the ceria(111) surface, and whether infinite or isolated agglomerates are favorable, is discussed. The roles of water dissociation and hydrogen bonding as system stabilizers are discussed, and the results are compared to those from TPD experiments in the literature. Chapter 4 covers Paper II, where the vibrational spectra of water on ceria(111) are simulated and compared to FTIR experiments.
The goal is to resolve the wide spectral signature of hydrogen-bonded water by qualitatively mapping the adsorption motifs of water to frequency ranges in the spectral signal. Chapter 5 delves into method development related to quantitatively mapping structural information of surface adsorbates to the resulting IR spectra. In Papers III and IV, a large number of structural and other descriptors were used in the construction of the mapping models, which were evaluated with respect to their predictive and explanatory qualities. Finally, chapter 6 covers Paper V, in which the mechanism behind IR absorption of non-polar molecules on ceria is investigated. In particular, IRRAS simulations are used to evaluate the relationship between the O2X– structure and IR absorption, aiming to provide useful heuristics regarding the validity of prevailing ’molecular orientation versus intensity’ relations in the literature. The ultimate goal of the work presented in this thesis is to advance our understanding of these delicate systems. By refining the simulation techniques and establishing robust heuristics, I have aimed to provide a framework that can be employed to interpret experimental observations and predict the behavior of new material systems under various environmental conditions. Such insights are essential for future explorations into natural processes and the rational design of next-generation materials with tailored surface properties. 2. Computational methods In this chapter, I first outline the various techniques employed in the thesis to simulate spectroscopic data related to the surface adsorption of H2O and O2 on metal oxide surfaces. Following that, I delve into the fundamental principles of density functional theory (DFT) that underpin these simulations. 2.1 Simulated Spectroscopies and Computed Properties Adsorption Energy and Simulated Temperature Programmed Desorption In Paper I, the energetics of surface-adsorbed water were studied.
Experimentally, a measure of the adsorption energy of adsorbed molecules can be estimated by monitoring the amount of desorbing molecules from a surface as the temperature increases (under ultra-high vacuum). In simulations, an estimate of the energy of adsorption (denoted simply the adsorption energy, Eads, from here on) is determined by comparing the energy of three model systems: (i) a clean surface, (ii) the gas phase molecule, and (iii) a surface with adsorbed molecules. In this thesis, the molecule is H2O and the surface is CeO2(111). Their respective total energies are E[CeO2], E[H2O(g)], and E[H2O/CeO2], and

$$E_{\mathrm{ads}} = \frac{1}{n}\left(E[\mathrm{H_2O/CeO_2}] - E[\mathrm{CeO_2}] - n \cdot E[\mathrm{H_2O(g)}]\right) \qquad (2.1)$$

where n is the number of molecules adsorbed on the surface. The total energy calculated in DFT corresponds to the electronic ground state energy EDFT. Vibrational analysis (described below) can give us the zero-kelvin zero-point energy, which equals the vibrational ground state energy:

$$E_{\mathrm{ZPE}} = \frac{1}{2}\sum_{i=1}^{m} \hbar\omega_i \qquad (2.2)$$

where ωi is the vibrational frequency of the ith mode, and m is the number of vibrational modes. EZPE is a good estimate of the total vibrational energy at low temperatures. For solid phases, it is also assumed that the rotational and translational contributions to the energy are zero and that volume changes in the slab with respect to pressure can be neglected. Thus, the system energy can be approximated as

$$E[X] \approx E_{\mathrm{DFT}}[X] + E_{\mathrm{ZPE}}[X] \qquad (2.3)$$

for system X. However, due to the large computational requirement of calculating the vibrational ground state energy, it is often omitted, whereby

$$E[X] \approx E_{\mathrm{DFT}}[X]. \qquad (2.4)$$

The desorption temperature is estimated from the Redhead equation.
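As a numerical aside, the Redhead equation has no closed-form solution for T, but it can be bracketed and bisected in a few lines. This is only an illustrative sketch: the 60 kJ/mol adsorption energy is a made-up placeholder, while β = 2 K/s and ν = 10^14.6 s−1 are the values quoted in the text.

```python
import math

def redhead_temperature(e_ads, beta=2.0, nu=10**14.6, r=8.314):
    """Solve E/(R T^2) = (nu/beta) exp(-E/(R T)) for T by bisection.

    e_ads : desorption energy in J/mol (assumed equal to the barrier)
    beta  : heating rate in K/s
    nu    : pre-exponential factor in 1/s
    """
    f = lambda t: e_ads / (r * t**2) - (nu / beta) * math.exp(-e_ads / (r * t))
    lo, hi = 50.0, 2000.0  # bracket: f > 0 at lo, f < 0 at hi
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# A hypothetical 60 kJ/mol barrier gives a desorption peak a bit above 200 K.
print(redhead_temperature(60e3))
```

The left-hand side decreases monotonically with T while the right-hand side grows, so a single crossing exists within any physically sensible bracket.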
By assuming that the adsorption energy equals the energy of desorption and that it constitutes the reaction barrier for the desorption process, an estimate of the temperature of desorption T is calculated by solving the following equation for T:

$$\frac{E_{\mathrm{ads}}}{RT^2} = \frac{\nu}{\beta}\exp\!\left(-\frac{E_{\mathrm{ads}}}{RT}\right) \qquad (2.5)$$

where R is the gas constant, β the heating rate, and ν the pre-exponential factor associated with the entropy of the transition state. In this work, the two latter constants were selected to be comparable with the experimentally determined counterparts, where β = 2 K s−1 and ν = 10^14.6 s−1 in accordance with the reference. Surface energy The stability of the surface, studied in Paper I, at different water coverages was examined using the surface energy γ. A phase diagram can then be generated from the surface energy after consideration of the water vapor pressure.

$$\gamma = \frac{1}{A}\Big(E[\mathrm{H_2O/CeO_2}] - m\cdot E[\mathrm{CeO_2}]_{\mathrm{bulk}} - n\cdot\big(E[\mathrm{H_2O(g)}] + \Delta\mu_{\mathrm{H_2O}}(T,p)\big)\Big) - \gamma_{\mathrm{opposite}} \qquad (2.6, 2.7)$$

$$\gamma_{\mathrm{opposite}} = \frac{1}{2A}\left(E[\mathrm{CeO_2}] - m\cdot E[\mathrm{CeO_2}]_{\mathrm{bulk}}\right) \qquad (2.8)$$

where A is the cross-sectional area of a surface cell where water is adsorbed on one of the two exposed surfaces, and m is the number of bulk CeO2 formula units with energy E[CeO2]bulk. When water is adsorbed on only one of the two slab surfaces, a correction, γopposite, is needed to account for the energy of the opposite, clean surface. n is the number of water molecules of energy E[H2O(g)], and ΔμH2O(T, p) contains the temperature- and pressure-dependent contributions for the gas phase molecule:

$$\Delta\mu_{\mathrm{H_2O}} = \mu_{\mathrm{H_2O}}(T, p^0) - \mu_{\mathrm{H_2O}}(T^0, p^0) + kT\ln(p/p^0) \qquad (2.9)$$

It includes the temperature-dependent vibrational, rotational, and translational contributions. In this work, they were retrieved from thermodynamic tables. T0 and p0 represent standard temperature and pressure, respectively. Simulated Scanning Tunneling Microscopy Although clearly not a spectroscopic method, simulated STM can be used as an analogy for surface imaging and provides realistic images of what atom-resolution imaging experiments could show.
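To make the pressure dependence in Eq. (2.9) concrete, here is a minimal sketch of the kT ln(p/p0) term alone; the tabulated μ(T, p0) values mentioned in the text are not included, and the function name and example conditions are mine:

```python
import math

K_B_EV = 8.617333e-5  # Boltzmann constant in eV/K

def delta_mu_pressure_term(temperature, pressure, p0=1.0):
    """The kT ln(p/p0) term of Eq. (2.9), in eV per molecule."""
    return K_B_EV * temperature * math.log(pressure / p0)

# Water vapor at 300 K and 1e-8 of the reference pressure:
print(round(delta_mu_pressure_term(300.0, 1e-8), 3))  # ≈ -0.476 eV
```

This illustrates why low vapor pressures destabilize the adsorbed phase in such phase diagrams: the chemical potential of gas-phase water drops by roughly half an electronvolt per molecule at these conditions.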
Such simulated STM was used in Paper I. The calculated electron density can be imaged and related to an experimental observable, namely the tunneling of electrons. In Scanning Tunneling Microscopy (STM), the signal is generated from the tunneling of electrons between a conductive surface and a very sharp, conducting probe placed under a bias voltage. This response can be simulated from the local density of states (LDOS), within the Tersoff-Hamann approximation, where the electron density at each energy is represented on a three-dimensional grid. By analyzing the electron density in a plane, e.g., 2 Å above the highest coordinate of any surface-adsorbed species, within an energy window between 0 and -2 eV, a signal proportional to a bias voltage of -2 V can be generated. Interaction between light and matter In vibrational spectroscopy, the signal corresponds to the absorption of electromagnetic radiation, i.e., light, by a molecule or material. The energy of the incident radiation,

$$E = \frac{hc}{\lambda} = hc\nu \qquad (2.10)$$

is inversely proportional to its wavelength λ, where h is Planck’s constant and c the speed of light. In spectroscopic studies of vibrating molecules, the wavenumber or frequency ν = 1/λ is stated in units of cm−1. Incident light of a certain frequency can excite a vibrating oscillator when the photon’s energy matches the transition between two quantized energy states. Most commonly, the fundamental frequency is implied and refers to the excitation between the vibrational ground state and the first excited state. However, energy alone does not guarantee light absorption; the transition also relies on a coupling between the electric field component of the light wave and the transition dipole generated by the excitation. In practice, this transition dipole roughly matches the molecular dipole, meaning that the molecular dipole must have a component along the electric field of the incoming light wave.
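A quick sanity check of Eq. (2.10), converting a wavenumber in cm−1 to a photon energy (the constants are standard CODATA values; the example band position is illustrative, not from the thesis):

```python
# Convert a vibrational wavenumber (cm^-1) to photon energy via E = h*c*nu.
H = 6.62607015e-34       # Planck constant, J*s
C_CM = 2.99792458e10     # speed of light, cm/s
J_PER_EV = 1.602176634e-19

def wavenumber_to_ev(nu_cm):
    """Photon energy in eV for a wavenumber nu_cm given in cm^-1."""
    return H * C_CM * nu_cm / J_PER_EV

# An OH-stretch band near 3400 cm^-1 corresponds to ~0.42 eV:
print(round(wavenumber_to_ev(3400), 3))
```

This scale (a few tenths of an eV) is why vibrational transitions sit in the infrared, far below electronic excitations.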
In the following subsections, models for approximating the vibrational frequency are described, followed by the electric field-dipole coupling that gives rise to absorption intensity. The harmonic model of a vibrating molecule The harmonic vibrational frequency ω of a classical harmonic oscillator can be calculated from the second-order force constant k2. For a diatomic molecule,

$$\omega = \frac{1}{2\pi c}\sqrt{\frac{k_2}{\mu}} \qquad (2.11)$$

where the reduced mass $\mu = \frac{m_1 m_2}{m_1 + m_2}$ is expressed in terms of the masses of atoms 1 and 2, and k2 is related to the second derivative of the potential energy surface V(r) with respect to the stretch coordinate r:

$$k_2 = \frac{\partial^2 V(r)}{\partial r^2} \qquad (2.12)$$

A nonlinear molecule of n atoms instead has 3n − 6 modes. The presentation in the following section, inspired by the reference, generalizes the harmonic model to a periodic system of N atoms. The potential energy V of a crystal in a periodic environment depends on the positions of its atoms,

$$V[\mathbf{r}(1,1), \ldots, \mathbf{r}(n,N)] \qquad (2.13)$$

where r(i, j) is the atomic position of the ith atom out of n total atoms in the jth unit cell. The force acting on atom i in direction α is

$$F_\alpha(i,j) = -\frac{\partial V}{\partial r_\alpha(i,j)} \qquad (2.14)$$

and the second-order force constant is

$$\Phi_{\alpha\beta}(ji, j'i') = \frac{\partial^2 V}{\partial r_\alpha(i,j)\,\partial r_\beta(i',j')} = -\frac{\partial F_\beta(i',j')}{\partial r_\alpha(i,j)} \qquad (2.15)$$

where α and β are the Cartesian indices corresponding to x̂, ŷ or ẑ, i and i′ are the atomic indices, and j and j′ are the unit cell indices. The forces can be calculated using a finite displacement Δrα(i, j) of the atomic positions relative to their equilibrium positions, allowing us to write an approximate second-order force constant

$$\Phi_{\alpha\beta}(ji, j'i') \approx -\frac{F_\beta(i',j';\, \Delta r_\alpha(i,j)) - F_\beta(i',j')}{\Delta r_\alpha(i,j)}. \qquad (2.16)$$

This results in a tensor of second-order force constants for each atom pair (within a cutoff distance) in the system. They can also be calculated directly from the second derivative of the potential energy, as described in density-functional perturbation theory (DFPT).
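Equations (2.11), (2.12) and (2.16) can be illustrated together in a toy model: a finite-difference second derivative of a model potential gives k2, which Eq. (2.11) turns into a wavenumber. The force constant and bond length below are rough textbook-style values for an OH bond, not numbers from the thesis.

```python
import math

AMU = 1.66053907e-27   # kg per atomic mass unit
C_CM = 2.99792458e10   # speed of light in cm/s

def harmonic_wavenumber(k2, m1, m2):
    """omega = (1/(2 pi c)) * sqrt(k2/mu) in cm^-1; k2 in N/m, masses in amu."""
    mu = m1 * m2 / (m1 + m2) * AMU
    return math.sqrt(k2 / mu) / (2.0 * math.pi * C_CM)

# Finite-difference second derivative of a model potential V(r) around its
# minimum r0 (here a harmonic well with k = 780 N/m, roughly an OH bond):
k, r0 = 780.0, 0.97e-10
V = lambda r: 0.5 * k * (r - r0) ** 2
h = 1e-13
k2_fd = (V(r0 + h) - 2 * V(r0) + V(r0 - h)) / h**2

print(round(harmonic_wavenumber(k2_fd, 15.994915, 1.007825)))  # ≈ 3737 cm^-1
```

The result lands in the OH-stretch region, consistent with the frequencies discussed for hydroxylated surfaces; in the thesis itself the curvature comes from DFT forces rather than a model potential.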
The dynamical properties of the atoms in terms of their harmonic vibrational motion are obtained by solving the eigenvalue problem of the dynamical matrix,

∑_{β,i′} D_{ii′}^{αβ}(q) e_{ql}^{βi′} = ω_{ql}² e_{ql}^{αi} (2.17)

with

D_{ii′}^{αβ}(q) = ∑_{j′} [Φαβ(0i, j′i′) / √(m_i m_i′)] e^{iq·[r(j′,i′) − r(0,i)]} (2.18)

where q is the wave vector, m_i is the mass of atom i, l is the band index, ω_{ql} is the harmonic phonon frequency and e_{ql} is the polarization vector of the phonon mode. The molecular vibrational frequencies correspond to the q = 0 wave vector. The harmonic approximation is suitable for vibrational analysis for all but the lightest elements and was used in Paper V to simulate the vibrational frequencies of O2X– species adsorbed on ceria.

Uncoupled, one-dimensional vibrational wave function
The following approach was used for water molecules, which contain the lightest element (H). The one-dimensional quantum oscillator is equivalent to a diatomic molecule in a fixed chemical environment. The model assumes (i) that the chemical environment remains unchanged during the vibrational period and (ii) that the oscillator's vibrational modes, characterized by the nuclear wave functions, are decoupled from those of all other oscillators. Both approximations hold for high-frequency OH oscillators when the so-called isotope-isolation approach is used, where, e.g., 95% D2O and 5% H2O are mixed, which results in mostly D2O and HDO by proton transfer. Since coupling between the remaining OH oscillators is minimized, they act as one-dimensional oscillators and the resulting signal is significantly better resolved.

The full nuclear wave functions of the one-dimensional quantum oscillator can be calculated at N discrete points along the stretch coordinate using the Discrete Variable Representation (DVR) in accordance with Light et al. [19, 20]. The energy eigenvalues and the values of the wave functions (corresponding to the eigenvectors) at these discrete points are determined by diagonalizing the DVR Hamiltonian H_DVR.
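This diagonalization can be sketched with a uniform-grid DVR. As a stand-in for the Chebyshev-based kinetic matrix of Light et al. used in the thesis, the sketch below uses the closely related sinc-DVR kinetic matrix of Colbert and Miller; checking against the harmonic oscillator (exact levels n + 1/2 in units of ħω) verifies the construction:

```python
import numpy as np

def dvr_levels(V, grid, mass=1.0, hbar=1.0):
    """Eigenvalues of H_DVR = T_DVR + V_DVR on a uniform grid.
    T is the sinc-DVR kinetic matrix of Colbert & Miller, used here
    as a stand-in for the Chebyshev construction of Light et al."""
    dx = grid[1] - grid[0]
    idx = np.arange(len(grid))
    dij = idx[:, None] - idx[None, :]
    with np.errstate(divide="ignore"):
        T = np.where(dij == 0, np.pi**2/3.0, 2.0*(-1.0)**dij / dij**2.0)
    T *= hbar**2 / (2.0*mass*dx**2)
    return np.linalg.eigvalsh(T + np.diag(V(grid)))

# Harmonic oscillator with hbar = m = omega = 1: the fundamental
# 0 -> 1 excitation energy should come out as 1.
x = np.linspace(-10.0, 10.0, 201)
E = dvr_levels(lambda r: 0.5*r**2, x)
print(E[1] - E[0])  # ~1.0
```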
H_DVR = T_DVR + V_DVR (2.19)

where V_DVR is an N × N square matrix with the potential energy of the N points along the diagonal, and T_DVR, corresponding to the kinetic contributions, is calculated from basis functions expressed using Chebyshev polynomials.

Figure 2.1. One-dimensional vibrational oscillator potential energy surface with fitted nuclear wave functions using discrete variable representation (DVR). The fundamental frequency is calculated from the energy difference between the ground state and the first excited state, corresponding to the energy required to excite the vibrational oscillator between these states.

This model was applied to OH oscillators in H2O and OH− in Papers II, III, and IV, where the OH stretching vibrational frequencies were calculated using a 1D uncoupled OH vibrational model. To generate the 1D potential energy surface (PES) of the oscillator, one OH group from the optimized geometry is stretched and contracted around its center of mass at 71 equidistantly placed points in the range [−0.375, +0.675] Å from the equilibrium position, while the rest of the system remains fixed. The masses of O and H were set to 15.994915 a.m.u. and 1.007825 a.m.u., respectively. An example of a PES and the associated wave functions is shown in Figure 2.1.

The benefit of this model is that it captures the full anharmonicity of the stretching vibration, a phenomenon that is particularly noticeable for oscillators containing H. The model is robust and fits the computed PES essentially exactly. In addition, the DVR approach can reproduce established PESs, such as the Morse or harmonic potential, without explicit parameterization.

Light absorption and infrared reflection absorption spectroscopy
For a transition to occur, the energy difference between the ground state and the first excited state must correspond to the energy of the incoming light.
Additionally, the light's electric field component must align with the change in electrical dipole induced by exciting the vibrational mode, the so-called transition dipole moment (TDM).

For surface systems with incident light, the electric field component is determined by the polarization. p-polarization corresponds to the electric field component that lies in the plane of incidence of a reflecting and/or refracting surface. p-polarized light can be divided into two components, one perpendicular to the surface and one parallel to the surface. s-polarized light corresponds to the electric field component that is perpendicular to the plane of incidence and is always parallel to the surface plane.

Figure 2.2. The light path and polarization vectors of incident light as it is reflected at the surface. The electric field component of s-polarized light is parallel to the surface, whereas the electric field component of p-polarized light lies in the light plane. p-polarized light can be divided into two components, parallel and perpendicular to the surface.

For metallic substrates, the electric field components that are parallel to the surface are completely screened, leaving only the perpendicular component of the p-polarized light. For surface-adsorbed molecules, only a transition dipole moment aligned with the surface normal (perpendicular to the surface, TDM⊥) then results in light absorption, following the so-called surface selection rule [21, 22]. For dielectric substrates, such as ceria, the screening is incomplete, rendering the surface-parallel s- and p-polarized components detectable if they couple with a TDM lying in the surface plane (TDM∥), in addition to the component along the surface normal (TDM⊥).

Infrared reflection absorption spectroscopy (IRRAS) is an experimental technique for analyzing surface-adsorbed molecules. Based on these principles, information about the orientation of the TDM and, hence, of a surface-adsorbed molecule can be deduced.
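The geometry of Figure 2.2 amounts to a simple decomposition of the p-polarized field for a given angle of incidence θ measured from the surface normal. A hypothetical numerical example:

```python
import math

def p_components(E_p, theta_deg):
    """Split a p-polarized field of magnitude E_p into its
    surface-tangential and surface-normal parts for an angle of
    incidence theta_deg measured from the surface normal (Fig. 2.2)."""
    t = math.radians(theta_deg)
    return E_p*math.cos(t), E_p*math.sin(t)  # (tangential, normal)

# Near-grazing incidence puts almost all of the p-polarized field
# into the surface-normal component.
Et, En = p_components(1.0, 80.0)
print(f"tangential {Et:.3f}, normal {En:.3f}")
```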
In many cases, a grazing angle of incidence is used to maximize the p-polarized signal. For Paper V, IRRAS spectra were generated from DFT calculations of O2X– species on ceria in accordance with the following methodology.

The absolute IR intensity IIR of a vibrating mode is related to the dipole moment derivative along the normal-mode coordinate Q according to

IIR = TDM² = (dμ/dQ)² (2.20)

where TDM is the absolute transition dipole moment. The dipole moment derivatives dμ/dQ can be evaluated using the eigenvector X of the dynamical matrix, according to

dμ/dQ = ∇μ · X^T;  ∇μ = (∂μ/∂x1, ∂μ/∂y1, ..., ∂μ/∂zN) (2.21)

where xi, yi, and zi are the Cartesian coordinates of atom i. For a periodic system, the dipole gradient vector ∇μ is directly related to the Born effective charge tensor Z* through the following expression:

∂μα/∂rβ(i) = Z*_{i,αβ} with α, β = x, y, z (2.22)

The Born effective charge tensor is, in turn, related to the electrical polarization of the system induced by a displacement of atom i, or equivalently, the force induced on atom i by an electric field E:

Z*_{i,αβ} = ∂F_{i,α}/∂Eβ (2.23)

The IRRAS experiment is simulated by representing the system using a three-layer model consisting of a dielectric substrate and a vacuum layer, separated by an adsorbate layer. The dielectric constant of the vacuum phase is set to εv = 1, whereas the dielectric constant of the substrate is set to a real scalar εs = ns², where ns is the refractive index of the substrate, in this case CeO2, which has a refractive index of 2.2 in its oxidized state and 1.8 when heavily reduced. The dielectric function of the adsorbate layer, ε̃a, is determined considering the thickness of the adsorbate layer, which is infinitesimally small compared to the light wavelength, thus excluding any internal reflections. To account for the reflectivity and absorptivity of this layer, ε̃a is treated as both complex and frequency dependent.
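Equations 2.20-2.22 can be sketched as a few lines of linear algebra. This illustrative example omits the mass-weighting conventions, which differ between DFT codes, and uses a toy diatomic with opposite Born charges:

```python
import numpy as np

def ir_intensity(Z, X):
    """IR intensity per mode in the spirit of Eqs. 2.20-2.22:
    I_m = |sum_i Z*_i . x_(m,i)|^2, with Z[i] the 3x3 Born effective
    charge tensor of atom i and X[m] the displacement pattern of mode
    m, shape (n_atoms, 3). Mass-weighting is omitted in this sketch."""
    dmu_dQ = np.einsum("iab,mib->ma", Z, X)  # dipole derivative per mode
    return np.sum(dmu_dQ**2, axis=1)

# Toy diatomic with opposite effective charges +/- q: the stretch
# carries all the intensity, a rigid translation carries none.
q = 1.1
Z = np.array([q*np.eye(3), -q*np.eye(3)])
stretch = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, -1.0]]) / np.sqrt(2)
shift = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]]) / np.sqrt(2)
I = ir_intensity(Z, np.array([stretch, shift]))
print(I)  # stretch: ~2*q**2, translation: ~0
```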
The measured quantity is the light after reflection at the surface containing the adsorbates. The reflectance spectrum ΔR/R0(ν) represents the reflectivity of the substrate with the surface-adsorbed molecules after subtracting the contributions from the clean surface. The signal can be attributed to the adsorbate layer and its interaction with s- and p-polarized light.

Figure 2.3. Schematic representation of the three-layer model of external reflection on semiconductor substrates. The adsorbate layer has a complex refractive index that can both increase or decrease the reflectivity of the surface with respect to the clean substrate as it couples with the three components of the electric field.

For incident light at an angle θ, the s-polarized electric field component Et,s is perpendicular to the plane of incidence and tangential to the surface plane. The p-polarized electric field component Ep is divided into two parts, Et,p and En,p, which both lie in the plane of incidence, either tangential or normal to the surface plane, respectively. The reflectivity attributed to a given electric field component is denoted with a vertical bar, as in ΔR/R0|En,p(ν), which refers to the reflectivity attributed to the field component of p-polarized light oriented along the surface normal.

ΔR/R0|Et,s = 2πν [4√εv cosθ / (εv − εs)] ε″y dy (2.24)

ΔR/R0|Et,p = 2πν [4√εv cosθ ((εv/εs) sin²θ − 1)] / [(εv − εs)(((εv + εs)/εs) sin²θ − 1)] ε″x dx (2.25)

ΔR/R0|En,p = 2πν [4 εv εs √εv cosθ sin²θ] / [(εv − εs)(((εv + εs)/εs) sin²θ − 1)] · [ε″z dz / (ε′z² + ε″z²)] (2.26)

The complex-valued, frequency-dependent dielectric function of the adlayer, εa, is related to the polarizability P(ν) of the adsorbate layer, and its imaginary part represents the absorption of the medium. Here, d represents an effective thickness that is even allowed to vary with direction.
Using a Lorentz oscillator model, d is interpreted as the isotropic dipole coupling in the adlayer and is therefore set to the same value for all directions:

d = 4πNsU0 (2.27)

The constant U0 is the self-interaction term for dipoles covering the surface, separated by m and n units along the surface axes x and y, respectively, and can be expressed as [13, 25]

U0 = ∑_{n≠m} (r_{m,n})⁻³ (2.28)

The dielectric functions can be written as

ε′α(ν) = 1 + (4πNs/d) Re[Pα(ν)], α = x, y, z (2.29)

ε″α(ν) = (4πNs/d) Im[Pα(ν)], α = x, y, z (2.30)

The frequency-dependent polarizability is taken to be the following weighted sum of polarizabilities over the vibrational modes m:

Pα(ν) = ∑_m P_{m,α} νm² / (νm² − ν² − iΓm ν), α = x, y, z (2.31)

The polarizability of each mode is related to the absolute IR intensity:

P_{m,α} = (1/2πcνm) (dμα/dQm)² (2.32)

The vibrational modes νm and the Born effective charge tensor Z*_{i,αβ} can be obtained using DFT, by which all quantities ultimately determining the frequency-dependent reflectivity can be calculated. Based on these equations, I have constructed an analysis package that generates vibrational spectra from the output of common DFT codes.

2.2 Density functional theory
Density functional theory (DFT) is employed in this work to simulate the potential energy landscape and the energies of molecules adsorbed on metal oxide surfaces. Furthermore, the results thereof are used to simulate the spectroscopic observables described previously. As DFT underpins all the work presented in this thesis, the following section gives a brief description of the theory, accompanied by some historical context.

System energy from electron density
In 1964, Hohenberg and Kohn proposed two theorems that form the basis for modern Density Functional Theory (DFT).
Namely, (i) the true many-particle ground state is a unique functional of the electron density ρ(r), and (ii) the ground state energy can be obtained variationally, i.e., by minimizing the total energy with respect to the electron density. (The word "functional" refers to the fact that the ground state energy is expressed as a function that depends on another function, namely the electron density.)

The following year (1965), Kohn and Sham proposed the so-called Kohn-Sham equation. Their idea was to use a reference system of non-interacting electrons whose density is the same as that of the fully interacting system, by writing

ρ(r) = ∑_i |ψi(r)|² (2.33)

where ψi(r) represents the ith one-electron wave function, and ρ(r) is the electron density at a coordinate r. The Kohn-Sham equation is an eigenvalue problem used to obtain the orbital energy εi for orbital ψi. The equation treats each electron as non-interacting, embedded in the effective potential Veff(r) of the surrounding electrons and nuclei:

[−ħ²/2m ∇² + Veff(r)] ψi = εi ψi (2.34)

where the first term within the brackets corresponds to the kinetic energy operator. Within the effective potential Veff(r), three additive terms are found:

Veff(r) = Vne(r) + Vee(r) + Vxc(r) (2.35)

where Vne(r) is the Coulomb interaction between nuclei and electrons, Vee(r) is the Coulomb interaction between pairs of electrons, and Vxc(r) is the so-called exchange-correlation potential. This results in the total energy expression

E[ρ(r)] = Ts[ρ(r)] + Ene[ρ(r)] + Eee[ρ(r)] + Exc[ρ(r)] (2.36)

where Ts[ρ(r)] is the kinetic energy, Ene[ρ(r)] is the energy attributed to the Coulomb interaction between electrons and atomic nuclei, Eee[ρ(r)] is the Coulomb interaction between electrons, and Exc[ρ(r)] is the exchange-correlation energy. The Coulomb terms are obtained by integrating over the continuous electron density and summing over the classically modeled nuclei at positions Rn with charge Zn:
Ene[ρ(r)] = −∑_n Zn e² ∫ ρ(r′)/|Rn − r′| dr′ (2.37)

Eee[ρ(r)] = (e²/2) ∫∫ ρ(r)ρ(r′)/|r − r′| dr dr′ (2.38)

Exc[ρ(r)] = ∫ ρ(r′) Vxc[ρ(r′)] dr′ = (1/2) ∫∫ ρ(r)ρxc(r,r′)/|r − r′| dr dr′ (2.39)

The kinetic and Coulomb terms are analytically defined and are generally regarded as exact, whereas the exchange-correlation term Exc contains the many-body contributions omitted in the single-electron description assumed at the start. It should eliminate the self-interaction of an electron and describe its exchange-correlation hole ρxc(r,r′). Unfortunately, the exact exchange-correlation term is unknown, but many exchange-correlation functionals exist to approximate it. These often include other missing contributions as well, such as non-relativistic effects and residual contributions to the kinetic energy. Exchange-correlation functionals are largely divided by the level at which they depend on the electron density: the simplest, (i) Exc^LDA, is a function of the density alone; (ii) Exc^GGA is a function of the density and the density gradient ∇ρ(r); and (iii) Exc^meta-GGA also includes the kinetic energy density in addition to the density and density gradient.

Standard DFT cannot properly describe dispersion. In practice, these effects can be included with functionals that either (i) correct the energy for two atoms at a distance r by a 1/r⁶ term, (ii) add the dispersion energy after establishing the ground state density, or (iii) calculate the dispersion self-consistently as an additional term in the exchange-correlation functional. The functional used in this thesis is the optPBE-vdW functional by Klimeš and Michaelides, which belongs to category (iii) above. optPBE-vdW (also known as optPBE-DRSLL) was evaluated in the comprehensive review "Perspective: How good is DFT for water?" and was found to be in shared first place with respect to a number of structural and energetic properties of the water monomer, water clusters, bulk liquid water, and ice polymorphs.
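A pairwise correction of type (i) can be sketched in a few lines; this is in the spirit of Grimme-type schemes, with purely illustrative parameters (it is not the optPBE-vdW approach of type (iii) actually used in the thesis):

```python
import math

def e_disp(pairs, s6=0.75, d=20.0):
    """Pairwise dispersion correction of type (i):
    E = -s6 * sum f_damp(r) * C6 / r^6.  The damping function
    switches the 1/r^6 term off at bonding distances.  All
    parameters here are illustrative, not the thesis's settings.
    pairs: iterable of (r, C6, R_vdw) in consistent units."""
    e = 0.0
    for r, c6, r0 in pairs:
        f_damp = 1.0 / (1.0 + math.exp(-d*(r/r0 - 1.0)))
        e -= s6 * f_damp * c6 / r**6
    return e

# A short (damped) and a long (dispersion-dominated) pair: despite the
# 1/r^6 decay, the far pair contributes more because the near pair's
# correction is damped away.
print(e_disp([(1.0, 10.0, 3.0)]), e_disp([(6.0, 10.0, 3.0)]))
```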
Moreover, optPBE-vdW has been found to perform well for the vibrational properties of surface-adsorbed water on other oxides and in ionic crystalline hydrates. Therefore, the optPBE-vdW method was employed in the work of Papers I - IV.

Another shortcoming of DFT is that it typically does not accurately describe systems with strong on-site Coulomb interactions, which results in an over-delocalization of d- and f-electrons. By employing a rotationally invariant Hubbard correction in the Dudarev formulation, these problems can be mitigated. This formulation introduces an additional on-site Coulomb repulsion term (U) to the Hamiltonian, penalizing the delocalization of d- or f-electrons. This method was employed in Paper V to accurately describe localized f-electrons on cerium. In this case, the correction was applied to the "default" GGA functional, namely Perdew-Burke-Ernzerhof (PBE).

Basis functions and pseudopotentials
The orbitals in the Kohn-Sham equation (2.34) are typically described by a set of basis functions, called a basis set. Basis sets can contain localized, often atom-centered, basis functions, or plane waves, which are delocalized functions.

Solutions of the Kohn-Sham equations in a periodic system can be described using Bloch's theorem. It states that the resulting periodic functions, i.e., Bloch functions, take the form of a plane wave e^{ik·r} modulated by a periodic function u(r) with the same periodicity as the crystal lattice:

ψ^PW_{n,k}(r) = u_k(r) e^{ik·r} (2.40)

where n is the band index, k is an allowed wave vector consistent with the periodic lattice and representing the momentum, i is the imaginary unit, and r is the position, which repeats with a translation by the unit cell vector R:

ψ^PW_{n,k}(r + R) = ψ^PW_{n,k}(r) e^{ik·R} (2.41)

u_k(r) can be expanded into m plane waves based on the reciprocal lattice vectors Gm of the crystal,

u_k(r) = ∑_m c_{k,m} e^{iGm·r} (2.42)

where c_{k,m} is the expansion coefficient.
It follows that

ψ^PW_{n,k}(r) = ∑_m c_{k,m} e^{iGm·r} e^{ik·r} = ∑_m c_{k,m} e^{i(Gm+k)·r}. (2.43)

Although there exists an infinite number of wave vectors k, we restrict ourselves to a finite number in the first Brillouin zone†. These are typically equidistantly spaced along the unit axes in a Monkhorst-Pack grid [34, 35]. With increasing Gm, the expansion coefficients c_{k,m} become smaller. The infinite series in equation 2.43 is therefore truncated at a cutoff energy

Ecut = (ħ²/2m) G²cut (2.44)

|k + G| < Gcut. (2.45)

The two parameters Ecut and k are the main ones to control in order to achieve the desired accuracy in DFT simulations.

†Primitive unit cell in reciprocal space.

Pseudopotentials are used to accelerate the calculations. They remove the rapidly oscillating core wave functions close to the atomic nuclei, which would otherwise require a very high cutoff energy Ecut. In addition, using pseudopotentials decreases the number of explicit electrons in the system. Since the core electrons have little effect on the chemical bonding of the valence electrons, they can be approximated by a potential that acts on the valence electrons. A generalization of the pseudopotential is the projector augmented-wave (PAW) method, which transforms the rapidly oscillating core wave functions into smooth ones while retaining the core states at no computational overhead.

3. Paper I: Water layer stability on ceria(111)

3.1 Motivation
Clean metal oxide surfaces expose undercoordinated metal atoms. Here, water can adsorb to reestablish the natural coordination of the solid bulk phase by physisorption at positions with dangling bonds exposed by surface cations. The (111) ceria facet is the most stable of the stoichiometric ceria surfaces and is thus the prevailing facet under most conditions.
This surface exposes both oxygen and cerium in triangular patterns, with oxygen in the outermost atomic layer and cerium in the threefold hollow positions underneath the oxygen triangles, where water can adsorb, as illustrated in Figure 3.1.

Figure 3.1. Schematic overview of the stoichiometric ceria(111) surface illustrated with a side and a top view. The surface is oxygen-terminated and exposes a Ce cation in every other threefold hollow position of Osurf. H2O adsorbs over a Ce cation and can dissociate to form OHf and OsH, which are coordinated to one and three Ce cations, respectively.

Issues that have attracted considerable interest in the literature include the structure of the surface itself following water adsorption, the degree of water dissociation (i.e., the extent to which adsorbed water donates a hydrogen to the surface, thus creating two hydroxides), and the orientations adopted by the adsorbed molecules. At the time of writing Paper I, the system had been extensively studied both experimentally [10, 11, 15, 38-41] and through theoretical calculations [3, 42-52].

From the literature, it is clear that water adsorbs onto exposed Ce4+ cations, as illustrated in Figure 3.1. However, due to the large distance between Ce atoms, most researchers argue that the adsorbed water molecules are too far apart to form any meaningful interactions with one another, even when an adsorbed water molecule covers each surface cation. This coverage is referred to as 1.0 monolayer (ML) coverage. This interpretation can be traced back to Ref. . In contrast, based on molecular dynamics simulations where adsorbed water was heated and subsequently cooled, the authors of Ref. proposed that hydrogen bonding between adsorbed water exists at coverages of 1.0 ML and above.
At 1.0 ML coverage, small surface clusters form through hydrogen-bond networks that include both intact and dissociated water molecules, eventually transforming from clusters into a complete surface coverage at 1.75 ML.

For reference, hydrogen bonds (H-bonds) are stabilizing, primarily electrostatic interactions between O−H species and electronegative acceptors, such as Osurf or the O of other adsorbed water species. The interaction between water molecules is the archetypal example of an H-bonded system, and its interaction energy is of the order of 3-5 kcal per mol of bonds (ca. 13-21 kJ/mol of bonds, or 0.13-0.22 eV per bond). Still, it can be an order of magnitude stronger given a more negative acceptor [53, 54].

My careful investigation of the structures from the literature revealed that the reported hydrogen-bonding patterns at 1.0 ML coverage were rather disorganized, with many suboptimal hydrogen bonds and low symmetry, which indicates metastable structures. Based on this, I surmised that more stable surface adsorbate structures could probably be found. Consequently, in Paper I I further explore "how and where" water adsorbs in thin water films on ceria(111), with a particular focus on the 1.0 ML coverage, as it represents the contact layer and is important for understanding surface behavior under ambient conditions. To do so, I investigate the role of water-water interactions and analyze the stability with respect to both the degree of dissociation and the overall coverage, comparing the findings with experimental results from the literature.

Figure 3.2. Schematic representations of ceria(111) with surface-adsorbed water, using the color scheme of Figure 3.1. (a-c) A single water molecule in its intact and dissociated forms. Representations of (d) a hydrogen-bonded (1D) chain of alternating H2O and OH–, and 1 ML coverage with (e) 1D chains and (f) intact water molecules. In (e), a blue zig-zag line is drawn to illustrate the 1D chain.
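The unit conversions behind the quoted H-bond energy range can be reproduced directly; only standard conversion factors are involved:

```python
# Conversion of a per-mole bond energy between common units, using
# the thermochemical calorie and the eV <-> kJ/mol equivalence.
KCAL_TO_KJ = 4.184
EV_TO_KJ_PER_MOL = 96.485

def kcal_to_ev(e):
    """kcal/mol -> eV per bond."""
    return e * KCAL_TO_KJ / EV_TO_KJ_PER_MOL

for e in (3, 5):
    print(f"{e} kcal/mol = {e*KCAL_TO_KJ:.1f} kJ/mol = {kcal_to_ev(e):.2f} eV")
```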
3.2 Factors that impact stability at 1.0 ML water coverage

The role of dissociation
To investigate the stability of surface-adsorbed water on the ceria(111) surface, I begin by examining the 1.0 ML coverage, where a set of structures ranging from fully intact to fully dissociated water is considered and where the strengths of the water interactions (with the surface and with other water molecules) are compared through their adsorption energies. The resulting adsorption energies with respect to the degree of dissociation are visualized in Figure 3.3. Based on the convex hull defined by the most stable adsorption energy at each dissociation level, the adsorption energy is maximized at 50% dissociation, i.e., when half of the adsorbed water molecules are dissociated. Structures with more than 50% dissociation are highly unstable and unlikely to occur, whereas, among those with 50% or less dissociation, several structures are found within 0.05 eV of the minimum energy structure. A Boltzmann population analysis indicates that the minimum energy structure likely dominates at room temperature, although significant contributions from other structures are also present. Consequently, the overall degree of dissociation is probably 50% or less. The analysis presented in Paper I focused solely on the enthalpic contributions to water stability; however, the presence of many metastable structures with energies near the minimum suggests that configurational entropy could play a role in determining the absolute stability of surface adsorbate structures under ambient conditions.

Figure 3.3. The adsorption energy of 1 ML local energy minimum structures, with respect to the fraction of dissociated water molecules.

The role of hydrogen bonding
The more stable surface structures generally exhibit hydrogen bond lengths comparable to those found in liquid water and ice, with R(H···O) bond lengths around 1.8 Å.
Moreover, bonds of the type HOH···OH−, where water acts as hydrogen bond donor and a hydroxide ion serves as acceptor, are found to be especially common in the most stable water films. HOH···OH− bonds are observed in the surface structures at 25%, 50%, and 75% dissociation, and the most stable structures contain many of them. At 50% dissociation, a structure that maximizes the number of possible hydrogen bonds is identified; it consists of a chain of alternating H2O and OH–, as shown in Figure 3.4.

Figure 3.4. A chain of hydrogen-bonded H2O and OH–.

At 1.0 ML coverage, such chains extend infinitely along one axis and are separated by 5.0 Å in the perpendicular surface direction. Although it is debatable whether these structures can be observed experimentally, their distinctive features and high symmetry should render them well resolved by direct imaging techniques, such as AFM or STM. Figure 3.5 illustrates the monolayer and a simulated STM image (from Paper I), which may serve as a reference for future experiments.

Figure 3.5. Chain of alternating H2O and OHf adsorbed on ceria(111). The structure is most stable at a 1.0 ML coverage, with 50% of the water molecules dissociated. The right part of the figure shows the simulated STM image of the same structure, where the brightest spots correspond to the largest electron density.

3.3 Stability at different coverages
In Paper I, the adsorption energy of water between 0.5 and 2.0 ML is investigated, and the adsorption energies of both intact and dissociated monomers, studied in Paper II, are also included. Considering the lowest energy at each coverage, the magnitude of the adsorption energy is maximized at 1.0 ML (see Figure 3.6). Generally, for water on ionic solid surfaces, the adsorption energy peaks at or near 1.0 ML, where both the interactions between the surface and the adsorbates and the interactions among the adsorbates themselves are at their strongest.
As the coverage increases beyond 1.0 ML, water-water interactions become dominant, and the adsorption energy converges toward that of liquid water, a trend consistent with water adsorption on other oxides.

The calculated adsorption energies at the different coverages can be correlated with experimental results obtained from TPD analysis. At least four experimental studies of water desorption from ceria(111) have been published in the literature [38-41], in which two distinct spectral features are attributed to surface-adsorbed water: (i) a large peak centered around 150-200 K and (ii) a shoulder peaking at 200 K.

Figure 3.6. Adsorption energy of water in thin water films with respect to coverage. A coverage of 0.1 ML was considered, corresponding to an isolated water ad-molecule on the surface.

Desorption temperatures are estimated from the calculated adsorption energies using a three-step process. First, the adsorption energies are corrected for the zero-point vibrational contributions of both the adsorbed and gas-phase molecules. Second, the assumption is made that desorption occurs layer by layer, with the desorption energy at 1.0 ML being equal to the adsorption energy of that layer. For the 2.0 ML coverage, the desorption energy of the topmost layer was calculated as 2Eads,2.0ML − Eads,1.0ML. Finally, the desorption temperatures are determined using the Redhead equation (Eq. 2.5), with an experimentally derived prefactor from Ref. and a heating rate consistent with most experiments. The estimated desorption temperatures of 177 K (second layer) and 227 K (first layer) are in accordance with the experimental findings.

The adsorption energy and TPD experiments offer insight into the bond strength between water and the surface. Such measurements can be connected to the stability of various surface coverages under different vapor pressures and temperatures through a phase diagram.
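The final step, converting a desorption energy into a peak temperature, can be sketched by inverting the first-order Redhead relation; the prefactor and heating rate below are generic placeholder values, not those used in Paper I:

```python
import math

R = 8.314462618e-3  # gas constant, kJ/(mol K)

def redhead_peak_temperature(e_des, nu=1e13, beta=1.0):
    """Invert the first-order Redhead relation
        E = R * Tp * (ln(nu*Tp/beta) - 3.64)
    for the desorption peak temperature Tp by fixed-point iteration.
    e_des is in kJ/mol; the prefactor nu (s^-1) and heating rate
    beta (K/s) are generic placeholders, not the values of Paper I."""
    tp = 150.0  # initial guess, K
    for _ in range(100):
        tp_next = e_des / (R * (math.log(nu*tp/beta) - 3.64))
        if abs(tp_next - tp) < 1e-9:
            return tp_next
        tp = tp_next
    return tp

print(f"{redhead_peak_temperature(50.0):.0f} K")  # ~190 K
```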
A simple phase diagram (Figure 3.7) is constructed from my DFT calculations using the most stable structure at each coverage. The key parameter in generating this diagram is the surface energy γ (see Section 2.1), which depends on the chemical potential of water vapor in the gas phase, μ[H2O(g)]. The results suggest that under ambient conditions, specifically at a water partial pressure of approximately 10³ Pa and a temperature of 300 K, the 1.0 ML coverage would dominate.

Figure 3.7. Surface energy with respect to the vapor pressure of H2O for coverages of 1.0, 1.5, 1.75, and 2.0 ML.

3.4 Conclusions
The findings in Paper I show that hydrogen bonds are likely common between water molecules adsorbed on the ceria(111) surface and lead to stabilization of the adsorbates. This hydrogen bonding is facilitated by the partial dissociation of the water molecules in direct contact with the surface. The adsorption energies for multiple water thicknesses are also consistent with experimental TPD results in the literature. These findings motivated further exploration of surface-adsorbed water, highlighting the need to understand and verify the interactions between surface adsorbates, as discussed in the next chapter.

4. Paper II: Vibrational frequency shifts for surface-bound water

The thermodynamic analysis of water adsorbed on the ceria(111) surface, as described in Paper I, reveals that partial dissociation of water molecules can facilitate hydrogen-bond (H-bond) networks among surface-adsorbed water on ceria, leading to enhanced structural stability. H-bond structures are also central to Paper II, but here I introduce the OH vibrational frequencies as a probe and measure of the nature of the H-bonding. DFT calculations are performed for 142 structurally unique OH groups on the ceria(111) surface, enabling the simulation of both the vibrational frequencies and the structural configurations of surface-bound water molecules.
With these data I do three things: (i) compare the frequencies to available experimental data in the literature, (ii) examine whether traditional frequency vs. H-bond distance relations are adequate for surface-adsorbed water molecules and hydroxide ions, and (iii) establish correlations between frequency and structural motifs, i.e., the local surroundings of the OH groups. The results should be helpful to experimental spectroscopists in their quest to assign spectral signals to structural features.

4.1 Experiments for validation and DFT for new insights
OH stretching vibrational frequencies are used experimentally to identify water molecules and hydroxide ions in condensed matter and on surfaces, thereby also characterizing the surrounding environment, including the presence of hydrogen bonds. The vibrational stretching frequency of the OH group is known to serve as a reliable indicator of the internal OH-bond strength, which is commonly weakened by interactions with neighboring atoms, leading to a frequency downshift (redshift) [53, 56]. For surface-adsorbed water molecules, the OH frequency is affected by coordination with both the surface and other water molecules. Redshifts induced by hydrogen bonding are often greater in magnitude than those induced by coordination and often mask the latter.

Paper II contains a comprehensive collection of experimental frequencies, obtained from FTIR, Raman spectroscopy, and IRRAS, of intact and dissociated water adsorbed on pristine ceria surfaces, reported by various authors [57-79] up to and including 2020. Despite the diversity of sources, these publications collectively present quite coherent assignments of frequencies to certain structural motifs, especially among isolated water molecules coordinated to the surface (Fig. 4.1a). The following species were categorized in order of descending frequency: OH−1Ce, H2O−1Ce, OH−2Ce, OH−3Ce, and H-bonded water.
The first four correspond to H2O or OH– coordinated to 1, 2, or 3 Ce cation(s) in the surface. In addition, these assignments largely align with the simulated results of Paper II, indicating that the assignments made in the 1980s remain relevant today. Fig. 4.1 (b) shows my DFT-calculated anharmonic OH frequencies for the water and OH groups on ceria(111) included in Paper II. Overall, the agreement between experiment and calculation is good. It can further be noted that the H-bonded category in Fig. 4.1 spans a large vibrational frequency range. One goal of this work was to resolve this category into separate subcategories. This will be further elaborated in Section 4.3.

Figure 4.1. Comparison of OH vibrational frequencies for water/ceria(111) interfaces and their assignments from (a) the experimental literature, as assigned by the authors themselves, and (b) computed results in Paper II.

4.2 H-bond definitions: geometry or frequency?
Paper II focuses on H-bonded water molecules, typically involving an H-bond donor (O-H) binding to an acceptor molecule or ion, represented as D−H···A (where D is the donor and A is the acceptor), or as a Donor···Acceptor pair. In the simulated systems, the acceptor is consistently oxygen, found as part of the surface O2–, a surface-adsorbed water molecule (Ow), or a surface-adsorbed hydroxide ion (Of). A first step is therefore to identify and classify H-bonds by some criteria. Two established geometric criteria are used for this task: one suitable for bulk crystals, proposed by Luzar and Chandler, and one suitable for liquid water, proposed by Wernet et al. The two works classify H-bonding by the following distance and angle criteria: R(O···O) ≤ 3.5 Å, Φ < 30°, and R(O···O) ≤ 3.3 − 0.00044 Φ² Å, respectively (Fig. 4.2).

Figure 4.2. Geometrical hydrogen bond definitions reported in references [80, 81] and Paper II, overlaid on the OwH···O and OH−···O data among the 142 unique OH oscillators. Def. 1 is that of Luzar and Chandler, Def.
2 is that of Wernet et al. Def. 3 is that of Paper II.

An intriguing observation arose when I overlaid the simulated vibrational frequencies of each OH oscillator: molecules with significantly red-shifted vibrational frequencies are not categorized as H-bonding according to either of the two definitions, as can be seen from the fact that, e.g., the substantially red-shifted green dots in Fig. 4.2 fall outside the borders pertaining to both Definition 1 and Definition 2 in the figure. This comparison suggested that, although the red-shifted frequency indicated the presence of an H-bond, these bonds did not conform to the existing geometric criteria.

Further comparisons of the distribution of H-bond angles in surface-adsorbed molecules with those in hydrated crystals and bulk water revealed that surface-adsorbed molecules form quite, or even very, bent H-bonds, yet exhibit substantial red-shifts. This led to proposing a new set of criteria better suited to the surface systems' characteristics. By examining H-bond distances and angles in relation to the maximum vibrational frequency classified as H-bonded, it was established that a bond distance R(H···O) of 2.5 Å and an angle θ of approximately 100 degrees were reasonable cutoffs (Fig. 4.2). This was supported by the observation that red-shifts of up to about 150 cm−1 typically mark non-bonded OH oscillators, and under the new criterion these were indeed all classified as non-H-bonded. Thus, this newly established criterion effectively reflects the unique nature of H-bonds on surfaces.

Comparing the bond environments of water species on surfaces with those found in bulk crystals, it is evident that the hydrogen bonds in bulk samples are much straighter and are well described by the established criteria. This is illustrated by an example comparing the bond angles with respect to the frequency (investigated in Paper III) in Figure 4.3.

Figure 4.3.
Hydrogen bonds found among surface adsorbates are more heavily angled than those found in bulk samples, as is evident from a comparison of bond angles with respect to vibrational frequencies for surface-bound water (blue rings) and surface-bound hydroxides (red rings) against water from bulk crystalline hydrates presented in ref.

The same physics certainly governs the vibrational frequency shifts induced by hydrogen bonding in both bulk and surface systems, so it is reasonable that the updated criterion would also be applicable to bulk samples. However, as most hydrogen bonds are linear in bulk crystals, there are few examples of heavily angled bonds where the frequency shift can be evaluated to either confirm or disprove the new criterion.

4.3 Vibrational frequencies for different H-bond motifs
Using the newly proposed, frequency-based criteria for determining H-bonds in surface-adsorbed water and hydroxides, a number of motifs are identified in the studied surface systems; these are visualized in Figure 4.4. They consist of the Donor···Acceptor pairs OwH···OH2, OwH···OH−, OwH···O2−, OH−···OH2, and OH−···OH−.

Figure 4.4. Hydrogen-bonded Donor···Acceptor pairs identified using the criteria proposed in Paper II for H2O and OH– adsorbed on the stoichiometric ceria(111) surface.

These motifs exhibit a diversity of hydrogen bonds on the surface, indicating that molecular adsorption sites are predominantly dictated by the available surface Ce cations, with their orientation influenced by hydrogen bonding. The pronounced large angles of these hydrogen bonds, as depicted in Fig. 4.2, can be attributed to the constraints imposed by the large space between Ce adsorption sites. The Ce cations, which serve as adsorption sites, are spaced about 3.8 Å apart, which greatly exceeds the optimal O-O H-bond distance of 2.9 Å found in crystalline ice.
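The geometric H-bond definitions compared above (Defs. 1–3 of Fig. 4.2) can be condensed into a small classifier. The sketch below is illustrative only: it assumes that Φ in Defs. 1 and 2 denotes the deviation from linearity and that Def. 3 combines the R(H···O) ≤ 2.5 Å cutoff with a minimum D−H···A angle of about 100°; the exact angle conventions are those of Paper II and refs. [80, 81].

```python
def classify_hbond(r_oo, r_ho, dha_angle_deg):
    """Classify one D-H...A contact under three geometric H-bond criteria.

    r_oo          : donor-acceptor O...O distance in Angstrom
    r_ho          : H...O distance in Angstrom
    dha_angle_deg : D-H...A angle in degrees (180 = perfectly linear bond)
    Returns a dict of criterion name -> bool.
    """
    phi = 180.0 - dha_angle_deg  # deviation from linearity, as used in Defs. 1-2
    return {
        # Def. 1 (bulk-type, Luzar-Chandler): distance cutoff plus a 30-degree cone
        "def1": r_oo <= 3.5 and phi < 30.0,
        # Def. 2 (liquid-water-type, Wernet et al.): angle-dependent distance cutoff
        "def2": r_oo <= 3.3 - 0.00044 * phi**2,
        # Def. 3 (Paper II, surface-adapted): short H...O contact, bent bonds allowed
        "def3": r_ho <= 2.5 and dha_angle_deg >= 100.0,
    }

# A strongly bent surface H-bond: rejected by Defs. 1 and 2, accepted by Def. 3
bent = classify_hbond(r_oo=3.0, r_ho=2.2, dha_angle_deg=120.0)
```

Under these assumptions, a nearly linear bulk-like contact passes all three definitions, whereas the bent surface contact above is recognized only by the surface-adapted criterion.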
This relationship elucidates how the cations' spacing in the first layer influences the hydrogen bond geometry, allowing for the 1D chain described in Paper I (Chapter 3).

Among H-bonded oscillators, the simulated vibrational frequencies largely agree with those reported experimentally, as seen in Figure 4.1. By dividing the computationally determined data into the Donor···Acceptor pairs assigned using the new H-bond criteria, and comparing them by frequency, a new picture emerges: collectively, the H-bonded oscillators show up as one wide distribution, but divided by Donor···Acceptor pair, five distributions can be resolved (Fig. 4.1, right panel, and a close-up in Fig. 4.5). Although the groups heavily overlap, they appear in characteristic frequency ranges corresponding to the strength of the H-bonds of each group. The weakest H-bonds are found at around 3500 cm−1 and correspond to OwH···OH2 pairs. Moderately shifted are the two groups OH−···OH2 and OH−···OH−, and moderate to strong bonds appear in the groups OwH···O2− and OwH···OH− (Figure 4.5). The large shifts observed among OwH···OH− are consistent with these bonds contributing significantly to stabilization, as discussed in the previous chapter. This qualitative assignment might be useful in interpreting existing and future observed vibrational frequencies.

Figure 4.5. Comparison of vibrational frequencies of H-bonded molecules in (a) the experimental literature and (b) in Paper II, with the new motifs proposed.

Figure 4.6. Scatterplots of OH frequency with respect to (a) R(O···O), (b) R(H···O), and (c) the effective electric field at the position of H induced by surrounding atoms when the molecule itself is removed. The points are colored according to the hydrogen bond motif. Panel (c) also includes non-H-bonded OH oscillators.

By plotting the internal distances against the vibrational frequencies, so-called structure-property relations are obtained.
These are useful for resolving structure from frequency and will be discussed in more detail in the next chapter. Without going into much detail at this point, these correlation curves allow a basic representation of the immediate chemical surroundings to be estimated from the vibrational frequency; in this case, the hydrogen bond acceptor and the effective electric field at the oscillating OH.

A classification into Donor···Acceptor pairs (or the lack thereof), based on the new H-bond criteria, is instructive for the interpretation of these curves (Figure 4.6), as it clearly shows that otherwise scattered plots hide narrower distributions.

An example is R(O···O). Overlaying the structure motifs on the frequency vs. R(O···O) assignments, more or less three independent branches appear, as seen in Figure 4.6 (a). The separation correlates quite well with the characteristic angles of the respective motifs, where θ is ordered as OH−···OH− < OwH···O2− < OwH···OH−. Such a separation is not present in the correlation with R(H···O), further indicating that R(H···O) is more suitable than R(O···O) for distance- and angle-based H-bond criteria for surface-adsorbed water. It also highlights that the angle dependence is not random but associated with the Donor···Acceptor type.

4.4 Conclusions
By analyzing hydrogen bonding patterns among surface adsorbates and comparing them to their vibrational frequencies, it was found that surface hydrogen bonds are significantly more angled than those in bulk hydrates and hydroxides. A new classification criterion is proposed, which is better suited for surface-adsorbed water. Applying this criterion to assign hydrogen bonding motifs allowed the broad category of hydrogen bonds to be resolved into distinct donor–acceptor motifs, resulting in narrower groups with distinct yet overlapping vibrational frequency ranges.

5.
Papers III and IV: Information content of structure descriptors relating to the vibrational frequency

5.1 Motivation
As previously described, vibrational spectroscopy is a leading characterization technique for materials and surface systems. It helps to understand a sample's properties and is commonly used to identify molecules or molecular fragments. In this chapter, the property of interest is the vibrational frequency of OH groups. As shown in the previous chapter, the vibrational frequency can identify the number of cations to which surface-adsorbed water coordinates and, to some extent, intramolecular bonding patterns, such as the hydrogen bonding motif. For these tasks, structure-property correlations — both qualitative and quantitative — are useful. Structure-property correlations refer to the relationships between a material's atomic and/or electronic structure and a specific property, such as the vibrational frequency.

These structure-property correlations enable the prediction of the OH vibrational frequency from the structure or a structure descriptor. Descriptors can be any quantitative measure taken from the three-dimensional structure as a summary statistic. Such predictions are alluring from a computational perspective, since they allow one to omit the resource-intensive vibrational calculation entirely. These descriptors are also useful because they can be used to identify the origins of induced vibrational frequency shifts.

Many structure-frequency correlations based on the hydrogen bond distances R(O···O) and R(H···O) have been published in the literature [53, 56, 83–85]. Similarly, the effective electric field at the site of a water molecule or hydroxide ion has been linked to the vibrational frequency [86–93] and the IR intensity [91–93]. The structure can also be quantified using high-dimensional basis-set approaches to map the atomic environment; ref.
utilizes atom-centered symmetry functions (ACSF) to obtain a fingerprint of the environment, for which an artificial neural network is trained to reproduce the vibrational frequency and intensity. The examples above demonstrate the wide variety of structure-frequency correlations that exist and can be used to map vibrational frequencies from structures. The focus of Papers III and IV is to evaluate the accuracies of these structure descriptors with respect to the vibrational frequency.

5.2 Descriptors and means of assessing their quality

Descriptors
Detailed analyses of structure descriptors for hydrogen bonds in surface-adsorbed water are conducted in Paper III, and in water in crystals in Paper IV. In both papers, the main focus is on determining and evaluating the structural information useful in reproducing the vibrational frequency; in other words, the information content available in various structural descriptors with respect to the vibrational frequency is assessed. In essence: which descriptor is best to use if you want to predict the vibrational frequency?

Frequencies of OH groups in intact and dissociated water in many systems are simulated: Paper III contains 217 structurally unique OH groups from 38 water/metal-oxide interfaces on CeO2(111), MgO(001), and CaO(001), and Paper IV contains over 300 unique OH oscillators from 101 crystalline hydrate and hydroxide systems. For these OH species, several descriptors are selected, including the geometric H-bond distances, bond orders, effective electric field descriptors, and "machine learning"† descriptors. These descriptors were selected to represent various levels of physics content and complexity. The descriptors all reflect the equilibrium structure and are based on one or more of the following: (a) the positions of atoms close to the OH oscillator, (b) the positions of all atoms surrounding the OH oscillator, or (c) the full electron density.
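As a toy illustration of category (a), the simplest geometric descriptors can be computed directly from three atomic positions. The function and the collinear test geometry below are my own illustrative sketch, not code from the papers:

```python
import numpy as np

def geometric_descriptors(o_d, h, o_a):
    """Basic geometric descriptors for one O-H...O unit (coordinates in Angstrom).

    o_d: donor oxygen, h: the oscillating hydrogen, o_a: acceptor oxygen.
    Returns r(OH), R(H...O), R(O...O) and the O-H...O angle in degrees.
    """
    o_d, h, o_a = (np.asarray(p, float) for p in (o_d, h, o_a))
    r_oh = float(np.linalg.norm(h - o_d))
    r_ho = float(np.linalg.norm(o_a - h))
    r_oo = float(np.linalg.norm(o_a - o_d))
    # angle at H between the H->O_donor and H->O_acceptor directions
    # (180 degrees corresponds to a perfectly linear H-bond)
    v1, v2 = o_d - h, o_a - h
    cos_t = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    theta = float(np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0))))
    return r_oh, r_ho, r_oo, theta

# Invented collinear test geometry: donor O at the origin, H at 1.0 A,
# acceptor O at 2.8 A along the same axis
r_oh, r_ho, r_oo, theta = geometric_descriptors([0, 0, 0], [1.0, 0, 0], [2.8, 0, 0])
# r_oh = 1.0, r_ho = 1.8, r_oo = 2.8, theta = 180.0
```

The higher-dimensional descriptors (ACSF, SOAP) and the electronic ones are built on the same optimized geometries but require basis-function expansions or the electron density rather than a handful of distances.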
All descriptors are listed in Table 5.1.

Table 5.1. List of descriptors used to correlate the structure with the vibrational frequency. All descriptors reflect equilibrium positions and distances.

Descriptor | Type | Description | Paper
r(OH) | geometric | O−H intramolecular bond distance | III, IV
R(H···O) | geometric | H−O interatomic bond distance | III, IV
R(O···O) | geometric | O−O interatomic bond distance | III, IV
s(OH)geom | geometric | Bond order derived from O distances | III
θ(O−H···O) | geometric | Interatomic bond angle | III
ACSF | ML geometric | Atom-centered symmetry functions | III, IV
SOAP | ML geometric | Smooth overlap of atomic positions | III
s(OH)QM | electronic | Bond order derived from electron density | III
|E| | electronic | Electric field∗ magnitude | III, IV
E∥H···O | electronic | Electric field∗ along the H···O H-bond | III
E⊥OH | electronic | Electric field∗ perpendicular to the OH bond | III
E∥OH | electronic | Electric field∗ parallel to the OH bond | III, IV
E′ | electronic | Derivative of the electric field∗ (E∥OH) | IV

∗Effective electric field measured at the equilibrium coordinate of H in the oscillating OH bond. Electric fields are typically calculated from charges derived from the electron density; however, the papers also feature charges derived from the atoms' respective oxidation states.
†Basis-set descriptors of such high dimension that they are only practical for machine learning methods.

Determining the accuracy of descriptors
To assess the information content provided by each descriptor, a multi-step process is used:
I. Structure optimization and OH frequency calculations. Initially, all structures are optimized, and vibrational frequencies are computed for each OH oscillator using the isotope isolation approach.
II. Calculating structure descriptors. From the optimized structure, the atomic positions and electron density are used to calculate the structure descriptor values or vectors.
This is achieved by selecting certain bond distances, computing the electric field at an atom, or applying a function to the positions of all atoms within a radius around an atom, as in the case of the ML descriptors ACSF and SOAP.
III. Relating structure descriptors to frequency. Each descriptor is translated to the vibrational frequency using a function, yielding an approximate, descriptor-derived vibrational frequency. The function is purposefully left ambiguous here, but will be described in detail shortly.
IV. Assessing accuracy. The root mean square error of the approximate, descriptor-derived frequency with respect to the DFT-computed frequency is calculated.

In this way, the information content of the descriptors can be assessed under the assumption that the function in step III accurately relates the descriptor to the frequency. Since publishing these papers, I have learned that statistical methods exist to circumvent step III entirely. One example is the Hilbert-Schmidt independence criterion, described in refs. [95, 96], which can be used to directly assess the correlation between descriptors of arbitrary dimensionality and the frequency, yielding a covariance measure as an indicator of correlation. That said, the method chosen in Papers III and IV was carefully designed to minimize function-induced errors, meaning that most, if not all, of the error is attributed to the descriptor.

Relating structure descriptors to frequency
The descriptors can, in principle, be related to the vibrational frequency using any mathematical function. Libowitzky fitted the correlations of R(H···O) and R(O···O) with respect to the frequency using exponential curves of the form ν = a − b e^(−R/c), where ν is the frequency and R is the descriptor. The frequency's correlation to an electric field, on the other hand, has been fitted using second-degree functions of the form ν = a − bE², where E is the electric field, in refs. [86–93].
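The two literature functional forms above can be written down and, for the quadratic field form, fitted by ordinary least squares, since ν = a − bE² is linear in the basis (1, E²). The data below are purely synthetic, generated from assumed (non-literature) coefficients solely to demonstrate that the fit recovers them:

```python
import numpy as np

def libowitzky_form(R, a, b, c):
    """Exponential distance-frequency correlation: nu = a - b * exp(-R / c)."""
    return a - b * np.exp(-R / c)

def field_form(E, a, b):
    """Quadratic field-frequency correlation: nu = a - b * E**2."""
    return a - b * E**2

# Synthetic demonstration only: sample points from assumed coefficients,
# then recover (a, b) by linear least squares in the basis (1, E^2)
rng = np.random.default_rng(0)
E = rng.uniform(0.0, 0.1, 50)            # field magnitudes, arbitrary units
nu = field_form(E, a=3700.0, b=2.0e5)    # assumed, not literature, coefficients
A = np.column_stack([np.ones_like(E), E**2])
coef, *_ = np.linalg.lstsq(A, nu, rcond=None)
a_fit, b_fit = float(coef[0]), float(-coef[1])
```

The exponential Libowitzky form is nonlinear in its parameters and would instead require an iterative least-squares routine; the point here is only that each published correlation presupposes a specific functional shape.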
However, three challenges exist in comparing the descriptors on an equal footing: (a) for many descriptors, or combinations of descriptors, no function has been reported in the literature for the frequency-descriptor correlation, meaning that a model is required that follows the shape of the data without prerequisite knowledge of the correlation; (b) the amount of data is comparatively small, meaning that most machine learning methods must be excluded; and (c) the dimensionality varies greatly between descriptors, where r(OH) is one-dimensional but ACSF can contain ten or more dimensions.

In view of these difficulties, Gaussian process regression (GPR) was employed to fit the structure descriptors, modulated by a kernel, to the computed vibrational frequencies in such a way as to minimize the root mean square error (RMSE) between an estimated frequency and the one determined from quantum mechanical calculations of a training set. As this is a fitting procedure, careful consideration was given to splitting the data into training and test sets to allow for the optimization of coefficients and hyperparameters; see Paper III for details. The resulting model was then applied to a test set, and the resulting RMSE is used as an indicator of the quality of fit for the descriptor, where a small value is indicative of an accurate descriptor.

5.3 One-dimensional descriptor correlations

Figure 5.1. Ten one-dimensional, single-valued structure descriptors related to the vibrational frequency of OH oscillators. The top panel for each descriptor shows the GPR-fitted model, whereas the bottom shows an analytical function suggested in the literature.

All one-dimensional descriptors are compared on an equal footing by their root mean squared error (RMSE), obtained using the aforementioned method. Figure 5.1 shows the 10 descriptors studied in Paper III.
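A minimal, numpy-only sketch of the GPR step is given below: the posterior mean with a squared-exponential (RBF) kernel and a constant prior. The papers' actual kernels, hyperparameter optimization, and cross-validation splits are more elaborate, and the descriptor-frequency data here are synthetic, generated from an invented smooth R(H···O) curve purely for demonstration:

```python
import numpy as np

def rbf_kernel(X1, X2, length_scale):
    """Squared-exponential kernel between two sets of descriptor vectors."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def gpr_predict(X_train, y_train, X_test, length_scale=0.3, noise=1e-4):
    """GP posterior mean with a constant (training-mean) prior."""
    K = rbf_kernel(X_train, X_train, length_scale) + noise * np.eye(len(X_train))
    alpha = np.linalg.solve(K, y_train - y_train.mean())
    return y_train.mean() + rbf_kernel(X_test, X_train, length_scale) @ alpha

# Synthetic stand-in for DFT data: an invented smooth R(H...O) -> frequency curve
rng = np.random.default_rng(1)
R = np.sort(rng.uniform(1.5, 2.5, 60))
nu = 3700.0 - 900.0 * np.exp(-(R - 1.2) / 0.3)   # assumed shape, not a literature fit

# Hold out every fifth point as a test set and score by RMSE, as in step IV
test = np.arange(60) % 5 == 0
pred = gpr_predict(R[~test, None], nu[~test], R[test, None])
rmse = float(np.sqrt(np.mean((pred - nu[test]) ** 2)))
```

Because GPR with a flexible kernel simply follows the shape of the training data, the test-set RMSE isolates how much frequency information the descriptor itself carries, which is the quantity compared across descriptors in Figure 5.1.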
Here, the equilibrium bond length of the OH oscillator, r(OH), clearly outperforms the other descriptors with an RMSE of 31 cm−1, compared to the next-best descriptor, R(H···O), at 84 cm−1. The low RMSE of r(OH) is evidence of the tight correlation between it and the frequency. Due to the low RMSE, the descriptor is practical as a substitute for the vibrational frequency. For this reason, it is used in Paper IV in place of explicit vibrational analysis to significantly accelerate the calculations.

Although r(OH) excels with a low RMSE, the descriptor is sensitive to small variations in the bond length, making it impractical to extend these results to the interpretation of diffraction data or dynamical systems. Errors or variations in these systems are often of the same magnitude as the entire range of equilibrium r(OH) values, which propagates to large uncertainties in vibrational frequencies.

The intermolecular bond distance R(H···O) (RMSE of 84 cm−1) and the effective electric field along the hydrogen bond EF∥H···O (RMSE of 86 cm−1) are descriptors based on the external chemical environment. These descriptors consistently show low RMSE in bulk and surface systems, indicating that these systems can be viewed as hydrogen-bonded and electrostatic in nature.

Interestingly, in bulk systems the hydrogen bonds are generally linear (θ̄ ≥ 165°), whereas in surface systems they span a much larger range (θ̄ = 120−180°). This, in turn, makes the descriptor R(O···O) comparable to R(H···O) in bulk (RMSE 98 cm−1), whereas it performs poorly on surface systems (RMSE 193 cm−1). The high RMSE for R(O···O) can, however, be compensated for by including information about the hydrogen bond angle in a combined analysis, where the two perform well together (RMSE 88 cm−1).

5.4 Multidimensional descriptors
In the previous section, one combination of descriptors was considered, namely R(O···O) and the hydrogen bond angle.
In this section, combinations of one-dimensional descriptors are considered. Using pairwise combinations of descriptors, lower RMSEs are obtained. For surface-adsorbed water (Paper III), it was observed that combining geometric and quantum mechanical descriptors yielded more accurate results than combinations within each group separately. Equal RMSEs of 62 cm−1 were found for two combinations: (a) the hydrogen bond angle θ and EF∥OH, and (b) R(H···O) and EF∥H···O. Similarly, in bulk systems (Paper IV), combining the electric field and its derivative resulted in a lower RMSE of 75 cm−1, compared to 87 cm−1 for the electric field alone.

A combination of all one-dimensional descriptors (excluding r(OH)) resulted in an RMSE of 55 cm−1 (Paper III), which is probably the lowest obtainable using one-dimensional descriptors derived from hydrogen bond relations.

The multidimensional descriptor ACSF considers the positions of surrounding atoms using a basis-set approach. The results showed that ACSF performed favorably, yielding similar results in both bulk and surface systems (RMSE of 65 cm−1 in surface systems and 63 cm−1 in bulk systems) when the vibrating OH oscillator was excluded. In Paper III, the full system was also analyzed, giving an RMSE of 29 cm−1, a modest improvement over the RMSE of 31 cm−1 for r(OH) alone.

5.5 Insight vs accuracy
Low-RMSE descriptors are highly predictive for translating structure to vibrational frequency. Suitable multidimensional descriptors are flexible enough to detect subtle nuances of the chemical environment, resulting in highly accurate models. However, these descriptors are humanly incomprehensible and therefore difficult to utilize when making a qualified guess about a bonding environment from a vibrational frequency. The statistical method presented here was designed to compare descriptors based on their ability to reproduce the vibrational frequency, irrespective of complexity.
On the other hand, more subjective measures are important to consider when assessing the usability of a descriptor, e.g., what does it tell us about the chemistry and physics of the system? A physically intuitive model is valuable if it provides heuristics or easily understandable relationships that can be used as rules of thumb. In addition, if such a method allows for a reciprocal (one-to-one) mapping between the structure and the vibrational frequency, it is even more valuable. The issue of predictability versus insight is discussed through the "curve of insight" seen in Figure 5.2, which is discussed in detail in Paper IV.

As discussed above, the OH equilibrium bond length, r(OH), undoubtedly yields the lowest RMSE. This reflects the fact that both the vibrational frequency and r(OH) accurately reflect the internal bond strength of the OH group; the descriptor and the frequency essentially probe the same property.

Looking among descriptors that provide information about the surrounding environment, the simple geometric descriptors, e.g., R(H···O), and electric field descriptors such as EF∥H···O are decent estimators of the vibrational frequency. In fact, the RMSE of the R(H···O) and EF∥H···O combination is nearly as low as that of ACSF, which captures the entire environment around OH, suggesting that hydrogen bonding accounts for essentially the full frequency shift. For this reason, both the electric field and hydrogen bond distances provide valuable information.

Figure 5.2. Illustration of the "curve of insight," i.e., the subjective amount of knowledge that can be derived by interpreting the descriptor-to-frequency correlations.

5.6 Conclusions
The information content of descriptors was assessed by comparing how well structural descriptors of hydrogen-bonded water and hydroxides on surfaces and in bulk can reproduce the DFT-calculated vibrational frequencies.
Using Gaussian process regression, a method was created for comparing descriptors of arbitrary dimensionality without prerequisite knowledge about the descriptor-frequency relation.

Using this method, it was found that multidimensional geometric descriptors, and combinations of simple geometric and quantum descriptors of the external environment, contain a significant amount of information that can be used to reproduce the frequency. However, the best-in-class descriptor is the internal r(OH) descriptor, which shows significantly lower RMSE than all other descriptors investigated, indicating that it is the most efficient in translating structure to frequency.

By using the descriptors to provide insight into the underlying physics responsible for the frequency shifts, it was found that descriptors such as the hydrogen bond distance R(H···O) and EF∥H···O perform well, indicating that hydrogen bonding, through electrostatic interactions, induces the majority of the vibrational shift.

6. Paper V: Finding heuristics for dioxygen adsorption on ceria

6.1 Motivation
Ceria is perhaps best known for its redox activity and large oxygen storage capacity (OSC), whereby oxygen from the lattice can partake in chemical reactions and be regenerated from gaseous oxygen. The OSC has been attributed to two different mechanisms: one involving lattice oxygen in chemical reactions by means of the Mars-van Krevelen reaction mechanism, the other involving surface-adsorbed dioxygen O2, superoxide O2–, and peroxide O22–, collectively referred to as O2X– species [98–101], which can be adsorbed on the surface or in vacancies. These surface-adsorbed O2X– species are more common on nano-structured materials and are visualized, for the three low-index ceria surfaces, in Figure 6.1.

Figure 6.1.
Illustration of the CeO2(111), (110), and (100) surfaces with examples of adsorbed O2X– species in oxygen vacancies and on top of the surface, seen from a side and a top view.

One technique that shows promise for inferring the adsorption geometry of O2X– species is infrared reflection absorption spectroscopy (IRRAS), which uses illumination with polarized light along controlled crystal orientations to characterize surface-adsorbed molecules. O2X– species are inherently difficult to study using IRRAS, as the molecule has no permanent dipole moment, making it IR-inactive without an external perturbation breaking this symmetry. On surfaces the signal is weak, and state-of-the-art experiments are required to distinguish their IR absorption from background noise.

When interpreting the experimental spectra of surface-adsorbed molecules, authors in the literature commonly employ a set of heuristics to deduce the molecular orientation based on the direction of the transition dipole moment of the adsorbed molecules [102–111]. The transition dipole moment is the electric dipole induced by the vibrational motion or, more exactly, by vibrational excitation. Such a heuristic is used to determine whether the molecule stands perpendicular to the surface: molecules lying in the surface plane are attributed a small signal, whereas molecules standing perpendicular to it give rise to a detectable signal.

In Paper V, the relation between adsorption geometries and IRRAS intensities is investigated by simulating the IRRAS intensities of O2X– species in various orientations. Several O2X– species adsorbed in multiple adsorption geometries on the stoichiometric and reduced (100), (110), and (111) surfaces of ceria are modeled, and their adsorption geometries and vibrations are determined from DFT simulations. Additionally, the IRRAS intensity is calculated from the transition dipole moment using a three-layer model, as described in more detail in Section 2.1.
6.2 Adsorbed reactive oxygen species on ceria
The O2X– species all have characteristic vibrational frequencies and intramolecular bond lengths; both reflect the strength of the O-O bond of the species. The main difference between the differently charged species lies in the occupation of the π∗ antibonding orbitals, which contain 0, 1, and 2 electrons for O2, O2–, and O22–, respectively, as visualized in Figure 6.2 (a). The resulting bond elongation and red-shifted vibrational frequencies with increased occupation of this orbital are shown in Figure 6.2 (b). The assignments of chemical species presented in Paper V are consistent with those made by experimental IRRAS and Raman spectroscopy [112, 113] of O2– and O22– on low-index ceria surfaces, in all cases verified by calculations. The correlation and assignments also agree with O2– and O22– species adsorbed on the platinum(111) surface.

Figure 6.2. (a) Schematic representation of the orbital diagram of O2, with the lowest unoccupied molecular orbital, a partially occupied antibonding π orbital (π∗), highlighted. With increased π∗ orbital population, the O-O bond is weakened, resulting in a longer O-O bond length and a decreased vibrational frequency. (b) Observed intramolecular O-O bond lengths and vibrational frequencies of surface-adsorbed O2X– species on ceria.

6.3 Simulated IRRAS

Figure 6.3. Reflectivity of p-polarized light interacting with TDM⊥ (blue) and with TDM∥ (orange, magnified 100 times), and s-polarized light interacting with TDM∥ (green, magnified 100 times), simulated with nsub = 2.2 (left) and 1.8 (right) at an incident light angle of 84 degrees. In this example the transition dipole moments are of the same magnitude, whereas the observed contributions vary in size.
In this work, IRRAS spectra are simulated using refractive indices that match those of ceria in its oxidized and reduced forms, with incident light angles chosen to replicate common experimental setups. Figure 6.3 shows a representative example of O22– on (110), where all three components of IR light absorption are visualized. s-polarized light couples to a transition dipole moment vector parallel to the surface plane and results in light absorption. p-polarized light, on the other hand, interacts with two transition dipole moment components, one normal to the surface and one parallel to it, which act in opposing directions. Large transition dipole moments perpendicular to the surface yield large absorption, whereas large transition dipole moments in the surface plane result in large reflections. In the example, however, it can be seen that p-polarized light resonating with the transition dipole moment component perpendicular to the surface dominates, as it is several orders of magnitude larger than the other two components. As shown in Figure 6.4, the total p-polarized light absorption (negative reflectivity) is almost perfectly proportional to the square of the transition dipole moment. Thus, it is reasonable to approximate the total intensity by the transition dipole moment perpendicular to the surface.

Figure 6.4. Reflectivity of p-polarized light correlated with (TDM⊥)² per unit area. For O2X– species, the total reflectivity is almost perfectly proportional to (TDM⊥)². The data points that deviate from the ideal (gray) line have a non-zero TDM parallel to the surface, decreasing the magnitude of the peak.

It should be noted that these simulations are based on grazing incidence angles, as most experiments are set up to maximize the p-polarized light intensity of surface-adsorbed molecules on metallic substrates, as discussed in . However, the same selection rules do not apply to dielectric substrates .
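The polarization selection behavior can be illustrated with bare projection factors |ê·TDM|². The sketch below is my own simplification: it assumes an xz plane of incidence with the surface normal along z, and it deliberately omits the Fresnel coefficients and the three-layer substrate response that produce the actual reflectivity curves in Fig. 6.3:

```python
import numpy as np

def coupling_factors(tdm, incidence_deg):
    """Projection factors |e_hat . TDM|^2 for s- and p-polarized light.

    Assumed geometry: plane of incidence is xz, surface normal is z.
    Fresnel and three-layer substrate effects are intentionally omitted.
    """
    a = np.radians(incidence_deg)
    e_s = np.array([0.0, 1.0, 0.0])              # s: in-plane, normal to plane of incidence
    e_p = np.array([np.cos(a), 0.0, np.sin(a)])  # p: in the plane of incidence
    tdm = np.asarray(tdm, float)
    return float(np.dot(e_s, tdm) ** 2), float(np.dot(e_p, tdm) ** 2)

# At grazing incidence (84 deg), p-polarized light projects almost fully
# onto a TDM along the surface normal ...
s2_perp, p2_perp = coupling_factors([0.0, 0.0, 1.0], 84.0)
# ... while a TDM lying in the plane of incidence couples only weakly to p-light
s2_par, p2_par = coupling_factors([1.0, 0.0, 0.0], 84.0)
```

Even this bare geometry reproduces the qualitative heuristic: at grazing incidence the p-polarized projection onto a surface-normal TDM approaches unity, while the in-plane projections stay small.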
When using a refractive index representative of stoichiometric ceria (n = 2.2), the simulations suggest that an optimal angle would be around 65 degrees; see figure 6.5. At such an angle, all three measured components would be larger than at 84 degrees. However, the p-polarized light component relating to TDM⊥ would still dominate, making the continued discussion equally relevant.

Figure 6.5. Reflectivity, for each light component, as a function of incident light angle and substrate refractive index. Due to the selection rules of dielectric substrates, the light absorption is larger at 65° than at 84° for ceria with a refractive index of n = 2.2.

6.4 Relating transition dipole moment to molecular adsorption angle

How the transition dipole moment is generated from the redistribution of charges following the vibrational excitation can be interpreted using two simple models, each with its own implementation in terms of the Born effective charge (BEC) tensor:

I. In the first model, it is assumed that two point charges suffice to describe the transition dipole moment of the vibrating molecule. This can be pictured as an O-O diatomic molecule/ion that is perturbed by a potential or electric field, inducing a charge redistribution within the molecule that makes one atom slightly positively charged and the other slightly negatively charged. This model is analogous to considering only the diagonal elements of the Born effective charge tensor when calculating the transition dipole moment.

II. In the second model, the transition dipole moment is not induced by oscillating point charges but by a more flexible charge redistribution which, through polarization or charge transfer, can induce a transition dipole moment in any direction. This model uses the full Born effective charge tensor and is therefore more accurate.
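The difference between the two models can be sketched numerically. All numbers below (the BEC tensors and the displacement pattern) are hypothetical, chosen only to illustrate the mechanism: for a molecule stretched along x, off-diagonal Z* elements can still produce a TDM component along z, which the diagonal-only model misses by construction.

```python
import numpy as np

def tdm(z_becs: np.ndarray, mode: np.ndarray, diagonal_only: bool = False) -> np.ndarray:
    """TDM_a = sum_i sum_b Z*_i[a, b] * u_i[b] for a displacement u_i along
    the vibrational mode. Model I keeps only the diagonal of each atom's
    3x3 BEC tensor; model II uses the full tensor."""
    if diagonal_only:
        z_becs = np.stack([np.diag(np.diag(z)) for z in z_becs])
    return np.einsum("iab,ib->a", z_becs, mode)

# Two O atoms with hypothetical BEC tensors; the xz off-diagonal terms
# mimic adsorbate-surface charge transfer.
Z = np.array([[[-0.8, 0.0,  0.3],
               [ 0.0, -0.8, 0.0],
               [ 0.3, 0.0, -0.5]],
              [[ 0.8, 0.0, -0.3],
               [ 0.0, 0.8,  0.0],
               [-0.3, 0.0,  0.9]]])
u = np.array([[ 0.01, 0.0, 0.0],   # O-O stretch displacement for a molecule
              [-0.01, 0.0, 0.0]])  # lying flat along x

print(tdm(Z, u, diagonal_only=True))  # model I: zero z-component
print(tdm(Z, u))                      # model II: non-zero TDM along z
```

With these numbers, model I gives no surface-normal component at all, while the full tensor yields a finite TDM along z; qualitatively this is the behavior reported for flat-lying O2X– species in figure 6.6.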
The resulting transition dipole moments, computed using the two models, are shown in figure 6.6; the first-model TDMs are shown in blue and the second-model TDMs in red. For standing molecules (small angles), which are typically physisorbed, most of the total transition dipole moment stems from the diagonal elements. In contrast, the off-diagonal contributions quickly dominate for molecules that are angled more parallel to the surface. The largest TDMs are found among molecules adsorbed parallel to the surface. I therefore conclude that the first model of oscillating point charges centered on the atoms is too simplistic for this system, and that the full tensor must be employed.

Figure 6.6. Transition dipole moments computed using the first model, which accounts for only the diagonal BEC elements (blue), and the second model, which accounts for the full BEC tensor (red).

The scatter of red dots (in figure 6.6) is untangled by considering each surface facet separately (figure 6.7), where it is evident that TDM⊥ increases with adsorption angle, i.e., flat molecules give rise to larger signals.

Figure 6.7. Transition dipole moments perpendicular to the surface with respect to the angle between the O-O bond and the surface normal for O2, O2–, and O22– adsorbed in two adsorption sites: on a cerium cation in an on-top configuration (Ce) or in an oxygen vacancy (Ov). The charge state of the systems is regulated to obtain the correct charge state of the dioxygen species or defect concentration.

The underlying mechanism can be understood by analyzing the charge redistribution when the molecules vibrate. Figure 6.8 illustrates charge density difference plots of two O2X– species with appreciable intensities adsorbed on the (110) surface at different angles. The figure shows that the charge density difference is very similar between the two examples, despite the different adsorption angles.
The transition dipole is composed of two effects: one from the polarization of the molecule and the other from an adsorbate-surface charge-sharing effect. By computing dipole moments within the molecule (0.0025 e/Å³ isosurface in figure 6.8), it is established that only 25% of the effect can be attributed to molecular polarization and that the majority stems from adsorbate-surface charge sharing. The contour plots further reveal the characteristic shape of f-orbital lobes attributed to the conduction band edge of ceria, indicating that this may be an effect shared with other f-element oxides. The flat and tilted O2– in figure 6.8 are typical of adsorbed O2X– species and show how TDM⊥ is maximized for O2X– species that lie parallel to the surface, as the charge-sharing dipole (aligned along the O2–-Ce vector) aligns with the surface normal.

Figure 6.8. Top panels: A comparison between two O2– species at the CeO2(110) surface, one in which the O2– lies flat (left panel) and one in which it is tilted (right panel). The top panels display their geometries, along with the isosurface illustrating the electron redistribution after a small displacement along the O2– vibrational mode. Gray and red spheres represent cerium and oxygen, respectively. Bottom panels: Corresponding contour plots of the charge density difference in the O2–-Ce plane. The center of each panel is taken to be the center of the O-O bond, and the y-direction is parallel to the direction from the Ce to this center. The magnitude of the displacement is 0.01 times the normalized displacement vector, and the contour levels are ±0.0025, ±0.005, ±0.01, ±0.02, ±0.04, and ±0.08 e/Å³.

6.5 Assignments and conclusions

By calculating the transition dipole moments of surface-adsorbed O2X– species on low-index surfaces of CeO2, it is demonstrated that molecules oriented parallel to the surface give rise to larger IR absorptions than their standing counterparts.
The large transition dipole moment of these oscillators is attributed to charge sharing between the surface and the adsorbates, which maximizes the transition dipole moment along the surface normal for horizontally adsorbed molecules.

To accurately assign experimental spectra to surface-adsorbed molecular motifs, three factors are important to consider: (a) the vibrational frequency, (b) the adsorption energy (which determines the relative abundance), and (c) the IR absorption of any proposed motif. Together, these three factors determine the peak position and absorption intensity. As I demonstrate in this work, established heuristics do not apply to all systems; I therefore recommend calculating the IR absorption intensity as a standard procedure when using DFT calculations to interpret experimental spectra. This work demonstrates how to do so.

7. Concluding remarks and outlook

The focus of this thesis is on characterizing small molecules, H2O and O2, adsorbed on metal oxide surfaces (primarily ceria) using simulated spectroscopic methods. Using density functional theory (DFT) simulations, the goal was to provide insights into experimental and computational studies of these technologically and phenomenologically interesting systems, with a focus on developing new methodologies and heuristics to relate atomic positions to spectroscopic signals. In the five papers presented in the thesis, the following conclusions were drawn:

Paper I: By studying the adsorption energy of water adsorbed on ceria(111), it was found that water is stable under ambient conditions, at least in thin layers. The adsorption energies of the first and second water layers were in good agreement with experimental results from temperature-programmed desorption (TPD) analyses available in the literature. Furthermore, partial dissociation of water was shown to facilitate the formation of hydrogen bonds between surface-adsorbed species, which act to stabilize the water layer.
For a full monolayer (1.0 ML), the most stable configuration was found at 50% dissociation, forming infinite hydrogen-bonded chains that span the surface. In addition to this structure, several other configurations with partial dissociation were determined to be nearly as stable, indicating that multiple competing structures could coexist at finite temperatures. The energy differences among these systems can be largely attributed to variations in hydrogen bonding, highlighting the importance of hydrogen bonding in determining adsorbate structures.

Paper II: By analyzing the vibrational frequencies of surface-adsorbed water on ceria(111), the bonding structure of the molecules forming the water layers was examined. Using vibrational frequency shifts to indicate whether a water molecule or a hydroxide group participates in a hydrogen bond, it was found that hydrogen bonds in surface systems are significantly more tilted than those in bulk systems, while still providing substantial hydrogen bonding. This observation led to the proposal of new geometric criteria for identifying hydrogen bonds among surface-adsorbed water. Assignments of free (non-hydrogen-bonded) and hydrogen-bonded hydroxyl groups based on these criteria largely agree with those published in the literature. Furthermore, hydrogen bonds were shown to induce frequency shifts that are similar within bonding motifs, indicated by donor···acceptor pairs. This information can be used to interpret future spectroscopic data.

Papers III and IV: Quantitative estimates and assignments of hydrogen-bonded systems were further explored using structure-frequency correlation curves, which are useful for interpreting vibrational spectra. Many established correlations exist for hydrogen-bonded systems that model the chemical environment surrounding the oscillators. Many of these, and several others, were evaluated based on their ability to accurately reproduce vibrational frequencies.
The statistical framework presented here enabled an unbiased comparison of one-dimensional and multidimensional descriptors, providing accuracy estimates for each. The analysis revealed that complex descriptors, which incorporate atomic positions and effective electric fields from the electron density (or high-dimensional embeddings of atomic positions), can achieve high accuracy. However, these descriptors are difficult to interpret and cannot be easily deduced from frequency data alone, highlighting the balance between predictive power and interpretability. The framework of relating structure, or any other feature, to frequency or another experimental observable shows promise and could be extended to other experimental methods such as nuclear magnetic resonance (NMR) or X-ray photoelectron spectroscopy (XPS).

Paper V: The IRRAS spectra related to O2 adsorption on ceria were investigated, focusing on how the orientation of the adsorbed molecule relates to high-intensity peaks. By simulating IRRAS from first-principles calculations, the adsorption and spectroscopic response of O2X– were systematically analyzed. The results showed that O2X– species, which are IR inactive in the gas phase, become IR active when adsorbed on a surface, and that the signal is strongest for horizontally aligned species. This is due to a charge transfer between the surface and the molecule during the vibration. These findings provide a new heuristic for interpreting future experiments, namely that flat O2X– yields large signals. They also highlight the importance of explicitly modeling spectroscopic responses, rather than relying solely on empirical heuristics based on atomic orientation. Although explicit simulation of IR absorption is not commonly practiced in spectrum interpretation, there is significant potential for further insights, particularly if simulation software becomes widely available.
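The descriptor comparison of Papers III and IV can be mimicked on synthetic data. Everything below (the functional form, the noise levels, the fold count) is invented for illustration, but the machinery, fitting descriptors of increasing dimensionality and scoring them by cross-validated error, is the same idea.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for DFT data: a frequency that depends on a
# hypothetical H-bond distance d and a field-like second feature f.
n = 200
d = rng.uniform(1.6, 2.2, n)                 # H...O distance (angstrom)
f = rng.normal(0.0, 1.0, n)                  # second descriptor
freq = 3700 - 400 * (2.2 - d) ** 2 + 30 * f + rng.normal(0, 10, n)

def cv_rmse(X: np.ndarray, y: np.ndarray, k: int = 5) -> float:
    """k-fold cross-validated RMSE of an ordinary least-squares fit."""
    idx = np.arange(len(y))
    A = np.c_[np.ones(len(y)), X]            # design matrix with intercept
    errs = []
    for fold in range(k):
        test = idx % k == fold
        coef, *_ = np.linalg.lstsq(A[~test], y[~test], rcond=None)
        errs.append(np.sqrt(np.mean((A[test] @ coef - y[test]) ** 2)))
    return float(np.mean(errs))

one_d = cv_rmse(d[:, None], freq)            # distance-only descriptor
multi = cv_rmse(np.c_[d, d ** 2, f], freq)   # multidimensional descriptor
print(one_d > multi)                         # richer descriptor fits better
```

The richer descriptor reaches the noise floor while the one-dimensional descriptor cannot, mirroring the accuracy-versus-interpretability trade-off discussed above.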
Common themes across these works include providing tangible interpretations of existing experimental data and complementing them with new methodologies where needed. The contributions presented in this thesis can be summarized as follows:

Qualitative Heuristics: Providing guidelines that relate hydrogen-bond motifs to vibrational frequencies and linking the orientation of O2X– to high-intensity IRRAS peaks.
Quantitative Tools: Developing simulation tools for IRRAS and a statistical framework for comparing descriptors, thereby facilitating quantitative comparisons.
Advancing Fundamental Understanding: Contributing to the broader knowledge of these specific systems through both methodological innovations and detailed case studies.

8. Summary in Swedish (Sammanfattning på svenska)

The interaction between small molecules and metal oxides is of great scientific interest because of its central role in the stability, catalytic activity, and wettability of many materials, among other properties. This interaction, desired or not, makes molecule-metal oxide interfaces relevant both for technological applications and for natural processes. The molecules water (H2O) and oxygen (O2), and their interaction with the metal oxide cerium oxide (ceria, CeO2), are the main focus of this thesis, although some other materials have also been studied. One natural process driven by such interactions is cloud formation: airborne dust, often consisting of small mineral particles, interacts with and collects water from the surrounding humid air, leading to droplet formation and thus to clouds, with further effects on the climate. The initial reaction between the surface and water vapor depends largely on how the water molecules closest to the surface interact with the material. Wetting of the surface is often driven by dissociation, in which water molecules donate a hydrogen to the surface and form two hydroxide ions that can then bind additional water.
From a technological point of view, metal oxides are often used to speed up chemical reactions through so-called heterogeneous catalysis. Ceria is a metal oxide that stands out as unusually useful, for example in exhaust gas cleaning for combustion engines, and is used as reactive nanoparticles in medical contexts because of its rich oxygen chemistry. Oxygen gas from the air is in equilibrium with oxygen in and on the metal oxide, which makes oxygen from the material available for reactions at its surface, while the material can reclaim oxygen from the air at a later stage.

These processes rely on the interaction between gas molecules and metal oxide surfaces. An understanding of how molecules bind to the surfaces, and of the structures that thereby arise, is important for mapping the processes. Many experimental methods exist for probing these systems, but the results often require atomic-scale computer simulations for correct interpretation. In this doctoral work, quantum mechanical simulations are used to investigate how water and oxygen molecules interact with metal oxide surfaces, primarily ceria. The overarching goal is to interpret experimental results from the existing scientific literature and to develop rules of thumb for future analyses of similar experiments.

The thesis builds on five scientific papers whose common thread is to connect where and how water and oxygen adsorb on metal oxide surfaces to signals from vibrational spectroscopy and thermal desorption spectroscopy.

The first paper investigates the presence of water on the most commonly occurring ceria surface. Previous studies had shown that water adsorbs on top of the metal cations exposed by the surface, but it was still unclear whether the water dissociates. When water dissociates, it donates a hydrogen to a nearby oxygen anion in the surface and forms two hydroxides, H2O(g) + O2– → 2 OH–.
The paper also examines whether water molecules tend to bind only to the surface, or also to each other, forming agglomerates or water films on the surface. By examining many potential configurations of water on the ceria surface and computing their stabilities, I could identify the most likely scenarios. The simulations show that water dissociates when several water molecules adsorb simultaneously. Half of all water molecules in the closest layer dissociate, which creates the conditions for hydrogen bonding to additional water molecules and enables chain formation through hydrogen bonds.

The second paper investigates how hydrogen bonds between adsorbed water species (water molecules, H2O, and hydroxides, OH–) on the ceria surface can be qualitatively related to results from vibrational spectroscopy. Building on the results of the first paper, the vibrations of the surface-bound water molecules are simulated, and specific spectroscopic signals are connected to structural elements of the adsorbates, so-called hydrogen-bond motifs that capture the interaction between two molecules. The analysis showed that the definition of hydrogen bonds, based on distances and angles, needed to be adapted to surface-bound water, and I propose such a new definition. Moreover, the vibrational frequency turns out to vary depending on the type of hydrogen-bond motif the molecule participates in. For example, a water molecule (H2O) binds considerably more strongly to a hydroxide ion (OH–) than to another water molecule, resulting in a larger frequency shift.

In two subsequent papers, the influence of the environment around water molecules, both on surfaces and in crystals, on the vibrational frequency was studied quantitatively. The environment of a molecule can be described in several ways, for example by hydrogen-bond distances, distances to nearest neighbors, electric fields, or high-dimensional representations of the surroundings.
In these papers, a statistical tool was developed to connect structures to vibrational frequencies and to assess how well a molecule's environment can be used to approximate the frequency without computationally expensive calculations. A combination of hydrogen-bond distance and electric field reproduced the vibrational frequency best, suggesting that hydrogen bonds give rise to the frequency shifts of the molecules.

In the fifth and final paper, vibrational spectra, specifically from the IRRAS technique, were likewise related to atomic structure by studying how oxygen molecules (O2) bind to different ceria surfaces and whether standing or lying molecules give rise to a strong signal. Where previous works had suggested that the charged superoxides (O2–) and peroxides (O22–) most likely stand upright, I show that molecules lying in the plane of the surface give rise to the strongest light absorption. This absorption can be traced to electron transfer between the surface and the molecule, which is strongest when the molecule lies down.

In summary, the thesis shows that studies combining simulations with physical and statistical analyses can provide new, complementary insights for the interpretation of experimental results. By carefully connecting the placement of commonly occurring molecules on metal oxide surfaces to spectroscopic data, this thesis contributes new interpretive tools and rules of thumb that deepen the understanding of future experimental results.

9. Acknowledgments

First, I would like to thank Kersti Hermansson for her careful guidance through this thesis work. I especially appreciate our in-depth and rigorous scientific discussions, as well as her meticulous review of my scientific work and writing. Her superpower is her ability to read text like a compiler, identifying all logical flaws and potential misinterpretations. Secondly, I would like to thank Jolla Kullgren for inspiring me and providing positive guidance through the deepest of rabbit holes and adversity.
His superpower is never losing a positive outlook when exploring the unknown. Thirdly, I would like to thank Pavlin Mitev for introducing me to the programming aspects of computational chemistry and scientific computing. This opened the door to my true passion for producing great software. I would also like to thank all the current and previous TEOROO/CMC group members for their support and helpful attitude. Thank you, Akshay Krishna, Ageo Meier de Andrade, Dou Du, Yunqi Shao, Lisanne Knijf, Shao Zhang, Amber Mace, Peter Broqvist, and Lorenzo Agosta, to name a few. I would also like to thank Wim Briels, Jonas Ångström, and Cleber Marchiori for our long collaborations and the interesting work that, unfortunately, never materialized into papers, and Matthew J. Wolf for the long and interesting discussions about exploring new research questions. I would also like to acknowledge the support of my coworkers at my current workplace. Thank you to Tobias Persson and Max Hård af Segerstad, among others, and a special thanks to Kina Jansson, who designed the cover image. I would like to acknowledge the financial support for my studies and research provided by Uppsala University, the National Strategic e-Science Program eSSENCE, and the Liljewalch Foundation. The Swedish National Infrastructure for Computing (SNIC) provided the computational resources used to conduct the research presented in this thesis. Lastly, I must thank Rebecka for the endless support that was needed to keep me afloat through this effort, and our son, Tage, who brings joy and laughter to our lives every single day. If not for you, this thesis would not have been written.

Bibliography

(1) Rothenberg, G., Catalysis: Concepts and Green Applications; John Wiley & Sons: 2017. (2) Védrine, J. C. Catalysts 2017, 7, 341. (3) Carchini, G.; García-Melchor, M.; Łodziana, Z.; López, N. ACS Applied Materials and Interfaces 2016, 8, 152–160. (4) Limo, M.
J.; Sola-Rabada, A.; Boix, E.; Thota, V.; Westcott, Z. C.; Puddu, V.; Perry, C. C. Chemical Reviews 2018, 118, 11118–11193. (5) Nelson, N. C.; Szanyi, J. ACS Catalysis 2020, 10, DOI: 10.1021/acscatal.0c01059. (6) Celardo, I.; Pedersen, J. Z.; Traversa, E.; Ghibelli, L. Nanoscale 2011, 3, 1411–1420. (7) Naganuma, T. Nano Research 2017, 10, 199–217. (8) Yao, J.; Cheng, Y.; Zhou, M.; Zhao, S.; Lin, S.; Wang, X.; Wu, J.; Li, S.; Wei, H. Chemical Science 2018, 9, 2927–2933. (9) Zhang, Y.; Chen, L.; Sun, R.; Lv, R.; Du, T.; Li, Y.; Zhang, X.; Sheng, R.; Qi, Y. ACS Biomaterials Science & Engineering 2022, 8, 638–648. (10) Torbrügge, S.; Custance, O.; Morita, S.; Reichling, M. Journal of Physics: Condensed Matter 2012, 24, 084010. (11) Gritschneder, S.; Reichling, M. Nanotechnology 2007, 18, 044024. (12) Mu, R.; Zhao, Z. J.; Dohnálek, Z.; Gong, J. Chemical Society Reviews 2017, 46, 1785–1806. (13) Chabal, Y. J. Surface Science Reports 1988, 8, 211–357. (14) Redhead, P. A. Vacuum 1962, 12, 203–211. (15) Campbell, C. T.; Sellers, J. R. Chemical Reviews 2013, 113, 4106–4135. (16) Chase, M. W.; Davies, C. A.; Downey, J. R.; Frurip, D. J.; McDonald, R. A.; Syverud, A. N. J. Phys. Chem. Ref. Data Monogr. 1998, 9, 1. (17) Tersoff, J.; Hamann, D. R. Physical Review B 1985, 31, 805–813. (18) Togo, A.; Tanaka, I. Scripta Materialia 2015, 108, 1–5. (19) Light, J. C.; Hamilton, I. P.; Lill, J. V. The Journal of Chemical Physics 1985, 82, 1400–1409. (20) Bacic, Z.; Light, J. C. Annual Review of Physical Chemistry 1989, 40, 469–498. (21) Greenler, R. G. The Journal of Chemical Physics 1966, 44, 310–315. (22) Greenler, R. G.; Snider, D. R.; Witt, D.; Sorbello, R. S. Surface Science 1982, 118, 415–428. (23) Yang, C.; Wöll, C. Advances in Physics: X 2017, 2, 373–408. (24) Skelton, J. M.; Burton, L. A.; Jackson, A. J.; Oba, F.; Parker, S. C.; Walsh, A. Physical Chemistry Chemical Physics 2017, 19, 12452–12465. (25) Topping, J. Proceedings of the Royal Society of London.
Series A, Containing Papers of a Mathematical and Physical Character 1927, 114, 67–72. (26) Pederson, M. R.; Baruah, T.; Allen, P. B.; Schmidt, C. Journal of Chemical Theory and Computation 2005, 1, 590–596. (27) Hohenberg, P.; Kohn, W. Physical Review 1964, 136, B864–B871. (28) Kohn, W.; Sham, L. J. Physical Review 1965, 140, A1133–A1138. (29) Klimeš, J.; Bowler, D. R.; Michaelides, A. Journal of Physics: Condensed Matter 2010, 22, 022201. (30) Gillan, M. J.; Alfè, D.; Michaelides, A. Journal of Chemical Physics 2016, 144, 130901. (31) Kebede, G.; Mitev, P. D.; Broqvist, P.; Eriksson, A.; Hermansson, K. Journal of Chemical Theory and Computation 2019, 15, 584–594. (32) Dudarev, S.; Botton, G. Physical Review B: Condensed Matter and Materials Physics 1998, 57, 1505–1509. (33) Perdew, J. P.; Burke, K.; Ernzerhof, M. Physical Review Letters 1996, 77, 3865–3868. (34) Monkhorst, H. J.; Pack, J. D. Physical Review B 1976, 13, 5188–5192. (35) Pack, J. D.; Monkhorst, H. J. Physical Review B 1977, 16, 1748–1749. (36) Blöchl, P. E. Physical Review B 1994, 50, 17953–17979. (37) Mullins, D. R. Surface Science Reports 2015, 70, 42–85. (38) Henderson, M. A.; Perkins, C. L.; Engelhard, M. H.; Thevuthasan, S.; Peden, C. H. Surface Science 2003, 526, 1–18. (39) Mullins, D. R.; Albrecht, P. M.; Chen, T. L.; Calaza, F. C.; Biegalski, M. D.; Christen, H. M.; Overbury, S. H. Journal of Physical Chemistry C 2012, 116, 19419–19428. (40) Matolín, V.; Matolínová, I.; Dvořák, F.; Johánek, V.; Mysliveček, J.; Prince, K. C.; Skála, T.; Stetsovych, O.; Tsud, N.; Václavů, M.; Šmíd, B. Catalysis Today 2012, 181, 124–132. (41) Chen, B.; Ma, Y.; Ding, L.; Xu, L.; Wu, Z.; Yuan, Q.; Huang, W. Journal of Physical Chemistry C 2013, 117, 5800–5810. (42) Paier, J.; Penschke, C.; Sauer, J. Chemical Reviews 2013, 113, 3949–3985. (43) Chen, H. T.; Choi, Y. M.; Liu, M.; Lin, M. C. ChemPhysChem 2007, 8, 849–855. (44) Watkins, M. B.; Foster, A. S.; Shluger, A. L.
Journal of Physical Chemistry C 2007, 111, 15337–15341. (45) Kumar, S.; Schelling, P. K. Journal of Chemical Physics 2006, 125, 204704. (46) Fronzi, M.; Piccinin, S.; Delley, B.; Traversa, E.; Stampfl, C. Physical Chemistry Chemical Physics 2009, 11, 9188–9199. (47) Molinari, M.; Parker, S. C.; Sayle, D. C.; Islam, M. S. Journal of Physical Chemistry C 2012, 116, 7073–7082. (48) Fernández-Torre, D.; Kośmider, K.; Carrasco, J.; Ganduglia-Pirovano, M. V.; Pérez, R. Journal of Physical Chemistry C 2012, 116, 13584–13593. (49) Yang, Z.; Wang, Q.; Wei, S.; Ma, D.; Sun, Q. Journal of Physical Chemistry C 2010, 114, 14891–14899. (50) Kropp, T.; Paier, J.; Sauer, J. Journal of Physical Chemistry C 2017, 121, 21571–21578. (51) Fronzi, M.; Assadi, M. H. N.; Hanaor, D. A. Applied Surface Science 2019, 478, 68–74. (52) Farnesi Camellone, M.; Negreiros Ribeiro, F.; Szabová, L.; Tateyama, Y.; Fabris, S. Journal of the American Chemical Society 2016, 138, 11560–11567. (53) Steiner, T. Angewandte Chemie International Edition 2002, 41, 48–76. (54) Jeffrey, G. A., An Introduction to Hydrogen Bonding; Oxford University Press: New York, 1997. (55) Kebede, G. G.; Spångberg, D.; Mitev, P. D.; Broqvist, P.; Hermansson, K. Journal of Chemical Physics 2017, 146, 064703. (56) Libowitzky, E. Monatshefte für Chemie 1999, 130, 1047–1059. (57) Li, C.; Sakata, Y.; Arai, T.; Domen, K.; Maruya, K. I.; Onishi, T. Journal of the Chemical Society, Faraday Transactions 1: Physical Chemistry in Condensed Phases 1989, 85, 929–943. (58) Laachir, A.; Perrichon, V.; Badri, A.; Lamotte, J.; Catherine, E.; Lavalley, J. C.; El Fallah, J.; Hilaire, L.; Le Normand, F.; Quéméré, E.; Sauvion, G. N.; Touret, O. Journal of the Chemical Society, Faraday Transactions 1991, 87, 1601–1609. (59) Binet, C.; Badri, A.; Lavalley, J. C. Journal of Physical Chemistry 1994, 98, 6392–6398. (60) Badri, A.; Binet, C.; Lavalley, J. C. Journal of the Chemical Society, Faraday Transactions 1996, 92, 4669–4673.
(61) Binet, C.; Daturi, M.; Lavalley, J. C. Catalysis Today 1999, 50, 207–225. (62) Holmgren, A.; Andersson, B.; Duprez, D. Applied Catalysis B: Environmental 1999, 22, 215–230. (63) Daturi, M.; Finocchio, E.; Binet, C.; Lavalley, J. C.; Fally, F.; Perrichon, V. Journal of Physical Chemistry B 1999, 103, 4884–4891. (64) Jacobs, G.; Williams, L.; Graham, U.; Sparks, D.; Davis, B. H. Journal of Physical Chemistry B 2003, 107, 10398–10404. (65) Jacobs, G.; Williams, L.; Graham, U.; Thomas, G. A.; Sparks, D. E.; Davis, B. H. Applied Catalysis A: General 2003, 252, 107–118. (66) Jacobs, G.; Graham, U. M.; Chenu, E.; Patterson, P. M.; Dozier, A.; Davis, B. H. Journal of Catalysis 2005, 229, 499–512. (67) Natile, M. M.; Glisenti, A. Chemistry of Materials 2005, 17, 3403–3414. (68) Pozdnyakova, O.; Teschner, D.; Wootsch, A.; Kröhnert, J.; Steinhauer, B.; Sauer, H.; Toth, L.; Jentoft, F. C.; Knop-Gericke, A.; Paál, Z.; Schlögl, R. Journal of Catalysis 2006, 237, 1–16. (69) Solsona, B.; García, T.; Murillo, R.; Mastral, A. M.; Ndifor, E. N.; Hetrick, C. E.; Amiridis, M. D.; Taylor, S. H. Topics in Catalysis 2009, 52, 492–500. (70) Azambre, B.; Zenboury, L.; Koch, A.; Weber, J. V. Journal of Physical Chemistry C 2009, 113, 13287–13299. (71) Vayssilov, G. N.; Mihaylov, M.; Petkov, P. S.; Hadjiivanov, K. I.; Neyman, K. M. Journal of Physical Chemistry C 2011, 115, 23435–23454. (72) Lykhach, Y.; Johánek, V.; Aleksandrov, H. A.; Kozlov, S. M.; Happel, M.; Skála, T.; Petkov, P. S.; Tsud, N.; Vayssilov, G. N.; Prince, K. C.; Neyman, K. M.; Matolín, V.; Libuda, J. Journal of Physical Chemistry C 2012, 116, 12103–12113. (73) Amrute, A. P.; Mondelli, C.; Moser, M.; Novell-Leruth, G.; López, N.; Rosenthal, D.; Farra, R.; Schuster, M. E.; Teschner, D.; Schmidt, T.; Pérez-Ramírez, J. Journal of Catalysis 2012, 286, 287–297. (74) Agarwal, S.; Lefferts, L.; Mojet, B. L. ChemCatChem 2013, 5, 479–489. (75) Farra, R.; Wrabetz, S.; Schuster, M. E.; Stotz, E.; Hamilton, N. G.; Amrute, A.
P.; Pérez-Ramírez, J.; López, N.; Teschner, D. Physical Chemistry Chemical Physics 2013, 15, 3454–3465. (76) López, J. M.; Gilbank, A. L.; García, T.; Solsona, B.; Agouram, S.; Torrente-Murciano, L. Applied Catalysis B: Environmental 2015, 174–175, 403–412. (77) Wu, Z.; Mann, A. K.; Li, M.; Overbury, S. H. Journal of Physical Chemistry C 2015, 119, 7340–7350. (78) Filtschew, A.; Hofmann, K.; Hess, C. Journal of Physical Chemistry C 2016, 120, 6694–6703. (79) Luo, L.; LaCoste, J. D.; Khamidullina, N. G.; Fox, E.; Gang, D. D.; Hernandez, R.; Yan, H. Surface Science 2020, 691, 0039–6028. (80) Luzar, A.; Chandler, D. Nature 1996, 379, 55–57. (81) Wernet, P.; Nordlund, D.; Bergmann, U.; Cavalleri, M.; Odelius, N.; Ogasawara, H.; Näslund, L. Å.; Hirsch, T. K.; Ojamäe, L.; Glatzel, P.; Pettersson, L. G.; Nilsson, A. Science 2004, 304, 995–999. (82) Kebede, G. G.; Mitev, P. D.; Broqvist, P.; Kullgren, J.; Hermansson, K. Journal of Physical Chemistry C 2018, 122, 4849–4858. (83) Novak, A. In Large Molecules; Springer Berlin Heidelberg: Berlin, Heidelberg, 1974, pp 177–216. (84) Berglund, B.; Lindgren, J.; Tegenfeldt, J. Journal of Molecular Structure 1978, 43, 179–191. (85) Bertolasi, V.; Gilli, P.; Ferretti, V.; Gilli, G. Chemistry - A European Journal 1996, 2, 925–934. (86) Hermansson, K.; Knuts, S.; Lindgren, J. The Journal of Chemical Physics 1991, 95, 7486–7496. (87) Pejov, L.; Hermansson, K. Journal of Molecular Liquids 2002, 98, 369–382. (88) Pejov, L.; Spångberg, D.; Hermansson, K. The Journal of Physical Chemistry A 2005, 109, 5144–5152. (89) Pejov, L.; Spångberg, D.; Hermansson, K. The Journal of Chemical Physics 2010, 133. (90) Hermansson, K.; Bopp, P. A.; Spångberg, D.; Pejov, L.; Bakó, I.; Mitev, P. D. Chemical Physics Letters 2011, 514, 1–15. (91) Corcelli, S. A.; Lawrence, C. P.; Skinner, J. L. The Journal of Chemical Physics 2004, 120, 8107–8117. (92) Corcelli, S. A.; Skinner, J. L. Journal of Physical Chemistry A 2005, 109, 6154–6165.
(93) Kananenka, A. A.; Yao, K.; Corcelli, S. A.; Skinner, J. L. Journal of Chemical Theory and Computation 2019, 15, 6850–6858. (94) Behler, J. Journal of Chemical Physics 2011, 134, 74106. (95) Gretton, A.; Bousquet, O.; Smola, A.; Schölkopf, B. In Lecture Notes in Computer Science; Springer: New York, 2005, pp 63–77. (96) Wang, T.; Dai, X.; Liu, Y. Knowledge-Based Systems 2021, 234, 107567. (97) Trovarelli, A., Catalysis by Ceria and Related Materials, 1st ed.; Catalytic Science Series, Vol. 2; Imperial College Press: London, 2002. (98) Xu, J.; Harmer, J.; Li, G.; Chapman, T.; Collier, P.; Longworth, S.; Tsang, S. C. Chemical Communications 2010, 46, 1887–1889. (99) Preda, G.; Migani, A.; Neyman, K. M.; Bromley, S. T.; Illas, F.; Pacchioni, G. Journal of Physical Chemistry C 2011, 115, 5817–5822. (100) Kullgren, J.; Hermansson, K.; Broqvist, P. The Journal of Physical Chemistry Letters 2013, 4, 604–608. (101) Renuka, N. K.; Harsha, N.; Divya, T. RSC Advances 2015, 5, 38837–38841. (102) Xu, M.; Gao, Y.; Wang, Y.; Wöll, C. Physical Chemistry Chemical Physics 2010, 12, 3649–3652. (103) Yang, C.; Yu, X.; Heißler, S.; Weidler, P. G.; Nefedov, A.; Wang, Y.; Wöll, C.; Kropp, T.; Paier, J.; Sauer, J. Angewandte Chemie 2017, 129, 16618–16623. (104) Yang, C.; Cao, Y.; Plessow, P. N.; Wang, J.; Nefedov, A.; Heissler, S.; Studt, F.; Wang, Y.; Idriss, H.; Mayerhöfer, T. G.; Wöll, C. The Journal of Physical Chemistry C 2022, 126, 2253–2263. (105) Buchholz, M.; Weidler, P. G.; Bebensee, F.; Nefedov, A.; Wöll, C. Physical Chemistry Chemical Physics 2014, 16, 1672–1678. (106) Xu, M.; Noei, H.; Buchholz, M.; Muhler, M.; Wöll, C. Catalysis Today 2012, 182, 12–15. (107) Buchholz, M.; Xu, M.; Noei, H.; Weidler, P.; Nefedov, A.; Fink, K.; Wang, Y.; Wöll, C. 2015, DOI: 10.1016/j.susc.2015.08.006. (108) Yang, C.; Yu, X.; Heißler, S.; Nefedov, A.; Colussi, S.; Llorca, J.; Trovarelli, A.; Wang, Y.; Wöll, C.
Acta Universitatis Upsaliensis
Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology 2535
Editor: The Dean of the Faculty of Science and Technology

A doctoral dissertation from the Faculty of Science and Technology, Uppsala University, is usually a summary of a number of papers. A few copies of the complete dissertation are kept at major Swedish research libraries, while the summary alone is distributed internationally through the series Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology. (Prior to January 2005, the series was published under the title "Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology".)

Distribution: publications.uu.se
urn:nbn:se:uu:diva-554013

ACTA UNIVERSITATIS UPSALIENSIS 2025
https://ocw.mit.edu/courses/6-042j-mathematics-for-computer-science-fall-2010/ddbf7b0dec318467d2a3bf1fa9191100_MIT6_042JF10_chap13.pdf
"mcs-ftl" — 2010/9/8 — 0:40 — page 379 — #385

13 Infinite Sets

So you might be wondering how much there is to say about an infinite set other than, well, it has an infinite number of elements. Of course, an infinite set does have an infinite number of elements, but it turns out that not all infinite sets have the same size—some are bigger than others! And, understanding infinity is not as easy as you might think. Some of the toughest questions in mathematics involve infinite sets.

Why should you care? Indeed, isn't computer science only about finite sets? Not exactly. For example, we deal with the set of natural numbers N all the time and it is an infinite set. In fact, that is why we have induction: to reason about predicates over N. Infinite sets are also important in Part IV of the text when we talk about random variables over potentially infinite sample spaces. So sit back and open your mind for a few moments while we take a very brief look at infinity.

13.1 Injections, Surjections, and Bijections

We know from Theorem 7.2.1 that if there is an injection or surjection between two finite sets, then we can say something about the relative sizes of the two sets. The same is true for infinite sets. In fact, relations are the primary tool for determining the relative size of infinite sets.

Definition 13.1.1. Given any two sets A and B, we say that A surj B iff there is a surjection from A to B, A inj B iff there is an injection from A to B, A bij B iff there is a bijection between A and B, and A strict B iff there is a surjection from A to B but there is no bijection from B to A.

Restating Theorem 7.2.1 with this new terminology, we have:

Theorem 13.1.2.
For any pair of finite sets A and B,

|A| ≥ |B|  iff  A surj B;
|A| ≤ |B|  iff  A inj B;
|A| = |B|  iff  A bij B;
|A| > |B|  iff  A strict B.

Theorem 13.1.2 suggests a way to generalize size comparisons to infinite sets; namely, we can think of the relation surj as an "at least as big" relation between sets, even if they are infinite. Similarly, the relation bij can be regarded as a "same size" relation between (possibly infinite) sets, and strict can be thought of as a "strictly bigger" relation between sets.

Note that we haven't, and won't, define what the size of an infinite set is. The definition of infinite "sizes" is cumbersome and technical, and we can get by just fine without it. All we need are the "as big as" and "same size" relations, surj and bij, between sets.

But there's something else to watch out for. We've referred to surj as an "as big as" relation and bij as a "same size" relation on sets. Most of the "as big as" and "same size" properties of surj and bij on finite sets do carry over to infinite sets, but some important ones don't—as we're about to show. So you have to be careful: don't assume that surj has any particular "as big as" property on infinite sets until it's been proved.

Let's begin with some familiar properties of the "as big as" and "same size" relations on finite sets that do carry over exactly to infinite sets:

Theorem 13.1.3. For any sets A, B, and C,

1. A surj B and B surj C IMPLIES A surj C.
2. A bij B and B bij C IMPLIES A bij C.
3. A bij B IMPLIES B bij A.

Parts 1 and 2 of Theorem 13.1.3 follow immediately from the fact that compositions of surjections are surjections, and likewise for bijections. Part 3 follows from the fact that the inverse of a bijection is a bijection. We'll leave a proof of these facts to the problems.
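For finite sets, the relations of Theorem 13.1.2 can be checked by brute-force enumeration, which makes the definitions of surj, inj, and bij concrete. A small sketch (the helper names and example sets are mine, not the text's):

```python
from itertools import product

def all_functions(A, B):
    """Yield every total function f: A -> B as a dict (finite sets only)."""
    A, B = list(A), list(B)
    for images in product(B, repeat=len(A)):
        yield dict(zip(A, images))

def surj(A, B):
    """A surj B iff some f: A -> B hits every element of B."""
    return any(set(f.values()) == set(B) for f in all_functions(A, B))

def inj(A, B):
    """A inj B iff some f: A -> B sends distinct elements to distinct images."""
    return any(len(set(f.values())) == len(A) for f in all_functions(A, B))

def bij(A, B):
    # For finite sets, both relations hold exactly when |A| = |B|.
    return surj(A, B) and inj(A, B)

# Theorem 13.1.2: |A| >= |B| iff A surj B; |A| <= |B| iff A inj B; etc.
A, B = {1, 2, 3}, {"x", "y"}
assert surj(A, B) and not inj(A, B)   # |A| > |B|
assert inj(B, A) and not surj(B, A)   # |B| < |A|
assert bij({1, 2}, {"x", "y"})        # equal sizes
```

On infinite sets this brute force obviously fails, which is exactly why the chapter works with the relations surj, inj, and bij abstractly rather than with sizes.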
Another familiar property of finite sets carries over to infinite sets, but this time it's not so obvious:

Theorem 13.1.4 (Schröder–Bernstein). For any pair of sets A and B, if A surj B and B surj A, then A bij B.

The Schröder–Bernstein Theorem says that if A is at least as big as B and, conversely, B is at least as big as A, then A is the same size as B. Phrased this way, you might be tempted to take this theorem for granted, but that would be a mistake. For infinite sets A and B, the Schröder–Bernstein Theorem is actually pretty technical. Just because there is a surjective function f : A → B—which need not be a bijection—and a surjective function g : B → A—which also need not be a bijection—it's not at all clear that there must be a bijection h : A → B. The challenge is to construct h from parts of both f and g. We'll leave the actual construction to the problems.

13.1.1 Infinity Is Different

A basic property of finite sets that does not carry over to infinite sets is that adding something new makes a set bigger. That is, if A is a finite set and b ∉ A, then |A ∪ {b}| = |A| + 1, and so A and A ∪ {b} are not the same size. But if A is infinite, then these two sets are the same size!

Theorem 13.1.5. Let A be a set and b ∉ A. Then A is infinite iff A bij A ∪ {b}.

Proof. Since A is not the same size as A ∪ {b} when A is finite, we only have to show that A ∪ {b} is the same size as A when A is infinite. That is, we have to find a bijection between A ∪ {b} and A when A is infinite.

Since A is infinite, it certainly has at least one element; call it a0. Since A is infinite, it has at least two elements, and one of them must not be equal to a0; call this new element a1. Since A is infinite, it has at least three elements, one of which must not equal a0 or a1; call this new element a2. Continuing in this way, we conclude that there is an infinite sequence a0, a1, a2, …, a_n, …
of different elements of A. Now it's easy to define a bijection f : A ∪ {b} → A:

f(b) ::= a0;
f(a_n) ::= a_{n+1}  for n ∈ N;
f(a) ::= a  for a ∈ A − {a0, a1, …}.  ■

13.2 Countable Sets

13.2.1 Definitions

A set C is countable iff its elements can be listed in order, that is, the distinct elements in C are precisely

c0, c1, …, c_n, ….

This means that if we defined a function f on the nonnegative integers by the rule that f(i) ::= c_i, then f would be a bijection from N to C. More formally,

Definition 13.2.1. A set C is countably infinite iff N bij C. A set is countable iff it is finite or countably infinite.

Discrete mathematics is often defined as the mathematics of countable sets and so it is probably worth spending a little time understanding what it means to be countable and why countable sets are so special. For example, a small modification of the proof of Theorem 13.1.5 shows that countably infinite sets are the "smallest" infinite sets; namely, if A is any infinite set, then A surj N.

13.2.2 Unions

Since adding one new element to an infinite set doesn't change its size, it's obvious that neither will adding any finite number of elements. It's a common mistake to think that this proves that you can throw in countably infinitely many new elements—just because it's ok to do something any finite number of times doesn't make it ok to do it an infinite number of times. For example, suppose that you have two countably infinite sets A = {a0, a1, a2, …} and B = {b0, b1, b2, …}. You might try to show that A ∪ B is countable by making the following "list" for A ∪ B:

a0, a1, a2, …, b0, b1, b2, …   (13.1)

But this is not a valid argument because Equation 13.1 is not a list. The key property required for listing the elements in a countable set is that for any element in the set, you can determine its finite index in the list.
For example, a_i shows up in position i in Equation 13.1, but there is no index in the supposed "list" for any of the b_i. Hence, Equation 13.1 is not a valid list for the purposes of showing that A ∪ B is countable when A is infinite. Equation 13.1 is only useful when A is finite.

It turns out you really can add a countably infinite number of new elements to a countable set and still wind up with just a countably infinite set, but another argument is needed to prove this.

Theorem 13.2.2. If A and B are countable sets, then so is A ∪ B.

Proof. Suppose the list of distinct elements of A is a0, a1, …, and the list of B is b0, b1, …. Then a valid way to list all the elements of A ∪ B is

a0, b0, a1, b1, …, a_n, b_n, ….   (13.2)

Of course this list will contain duplicates if A and B have elements in common, but then deleting all but the first occurrence of each element in Equation 13.2 leaves a list of all the distinct elements of A and B.  ■

Note that the list in Equation 13.2 does not have the same defect as the purported "list" in Equation 13.1, since every item in A ∪ B has a finite index in the list created in Theorem 13.2.2.

        b0    b1    b2    b3   …
  a0    c0    c1    c4    c9
  a1    c3    c2    c5    c10
  a2    c8    c7    c6    c11
  a3    c15   c14   c13   c12
  …

Figure 13.1  A listing of the elements of C = A × B where A = {a0, a1, a2, …} and B = {b0, b1, b2, …} are countably infinite sets. For example, c5 = (a1, b2).

13.2.3 Cross Products

Somewhat surprisingly, cross products of countable sets are also countable. At first, you might be tempted to think that "infinity times infinity" (whatever that means) somehow results in a larger infinity, but this is not the case.

Theorem 13.2.3. The cross product of two countable sets is countable.

Proof. Let A and B be any pair of countable sets. To show that C = A × B is also countable, we need to find a listing of the elements

{ (a, b) | a ∈ A, b ∈ B }.

There are many such listings.
One is shown in Figure 13.1 for the case when A and B are both infinite sets. In this listing, (a_i, b_j) is the kth element in the list for C, where a_i is the ith element in A, b_j is the jth element in B, and

k = max(i, j)² + i + max(i − j, 0).

The task of finding a listing when one or both of A and B are finite is left to the problems at the end of the chapter.  ■

13.2.4 Q Is Countable

Theorem 13.2.3 also has a surprising corollary; namely, that the set of rational numbers is countable.

Corollary 13.2.4. The set of rational numbers Q is countable.

Proof. Since Z × Z is countable by Theorem 13.2.3, it suffices to find a surjection f from Z × Z to Q. This is easy to do since

f(a, b) = a/b  if b ≠ 0,  and  f(a, b) = 0  if b = 0

is one such surjection.  ■

At this point, you may be thinking that every set is countable. That is not the case. In fact, as we will shortly see, there are many infinite sets that are uncountable, including the set of real numbers R.

13.3 Power Sets Are Strictly Bigger

It turns out that the ideas behind Russell's Paradox, which caused so much trouble for the early efforts to formulate Set Theory, also lead to a correct and astonishing fact discovered by Georg Cantor in the late nineteenth century: infinite sets are not all the same size.

Theorem 13.3.1. For any set A, the power set P(A) is strictly bigger than A.

Proof. First of all, P(A) is as big as A: for example, the partial function f : P(A) → A, where f({a}) ::= a for a ∈ A, is a surjection.

To show that P(A) is strictly bigger than A, we have to show that if g is a function from A to P(A), then g is not a surjection. So, mimicking Russell's Paradox, define

A_g ::= { a ∈ A | a ∉ g(a) }.

A_g is a well-defined subset of A, which means it is a member of P(A). But A_g can't be in the range of g, because if it were, we would have A_g = g(a0) for some a0 ∈ A. So by definition of A_g,

a ∈ g(a0)  iff  a ∈ A_g  iff  a ∉ g(a)

for all a ∈ A.
Now letting a = a0 yields the contradiction

a0 ∈ g(a0)  iff  a0 ∉ g(a0).

So g is not a surjection, because there is an element in the power set of A, namely the set A_g, that is not in the range of g.  ■

13.3.1 R Is Uncountable

To prove that the set of real numbers is uncountable, we will show that there is a surjection from R to P(N) and then apply Theorem 13.3.1 to P(N).

Lemma 13.3.2. R surj P(N).

Proof. Let A ⊆ N be any subset of the natural numbers. Since N is countable, this means that A is countable and thus that A = {a0, a1, a2, …}. For each i ≥ 0, define bin(a_i) to be the binary representation of a_i. Let x_A be the real number using only digits 0, 1, 2 as follows:

x_A ::= 0.2 bin(a0) 2 bin(a1) 2 bin(a2) 2 …   (13.3)

We can then define a surjection f : R → P(N) as follows:

f(x) = A  if x = x_A for some A ⊆ N,  and  ∅  otherwise.

Hence R surj P(N).  ■

Corollary 13.3.3. R is uncountable.

Proof. By contradiction. Assume R is countable. Then N surj R. By Lemma 13.3.2, R surj P(N). Hence N surj P(N). This contradicts Theorem 13.3.1 for the case when A = N.  ■

So the set of rational numbers and the set of natural numbers have the same size, but the set of real numbers is strictly larger. In fact, R bij P(N), but we won't prove that here. Is there anything bigger?

13.3.2 Even Larger Infinities

There are lots of different sizes of infinite sets. For example, starting with the infinite set N of nonnegative integers, we can build the infinite sequence of sets

N, P(N), P(P(N)), P(P(P(N))), …

By Theorem 13.3.1, each of these sets is strictly bigger than all the preceding ones. But that's not all: the union of all the sets in the sequence is strictly bigger than each set in the sequence. In this way, you can keep going, building still bigger infinities.
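The diagonal construction in the proof of Theorem 13.3.1 can be exercised concretely on a small finite set, where exhaustive search confirms that the diagonal set A_g is never in the range of any g : A → P(A). A sketch (the set A and the helper names are mine):

```python
from itertools import combinations, product

def power_set(A):
    """All subsets of A, as frozensets."""
    A = list(A)
    return [frozenset(c) for r in range(len(A) + 1) for c in combinations(A, r)]

def diagonal_set(A, g):
    """Cantor's diagonal set A_g = { a in A | a not in g(a) }."""
    return frozenset(a for a in A if a not in g[a])

A = {0, 1, 2}
subsets = power_set(A)          # 2^3 = 8 subsets

# Enumerate every function g: A -> P(A) (8^3 = 512 of them) and check that
# the diagonal set is missing from g's range, so no g is a surjection.
for images in product(subsets, repeat=len(A)):
    g = dict(zip(sorted(A), images))
    assert diagonal_set(A, g) not in set(g.values())
```

The same one-line definition of A_g is what drives the infinite case; finiteness only matters here because it lets us check every g exhaustively.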
13.3.3 The Continuum Hypothesis

Georg Cantor was the mathematician who first developed the theory of infinite sizes (because he thought he needed it in his study of Fourier series). Cantor raised the question whether there is a set whose size is strictly between the "smallest" infinite set, N, and P(N). He guessed not:

Cantor's Continuum Hypothesis. There is no set A such that P(N) is strictly bigger than A and A is strictly bigger than N.

The Continuum Hypothesis remains an open problem a century later. Its difficulty arises from one of the deepest results in modern Set Theory—discovered in part by Gödel in the 1930s and Paul Cohen in the 1960s—namely, the ZFC axioms are not sufficient to settle the Continuum Hypothesis: there are two collections of sets, each obeying the laws of ZFC, and in one collection, the Continuum Hypothesis is true, and in the other, it is false. So settling the Continuum Hypothesis requires a new understanding of what sets should be to arrive at persuasive new axioms that extend ZFC and are strong enough to determine the truth of the Continuum Hypothesis one way or the other.

13.4 Infinities in Computer Science

If the romance of different size infinities and continuum hypotheses doesn't appeal to you, not knowing about them is not going to lower your professional abilities as a computer scientist. These abstract issues about infinite sets rarely come up in mainstream mathematics, and they don't come up at all in computer science, where the focus is generally on countable, and often just finite, sets. In practice, only logicians and set theorists have to worry about collections that are too big to be sets. In fact, at the end of the 19th century, even the general mathematical community doubted the relevance of what they called "Cantor's paradise" of unfamiliar sets of arbitrary infinite size.
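Looking back at the proof of Theorem 13.2.3, the spiral index k = max(i, j)² + i + max(i − j, 0) can also be verified mechanically: on any finite n × n corner of the grid it hits each of the indices 0 … n² − 1 exactly once. A quick sketch (the function name is mine):

```python
def spiral_index(i, j):
    """Position of the pair (a_i, b_j) in the spiral listing of Theorem 13.2.3."""
    return max(i, j) ** 2 + i + max(i - j, 0)

# Spot-check against Figure 13.1: c5 = (a1, b2) and c8 = (a2, b0).
assert spiral_index(1, 2) == 5
assert spiral_index(2, 0) == 8

# The n x n corner of the grid maps bijectively onto {0, ..., n^2 - 1},
# so every pair (a_i, b_j) receives a distinct, finite index in the list.
n = 50
indices = {spiral_index(i, j) for i in range(n) for j in range(n)}
assert indices == set(range(n * n))
```

Each "shell" max(i, j) = m contributes the 2m + 1 indices from m² through (m + 1)² − 1, which is exactly the spiral pattern drawn in Figure 13.1.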
That said, it is worth noting that the proof of Theorem 13.3.1 gives the simplest form of what is known as a "diagonal argument." Diagonal arguments are used to prove many fundamental results about the limitations of computation, such as the undecidability of the Halting Problem for programs and the inherent, unavoidable inefficiency (exponential time or worse) of procedures for other computational problems. So computer scientists do need to study diagonal arguments in order to understand the logical limits of computation. And a well-educated computer scientist will be comfortable dealing with countable sets, finite as well as infinite.

MIT OpenCourseWare
6.042J / 18.062J Mathematics for Computer Science
Fall 2010

For information about citing these materials or our Terms of Use, visit:
https://www2.seas.gwu.edu/~mlancast/cs254/FP%20Training.pdf
Page 1 www.SoftwareMetrics.Com Longstreet Consulting Inc

Function Points Analysis Training Course
Instructor: David Longstreet
David@SoftwareMetrics.Com
www.SoftwareMetrics.Com
816.739.4058

Table of Contents

Introduction ________ 7
  Objective of Section ________ 7
  Introduction ________ 7
  Elementary Process ________ 8
  Definition ________ 8
  Benefits and Uses ________ 9
  When Not to Use Function Points ________ 10
  Types of Function Point Counts ________ 10
  What about Lines of Code (LOC) ________ 11
  Understanding Productivity ________ 11
  Understanding Software Productivity ________ 12
  Questions ________ 14
Function Point Counting Process ________ 17
  Objective of Section ________ 17
  Introduction ________ 17
  Definition ________ 17
  Types of Function Point Counts ________ 18
  High Level Steps ________ 18
  Independence and Dependence ________ 18
  FPA Steps for Files ________ 20
  Questions ________ 20
Establishing the Boundary ________ 21
  Objective of Section ________ 21
  Definition ________ 21
  Identify the Boundary ________ 21
  Standard Documentation ________ 21
  Establishing the Boundary early in the Life cycle ________ 21
  Technology Issues ________ 22
  Tabulating ________ 22
  Questions ________ 22
Identifying RET's, DET's, FTR's ________ 23
  Objective of Section ________ 23
  Definition ________ 23
  Rating ________ 24
  Transaction DET's ________ 24
  Record Element Types (RET's) ________ 24
  Tips to Identify RET's and DET's early in the life cycle ________ 24
  DET's for GUI ________ 24
  DET's For Real Time Systems ________ 26
  Navigation ________ 26
  Skill Builder ________ 26
External Inputs ________ 28
  Objective of Section ________ 28
  Definition ________ 28
  Rating ________ 28
  Counting Tips ________ 28
  Examples ________ 29
  Data Elements ________ 29
  File Types Referenced (FTR's) ________ 29
  Uniqueness ________ 30
  Understanding Enhancement Function Points ________ 30
  Technology Issues ________ 30
  Standard Documentation ________ 31
  Tips to Identify External Inputs early in the life cycle ________ 31
  Typical Vocabulary ________ 32
  Skill Builder ________ 32
External Outputs ________ 34
  Objective of Section ________ 34
  Definition ________ 34
  Rating ________ 34
  Counting Tips ________ 35
  Terminology ________ 35
  Examples ________ 35
  Data Elements ________ 35
  File Types Referenced (FTR) ________ 36
  Uniqueness ________ 36
  Understanding Enhancement Function Points ________ 36
  Technology Issues ________ 37
  Standard Documentation ________ 37
  Tips to Identify External Outputs early in the life cycle ________ 37
  Typical Vocabulary ________ 37
  Special Issues and Concerns ________ 38
  Skill Builder ________ 39
External Inquiries ________ 43
  Objective of Section ________ 43
  Definition ________ 43
  Rating ________ 43
  Examples ________ 44
  Terminology ________ 44
  Data Elements ________ 44
  File Type Referenced (FTR's) ________ 45
  Uniqueness ________ 45
  Understanding Enhancement Function Points ________ 45
  Technology Issues ________ 46
  Standard Documentation ________ 46
  Tips to Identify EQ's early in the life cycle ________ 47
  Typical Vocabulary ________ 47
  Special Issues and Concerns ________ 47
  Skill Builder ________ 49
Transaction Review ________ 52
  Objective of Section ________ 52
  Multiple Languages ________ 52
  Display of Graphical Images or Icons ________ 53
  Messages ________ 54
  Complex Control Inputs ________ 55
  Hyperlinks on WebPages ________ 55
Internal Logical Files ________ 56
  Objective of Section ________ 56
  Definition ________ 56
  Rating ________ 56
  Counting Tips ________ 56
  Examples ________ 57
  Record Element Types ________ 57
  Data Element Types ________ 58
  Technology Issues ________ 58
  Standard Documentation ________ 58
  Tips to Identify ILF's early in the life cycle ________ 58
  Other comments ________ 58
  Skill Builder ________ 59
External Interface Files ________ 62
  Objective of Section ________ 62
  Definition ________ 62
  Rating ________ 62
  Counting Tips ________ 63
  Examples ________ 63
  Technology Issues ________ 63
  Standard Documentation ________ 63
  Tips to Identify EIF's early in the life cycle ________ 64
General System Characteristics ________ 65
  Objective of Section ________ 65
  Definition ________ 65
  Rating ________ 65
  Standard Documentation ________ 65
  Rating GSC's early in the life cycle ________ 65
  Tabulating ________ 66
  GSC's at a Glance ________ 66
  Considerations for GUI Applications ________ 67
  Detail GSC's ________ 68
  Skill Builder ________ 79
  General System Characteristics – Notes Page ________ 80
History and IFPUG ________ 81
  Objective of Section ________ 81
  Brief History ________ 81
  Growth and Acceptance of Function Point Analysis ________ 81
  More Information about IFPUG ________ 81
Calculating Adjusted Function Point ________ 83
  Objective of Section ________ 83
  Understanding the Equations ________ 83
  Definition ________ 84
  Unadjusted Function Point ________ 84
  Development Project Function Point Calculation ________ 84
  Application Function Point Count (Baseline) ________ 85
  Enhancement Project Function Point Calculation ________ 85
  Application After Enhancement Project ________ 86
  Skill Builder ________ 88
Case Studies ________ 89
  Objective of Section ________ 89
  Collection Letter ________ 91
  Control Inputs ________ 92
  Graphical Information ________ 93
  Graphs Part II ________ 94
  The Weather Application ________ 95
  Adding A New Customer ________ 97
  Enhanced Weather Application ________ 100
  BikeWare ________ 101
  Pizza Screen Design ________ 103
  www.PIZZACLUB.COM ________ 105
  Control Information ________ 108

INTRODUCTION

Objective of Section: Introduce the basic concepts of Function Point Analysis and to introduce and reinforce unit cost estimating.
The exercises at the end of the section help the student demonstrate they have gained the basic knowledge required.

Introduction:

Systems continue to grow in size and complexity, becoming increasingly difficult to understand. As improvements in coding tools allow software developers to produce larger amounts of software to meet ever-expanding user requirements, a method to understand and communicate size must be used. A structured technique of problem solving, function point analysis is a method to break systems into smaller components, so they can be better understood and analyzed. This book describes function point analysis and industry trends using function points.

Human beings solve problems by breaking them into smaller, understandable pieces. Problems that may initially appear to be difficult are found to be simple when dissected into their components, or classes. When the objects to be classified are the contents of software systems, a set of definitions and rules, or a scheme of classification, must be used to place these objects into their appropriate categories. Function point analysis is one such technique: it provides both a classification scheme and a structured technique for problem solving.

Function Point Analysis is a structured method to perform functional decomposition of a software application. Function points are a unit measure for software, much like an hour is to measuring time, miles are to measuring distance, or degrees Celsius are to measuring temperature. Function Points are interval measures, much like other measures such as kilometers, Fahrenheit and hours. Function Points measure software by quantifying the functionality it provides to the user, based primarily on the logical design.

Frequently the term end user or user is used without specifying what is meant. In this case, the user is a sophisticated user.
Someone who would understand the system from a functional perspective, more than likely someone who would provide requirements or do acceptance testing. There are a variety of different methods used to count function points, but this book is based upon the rules developed by Alan Albrecht and later revised by the International Function Point Users Group (IFPUG). The IFPUG rules leave much to be desired, so this book attempts to fill in gaps not defined by IFPUG.

What is on the surface?

The image to the right represents the tip of an iceberg. The real issue is not the tip, but what is under the surface of the water and cannot be seen. The same is true when you design a software application. One of the largest misconceptions about function points is understanding what functionality is exposed to an end user versus the delivered functionality. One trend happening in software development today is self-service applications like those most major airlines are using. If you visit the American Airlines Website and/or Expedia, you will see a relatively simple screen exposed to the end user. The end user simply puts in their departure and destination and the dates of travel. This appears on the surface to be a simple inquiry, but it is extremely complex. The process actually includes thousands of elementary processes, but the end user is only exposed to a very simple process. All possible routes are calculated, city names are converted to their international three-character codes, and interfaces are sent to all the airline carriers (each one being unique); this is an extremely complex and robust process! When we size software applications, we want to understand both what is exposed and what is under the surface.

Elementary Process:

A software application is in essence a defined set of elementary processes. When these elementary processes are combined they interact to form what we call a software system or software application.
An elementary process does not exist totally independently; rather, the elementary processes are woven together and become interdependent. There are two basic types of elementary processes (data in motion and data at rest) in a software application. Data in motion has the characteristic of moving data from inside to outside the application boundary or from outside to inside the application boundary. An elementary process is similar to an acceptance test case.

Definition:

On a conceptual level, function point analysis helps define two abstract levels of data: data at rest and data in motion.

Data in motion

Data in motion is handled via transactional function types or simple transactions. All software applications will have numerous elementary processes or independent processes to move data. Transactions (or elementary processes) that bring data from outside the application domain (or application boundary) to inside the application boundary are referred to as external inputs. Transactions (or elementary processes) that take data from a resting position (normally on a file) to outside the application domain (or application boundary) are referred to as either external outputs or external inquiries (these will be defined later in this book).

Data at rest

Data at rest that is maintained by the application in question is classified as internal logical files. Data at rest that is maintained by another application is classified as external interface files.

Benefits and Uses:

A function point count has many uses. There are three types of function point counts. In the article How are Function Points Useful, the benefits of function point counting are discussed in great detail. The article can be found on www.SoftwareMetrics.Com.

• Function Points can be used to communicate more effectively with business user groups.
• Function Points can be used to reduce overtime.
• Function points can be used to establish an inventory of all transactions and files of a current project or application. This inventory can be used as a means of financial evaluation of an application. If an inventory is conducted for a development project or enhancement project, then this same inventory could be used to help contain scope creep and to help control project growth. Even more important, this inventory helps in understanding the magnitude of the problem.
• Function Points can be used to size software applications. Sizing is an important component in determining productivity (outputs/inputs), predicting effort, understanding unit cost, and so forth.
• Unlike some other software metrics, different people can count function points at different times and obtain the same measure within a reasonable margin of error. That is, the same conclusion will be drawn from the results.
• FPA can help organizations understand the unit cost of a software application or project. Once unit cost is understood, tools, languages and platforms can be compared quantitatively instead of subjectively. This type of analysis is much easier to understand than technical information. That is, a non-technical user can easily understand Function Points.

There are several other uses of function points. The following list gives some practical applications of Function Points and FPA. The article Using Function Points, in the article section of the Website www.SoftwareMetrics.Com, provides more detail regarding each of these items.
Function Points can be used for:
• Defining When and What to Re-Engineer
• Estimating Test Cases
• Understanding Wide Productivity Ranges
• Understanding Scope Creep
• Calculating the True Cost of Software
• Estimating Overall Project Costs, Schedule and Effort
• Understanding Maintenance Costs
• Helping with contract negotiations
• Understanding the appropriate set of metrics

When Not to Use Function Points

Function points are not a very good measure when sizing maintenance efforts (fixing problems) or when trying to understand performance issues. Much of the effort associated with fixing problems (production fixes) is spent trying to resolve and understand the problem (detective work). Another inherent problem with measuring maintenance work is that much of maintenance programming is done by one or two individuals. Individual skill sets become a major factor when measuring this type of work. The productivity of individual maintenance programmers can vary by as much as 1,000 percent. Performance tuning may or may not have anything to do with functionality. Performance tuning is more a result of trying to understand application throughput and processing time. There are better metrics to utilize when measuring this type of work.

Types of Function Point Counts:

Function point counts can be associated with either projects or applications. There are three major types of software projects (Development, Enhancement and Maintenance). Corresponding to these project types, there are three different types of function point counts (Development, Enhancement and Application).

Development Project Function Point Count

Function Points can be counted at all phases of a development project, from requirements up to and including implementation. This type of count is associated with new development work. Scope creep can be tracked and monitored by understanding the functional size at all phases of a project.
Frequently, this type of count is called a baseline function point count. Enhancement Project Function Point Count It is common to enhance software after it has been placed into production. This type of function point count tries to size enhancement projects. All production applications evolve over time. By tracking enhancement size and associated costs, a historical database for your organization can be built. Additionally, it is important to understand how a development project has changed over time. Application Function Point Count Application counts are done on existing production applications. This “baseline count” can be used with overall application metrics like total maintenance hours. This metric can be used to track maintenance hours per function point. This is an example of a normalized metric. It is not enough to examine only maintenance; one must examine the ratio of maintenance hours to size of the application to get a true picture. Additionally, application counts can assist organizations in understanding the size of the entire corporate portfolio (or inventory). This type of count is analogous to taking an inventory for a store. Like inventory, a dollar value can be associated with any application function point count and with the entire organization portfolio. What about Lines of Code (LOC)? There are several problems with using LOC as a unit of measure for software. Imagine two applications that provide the same exact functionality (screens, reports, databases). One of the applications is written in C++ and the other application is written in a language like Clarion (a very visual language). The number of function points would be exactly the same, but aspects of the application would be different. The lines of code needed to develop the application would not be the same. The amount of effort required to develop the application would be different (hours per function point).
We are able to compare the productivity of the languages. Unlike Lines of Code, the number of function points will (and should) remain constant. With this in mind: 1. The number of lines of code delivered is dependent upon the skill level of the programmer. In fact, the higher the skill level of the programmer, the fewer lines of code they will develop to perform the same function. 2. Higher-level languages such as Delphi, Progress 4GL, Forte, Dynasty, VB, JavaScript, or other visual languages require far fewer lines of code than Assembler, COBOL, or C to perform the same functionality. That is, there is an inverse relationship between level of language and work output (when work output is LOC). 3. The actual number of LOC is not known until the project is almost completed. Therefore, LOC cannot be used to estimate the effort or schedule of a project. Function Points can be derived from requirements and analysis documents that are available early in a project life cycle. 4. There is no agreed upon method to count lines of code. The statements and types of statements used in Visual C++, Assembler, COBOL and SQL are completely different. It is common for applications to utilize a combination of different languages. Understanding Productivity: The standard economic definition of productivity is “goods or services per unit of labor or expenses.” Until 1979, when A.J. Albrecht of IBM published a paper about Function Points, there was no definition of what “goods or services” were the output of a software project. The good or service of software is the business functionality provided. While software productivity is a relatively new subject, “industrial productivity” has been a subject of interest for many years. One of the first individuals to study productivity was Frederick Taylor (1856-1915). Taylor’s major concern throughout most of his life was to increase efficiency in production.
Taylor decided that the problem of productivity was a matter of ignorance on the part of management. Taylor believed that the application of scientific methods, instead of customs and rules of thumb, could yield higher productivity. A century after Frederick Taylor, most software managers use rules of thumb instead of systematic study. Hawthorne Studies Several scientists undertook the famous experiments at the Hawthorne plant of the Western Electric Company between 1927 and 1932. They began a study to determine the effect of illumination on workers and their productivity. They found that productivity improved when illumination was either increased or decreased for a test group. They found that when people felt they were being noticed, their productivity increased. They also found that the improvement in productivity was due to such social factors as morale, satisfactory interrelationships and effective management. They also found that the best managers were those that managed via counseling, leading, and communicating. This phenomenon, arising basically from people being “noticed,” has become known as the Hawthorne effect. Productivity: The definition of productivity is the output-input ratio within a time period with due consideration for quality. Productivity = outputs/inputs (within a time period, quality considered) The formula indicates that productivity can be improved (1) by increasing outputs with the same inputs, (2) by decreasing inputs but maintaining the same outputs, or (3) by increasing outputs and decreasing inputs to change the ratio favorably. Software Productivity = Function Points / Inputs Effectiveness v. Efficiency: Productivity implies effectiveness and efficiency in individual and organizational performance. Effectiveness is the achievement of objectives. Efficiency is the achievement of the ends with the least amount of resources. Understanding Software Productivity: Software productivity is defined as hours/function points or function points/hours.
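The ratio and its three improvement levers can be illustrated with a short sketch (the function point and hour figures here are invented for illustration):

```python
def productivity(function_points: float, hours: float) -> float:
    """Software productivity as outputs/inputs: function points per hour."""
    return function_points / hours

# Baseline: 200 function points delivered in 2,000 hours.
baseline = productivity(200, 2000)        # 0.10 FP per hour

# (1) Increase outputs with the same inputs.
more_output = productivity(250, 2000)     # 0.125 FP per hour

# (2) Decrease inputs but maintain the same outputs.
less_input = productivity(200, 1600)      # 0.125 FP per hour

# (3) Increase outputs and decrease inputs.
both = productivity(250, 1600)            # 0.15625 FP per hour
```

Note that levers (1) and (2) can produce the same ratio by different routes, which is why output and input must both be measured.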
This is the average cost to develop software, or the unit cost of software. One thing to keep in mind is that the unit cost of software is not fixed with size. What industry data shows is that the unit cost of software goes up with size. How does size impact productivity? As the size of a software development project becomes larger, the cost per function point actually rises. As can be seen from the graph and data, the effort per unit does not remain constant as the size of the software project increases. This is self-evident because the tasks are not the same for software projects as the size increases. What is Marginal Cost? As some of you remember, Marginal Cost is an economic term and is different from average cost. Average cost is the total cost of producing a particular quantity of output divided by that quantity. In this case, Total Cost/Function Points. Marginal cost is the change in total cost attributable to a one-unit change in output. In our case, how does per unit cost change as the software project size changes? How does the cost of software change as the product becomes larger and larger? Imagine the average cost per square foot of a one-story building compared to the cost per square foot of a 100-story building. No doubt the construction costs (per unit cost) for the 100-story building are much higher than for a one-story building. This same concept is true for a software project. Besides size there are several other factors which impact the cost of construction: • Where the building is located • Who is the general contractor? • Who does the actual labor? Why increasing Marginal Costs for Software Development? There are a variety of reasons why marginal costs for software increase as size increases. The following is a list of some of the reasons: • As size becomes larger, complexity increases. • As size becomes larger, a greater number of tasks need to be completed.
• As size becomes larger, there is a greater number of staff members and they become more difficult to manage. • As the number of individuals in a project increases, the number of communication paths also increases. Communication in large projects can be very difficult. • Since fixed costs for software projects are minimal, there are few if any economies of scale for software projects. Function Points are the output of the software development process. Function points are the unit of software. It is very important to understand that Function Points remain constant regardless of who develops the software or what language the software is developed in. Unit costs need to be examined very closely. To calculate average unit cost, all item costs are combined and divided by the total number of units. On the other hand, to estimate the total cost, each item is examined. For example, assume you are going to manufacture a computer mousepad. The total cost to manufacture 1,000 mousepads is $2,660. The unit cost is $2.66 (per pad). The cost breakdown is: • Artwork is a fixed cost at $500 (or $.50 per unit) • Set up costs are $250 (or $.25 per unit) • Shipping costs are $10 (or $.01 per unit) • Paper for production will cost $1.50 per unit. • Rubber pads are $.15 per unit. • Application of paper to pad costs $.25 per unit Notice the variation in the unit cost for each item. One of the biggest problems with estimating software projects is understanding unit cost. Software managers fail to break down items into similar components or like areas. They assume all units cost the same. There are different costs for each of the function point components. The unit cost for external inputs is not the same as the unit cost of external outputs, for example. The online external inputs and the batch external inputs do not have the same unit cost (or cost per function point).
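The mousepad arithmetic above can be checked with a short script (figures taken directly from the cost breakdown):

```python
# Cost breakdown for a run of 1,000 mousepads, from the list above.
UNITS = 1000
fixed_costs = {"artwork": 500.00, "setup": 250.00, "shipping": 10.00}
per_unit_costs = {"paper": 1.50, "rubber_pad": 0.15, "application": 0.25}

# Spread each fixed cost over the run, then add the per-unit items.
unit_cost = sum(c / UNITS for c in fixed_costs.values()) + sum(per_unit_costs.values())
total_cost = unit_cost * UNITS

print(round(unit_cost, 2))   # 2.66 per pad
print(round(total_cost, 2))  # 2660.0 for the run
```

Note how the fixed items (artwork, setup, shipping) contribute very different per-unit amounts than the variable items; assuming one flat unit cost, as many software estimates do, hides exactly this variation between function point components.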
The cost per unit to build and implement internal logical files is not the same as the per unit cost of building and implementing online reports. To accurately estimate the cost of an application, each component cost needs to be estimated. The same is true for the mousepad problem above. Questions: Problem 1 How would you estimate the number of hot chocolates being sold at the AFC Championship game in Kansas City (use your imagination, the Chiefs could be there)? What are the key factors to consider? Who would you benchmark against and why? Problem 2 What is the average cost per mousepad if you produce 1,000 units at the following costs? Artwork is a fixed cost at $500 Set up costs are $250 Shipping costs are $10 Paper for production will cost $2.50 per unit. Pads are $.25 per unit. Application of paper to pad costs $.35 per unit Are the unit costs the same for all items? Is it correct to assume that unit costs are fixed for software? (Intuitively, do you expect the per unit cost to create reports to be the same as the per unit cost to build a database?) www.SoftwareMetrics.Com Longstreet Consulting Inc FUNCTION POINT COUNTING PROCESS Objective of Section: The objective of this chapter is to introduce the student to the high level steps necessary to count function points and to perform function point analysis. Details of each step are discussed later in this book. The exercises at the end of the section help the student demonstrate that they have gained the basic knowledge required. Introduction: Even though there have been attempts by the National Bureau of Standards (NBS) and IEEE to standardize terms and definitions, there are no industry-wide practiced terms and definitions related to software development. IFPUG has developed some standard terms and definitions related to function points, but these terms and definitions need to be applied to a variety of different software environments.
Clients who have standardized their terminology within their own environments have seen significant jumps in productivity. That is, they have reduced the number of verbs used to describe transactions and other events. Imagine comparing a blueprint document used for construction purposes with a typical software design document. While the blueprint uses standard terminology, the software design document uses a variety of different terminology to describe the same exact thing. Definition: The overall objective is to determine the adjusted function point count. There are several steps necessary to accomplish this. While you may not yet understand the mechanics of the following steps, they will be discussed in great detail in the remainder of the book. The actual sequence or order of steps is not mandatory. Many counters will complete step 5 throughout the entire count, gathering information as they go: 1. Determine the type of function point count 2. Determine the application boundary 3. Identify and rate transactional function types to determine their contribution to the unadjusted function point count. 4. Identify and rate data function types to determine their contribution to the unadjusted function point count. 5. Determine the value adjustment factor (VAF) 6. Calculate the adjusted function point count. The unadjusted function point (UFP) count is determined in steps 3 & 4. Steps 3 & 4 are discussed later in this chapter and in detail later in the book. It is not important whether step 3 or step 4 is completed first. In GUI and OO type applications it is easy to begin with step 3. The final function point count (adjusted function point count) is a combination of both the unadjusted function point count (UFP) and the general system characteristics (GSC’s). Types of Function Point Counts: Function point counts can be associated with either projects or applications. There are three types of function point counts.
• Development project function point count • Enhancement project function point count • Application function point count High Level Steps: • To complete a function point count, knowledge of function point rules and application documentation is needed. Access to an application expert can also improve the quality of the count. • Once the application boundary has been established, FPA can be broken into three major parts (FPA for transactional function types, FPA for data function types and FPA for GSCs). Independence and Dependence: Since the rating of transactions is dependent on both information contained in the transactions and the number of files referenced, it is recommended that transactions are counted first. At the same time the transactions are counted, a tally should be kept of all FTR’s (file types referenced) that the transactions reference. It will be made clear in later chapters that every FTR must be referenced by at least one transaction. FPA Steps for Transactional Function Types: Later in this document external inputs, external outputs and external inquiries are discussed in detail. Each transaction must be an elementary process. An elementary process is the smallest unit of activity that is meaningful to the end user in the business. It must be self-contained and leave the business in a consistent state. T1. Application documentation and transaction rules are used to identify transactions. T2. The application documentation and transaction rules are used to determine the type of transaction (external input, external output, or external inquiry). T3. With the help of application documentation (data model and transaction model) and transaction rules, the number of data elements and file types referenced are determined. T4. Each identified transaction is assigned a value of low, average or high based upon type, data elements, and files referenced. T5.
A distinct numerical value is assigned based upon type and value (low, average, or high). T6. All transactions are summed to create a transaction unadjusted function point count. [Figure: FPA for Transactions. FPA rules, application documentation (transaction model, data model), transaction rules and tables of weights feed steps T1-T6: identify transactions; determine type of transaction (EI, EO, EQ); determine number of DETs and FTRs; determine Low, Average or High; assign values; sum all transactions to obtain the UFP for transactions.] FPA Steps for Files: Later in this document internal logical files and external interface files are discussed in detail. F1. Application documentation and file rules are used to identify files. F2. The application documentation (transaction model and data model) is used to determine the type of file (either external interface file or internal logical file). F3. With the help of application documentation (data model) and file rules, the number of data elements and record element types are determined. F4. Each identified file is assigned a value of low, average or high based upon type, data elements and record types. F5. A distinct numerical value is assigned based upon type and value (low, average, or high). F6. All files are summed to create a file unadjusted function point count. Questions: Is there any benefit to the sequence or order of counting function points? That is, is there a benefit to counting transactions prior to FTR’s? Are transactions independent or dependent on FTR’s? What about FTR’s? Are they counted independent or dependent of transactions? [Figure: FPA for Files. FPA rules, application documentation (transaction model, data model), file rules and tables of weights feed steps F1-F6: identify files; determine type of file (ILF or EIF); determine number of DETs and RETs; determine Low, Average or High; assign values; sum all files to obtain the UFP for files.]
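Steps T6 and F6 are plain summations, and step 6 of the definition above combines the resulting UFP with the GSC's. A minimal sketch follows; the EI weights (3/4/6) appear later in this book, while the other component weights and the VAF formula (VAF = 0.65 + 0.01 x total GSC rating) are the usual published IFPUG values, assumed here rather than taken from this chapter:

```python
# Usual IFPUG complexity weights per component type (assumed values; only
# the EI row is spelled out explicitly in the External Inputs chapter).
WEIGHTS = {
    "EI":  {"Low": 3, "Average": 4, "High": 6},
    "EO":  {"Low": 4, "Average": 5, "High": 7},
    "EQ":  {"Low": 3, "Average": 4, "High": 6},
    "ILF": {"Low": 7, "Average": 10, "High": 15},
    "EIF": {"Low": 5, "Average": 7, "High": 10},
}

def unadjusted_fp(components):
    """Steps T6/F6: sum the weight of every rated transaction and file."""
    return sum(WEIGHTS[ctype][level] for ctype, level in components)

def adjusted_fp(ufp, gsc_ratings):
    """Step 6: apply the value adjustment factor from the 14 GSC's (0-5 each)."""
    assert len(gsc_ratings) == 14 and all(0 <= g <= 5 for g in gsc_ratings)
    vaf = 0.65 + 0.01 * sum(gsc_ratings)
    return ufp * vaf

# A tiny hypothetical count: two inputs, one output, one internal file.
ufp = unadjusted_fp([("EI", "Low"), ("EI", "Average"), ("EO", "High"), ("ILF", "Low")])
print(ufp)                         # 3 + 4 + 7 + 7 = 21
print(adjusted_fp(ufp, [3] * 14))  # 21 * (0.65 + 0.42) = 22.47
```

With all GSC's rated 0 the VAF is 0.65, and with all rated 5 it is 1.35, so the adjustment can move the count up or down by as much as 35 percent.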
ESTABLISHING THE BOUNDARY Objective of Section: Describe and define the concepts necessary to establish a boundary between applications. Definition: Since it is common for computer systems to interact with other computer systems and/or human beings, a boundary must be drawn around each system to be measured prior to classifying components. This boundary must be drawn according to the sophisticated user’s point of view. In short, the boundary indicates the border between the project or application being measured and the external applications or user domain. Once the border has been established, components can be classified, ranked and tallied. One of the benefits of function point analysis is creating ratios with other metrics such as hours, cost, headcount, duration, and other application metrics. It is important that the function point boundary be consistent with the other metrics being gathered for the application and project. Identify the Boundary: • Review the purpose of the function point count. • Look at how and which applications maintain data. • Identify the business areas that support the applications. The boundary may need to be adjusted once components have been identified. In practice the boundary may need to be revisited as the overall application is better understood. Function point counts may need to be adjusted as you learn about the application. Standard Documentation: • General Specification Documents • Interface Documents • Other metric reports • Interviews with the users • User Documentation • Design Documentation • Requirements • Data flow diagrams Establishing the Boundary early in the Life cycle: Boundaries can be established early in the software life cycle. If the application is a replacement project, then the project boundary should be similar (perhaps identical) to the previous application.
If the application is a new application, other applications' boundaries should be reviewed to establish the correct boundary. Technology Issues: Internet/Intranet Applications The boundary for an Internet/Intranet application is defined in a similar way as for traditional applications. For traditional applications the boundary is not drawn just around the user interface or a group of screens but around the entire application. Frequently, Internet/Intranet applications are just extensions to current and existing applications. There is a tendency to create a "new" application for the Internet/Intranet extension, but this approach is incorrect. Client/Server The boundaries for client/server applications need to be drawn around both the client and the server. The reason is that neither the client nor the server alone supports a user's (or sophisticated user's) view. That is, one alone does not represent a total application. As mentioned earlier, any complete application needs both data at rest (server) and data in motion (client). Tabulating: There is no special tabulating that needs to take place for establishing the boundary, but the boundary can dramatically impact the number of external inputs and external outputs. Questions: In theory, how does making the boundary too large impact a function point count? What if the boundary is too small? IDENTIFYING RET’S, DET’S, FTR’S Objective of Section: Learn the necessary techniques to identify a RET, a DET and a FTR. Understanding how to identify DET’s and FTR’s is critical to distinguishing one transaction from another. While in practice understanding the exact number of DET’s and FTR’s may not impact a function point count, understanding DET’s and FTR’s can help one understand how to count function points for enhancement function point counts. The exercises at the end of the section help the student demonstrate that they have gained the basic knowledge required.
Definition: Record Element Type (RET): A RET is a user recognizable sub group of data elements within an ILF or an EIF. It is best to look at logical groupings of data to help identify them. The concept of RET will be discussed in detail in the chapters that discuss internal logical files and external interface files. File Type Referenced (FTR): A FTR is a file type referenced by a transaction. An FTR must also be an internal logical file or external interface file. Data Element Type (DET): A DET is a unique user recognizable, non-recursive (non-repetitive) field. A DET is information that is dynamic and not static. A dynamic field is read from a file or created from DET’s contained in a FTR. Additionally, a DET can invoke transactions or can be additional information regarding transactions. If a DET is recursive then only the first occurrence of the DET is considered, not every occurrence. A data element can be either quantitative or qualitative. A quantitative data element is data in numerical form. A qualitative data element is data not in numerical form, but in the form of text, photographs, sound bytes and so on. Understanding the FTR’s and DET’s helps distinguish one transaction from another. This concept will be discussed in detail later in this book. Rating: All of the components are rated based upon DET’s, and either RET’s or FTR’s:

Component                        RET’s   FTR’s   DET’s
External Inputs (EI)                     X       X
External Outputs (EO)                    X       X
External Inquiries (EQ)                  X       X
External Interface Files (EIF)   X               X
Internal Logical Files (ILF)     X               X

Transaction DET’s: • External Inputs: Data Input Fields, Error Messages, Calculated Values, Buttons • External Outputs: Data Fields on a Report, Calculated Values, Error Messages, and Column Headings that are read from an ILF. Like an EQ, an EO can have an input side and an output side. • External Inquiries: Input side - the field used to search by, the click of the mouse. Output side - displayed fields on a screen.
Record Element Types (RET’s): Record element types are one of the most difficult concepts in function point analysis. Most record element types are dependent on a parent-child relationship. In this case, the child information is a subset of the parent information. In a parent-child relationship there is a one-to-many relationship. That is, each child piece of information is linked directly to a field on the parent file. More will be discussed about RET’s in the internal logical file and external interface file sections. Tips to Identify RET’s and DET’s early in the life cycle: RET’s and DET’s may be difficult to evaluate early in the software life cycle. Since RET’s and DET’s are essential to rating components, several techniques can be used to rate components: • Rate all transactional function types and data function types as Average. • Determine how transactional function types and data function types are rated in similar applications. Are the majority of data function types rated as low in similar applications? DET’s for GUI Using the strict definition of a data element provided by IFPUG’s Counting Practices Manual, “a data element is a user recognizable, non recursive field.” Unfortunately this does not provide much guidance when counting GUI applications. In fact, the IFPUG Counting Practices Manual does not provide much detail on counting radio buttons, check boxes, pick lists, drop downs, look ups, combo boxes, and so on. In GUI applications, a data element is information that is stored on an internal logical file or that is used to invoke a transaction. Radio Buttons Radio buttons are treated as data element types. Within a group of radio buttons (a frame), the user has the option of selecting only one radio button, so only one data element type is counted for all the radio buttons contained in the frame.
Check Boxes Check boxes differ from radio buttons in that more than one check box can be selected at a time. Each check box, within a frame, that can be selected should be treated as a data element. Command Buttons Command buttons may specify an add, change, delete or inquire action. A button, like OK, may invoke several different types of transactions. According to IFPUG counting rules, each command button is counted as a data element for the action it invokes. In practice this data element will not impact the rating of the transaction, but it does help one understand and dissect a screen full of transactions. A button like Next may actually be the input side of an inquiry or another transaction. For example, a simple application to track distributors could have fields for Distributor Name, Address, City, State, Zip, Phone Number, and Fax Number. This would represent seven fields (seven data elements), and the add command button would represent the eighth data element. In short, the “add” external input represents one external input with eight data elements, the “change” external input represents another external input with eight data elements (seven fields plus the “change” command button), and the “delete” external input represents the last external input with eight data elements (seven fields plus the “delete” command button). Display of Graphical Images or Icons A display of a graphical image is simply another data element. An inventory application, for example, may contain data about parts. It may contain part name, supplier, size, and weight, and include a schematic image of the part. This schematic is treated as a single data element. Sound Bytes Many GUI applications have a sound byte attached. This represents one data element. The number of notes played is simply recursive information. If the length of the sound byte increases, the data element count remains one.
For example, you can play the “Star Spangled Banner” for two seconds or four seconds, but you’ll still count the sound byte as one data element. The longer it is played, the more recursive information it has. Photographic Images A photographic image is another data element, and is counted as one. A human resource application may display employee name, start date, etc. and a photograph of the employee. The photograph is treated the same as employee name or employee start date. The photograph is stored and maintained like any other piece of information about the employee. Messages There are three types of messages that are generated in a GUI application: error messages, confirmation messages and notification messages. Error messages and confirmation messages indicate that an error has occurred or that a process will be or has been completed. They are not an elementary or independent process alone, but part of another elementary process. A message stating “zip code is required” would be an example of an error message. A message stating “are you sure you want to delete customer?” is an example of a confirmation message. Neither type of message is treated as a unique external output, but each is treated as a data element for the appropriate transaction. On the other hand, a notification message is a business type message. A notification is an elementary process, has some meaning to the business user and is independent of other elementary processes. It is the result of processing and a conclusion being drawn. For example, you may try to withdraw from an ATM machine more money than you have in your account and receive the dreaded message, “You have insufficient funds to cover this transaction.” This is the result of information being read from a file regarding your current balance and a conclusion being drawn. A notification message is treated as an External Output.
DET’s For Real Time Systems Using the strict definition of a data element provided by IFPUG’s Counting Practices Manual, “a data element is a user recognizable, non recursive field.” Unfortunately this does not provide much guidance when counting real time or embedded systems. In fact, the IFPUG Counting Practices Manual does not provide any detail on counting these types of systems. Some traditional definitions can be applied directly to real time and embedded systems. The fields on a diagnostics file (time of diagnostics, hardware state during diagnostics, temperature, voltage, and so on) would all be examples of data elements. Real Time Systems may not have any “traditional user interface.” That is, the stimulus for the Real Time System may be its own output, or state. A real time or embedded system can signal to determine the current hardware state (or location) and determine the appropriate adjustment (input) based on the current state. Navigation Navigation is moving from one transaction to another. Skill Builder: 1. The following information is heard in the Rome Train Station. How many data elements are heard? That is, what information varies from one train arrival to the next? The train arriving from Florence will arrive on Track 46 at 8:30 a.m. The train arriving from Naples will arrive on Track 43 at 11:00 a.m. 2. The totals on a particular report change color depending on whether the amount is above or below $500. For example, if the amount is -$250 it appears as $250, but if the amount is over 0 (for example, $1,000) then the value appears blue. How many data elements are represented by the number and by the color? EXTERNAL INPUTS Objective of Section: Describe and define the concepts necessary to identify and rate External Inputs.
The exercises at the end of the section help the student demonstrate that they have gained the basic knowledge required. Definition: An External Input (EI) is an elementary process in which data crosses the boundary from outside to inside. This data comes from outside the application. The data may come from a data input screen or another application. The data may be used to maintain one or more internal logical files. The data can be either control information or business information. If the data is control information it does not have to maintain an internal logical file. If an external input adds, changes and deletes (maintains) information on an internal logical file, then this represents three external inputs. External inputs (especially change & delete) may be preceded by an external inquiry (see the section on external inquiries). Hence a full function screen is add, change, delete and inquiry (more will be discussed about inquiries later in the book). Rating: Like all components, EI’s are rated and scored. The rating is based upon the number of data element types (DET’s) and the file types referenced (FTR’s). DET’s and FTR’s are discussed earlier. The table below lists both the level (low, average or high) and the appropriate score (3, 4 or 6).

File Types Referenced (FTR’s)   1-4 DET’s     5-15 DET’s    More than 15 DET’s
Less than 2                     Low (3)       Low (3)       Average (4)
2                               Low (3)       Average (4)   High (6)
Greater than 2                  Average (4)   High (6)      High (6)

Counting Tips: Try to ask the question: do external inputs need more or fewer than 2 files to be processed? For all the EI’s that reference more than 2 FTR’s, all that is needed to know is whether the EI has more than 4 data element types referenced. If the EI has more than 4 DET’s the EI will be rated as high; with 4 or fewer DET’s the EI will be rated as average. Any EI’s that reference fewer than 2 FTR’s should be singled out and counted separately.
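The rating table translates directly into code. A small sketch follows (the function name and structure are my own; the levels and scores come from the table):

```python
def rate_external_input(ftrs: int, dets: int):
    """Return (level, score) for an External Input from its FTR and DET
    counts, following the low/average/high table with scores 3, 4 and 6."""
    if ftrs < 2:
        level = "Average" if dets > 15 else "Low"
    elif ftrs == 2:
        level = "Low" if dets <= 4 else ("Average" if dets <= 15 else "High")
    else:  # more than 2 FTR's: only the 4-DET threshold matters
        level = "Average" if dets <= 4 else "High"
    scores = {"Low": 3, "Average": 4, "High": 6}
    return level, scores[level]

print(rate_external_input(1, 3))   # ('Low', 3)
print(rate_external_input(2, 10))  # ('Average', 4)
print(rate_external_input(3, 5))   # ('High', 6)
```

Note how the last branch mirrors the counting tip above: once an EI references more than 2 FTR's, the only question left is whether it has more than 4 DET's.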
Examples: EI's can be business data, control data and rules based data.

Business Data: Customer Name, Address, Phone, and so on and so forth.

Control Data: The data elements are those that invoke the transaction or change the behavior of the application. Each check box represents a data element. Additionally, the sort employee list radio buttons represent one data element, as do the time format radio buttons. Control information changes or alters the state (or behavior) of the application. Control information specifies how, what, and when data will be processed.

Data Elements: Unique sets of data elements help distinguish one external input from another external input.
• Data Input Fields
• Calculated Values or Derived Data that are stored
• Error Messages
• Confirmation Messages
• Recursive fields are only counted as one DET.
• Action keys (command buttons such as OK, Next, and so on)
• Multiple action keys that perform the same function are counted only as one DET.

File Types Referenced (FTR's): Unique FTR's help distinguish one external input from another external input. An FTR must be either an Internal Logical File and/or External Interface File. Each internal logical file that an external input maintains is counted as an FTR. Any internal logical file or external interface file that is referenced by an external input as part of the elementary process of maintaining an internal logical file would be considered an FTR also. For example, an External Input may update an internal logical file, but must also reference a "security file" to make sure that the user has appropriate security levels. This would be an example of two FTR's.

Uniqueness: A unique set of data elements, and/or a different set of FTR's, and/or a unique set of calculations makes one external input unique or different from other external inputs.
That is, one of the following must be true:
• Unique or different set of data elements
• Unique or different set of FTR's
• Unique or different calculations

Calculations alone are not an elementary process but part of the elementary process of the external input. A calculation (or derived data) does not make the transaction an external output. External outputs and derived data will be discussed in detail in the external output section of this document.

Understanding Enhancement Function Points: Modification of any of the items which make an External Input unique from other external inputs causes the EI to be "enhanced." The EI has been enhanced if any of the following are true:
• DET's added to an EI
• DET's modified on an EI (the DET was included in the last FP count)
• A new FTR
• Modifications to a calculation that appears on an EI

Technology Issues (GUI):

Radio Buttons - each set of radio buttons is counted as one DET, since only one radio button can be selected at a time.

Pick Lists - the actual pick list (also known as drop downs or lookups) could be an external inquiry, but the result of the inquiry may be a DET for an external input.

Check Boxes - each check box that can be simultaneously checked is a unique DET.

Buttons - buttons can be DET's. The OK button above would be a data element. If there were a series of buttons Add, Change and Delete, each button would be counted as a DET for the associated transaction. A single GUI "screen" may represent several transactional function types. For example, it is common for a GUI "screen" to have a series of external inquiries followed by an external input.

Error Messages - error messages are counted as data elements (DET's), not as unique external inquiries. Count one DET for the entire input screen; multiple error messages are similar to recursive values. An error message is part of another elementary process.
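The GUI counting rules above (one DET per radio-button group, one per check box, duplicate action keys collapsed to one, recursive fields counted once) can be sketched as a rough tally. The screen-description format below is an invented assumption for illustration, not an IFPUG artifact.

```python
def count_dets(screen: dict) -> int:
    """Tally DET's for an input screen described as a dict.

    Rules applied: each distinct input field counts once (recursive
    fields collapse), each radio-button GROUP is one DET, each check
    box is one DET, and action keys performing the same function
    (e.g. OK and Next both saving) collapse to one DET.
    """
    dets = 0
    dets += len(set(screen.get("fields", [])))       # recursive fields once
    dets += len(screen.get("radio_groups", []))      # one DET per group
    dets += len(screen.get("check_boxes", []))       # one DET per box
    dets += len(set(screen.get("action_keys", [])))  # dedupe by function
    return dets


screen = {
    "fields": ["name", "address", "phone", "phone"],  # phone repeats -> 1 DET
    "radio_groups": ["sort_order", "time_format"],
    "check_boxes": ["active", "preferred"],
    "action_keys": ["save", "save"],                  # OK and Next both save
}
print(count_dets(screen))  # 8
```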
The number of error messages on a GUI screen is less than the number of error messages associated with traditional applications. If used correctly, radio buttons and pick lists can force users to select correct information, thereby eliminating the need to do editing behind the scenes. In practice the number of DET's does not make much of a difference in evaluating an EI, but understanding error and confirmation messages helps in the understanding of uniqueness.

Real Time and Embedded Systems: In real time and embedded systems communication between hardware and software is common and should not be overlooked when counting these types of systems. Other types of inputs for real time and embedded systems are: operator controls, volume controls, sensor readings, radio frequencies, and standards and limit settings (alarm settings, and so on).

Standard Documentation: A good source of information to determine external inputs is screen layouts, screen formats and dialogs, and layouts of any input forms. Additional inputs from other applications should be inventoried here. Inputs from other applications must update internal logical files of the application being counted.
• Screen Layouts
• Screen Dialogs
• Design Documentation
• Functional Specifications
• User Requirements
• Any Input Forms
• Context Diagrams
• Data Flow Diagrams

Tips to Identify External Inputs early in the life cycle: The following types of documentation can be used to assist in counting EI's prior to system implementation.
• Any refined objectives and constraints for the proposed system.
• Collected documentation regarding the current system, if such a system (either automated or manual) exists.
• Documentation of the users' perceived objectives, problems and needs.
• Preliminary Data Flow Diagram.
• Requirements Documentation.
Typical Vocabulary: The following words are associated with external input or "inputs." While reading textual documents or application descriptions look for these types of words; they may indicate an add, change or delete aspect of an external input.

Add, Activate, Amend (change and delete), Cancel, Change, Convert (change), Create (add), Delete, Deassign, Disable, Disconnect (change or delete), Enable, Edit (change), Insert (add and change), Maintain (add, change, or delete), Memorize (add), Modify (change), Override (change), Post (add, change and delete), Remove (delete), Reactivate (change), Remit, Replace (change), Revise (change and delete), Save (add, change or delete), Store (add), Suspend (change or delete), Submit (add, change or delete), Update (add, change or delete), Voids (change and delete)

Skill Builder: The following questions are used to help build on the concepts discussed in this section. They are designed to encourage thought and discussion.

1. If an EI has one file type referenced and 5 data elements, is it rated low, average or high? What about 7 data elements? Or 25 data elements?
2. How many data elements are there on the control input in the body of the chapter?
3. Does every EI have to update an ILF? Why?
4. What are the criteria for an EI to be rated high?
5. Fill in the "value" of a low _, average _ and high _ EI.

The following screen is used to add a new customer to an application. The OK command button and the Next command button both add the new customer to the database.

6. How many data elements are there in this input screen?
7. If this screen updates one internal logical file, how many unadjusted function points does this screen represent?
8. How many data elements does the phone number represent?
9. Is the Cancel command button counted as a data element?

Application A has a batch input file. The batch file is one physical file, but contains many different types of records.
The first field is a record identifier number. The record identifier number can range from 1-75. The second field describes whether the record is new and adds to the file, changes a previous batch input, or deletes a previous batch input (add, change and delete). Depending on the record identifier number there is a unique set of data elements, a different set of files is updated and referenced, and different processing logic is followed. Every single record identifier number updates more than 3 files (has more than 3 FTR's) and contains more than 5 data elements. How many function points does this one batch input represent?

The answers to chapter questions are part of the online training course.

EXTERNAL OUTPUTS

Objective of Section: Describe and define the concepts necessary to identify and rate External Outputs. The exercises at the end of the section help the student demonstrate that they have gained the basic knowledge required.

Definition: External Output (EO) - an elementary process in which derived data passes across the boundary from inside to outside. Additionally, an EO may update an ILF. The data creates reports or output files sent to other applications. These reports and files are created from information contained in one or more internal logical files and external interface files.

Derived data is data that is processed beyond direct retrieval and editing of information from internal logical files or external interface files. Derived data is usually the result of algorithms or calculations. Derived data occurs when one or more data elements are combined with a formula to generate or derive an additional data element(s). This derived data does not appear in any FTR (internal logical file or external interface file). An algorithm is defined as a mechanical procedure for performing a given calculation or solving a problem in a series of steps.
A calculation is defined as an equation that has one or more operators. An operator is a mathematical function such as addition, subtraction, multiplication, and division (+, -, x, /).

Transactions between applications should be referred to as interfaces. You can only have an external output or external inquiry of data external to your application. If you get data from another application and add it to a file in your application, this is a combination get and add (external inquiry and external input).

Rating: Like all components, EO's are rated and scored. The rating is based upon the total number of unique data elements (DET's) and file types referenced (FTR's), combining the unique input and output sides. DET's and FTR's were discussed earlier in this section. The table below lists both the level (low, average or high) and the appropriate score (4, 5 or 7).

File Types Referenced (FTR)  | DET's 1-5   | DET's 6-19  | DET's greater than 19
Less than 2                  | Low (4)     | Low (4)     | Average (5)
2 or 3                       | Low (4)     | Average (5) | High (7)
Greater than 3               | Average (5) | High (7)    | High (7)

Counting Tips: You may ask the question, do external outputs need more or less than 3 files to be processed? For all the EO's that reference more than 3 files, all that is needed to know is whether the EO has more or less than 5 data element types. If the EO has more than 5 data element types then it will be rated as high; with less than 5 it will be rated as average. Any EO's that reference less than 3 files should be singled out and counted separately.

Terminology: The definition states that in an EO derived data passes across the boundary from inside to outside. Some confusion may arise because an EO has an input side.
The confusion is that the definition reads "data passes across the boundary from inside to outside." The input side of an EO (search criteria, parameters, and so on) does not maintain an ILF. The information that crosses from outside to inside (the input side) is not permanent data but transient data. The intent of the information coming from outside the application (the input side) is not to maintain an ILF.

Examples: Unlike other components, EO's almost always contain business data. Rules based data and control based "outputs" are almost always considered External Inquiries. This is true because rules data and control type data is not derived (or derivable).

Notification messages are considered EO's. A notification message differs from an error message. A notification message is an elementary process, while an error message (or confirmation message) is part of an elementary process. A notification message is the result of some business logic processing. For example, a trading application may notify a broker that the customer trying to place an order does not have adequate funds in their account.

Derived data displayed in textual fashion (rows and columns) and in graphical format is an example of two external outputs.

Data Elements: Unique sets of data elements help distinguish one external output from another. Keep in mind that a DET is something that is dynamic. (A DET is a unique user recognizable, non-recursive (non-repetitive) field.)
• Error Messages
• Confirmation Messages
• Calculated Values (derived data)
• Values on reports that are read from an internal logical file or external interface file.
• Recursive values or fields (count only once)
• Generally, do not count report headings (literals) as data elements unless they are dynamic. That is, if the report headings are read from files that are maintained they may be DET's also.
• System generated dates that are on the tops of reports or are displayed are normally not counted as DET's.
If a system generated date is part of the business information of the external output it should be counted as a DET. For example, the date an invoice is printed or the date a check is printed.

File Types Referenced (FTR): Unique FTR's help distinguish one external output from another. An FTR must be either an Internal Logical File and/or External Interface File. The elementary process associated with an external output may update an internal logical file or external interface file. For example, the elementary process that produces a payroll check may include an update to a file to set a flag indicating that the payroll check was produced. This is not the same as maintaining the file. Maintaining is the process of modifying data (adding, changing and deleting) via an elementary process (via an External Input). The primary intent of an EO is not to maintain an ILF.

Uniqueness: A unique set of data elements, and/or a different set of FTR's, and/or a unique set of calculations makes one external output unique or different from other external outputs. That is, one of the following must be true:
• Unique or different set of data elements
• Unique or different set of FTR's
• Unique or different calculations
• Unique processing logic

Understanding Enhancement Function Points: Modification of any of the items which make an External Output unique from other external outputs causes the EO to be "enhanced." The EO has been enhanced if any of the following are true:
• DET's added to an EO
• DET's modified on an EO (the DET was included in the last FP count)
• A new FTR
• Modifications to a calculation that appears on an EO

Technology Issues: Each medium that a report is sent to is counted as a unique EO. If a report were available online, on paper and electronically, it would be counted as three EO's. Some organizations choose to count this as only one EO. Whatever decision is made, the organization needs to stick with it.
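The EO complexity matrix presented earlier in this chapter can be sketched the same way as the EI matrix, with the wider DET bands and the 4/5/7 scores. As before, the function name is an illustrative assumption.

```python
def rate_eo(ftrs: int, dets: int) -> tuple:
    """Rate an External Output from its combined FTR and DET counts.

    Rows are FTR bands (<2, 2-3, >3); columns are DET bands (1-5, 6-19, >19).
    Returns (complexity, unadjusted function points).
    """
    row = 0 if ftrs < 2 else (1 if ftrs <= 3 else 2)
    col = 0 if dets <= 5 else (1 if dets <= 19 else 2)
    matrix = [
        [("Low", 4),     ("Low", 4),     ("Average", 5)],
        [("Low", 4),     ("Average", 5), ("High", 7)],
        [("Average", 5), ("High", 7),    ("High", 7)],
    ]
    return matrix[row][col]


print(rate_eo(1, 6))   # ('Low', 4)
print(rate_eo(4, 4))   # ('Average', 5)
```

Note how the second call reflects the counting tip: an EO referencing more than 3 files with 5 or fewer DET's is average.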
Disk Cache: Information that is prepared, processed, derived and put on cache files for another application to utilize should not be overlooked. These cached files may be external outputs or external inquiries.

Standard Documentation:
• Report Layouts
• Design Documentation
• Functional Specifications
• User Requirements
• Database descriptions
• Field Sizes and Formats
• Graphical Report Layouts

Tips to Identify External Outputs early in the life cycle: The following types of documentation can be used to assist in counting External Outputs prior to system implementation.
• Any refined objectives and constraints for the proposed system.
• Collected documentation regarding the current system, if such a system (either automated or manual) exists.
• Documentation of the users' perceived objectives, problems and needs.
• Preliminary Data Flow Diagrams.

Typical Vocabulary: The following words are associated with "external outputs." While reading textual documents or application descriptions look for these types of words; they may indicate an external output. Notice these words are very similar to those used for an External Inquiry (discussed in the next chapter).

Browse, Display, Get, On-lines, Output, Print, Query, Reports, Request, Retrieve, Seek, Select, View

Special Issues and Concerns:

When to count DET's for report headings: Report headings are counted when they are dynamic. That is, if report headings are being read from an internal logical file they should be counted as DET's.

Can an External Output have an input side? Since the input side is not stand-alone (it is not independent, or an elementary process on its own), it should be considered part of the entire external output. The FTR's and DET's used to search should be combined with the unique output-side DET's and FTR's for a grand total of FTR's and DET's for the entire EO. In short, an external output can have an input side.

Can an External Output update an Internal Logical File?
An external output can update an internal logical file, but it is incorrect to say that an external output can maintain an internal logical file. The update is part of the elementary process of the external output. An external input maintains data on an ILF, and that maintain process is an elementary process on its own. The definition of maintaining is discussed in the internal logical file and external input sections of this book.

Graphs: Graphs are counted the same way as textual EO's. That is, the graph is rated and scored based on the number of DET's and the number of FTR's. In fact, recursive information is easily seen in a graph, but can be more difficult to visualize in a text report.

There are 10 data elements in the following table:
1. Days
2. Hits
3. % of Total Hits
4. User Sessions
5. Total Hits (weekday)
6. Total % (weekday)
7. Total User Sessions (weekday)
8. Total Hits (weekend)
9. Total % (weekend)
10. Total User Sessions (weekend)

Days, Hits, % of Total Hits and User Sessions all have recursive data. The same data could be processed and presented as a bar graph. But on the following bar graph there are only two data elements (user session and day of week). The bar graph is a separate external output and is unique from the above table. In short, it provides slightly different business information than the table.

The answers to chapter questions are part of the online training course.

Skill Builder: The following questions are used to help build on the concepts discussed in this section. They are designed to encourage thought and discussion.
Ice Cream Cone Sales by Month

Flavor     | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec | Total
Vanilla    |  80 |  85 |  85 |  90 | 110 | 120 | 135 | 145 |  90 |  84 |  75 |  70 |  1169
Chocolate  |  75 |  80 |  70 |  83 | 100 | 105 | 109 | 120 |  80 |  70 |  69 |  65 |  1026
Strawberry |  30 |  35 |  35 |  40 |  70 |  80 |  95 | 105 |  40 |  34 |  25 |  20 |   609
Pistachio  |   8 |   9 |   9 |   9 |  11 |  12 |  14 |  15 |   9 |   8 |   8 |   7 |   119
Other      |  12 |  13 |  13 |  13 |  15 |  17 |  19 |  20 |  14 |  13 |  13 |  12 |   174
Total      | 205 | 222 | 212 | 235 | 306 | 334 | 372 | 405 | 233 | 209 | 190 | 174 |

1. How many data elements are there in the above chart?
2. Is there recursive (repetitive) information? What is it?
3. How many data elements are there in the following line chart? Can recursive information be seen easier in graphs?

[Figure 1: line chart "Ice Cream Cone Sales by Month," plotting Vanilla, Chocolate, Strawberry, Pistachio and Other by month]

4. How many data elements are in the following chart with 2 y-axes?

[Chart: "Max Average Daily Temperature in Kansas City, Data is from 1893 - Present," months on the x-axis with both Fahrenheit and Celsius y-axes]

5. How many data elements are there in the following pie chart?
6. If an EO has 4 file types referenced and 15 data elements, is it rated low, average or high?
7. What about 5 data elements with 4 FTR's? Or 45 data elements with 4 FTR's?
8. Is it possible to have an EO that does not reference any ILF's? Why?
9. What is the criterion for an EO to be rated low?
10. Fill in the "value" of a low _, average _ and high _ EO. How does this compare to an EQ? Why the difference?
11. You have a list of 25 reports and you can safely assume that each report is a separate elementary process. Estimate the number of unadjusted function points.
12. You are given a list of the following 5 reports and the only information you have is the number of FTR's.
Report 1, 3 FTR's
Report 2, 5 FTR's
Report 3, 1 FTR
Report 4, 2 FTR's
Report 5, 1 FTR

[Figure 2: pie chart "Percent of Cones Sold by Flavor": Vanilla 37%, Chocolate 33%, Strawberry 20%, Pistachio 4%, Other 6%]

Estimate the number of unadjusted function points. What method did you use?

13. How would you estimate the unadjusted number of function points if you were provided the following information?

Report 1, 4 DET's
Report 2, 25 DET's
Report 3, 10 DET's
Report 4, 15 DET's
Report 5, 2 DET's

14. What method did you use?
15. Previously, the line graph of ice cream cone sales was counted as one unique external output. If a graph were exactly the same except in Italian, would this be considered another unique external output?
16. Two separate checks are created: an expense check and a payroll check. Both checks look identical and have the following fields: employee name, employee address, amount of check, and date the check is printed. The expense check uses the expense reimbursement file and the employee file, and the payroll check uses the payroll file and the employee file. The calculations for each check are different. How many external outputs are there? Explain your answer.

The answers to chapter questions are part of the online training course.

EXTERNAL INQUIRIES

Objective of Section: Describe and define the concepts necessary to identify and rate External Inquiries. The exercises at the end of the section help the student demonstrate that they have gained the basic knowledge required.

Definition: External Inquiry (EQ) - an elementary process with both input and output components that results in data retrieval from one or more internal logical files and external interface files. The input process does not update or maintain any FTR's (Internal Logical Files or External Interface Files), and the output side does not contain derived data. Transactions between applications should be referred to as interfaces.
You can only have an external output or external inquiry of data external to your application. If you get data from another application and add it to a file in your application, this is a combination get and add (external inquiry and external input).

Rating: Like all components, EQ's are rated and scored. Basically, an EQ is rated (low, average or high) like an EO, but assigned a value like an EI. The rating is based upon the total number of unique data elements (DET's) and file types referenced (FTR's), combining the unique input and output sides. DET's and FTR's were discussed in an earlier chapter. If the same FTR is used on both the input and output side, it is counted only one time. If the same DET is used on both the input and output side, it is only counted one time.

Functional Complexity Matrix (shared table between EO and EQ):

File Types Referenced (FTR)  | DET's 1-5   | DET's 6-19  | DET's greater than 19
Less than 2                  | Low (3)     | Low (3)     | Average (4)
2 or 3                       | Low (3)     | Average (4) | High (6)
Greater than 3               | Average (4) | High (6)    | High (6)

Examples: EQ's can contain business data, control data and rules based data.

Business Applications: An example of business data is customer names, addresses, phone numbers, and so on. An example of rules data is a table entry that tells how many days a customer can be late before they are turned over for collection.

A drop down list (a listing of customers by name) would be an example of an EQ. A screen full of customer address information would be an example of an EQ. So would reset (or restore) functionality where all the modified fields are reset to their saved values. The key to understanding why this is an external inquiry is the phrase "reset to their saved values": clearly a table is being read.

Terminology: The definition states that in an EO derived data passes across the boundary from inside to outside. Some confusion may arise because an EO has an input side.
The confusion is that the definition reads "data passes across the boundary from inside to outside." The input side of an EO (search criteria, parameters, and so on) does not maintain an ILF. The information that crosses from outside to inside (the input side) is not permanent data but transient data. The intent of the information coming from outside the application (the input side) is not to maintain an ILF.

Data Elements: Unique sets of data elements help to distinguish one external inquiry from another external inquiry.

• Input Side
  - Click of the mouse
  - Search values
  - Action keys (command buttons)
  - Error Messages
  - Confirmation Messages (searching)
  - Clicking on an action key
  - Scrolling
  - Recursive fields are counted only once.
• Output Side
  - Values read from an internal logical file or external interface file
  - Color or font changes on the screen
  - Error Messages
  - Confirmation Messages
  - Recursive fields are counted only once.

The combined (unique) total of input side and output side DET's is used when rating EQ's. Like an EI, action keys that perform the same function but appear multiple times are counted as only one DET.

Error messages and confirmation messages can and do occur on either the input side and/or the output side. If a user initiates a search and the message "please wait, searching" is displayed, that is an example of a confirmation message on the input side. The message "all fields must be populated" is an example of an error message on the input side. On the other hand, the message "customer not found" is an example of an error message on the output side. That is, the input side contained no problems; the database was searched and the "error" occurred on the output side of the transaction.

File Types Referenced (FTR's): Unique FTR's help distinguish one external inquiry from another external inquiry. Both the input side and output side must be considered when evaluating the FTR's used by an external inquiry.
Normally they are the same, but there are instances where they may not be. The combined total should be used when evaluating an EQ. For example, a security check may be done on the input side of an external inquiry. The security check is done to make sure the user of the application has the appropriate level of authority to view the data.

Uniqueness: A unique set of data elements and/or a different set of FTR's makes one external inquiry unique or different from other external inquiries. That is, one of the following must be true:
• Unique or different set of data elements
• Unique or different set of FTR's
• Unique processing logic

Sorting does not make one external inquiry unique from another, since the data elements and FTR's are the same. An external inquiry cannot have calculated values or derived data. This characteristic distinguishes an external inquiry from an external output.

Understanding Enhancement Function Points: Modification of any of the items which make an External Inquiry unique from other external inquiries causes the EQ to be "enhanced." The EQ has been enhanced if any of the following are true:
• DET's added to an EQ
• DET's modified on an EQ (the DET was included in the last FP count)
• A new FTR

Example of Graphical Data: Imagine a map of the United States. There are two different ways to get the same exact data: you can click on a specific state, or you can use a drop down list. Once you choose a state, data is generated and presented to the screen. These two EQ's are repetitive and do the same exact thing. We would not consider this as two EQ's but only one.

Technology Issues: GUI applications are usually rich with EQ's (and EO's). A dynamic pick list that reads from a file is an example of an External Inquiry. GUI screens may have a series of EQ's prior to an EI.
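The rule that shared DET's and FTR's are counted once across the input and output sides, followed by the shared EO/EQ matrix with EI-style values (3, 4 or 6), can be sketched as follows. The function name and the example FTR/DET names are illustrative assumptions.

```python
def rate_eq(input_ftrs: set, output_ftrs: set,
            input_dets: set, output_dets: set) -> tuple:
    """Rate an External Inquiry.

    FTR's and DET's appearing on both the input and output side are
    counted only once (set union), then the shared EO/EQ complexity
    matrix is applied with EI-style scores.
    """
    ftrs = len(input_ftrs | output_ftrs)  # shared FTR counted one time
    dets = len(input_dets | output_dets)  # shared DET counted one time
    row = 0 if ftrs < 2 else (1 if ftrs <= 3 else 2)
    col = 0 if dets <= 5 else (1 if dets <= 19 else 2)
    matrix = [
        [("Low", 3),     ("Low", 3),     ("Average", 4)],
        [("Low", 3),     ("Average", 4), ("High", 6)],
        [("Average", 4), ("High", 6),    ("High", 6)],
    ]
    return matrix[row][col]


# Input side reads a security file plus the customer file; output side
# reads the customer file again, so "customer" counts once.
print(rate_eq({"customer", "security"}, {"customer"},
              {"search_name"}, {"name", "address", "phone"}))  # ('Low', 3)
```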
Standard Documentation:
• Screen Layouts
• Design Documentation
• Functional Specifications
• Table Layouts
• User Requirements
• Database descriptions
• Pick lists
• Field sizes and formats

Tips to Identify EQ's early in the life cycle: The following types of documentation can be used to assist in counting EQ's prior to system implementation.
• Any refined objectives and constraints for the proposed system.
• Collected documentation regarding the current system, if such a system (either automated or manual) exists.
• Documentation of the users' perceived objectives, problems and needs.
• Preliminary Data Flow Diagrams.

Typical Vocabulary: The following words are associated with an "external inquiry." While reading textual documents or application descriptions look for these types of words; they may indicate an external inquiry. Notice the words are very similar to those related to external outputs.

Browse, Display, Extract, Fetch, Find, Gather, Get, Drop Down Lists, Look Ups, On-lines, Output, Pick Lists, Print, Query, Scan, Seek, Select, Show, View, Reports

Special Issues and Concerns:

Can an External Inquiry not have an input side? Even though it may not be visible, all external inquiries have an input side. A case where the input side is not readily visible is referred to as an implied inquiry.

Can an External Inquiry update an Internal Logical File? Like an external output, an external inquiry may update an internal logical file, but it is incorrect to say that an external inquiry can maintain an internal logical file. The update is part of the elementary process of the external inquiry. The definition of maintaining is discussed in the internal logical file and external input sections of this book. The only component that maintains an internal logical file is an external input.

Menus (Dynamic Menus): The menu displayed to the right is a dynamic menu.
Word displays the last several files that have been opened. We can easily conclude that this information is being read from some type of internal file. Hence, the information is dynamic, and the menu would be counted as an external inquiry. Even though the IFPUG manual explicitly states that menus are not counted, in this case it is clear that the menu is dynamic and changes. The real distinction is whether a menu is dynamic or static. That is, are the contents of the screen or report dynamic (read from some file) or static (hard coded)?

Skill Builder: The following questions are used to help build on the concepts discussed in this section. They are designed to encourage thought and discussion.

The following customer list is displayed by clicking on the title bar "Customer." The list is read from a file. If a particular customer is double clicked, additional information is displayed.

1. How many EQ's do the Customer button, the Customer: Job list and Edit Customer represent?

If Customer:Job is clicked, then a menu is displayed. If New is selected, a blank (empty) screen appears with the same fields as Edit Customer. If Delete is selected, a delete confirmation is displayed.

2. How many EI's does this series of screens (Edit, New and Delete) represent?
3. If an EQ references one file type and has 25 data elements, is it rated low, average or high? What about 5 data elements? Or 45 data elements?
4. Does every EQ have to have at least one FTR? Why? How does this differ from an EO?
5. What is the criterion for an EQ to be rated high?

The answers to chapter questions are part of the online training course.

TRANSACTION REVIEW

Objective of Section: To review the three types of transactional function types (external input, external output and external inquiry).
If the transaction can perform the "activity," then place a check in the appropriate column (External Input, External Output or External Inquiry):

Description or Activity:
• DET's retrieved from FTR's
• Sorting of Data
• Updates an ILF
• Maintains an ILF
• Contains Derived Data
• Information from outside the boundary to inside
• Shares complexity matrix table
• Are valued the same for Low, Average and High
• Never Contains Derived Data
• At least one FTR is referenced
• Information from inside the boundary to outside

The answers to chapter questions are part of the online training course.

Multiple Languages

Consider an application that is a single language. More than likely, report headings and text descriptions are all "hard coded." That is, the user cannot dynamically change the headings or the text. Now consider an application that has been developed with multiple languages in mind. The report headings and text descriptions are all read from files. Compare the following chart in Spanish to the English chart presented earlier. Is this chart a unique external output or the same external output?

The Spanish chart is not a unique external output. If external outputs are available in multiple languages then several things need to be considered. First, there is probably some control input that allows the user to dynamically select the language. Second, there is an additional FTR referenced that contains the language text. Third, this language internal logical file is maintained by an external input. Fourth, there are more data elements in the report. If an external output is available in more than one language then it is not considered a unique external output, but the external output is more complex (more DET's and more FTR's).

Display of Graphical Images or Icons

A display of a graphical image is simply another data element. An inventory application may contain data about parts.
It may contain part name, supplier, size, and weight and include a schematic image of the part. This schematic is treated as another data element. Another example would be a map. The map may be "hot." As the mouse pointer is moved over the map, different city names are displayed. If the user clicks on a particular hot point, details about that city are displayed. If the details about each city are contained in an internal logical file or external interface file, then the display of those details could be an external inquiry. The following map of the United States is "hot." If you click on Kansas City, then you get the following information.

Kansas City, Missouri: Population 435,146: Location: 39.1 N, 94.5 W
Houston, Texas: Population 2,231,130: Location: 29.8 N, 95.4 W
Chicago, Illinois: Population 2,783,726: Location: 41.8 N, 87.6 W

This would be an example of another inquiry.

Messages

There are three types of messages that are generated in a GUI application: error messages, confirmation messages and notification messages. An error message or a confirmation message indicates that an error has occurred or that a process will be or has been completed. A message that states, "Zip code is required," would be an example of an error message. A message that states, "Are you sure you want to delete the customer?" is an example of a confirmation message. Neither of these types of messages is treated as a unique External Output, but they are treated as data elements for the appropriate transaction.

On the other hand, a notification message is a business type message. It is the result of processing and a conclusion being drawn. For example, you may try to withdraw from an ATM machine more money than you have in your account and you receive the dreaded message, "You have insufficient funds to cover this transaction." This is the result of information being read from a file regarding your current balance and a conclusion being drawn.
A notification message is treated as an External Output. Notification messages may be the result of processing, and the actual processing or derived data may not be seen. For example, a message may be created to be sent to a pager (beeper) at a given time. This is much like an alarm: the current time is compared to the set time, and when they are equal the message is sent. The pager message has one data element, the text message.

Complex Control Inputs

Control inputs change the behavior of an application or the content of a report. In the "Create Report" control screen, the user has the ability to select which reports are going to be produced. This particular screen has several data element types: the check box, graph type, dimensions elements, sub-items and the action keys. Note that the users can choose each report individually. In fact, each report is an object. The generated report is a combination of several reports (or objects). Each object has several attributes.

Hyperlinks on WebPages

Many hyperlinks are nothing more than menus. In this case the meomix.com and dogchow.com links are nothing more than links to other pages, and they are not treated as an EI, EO or EQ. According to the rules for an external inquiry, a request must come from outside the application boundary and information must be displayed from inside to outside the application boundary. A hyperlink is just that – a hyperlink. A hyperlink is navigation to another part of the application or another Internet/Intranet site. No information crosses the boundary. An external inquiry must reference at least one internal logical file and/or one external interface file. Both an internal logical file and an external interface file must be a logical group of related information. Imagine hyperlinking to another Website – all the information displayed is not a logical group of information.
On the other hand, a hyperlink that sends a parameter that is used to search could be an example of an external inquiry. That is, the hyperlink follows the rules required for an external inquiry: there is an input side (the parameter) and there is an output side (the results of the search). In this case the output side is dynamic and changes. This is in sharp contrast to a static hyperlink that navigates to another part of the Website.

INTERNAL LOGICAL FILES

Objective of Section: Describe and define the concepts necessary to identify and rate Internal Logical Files. The exercises at the end of the section help the student demonstrate that they have gained the basic knowledge required.

Definition: Internal Logical Files (ILF) – a user identifiable group of logically related data that resides entirely within the application boundary and is maintained through External Inputs. An internal logical file carries the inherent meaning that it is internally maintained, it has some logical structure and it is stored in a file.

Even though it is not a rule, an ILF should have at least one external output and/or external inquiry. That is, at least one external output and/or external inquiry should include the ILF as an FTR. Simply put, information is stored in an ILF so it can be used later. The EO or EQ could be from another application. It is worth noting that it is possible that a specific ILF is not referenced by an EO or EQ but is used by an EI (other than the EI that maintains it). Again, even though it is not a rule, an ILF should have at least one external input.

Rating: Like all components, ILF's are rated and scored. The rating is based upon the number of data elements (DET's) and the record types (RET's). DET's and RET's were discussed earlier. The table below lists both the level (low, average or high) and appropriate score (7, 10 or 15).
                            Data Elements (DET's)
Record Element Types (RET's)   1 to 19        20 to 50       51 or More
1 RET                          Low (7)        Low (7)        Average (10)
2 to 5 RET                     Low (7)        Average (10)   High (15)
6 or More RET                  Average (10)   High (15)      High (15)

Counting Tips: Determine the appropriate row first, then the column. Ask the question: do all files contain one record type or more than one record type? If all or many of the files contain only one record type, then all that is needed is to know whether each file contains 51 or more data element types (DET's). If the file contains 51 or more data elements it will be rated as average; if 50 or fewer, it will be rated as low. Any files that contain more than one record type can be singled out and counted separately.

Examples: ILF's can contain business data, control data and rules based data. The type of data contained in an ILF is the same type of data an EI contains and maintains. It is common for control data to have only one occurrence within an ILF. For example, a control data file may contain only parameter settings or a status setting. Part of the on board automobile system, for example, contains only current information: oil pressure, engine temperature, and so on. This particular process of the on board system does not care about historical data – only the current instance. When the status changes, the file is updated with current information and there is no historical information. The on board system may keep track of historical changes in diagnostics files, but this would be a totally separate process. This process is not used to keep the car running, but to help a mechanic understand what has been going on with the engine.

Real Time and Embedded Systems: For example, Telephone Switching is made of all three types: Business Data, Rule Data and Control Data.
Business Data is the actual call, Rule Data is how the call should be routed through the network, and Control Data is how the switches communicate with each other. Like control files, it is common for real time systems to have only one occurrence in an internal logical file.

Business Applications: An example of Business Data is customer names, addresses, phone numbers, and so on. An example of Rules Data is a table entry that tells how many days a customer can be late before they are turned over for collection.

Record Element Types: The idea behind RET's is to quantify complex data relationships maintained in a single FTR. Record element types are one of the most difficult concepts in function point analysis. Most record element types are dependent on a parent-child relationship. The child information is a subset of the parent information, and in a parent-child relationship there is a one to many relationship.

Figure 3 (two ILF's, one RET each) represents two separate logical groups of data, A and B. In this case some A are B. Figure 4 (one ILF, two RET's) represents one logical group of data, A, with two record types. In this case all B are A.

Imagine a customer file that contains name, address, and so on. In addition, all the credit cards and credit card numbers of the customer are contained in the file. This would be an example of two record types. There would be multiple occurrences of credit cards and numbers for each customer. The credit cards and numbers are meaningless when not linked to the customer. Additionally, a short article, Understanding RET's, can be found at Website\Articles\ret.htm.

Data Element Types: Count a DET for each unique user recognizable, nonrecursive field on the ILF or EIF. Fields that are redundant and appear more than one time are counted only one time. Fields that are redundant because of implementation concerns are counted only one time.
Count a DET for each piece of data in an ILF or EIF that exists because the user requires a relationship with another ILF to be maintained (key information). If an EIF has multiple key fields, only the key fields that relate back to an ILF are counted as data element types.

Technology Issues: Lotus Notes refers to data stores as "forms." PowerBuilder applications may store information on the host or client; count it only one time. COBOL applications may use a variety of data stores such as IMS, DB2, etc. It is important to view data from the "logical model." In Internet applications, an HTML file can be a data store if it is maintained.

Standard Documentation:
• Table Layouts
• Database descriptions
• Logical data models
• Field sizes and formats
• Design Documentation
• Functional Specifications
• User Requirements

Tips to Identify ILF's early in the life cycle: The following types of documentation can be used to assist in counting internal logical files prior to system implementation.
• Any refined objectives and constraints for the proposed system.
• Collected documentation regarding the current system, if such a system (either automated or manual) exists.
• Documentation of the users' perceived objectives, problems and needs.
• Preliminary Data Models.

Other comments: Code tables may not be maintained by the application, and they may not be maintained by any other application, but they exist. The issue is that these same tables may be used by external inquiries. A strict interpretation of the rules would not allow the inquiries to be counted. It is recommended that these types of tables be treated as external interface files.

Skill Builder: The following questions are used to help build on the concepts discussed in this section. They are designed to encourage thought and discussion.

1.
If a single internal logical file is separated into 3 physical files because of implementation concerns, then how many internal logical files are counted?
2. A logical group of data is best described as?
3. If an ILF has one record type and 25 data elements, is it rated low, average or high? What about 5 data elements? Or 45 data elements?
4. Does every ILF have to have at least one EI? Why?
5. Should every ILF have at least one external output or external inquiry? Why?
6. What are the criteria for an ILF to be rated high?
7. Fill in the "value" of a low _, average _ and high _ ILF. How does this compare to an EIF? Why the difference?

Examine the following tables. The user requires detail information about customers and sales representatives.

1. How many internal logical files?
2. How many data elements? Is there more than one record type?
3. Can the tables be combined to form one internal logical file?

Customer Table

Customer Number | Name | Address | City | State | Zip Code | Balance | Credit Limit | Sales Rep Number
AN91 | Atwater Nelson | 215 Watkins | Oakdale | IN | 48101 | $347 | $700 | 04
AW52 | Alliance West | 266 Ralston | Allanson | IN | 48102 | $49 | $400 | 07
BD22 | Betodial | 542 Prairie | Oakdale | IN | 48101 | $57 | $400 | 07
CE76 | Carson Enterprise | 96 Prospect | Bishop | IL | 61354 | $425 | $900 | 11

Sales Representative Table

Sales Rep Number | Last Name | First Name | Address | City | State | Zip Code | Area Manager Number
04 | Right | Mike | 95 Stockton | Oakdale | IN | 48101 | 14
05 | Perry | Tom | 198 Pearl | Oakdale | IN | 48101 | 17
07 | Sanchez | Rachel | 867 Bedford | Benson | MI | 49246 | 17
11 | Morris | Katie | 96 Prospect | Bishop | IL | 61354 | 21

Imagine a database that stores information about albums. The database is broken down as Artist, Album Name, Publication Date, and Songs. The key to the database is both Artist and Album Name. The field Songs has three subset fields: each song contains track number, song name and length of playing time. For example, Bruce Springsteen, Born to Run, Songs.
The first row of the song subset is #1, Born To Run, 4:30 (Figure 5 – Songs Field).

1. How many internal logical files are represented by this database?
2. How many total data elements?
3. How many total record types are there on the database?
4. What is the recursive information?

EXTERNAL INTERFACE FILES

Objective of Section: Describe and define the concepts necessary to identify and rate External Interface Files. The exercises at the end of the section help the student demonstrate that they have gained the basic knowledge required.

Definition: External Interface Files (EIF) – a user identifiable group of logically related data that is used for reference purposes only. The data resides entirely outside the application boundary and is maintained by another application's external inputs. The external interface file is an internal logical file for another application. An application may count a file as either an EIF or an ILF, not both.

An external interface file carries the inherent meaning that it is externally maintained (probably by some other application), an interface has to be developed to get the data, and it is stored in a file. Each EIF included in a function point count must have at least one transaction against it; that is, at least one external input, external output or external inquiry should include the EIF as an FTR. Every application which references the EIF needs to include it in their FP count.

Some organizations have a pull theory of data and others have a push theory. The pull theory is an external application "reaching into" another application and retrieving data. Organizations which have a push theory require applications to create interfaces (EO or EQ) which other applications read.

Rating: Like all components, EIF's are rated and scored. The rating is based upon the number of data elements (DET's) and the record types (RET's).
DET's and RET's were discussed earlier in this section. The table below lists both the level (low, average or high) and appropriate score (5, 7 or 10).

                            Data Elements (DET's)
Record Element Types (RET's)   1 to 19       20 to 50      51 or More
1 RET                          Low (5)       Low (5)       Average (7)
2 to 5 RET                     Low (5)       Average (7)   High (10)
6 or More RET                  Average (7)   High (10)     High (10)

Counting Tips: Only count the part of the file that is used by the application being counted, not the entire file. The internal logical file of another application that you access may have a large amount of data, but only consider the DET's and/or RET's that are used when rating an EIF. Determine the appropriate row first, then the column. Ask the question: do all files contain one record type or more than one record type? If all or many of the files contain only one record type, then all that is needed is to know whether each file contains 51 or more data element types (DET's). If the file contains 51 or more data elements it will be rated as average; if 50 or fewer, it will be rated as low. Any files that contain more than one record type can be singled out and counted separately.

Examples: EIF's can contain business data, control data and rules based data.

Real Time and Embedded Systems: For example, Telephone Switching is made of all three types: Business Data, Rule Data and Control Data. Business Data is the actual call, Rule Data is how the call should be routed through the network, and Control Data is how the switches communicate with each other.

Business Applications: An example of Business Data is customer names, addresses, phone numbers, and so on. An example of Rules Data is a table entry that tells how many days a customer can be late before they are turned over for collection.

Technology Issues: Lotus Notes refers to data stores as "forms." Client/Server applications may store information on the host or client.
Count it only one time. COBOL applications may use a variety of data stores such as IMS, DB2, etc. It is important to view data from the "logical model."

Standard Documentation:
• Table Layouts
• Interface Diagrams
• Database descriptions
• Logical data models
• Field sizes and formats
• Design Documentation
• Functional Specifications
• User Requirements

Tips to Identify EIF's early in the life cycle: The following types of documentation can be used to assist in counting external interface files prior to system implementation.
• Any refined objectives and constraints for the proposed system.
• Collected documentation regarding the current system, if such a system (either automated or manual) exists.
• Documentation of the users' perceived objectives, problems and needs.
• Preliminary Data Models.

GENERAL SYSTEM CHARACTERISTICS

Objective of Section: Describe and define the concepts necessary to rate the General System Characteristics (GSC's) to determine the overall Value Adjustment Factor. The exercises at the end of the section help the student demonstrate that they have gained the basic knowledge required.

Definition: The value adjustment factor (VAF) is based on 14 general system characteristics (GSC's) that rate the general functionality of the application being counted. Each characteristic has associated descriptions to determine the degrees of influence.

Rating: The degrees of influence range on a scale from zero to five, from no influence to strong influence. Each characteristic is assigned its rating based upon the detailed descriptions provided by the IFPUG 4.1 Manual.
The ratings are:
0 – Not present, or no influence
1 – Incidental influence
2 – Moderate influence
3 – Average influence
4 – Significant influence
5 – Strong influence throughout

Standard Documentation:
• General Specification Documents
• Interviews with the users

Rating GSC's early in the life cycle: GSC's can be rated relatively early in the software life cycle. In fact, if a user cannot answer these fourteen questions, then the entire project needs to be re-evaluated.

Tabulating: Once all 14 GSC's have been answered, they should be tabulated using the IFPUG value adjustment equation:

VAF = 0.65 + [ (Σ Ci) / 100 ],  for i = 1 to 14

where:
Ci = degree of influence for each General System Characteristic
i = 1 to 14, representing each GSC
Σ = summation of all 14 GSC's

Another way to understand the formula is VAF = (65 + TDI)/100, where TDI is the sum of the results from each question. A Microsoft Excel formula would be: =0.65+SUM(A1:A14)/100, assuming that the values for the characteristics were in cells A1–A14.

GSC's at a Glance (General System Characteristic – Brief Description):
1. Data communications – How many communication facilities are there to aid in the transfer or exchange of information with the application or system?
2. Distributed data processing – How are distributed data and processing functions handled?
3. Performance – Did the user require response time or throughput?
4. Heavily used configuration – How heavily used is the current hardware platform where the application will be executed?
5. Transaction rate – How frequently are transactions executed: daily, weekly, monthly, etc.?
6. On-line data entry – What percentage of the information is entered on-line?
7. End-user efficiency – Was the application designed for end-user efficiency?
8. On-line update – How many ILF's are updated by on-line transactions?
9. Complex processing – Does the application have extensive logical or mathematical processing?
10.
Reusability – Was the application developed to meet one or many users' needs?
11. Installation ease – How difficult is conversion and installation?
12. Operational ease – How effective and/or automated are start-up, back-up, and recovery procedures?
13. Multiple sites – Was the application specifically designed, developed, and supported to be installed at multiple sites for multiple organizations?
14. Facilitate change – Was the application specifically designed, developed, and supported to facilitate change?

Considerations for GUI Applications: GSC items such as transaction rates, end-user efficiency, on-line update, and reusability usually score higher for GUI applications than for traditional applications. On the other hand, performance, heavily used configuration, and multiple sites will usually score lower for GUI applications than for traditional applications.

Detail GSC's:

1. Data Communications

The data and control information used in the application are sent or received over communication facilities. Terminals connected locally to the control unit are considered to use communication facilities. A protocol is a set of conventions which permits the transfer or exchange of information between two systems or devices. All data communication links require some type of protocol.

Score As – Descriptions to Determine Degree of Influence
0 – Application is pure batch processing or a standalone PC.
1 – Application is batch but has remote data entry or remote printing.
2 – Application is batch but has remote data entry and remote printing.
3 – Application includes online data collection or TP (teleprocessing) front end to a batch process or query system.
4 – Application is more than a front-end, but supports only one type of TP communications protocol.
5 – Application is more than a front-end, and supports more than one type of TP communications protocol.

Comments: TCP/IP (Transmission Control Protocol/Internet Protocol).
TCP/IP provides a common language for interoperation between networks that use a variety of local protocols (Ethernet, Netware, AppleTalk, DECnet and others); these are examples of TP. An application that allows query of the application via a web based solution and local access would receive a value of 3. An application that allows for the update of ILF's via the Internet and local update would receive a value of 5.

2. Distributed Data Processing

Distributed data or processing functions are a characteristic of the application within the application boundary.

Score As – Descriptions to Determine Degree of Influence
0 – Application does not aid the transfer of data or processing function between components of the system.
1 – Application prepares data for end user processing on another component of the system, such as PC spreadsheets and PC DBMS.
2 – Data is prepared for transfer, then is transferred and processed on another component of the system (not for end-user processing).
3 – Distributed processing and data transfer are online and in one direction only.
4 – Distributed processing and data transfer are online and in both directions.
5 – Processing functions are dynamically performed on the most appropriate component of the system.

Comments: Copying files from a mainframe to a local PC, or copying files from an Internet or intranet, would receive a value of 2. Reading via a client or via Internet or intranet would receive a value of 3. Reading and updating via Internet or intranet would receive a value of 4. An application that, depending on available resources, processes either locally, on a server, or on an intranet or Internet application would receive a value of 5.

3. Performance

Application performance objectives, stated or approved by the user, in either response or throughput, influence (or will influence) the design, development, installation, and support of the application.
Score As – Descriptions to Determine Degree of Influence
0 – No special performance requirements were stated by the user.
1 – Performance and design requirements were stated and reviewed but no special actions were required.
2 – Response time or throughput is critical during peak hours. No special design for CPU utilization was required. Processing deadline is for the next business day.
3 – Response time or throughput is critical during all business hours. No special design for CPU utilization was required. Processing deadline requirements with interfacing systems are constraining.
4 – In addition, stated user performance requirements are stringent enough to require performance analysis tasks in the design phase.
5 – In addition, performance analysis tools were used in the design, development, and/or implementation phases to meet the stated user performance requirements.

Comments: Again, for a client/server or for an internet/intranet application this remains the same.

4. Heavily Used Configuration

A heavily used operational configuration, requiring special design considerations, is a characteristic of the application. For example, the user wants to run the application on existing or committed equipment that will be heavily used.

Score As – Descriptions to Determine Degree of Influence
0 – No explicit or implicit operational restrictions are included.
1 – Operational restrictions do exist, but are less restrictive than a typical application. No special effort is needed to meet the restrictions.
2 – Some security or timing considerations are included.
3 – Specific processor requirements for a specific piece of the application are included.
4 – Stated operation restrictions require special constraints on the application in the central processor or a dedicated processor.
5 – In addition, there are special constraints on the application in the distributed components of the system.

Comments: Does this application share hardware that is busy?
For example, an application that shares a server with several other applications may need to be optimized because it shares resources with those applications.

5. Transaction Rate

The transaction rate is high and it influenced the design, development, installation, and support of the application.

Score As – Descriptions to Determine Degree of Influence
0 – No peak transaction period is anticipated.
1 – Peak transaction period (e.g., monthly, quarterly, seasonally, annually) is anticipated.
2 – Weekly peak transaction period is anticipated.
3 – Daily peak transaction period is anticipated.
4 – High transaction rate(s) stated by the user in the application requirements or service level agreements are high enough to require performance analysis tasks in the design phase.
5 – High transaction rate(s) stated by the user in the application requirements or service level agreements are high enough to require performance analysis tasks and, in addition, require the use of performance analysis tools in the design, development, and/or installation phases.

6. Online Data Entry

Online data entry and control functions are provided in the application.

Score As – Descriptions to Determine Degree of Influence
0 – All transactions are processed in batch mode.
1 – 1% to 7% of transactions are interactive data entry.
2 – 8% to 15% of transactions are interactive data entry.
3 – 16% to 23% of transactions are interactive data entry.
4 – 24% to 30% of transactions are interactive data entry.
5 – More than 30% of transactions are interactive data entry.

7. End-User Efficiency

The online functions provided emphasize a design for end-user efficiency.
The design includes:
• Navigational aids (for example, function keys, jumps, dynamically generated menus)
• Menus
• Online help and documents
• Automated cursor movement
• Scrolling
• Remote printing (via online transactions)
• Preassigned function keys
• Batch jobs submitted from online transactions
• Cursor selection of screen data
• Heavy use of reverse video, highlighting, colors, underlining, and other indicators
• Hard copy user documentation of online transactions
• Mouse interface
• Pop-up windows
• As few screens as possible to accomplish a business function
• Bilingual support (supports two languages; count as four items)
• Multilingual support (supports more than two languages; count as six items)

Score As – Descriptions to Determine Degree of Influence
0 – None of the above.
1 – One to three of the above.
2 – Four to five of the above.
3 – Six or more of the above, but there are no specific user requirements related to efficiency.
4 – Six or more of the above, and stated requirements for end-user efficiency are strong enough to require design tasks for human factors to be included (for example, minimize key strokes, maximize defaults, use of templates).
5 – Six or more of the above, and stated requirements for end-user efficiency are strong enough to require use of special tools and processes to demonstrate that the objectives have been achieved.

8. Online Update

The application provides online update for the internal logical files.

Score As – Descriptions to Determine Degree of Influence
0 – None.
1 – Online update of one to three control files is included. Volume of updating is low and recovery is easy.
2 – Online update of four or more control files is included. Volume of updating is low and recovery is easy.
3 – Online update of major internal logical files is included.
4 – In addition, protection against data loss is essential and has been specially designed and programmed into the system.
5  In addition, high volumes bring cost considerations into the recovery process. Highly automated recovery procedures with minimum operator intervention are included.

9. Complex Processing

Complex processing is a characteristic of the application. The following components are present:
• Sensitive control (for example, special audit processing) and/or application specific security processing
• Extensive logical processing
• Extensive mathematical processing
• Much exception processing resulting in incomplete transactions that must be processed again, for example, incomplete ATM transactions caused by TP interruption, missing data values, or failed edits
• Complex processing to handle multiple input/output possibilities, for example, multimedia, or device independence

Score As Descriptions To Determine Degree of Influence
0  None of the above.
1  Any one of the above.
2  Any two of the above.
3  Any three of the above.
4  Any four of the above.
5  All five of the above.

10. Reusability

The application and the code in the application have been specifically designed, developed, and supported to be usable in other applications.

Score As Descriptions To Determine Degree of Influence
0  No reusable code.
1  Reusable code is used within the application.
2  Less than 10% of the application considered more than one user's needs.
3  Ten percent (10%) or more of the application considered more than one user's needs.
4  The application was specifically packaged and/or documented to ease re-use, and the application is customized by the user at source code level.
5  The application was specifically packaged and/or documented to ease re-use, and the application is customized for use by means of user parameter maintenance.

11. Installation Ease

Conversion and installation ease are characteristics of the application. A conversion and installation plan and/or conversion tools were provided and tested during the system test phase.
Score As Descriptions To Determine Degree of Influence
0  No special considerations were stated by the user, and no special setup is required for installation.
1  No special considerations were stated by the user, but special setup is required for installation.
2  Conversion and installation requirements were stated by the user, and conversion and installation guides were provided and tested. The impact of conversion on the project is not considered to be important.
3  Conversion and installation requirements were stated by the user, and conversion and installation guides were provided and tested. The impact of conversion on the project is considered to be important.
4  In addition to 2 above, automated conversion and installation tools were provided and tested.
5  In addition to 3 above, automated conversion and installation tools were provided and tested.

12. Operational Ease

Operational ease is characteristic of the application. Effective start-up, back-up, and recovery procedures were provided and tested during the system test phase. The application minimizes the need for manual activities, such as tape mounts, paper handling, and direct on-location manual intervention.

Score As Descriptions To Determine Degree of Influence
0  No special operational considerations other than the normal back-up procedures were stated by the user.
1-4  One, some, or all of the following items apply to the application. Select all that apply. Each item has a point value of one, except as noted otherwise.
  • Effective start-up, back-up, and recovery processes were provided, but operator intervention is required.
  • Effective start-up, back-up, and recovery processes were provided, but no operator intervention is required (count as two items).
  • The application minimizes the need for tape mounts.
  • The application minimizes the need for paper handling.
5  The application is designed for unattended operation.
Unattended operation means no operator intervention is required to operate the system other than to start up or shut down the application. Automatic error recovery is a feature of the application.

13. Multiple Sites

The application has been specifically designed, developed, and supported to be installed at multiple sites for multiple organizations.

Score As Descriptions To Determine Degree of Influence
0  User requirements do not require considering the needs of more than one user/installation site.
1  Needs of multiple sites were considered in the design, and the application is designed to operate only under identical hardware and software environments.
2  Needs of multiple sites were considered in the design, and the application is designed to operate only under similar hardware and/or software environments.
3  Needs of multiple sites were considered in the design, and the application is designed to operate under different hardware and/or software environments.
4  Documentation and a support plan are provided and tested to support the application at multiple sites, and the application is as described by 1 or 2.
5  Documentation and a support plan are provided and tested to support the application at multiple sites, and the application is as described by 3.

14. Facilitate Change

The application has been specifically designed, developed, and supported to facilitate change. The following characteristics can apply for the application:
• Flexible query and report facility is provided that can handle simple requests; for example, and/or logic applied to only one internal logical file (count as one item).
• Flexible query and report facility is provided that can handle requests of average complexity, for example, and/or logic applied to more than one internal logical file (count as two items).
• Flexible query and report facility is provided that can handle complex requests, for example, and/or logic combinations on one or more internal logical files (count as three items).
• Business control data is kept in tables that are maintained by the user with online interactive processes, but changes take effect only on the next business day.
• Business control data is kept in tables that are maintained by the user with online interactive processes, and the changes take effect immediately (count as two items).

Score As Descriptions To Determine Degree of Influence
0  None of the above.
1  Any one of the above.
2  Any two of the above.
3  Any three of the above.
4  Any four of the above.
5  All five of the above.

Skill Builder: The following questions are used to help build on the concepts discussed in this section. They are designed to encourage thought and discussion.
1. What is the value adjustment factor if all of the general system characteristics scored a value of 5 (strong influence)?
2. What is the value adjustment factor if each of the general system characteristics has no influence (a score of 0)?
3. What is the origin of the .65 in the value adjustment factor calculation?
4. What is the possible (theoretical) range of the value adjustment factor?

www.SoftwareMetrics.Com  Longstreet Consulting Inc

HISTORY AND IFPUG

Objective of Section: To provide a brief history of Function Points and describe IFPUG.

Brief History: Function Point Analysis was first developed by Allan J. Albrecht in the mid 1970s. It was an attempt to overcome difficulties associated with lines of code as a measure of software size, and to assist in developing a mechanism to predict the effort associated with software development. The method was first published in 1979, then later in 1983.
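The value adjustment factor the Skill Builder above asks about is the summation of the 14 general system characteristic scores divided by 100, plus .65. A minimal sketch of that calculation (Python; the function name is illustrative, not part of the manual):

```python
def value_adjustment_factor(gsc_scores):
    """Compute the VAF from the 14 general system characteristic
    scores, each rated 0 (no influence) to 5 (strong influence)."""
    if len(gsc_scores) != 14:
        raise ValueError("expected 14 general system characteristics")
    if any(not 0 <= s <= 5 for s in gsc_scores):
        raise ValueError("each score must be between 0 and 5")
    total_degree_of_influence = sum(gsc_scores)
    return 0.65 + total_degree_of_influence / 100

# e.g. all fourteen scores at 5 give 0.65 + 70/100 = 1.35,
# and all fourteen scores at 0 give 0.65.
```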
In 1984 Albrecht refined the method, and since 1986, when the International Function Point User Group (IFPUG) was set up, several versions of the Function Point Counting Practices Manual have been published by IFPUG.

Growth and Acceptance of Function Point Analysis

The acceptance of Function Point Analysis continues to grow, as indicated by the growth of the International Function Point User Group (IFPUG). Since 1987, membership in IFPUG has grown from 100 members to nearly 600 members in 1997. Additionally, conference attendance has grown from 125 in 1988 to over 300 by 1997. Examination of IFPUG clearly indicates that the majority of its members are from North America, but Function Point Analysis growth outside North America is strong, as is evident from the growing number of function point organizations worldwide. There are numerous affiliate organizations of IFPUG, including organizations in Italy, France, Germany, Austria, India, The Netherlands, Australia, Japan, and several other countries.

The exercises at the end of the section help the student demonstrate that they have gained the basic knowledge required.

More Information about IFPUG: More information about joining IFPUG, conferences, and committees can be obtained by contacting IFPUG.
Website: www.IFPUG.Org
Email: Ifpug@Ifpug.org

CALCULATING ADJUSTED FUNCTION POINT

Objective of Section: Describe the calculations necessary for determining the final Function Point Counts. The exercises at the end of the section help the student demonstrate that they have gained the basic knowledge required.

Understanding the Equations: There are three sets of equations: new projects (Development), existing projects (Baseline or Application), and enhancement projects. There are two equations for enhancement projects.
The first equation accounts for the size of the enhancement project, while the second equation adjusts the size of the application.

Forget About the Equations for a Minute: The equations can be very cumbersome and there are many variables. Forget about the exact equations for a moment. When you develop a new application you need to know the entire size of the project. This means you would want to include the number of function points of the application plus any other function points that need to be developed. For example, you may need to develop a mini (temporary) application to assist with conversion efforts. So in the end, you would have the number of function points for the application to be installed plus any other functions you needed to develop.

When you have an enhancement project and you are going to modify an existing production application, you are concerned about two things. The first is the size of the actual enhancement project: how many function points is this project? The size of this project includes any added functionality, any changed functionality, and any deleted functionality. Also, in an enhancement project you may have other functionality needed that is not directly part of the enhancement project. Normally an enhancement project is the size of (any added functionality plus any changed functionality) x the value adjustment factor. The value adjustment factor normally does not change, there is normally no conversion effort, and so on.

The second concern is how the enhancement project changed the actual production application. Is the existing production application larger than before, and if it is larger, by how much? This would be any added functionality. Also, you would want to know of any functionality that existed before and is larger after the enhancement. In practice the size of the existing production application will be impacted more by added functionality.
Many organizations learn that existing application size does not change much, but they are changing existing functionality.

Definition: The final Function Point Count is obtained by multiplying the VAF times the Unadjusted Function Point count (UAF). The standard function point equation is:

FP = UAF x VAF

Where:
UAF = Unadjusted Function Points
VAF = Value Adjustment Factor

Unadjusted Function Point:

Type of Complexity of Components
Component                   Low            Average         High
External Inputs             ___ x 3 = ___  ___ x 4 = ___   ___ x 6 = ___
External Outputs            ___ x 4 = ___  ___ x 5 = ___   ___ x 7 = ___
External Inquiries          ___ x 3 = ___  ___ x 4 = ___   ___ x 6 = ___
Internal Logical Files      ___ x 7 = ___  ___ x 10 = ___  ___ x 15 = ___
External Interface Files    ___ x 5 = ___  ___ x 7 = ___   ___ x 10 = ___
Total Number of Unadjusted Function Points _____

Development Project Function Point Calculation: Use the following formula to calculate the development project function point count. Notice there is an additional term, CFP, which is conversion function points. Often when a new application is replacing an old application, the data must be converted. Sometimes a "mini application" needs to be developed to assist in the conversion. This mini application does not exist after the new application is up and running. This is why the development function point calculation is different from the application function point count (see below).

DFP = (UFP + CFP) x VAF

Where:
DFP is the development project function point count
UFP is the unadjusted function point count
CFP is the unadjusted function point count added by the conversion
VAF is the value adjustment factor

Application Function Point Count (Baseline): Use the following formula to establish the initial function point count for an existing application. The user is receiving functionality. There are no changes to the existing functionality or deletions of unneeded functionality.
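The weighting table and the FP = UAF x VAF definition above can be expressed as a short calculation. This is a minimal sketch (Python); the weights are the ones in the table, but the function and dictionary names are mine, not from the manual:

```python
# IFPUG complexity weights from the table above:
# component -> (low, average, high)
WEIGHTS = {
    "EI":  (3, 4, 6),    # External Inputs
    "EO":  (4, 5, 7),    # External Outputs
    "EQ":  (3, 4, 6),    # External Inquiries
    "ILF": (7, 10, 15),  # Internal Logical Files
    "EIF": (5, 7, 10),   # External Interface Files
}

def unadjusted_fp(counts):
    """counts maps a component type to a (low, average, high) tuple
    of how many components were rated at each complexity level."""
    return sum(
        n * w
        for comp, per_level in counts.items()
        for n, w in zip(per_level, WEIGHTS[comp])
    )

def adjusted_fp(uaf, vaf):
    """FP = UAF x VAF, the standard function point equation."""
    return uaf * vaf

# e.g. 10 low external inputs alone contribute 10 x 3 = 30 unadjusted FPs.
```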
The application function point count does not include conversion requirements.

AFP = ADD x VAF

Additionally, this equation is used to establish the function point count for an application at any point in time.

Where:
AFP is the initial application function point count.
ADD is the unadjusted function point count of those functions that were installed by the development project. Since many enhancement projects (that were not counted) have been installed in the application, the ADD in this case represents all functionality that exists within the application boundary at a particular point in time.
VAF is the value adjustment factor of the application.

Enhancement Project Function Point Calculation: Use the following formula to calculate the size of enhancement projects.

EFP = [(ADD + CHGA + CFP) x VAFA] + (DEL x VAFB)

Where:
EFP is the enhancement project function point count.
ADD is the unadjusted function point count of those functions that were added by the enhancement project.
CHGA is the unadjusted function point count of those functions that were modified by the enhancement project. This number reflects the functions after the modifications.
CFP is the function point count added by the conversion.
VAFA is the value adjustment factor of the application after the enhancement project.
DEL is the unadjusted function point count of those functions that were deleted by the enhancement project. It is important to use the absolute value of DEL, not a negative value.
VAFB is the value adjustment factor of the application before the enhancement project.

In practice, VAFA = VAFB = VAF, so the equation becomes:

EFP = (ADD + CHGA + CFP + DEL) x VAF

Also, normally CFP = 0, so the equation simplifies further:

EFP = (ADD + CHGA + DEL) x VAF

Simplification of the equation: To examine the equation in detail, let's assume that VAFA = VAFB = 1 and CFP = 0. Hence EFP = ADD + CHGA + DEL.
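The enhancement sizing equation above can be sketched as code. A minimal illustration (Python; the parameter names mirror the manual's acronyms, but the function itself is mine):

```python
def enhancement_fp(add, chg_a, delete, vaf_after, vaf_before, cfp=0):
    """EFP = [(ADD + CHGA + CFP) x VAFA] + (DEL x VAFB).
    DEL is taken as an absolute value, as the text requires."""
    return (add + chg_a + cfp) * vaf_after + abs(delete) * vaf_before

# With VAFA = VAFB = 1 and CFP = 0 the formula collapses to
# ADD + CHGA + DEL, the simplification discussed above.
```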
That is, the size of an enhancement project is a summation of all added functionality, changed functionality, and any deleted functionality. In theory and in practice, each piece of the formula must be adjusted by the appropriate value adjustment factor. Assume now that VAFA ≠ VAFB. The added and changed-after functionality is adjusted by VAFA, but the deleted functionality is adjusted by VAFB. Additionally, if CFP ≠ 0, then it should be adjusted by VAFA.

Application After Enhancement Project:

AFP = [(UFPB + ADD + CHGA) - (CHGB + DEL)] x VAFA

Where:
UFPB = Unadjusted Function Point Count Before Enhancement.
AFP = Application Function Point Count.
DEL = the number of function points deleted (here taken as the negative value).
All other acronyms are the same as before.

Of course, an enhancement calculation can add and/or delete functionality from the UFPB. Added functionality can be due to new components, or it can be due to an increase in size of existing components. For example, an existing external input could go from low to average (its value going from 3 to 4).

In Practice: Normally VAFA = VAFB = VAF, so the equation can be rearranged:

AFP = (UFPB + ADD + CHGA - CHGB - DEL) x VAF

Let's assume that CHGA = CHGB and DEL = 0. Then

AFP = (UFPB + ADD) x VAF

Skill Builder: The following questions are used to help build on the concepts discussed in this section. They are designed to encourage thought and discussion.
1. An application has a base unadjusted function point count of 500 and a value adjustment factor of 1.10. What is the adjusted function point count?
2. An application has 100 unadjusted function points and a value adjustment factor of 1.02. An enhancement project adds 25 function points, deletes 20 function points, and changes 15 function points (in this case assume CHGB = CHGA). The new value adjustment factor is 1.05.
3. What is the new (after the enhancement) adjusted function point count?
4.
What is the enhancement function point count?

An application has the following: 10 Low External Inputs, 12 High External Outputs, 20 Low Internal Logical Files, 15 High External Interface Files, 12 Average External Inquiries, and a value adjustment factor of 1.10.
5. What is the unadjusted function point count?
6. What is the adjusted function point count?

CASE STUDIES

Objective of Section: The Case Studies require the student to put several pieces of knowledge together to solve the case study. The case studies ensure that the student is grasping and understanding not only individual components but also the components as they relate to each other. While the exercises at the end of each section are intended to be guided practice, the case studies are intended to be independent practice. The student should be able to solve the case study working alone or in a small group without instructor guidance.

Crossword Puzzle

Across
1. Updates an ILF
5. Mountains in Northern Italy
7. The summation of the GSC's divided by 100 + .65
8. An EO contains this
11. Good Bye (Italian)
12. Ability to modify data through an elementary process
13. Contains Logical Information
14. Dracula's title
15. Not physical but
16. A unique user recognizable field
17. Establishes what functions are included in the function point count
18. Read or maintained by transaction
20. What EI's, EO's and EQ's are called
21. Another measure of software size (Abbreviation)
22. The specification, construction, testing, and delivery of a new information system
24. International Function Point User Group (Abbreviation)
26. Function points should be counted from the " " view
28. Another word for Reused (No. 10 of 14)
31. Function points are not hard they are ..
32. The first function point count

Down
1. A change to a baseline application
2. From inside to outside of the boundary, contains no derived data
3. Logical Groups of Data inside boundary
4. Not to give but to…
6. Mediterranean ____
9. Collection of automated procedures and data supporting a business objective
10. Flying alone (a single person)
12. Brooks thinks this is mythical
19. A characteristic of an entity
23. Another name for a software bug
25. The set of questions that evaluate the overall complexity of an application
27. The eternal city
29. Not hello, but good ___
30. Sí (English meaning)

Collection Letter

Dear ____,
Our records indicate that you are past due ____. If you do not pay within ____, then we will kindly repo your ____.
Warm Regards,
____

Example letter

December 18, 1999
Dear Mr. Harmon,
Our records indicate that you are past due 255 days. If you do not pay within 5 days from the date of this letter, then we will kindly repo your red Ford 150 Truck. Please have a Merry Christmas and prosperous New Year.
Warm Regards,
Rocky Balboa

Questions and other information
• The number of past due days (num of days) is the date of the letter minus the due date. The due date is derived from the Payment File.
• Pay day is calculated.
• Repo Man is read from the Employee File.
• Title and Last Name are read from the Customer File.
• The greeting is based upon the date of the letter and an appropriate message from the Greeting File.

What are the data elements? Is this letter an EO or an EQ, and why? How many FTR's?

Control Inputs
1. How many data elements are on the "Checking Preferences" control screen?
2. How many data elements are on the "General Preferences" control screen?
3. How many control inputs are represented by the menu items to the right?
4. If the "default" reads values from a control file, then how is "default" treated?

Graphical Information
1.
What are the external outputs?
2. What are the data elements for each EO?
3. How are the legends treated?

Graphs Part II

There are two data ILF's that contain information needed to produce the graph. There is an additional control file which alters the way the graph looks.
1. Is there a control EI and control ILF for "graphs"?
2. How many total FTR's are referenced for the graphs?
3. Does this graph represent another EO?

The Weather Application Release 1.0

The following application was designed to capture temperature and rainfall by city and state. There is only one input screen, one file, and one report. Each field on the following input screen can be modified (added, changed, or deleted). The add and change functions are different. All previous entries are viewed by using the scroll bar. Assume a VAF of 1.0.

Weather Storage File: City, State, Temperature, Rain Fall, Date

Report ("Average Temperature and Rain Fall by City and State"): columns Temperature, Rain Fall, Date
City 1 State 1: Detail Readings for City 1, then Averages
City 2 State 2: Detail Readings for City 2, then Averages

Based on the weather application, fill in the following table. The exercise is designed to identify the exact number of data elements.

Component (EI, EO, EQ, ILF and EIF) | Number of Data Elements | What are the data elements?

What is the total unadjusted number of function points?

Adding A New Customer

The following two screens are used to add a new customer to an application. The customer is not considered added until both Address Information and Additional Information are completed. The OK and Next buttons both save information to the file.

Figure 6

There are four drop-down list boxes on the Additional Info tab (Type, Terms, Rep and Tax Item). The first three (Type, Terms and Rep) are read from files that are maintained by the application.
Tax Item is hard coded. Please ignore the "Define Fields" button. The drop-down lists Type, Rep and Terms are displayed at the end of this case study. For this part of the application please answer the following questions.
1. How many external inputs are there?
2. How many total data elements are there on the external input?
3. What are the data elements?
4. In terms of function points, what are Type, Terms and Rep (see next page)?
5. In terms of function points, how are Type and Terms treated the second time they appear?
6. The Rep and Terms drop-down boxes are used again when invoices are created.

Figure 7

Enhanced Weather Application Release 2.0

Release 2.0 is an enhancement to "The Weather Application" Release 1.0. The user wants the ability to save temperature as either Celsius or Fahrenheit. To accomplish this, a radio button is added to the input screen, which allows the user to select either Celsius or Fahrenheit. An additional field is added to the file, and an additional field is added to the reports. Assume that the value adjustment factor increases to 1.14.

How many "enhancement" function points does this represent?
What is the baseline function point count of Release 2.0?

BikeWare Release 1.0

BikeWare is a software product designed for competitive bike riders. BikeWare captures and stores a variety of information. BikeWare is for a single rider only. The rider wants to be able to change, add or delete information about a ride or rider. The following information is either entered by the rider or calculated. All bold items are stored.
The following information is grouped logically into two major groups (ride and rider):

Ride Information
• Average Speed
• Bike Chill Factor, where:
  T = Temperature during the Ride
  W = Average Speed
  X = .303439 sqr(W) - .0202886 W
  Bikechill = Int(91.9 - (91.4 - T)(X + .474266))
• Cadence
• Calories Burned = Exponential((.092037 x Average Speed) - 4.26) x (Duration of Ride) x (Weight of Rider)
• Date of the Ride
• Distance of the Ride
• Duration of Ride
• Temperature during the Ride

Rider Information
• Age (age of rider in years)
• Weight (weight of rider)
• Sex (either male or female)

Graphs
Four separate graphs (see below) can be created by days, by weeks, or by months for each item below. A different set of calculations is used depending on whether the graph is a days, weeks, or months graph. Each graph is available online or as a hard copy, and the processing logic is different.
• Distance of Ride
• Average Speed
• Duration of Ride
• Calories Burned

For BikeWare determine the following information:
• Identify the external inputs. How many data elements and how many files will be referenced?
• How many file types referenced are there for the add, the change, and the delete? Is it always the same?
• How many internal logical files are there, and what are the data elements?
• How many external outputs? Describe the external outputs also.
• How many data elements for each external output?

Pizza Screen Design

Option 1
Toppings are read from another application (the kitchen application). If a topping is not available it is not displayed. The cost of the pizza is calculated automatically. When the OK button is clicked the Toppings, Pizza Crust Type and Cost of Pizza are saved.

Option 2
The items in the drop-down box are hard coded, not read from a file. Available Toppings are read from another application (the kitchen application).
When a Topping is selected from Available Toppings, it is copied to Selected Toppings.

Figure 8

The Cost of the Pizza is automatically calculated. When the OK button is clicked the Selected Toppings, Pizza Crust Type and Cost of Pizza are saved.

What are the differences, if any, between Option 1 and Option 2? Please fill in the table below.

Option 1: Component | Data Elements
Option 2: Component | Data Elements
Note: Components are external inputs, external inquiries, internal logical files, and external interface files.

Figure 9

www.PIZZACLUB.COM

Part 1
WWW.PizzaClub allows customers to order pizza via the Internet. The following is only one screen of many screens.
1. Once the customer has accessed www.PizzaClub.Com they fill out this screen.
2. When the customer clicks on the form, the information is saved to a file.
3. If any of the fields are not filled out (populated), the customer receives an error message telling them "All fields must be populated."
4. What are the data elements?
5. How many unadjusted function points do this screen and one file represent?

Figure 10

Part 2
Www.PizzaClub.com is going to be enhanced. Instead of allowing the customer to type city and state, they will input the zip code number. The application will search the zip code file and then automatically populate City and State. The customer can override the populated fields. The zip code file is maintained by another application.

How many unadjusted function points does this enhancement represent? What are the new components? What data elements are impacted?

Control Information
What are the data elements in the following control screen (alignment)?
Figure 11 - Control Screen

How many data elements are there in the following "Data Entry Preferences" control screen? If this control screen updates one internal logical file, then how many unadjusted function points does this represent?

Figure 12

Word Problem 1

Let's assume your productivity rate is 10 hours per function point (it takes 10 hours of work to deliver a function point). Additionally, assume your cost per hour is $60. Therefore, the cost to deliver 1 function point is $600.
1. How much would it cost to develop an application with 5,000 function points?
2. Let's assume you anticipate a maintenance rate of $100 per function point. How much needs to be budgeted to cover maintenance expenses for the first year?
3. Assume that the application will be operational for 6 years (application life expectancy is 6 years). Maintenance costs will be fixed at $100 per function point per year. What is the total expected cost of the application, including all development and maintenance costs?
4. Should maintenance costs be considered when developing an application? How do you determine expected maintenance costs?

Word Problem 2

Assume the same cost per function point as before ($600). Suppose a tool vendor claims that his tool will increase productivity by 50 percent and cut your cost per function point in half. Assume the following to be true: you are planning on implementing 1,000 function points over the next year, and you want the tool to pay for itself within 1 year.
1. What is the maximum amount you would be willing to pay for this tool? (You want to break even.)
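The arithmetic behind these word problems can be sketched in a few lines. This is an illustrative calculation using the figures given above ($60 per hour, 10 hours per function point, $100 per function point per year for maintenance); the function names are mine, not part of the manual:

```python
def development_cost(function_points, hours_per_fp=10, rate_per_hour=60):
    """Cost to deliver an application, given a productivity rate
    (hours per function point) and an hourly labor rate."""
    return function_points * hours_per_fp * rate_per_hour

def lifetime_cost(function_points, years, maint_per_fp_year=100,
                  hours_per_fp=10, rate_per_hour=60):
    """Development cost plus a fixed per-FP annual maintenance cost."""
    development = development_cost(function_points, hours_per_fp, rate_per_hour)
    maintenance = function_points * maint_per_fp_year * years
    return development + maintenance

# At 10 hours/FP and $60/hour, one function point costs $600 to deliver,
# so a 5,000 FP application costs $3,000,000 to develop, and $100/FP/year
# maintenance adds $500,000 per year on top of that.
```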
Community-acquired pneumonia in children: guidelines for treatment [CONCISE REVIEWS OF PEDIATRIC INFECTIOUS DISEASES] Nelson, John D. M.D. Department of Pediatrics, The University of Texas Southwestern Medical Center, Dallas, TX This brief review focuses on management of pediatric patients older than 3 months who have community-acquired pneumonia without empyema and who are otherwise normal and without anatomic or immunologic abnormalities. The subject of pleural empyema was discussed recently in these pages by Campbell and Nataro.1 In children with serious acute infectious diseases for which there is specific antimicrobial therapy, such as meningitis, urinary tract infections and osteomyelitis, obtaining a fluid or tissue specimen for microbiologic analysis is easy and routine. Such is not the situation in acute pneumonia. We rarely do biopsy or needle aspiration of lung tissue and only a small fraction of children with pneumonia have bacteremia or pleural empyema. Bacterial cultures of the nasopharynx or throat correlate poorly with cultures of lung tissue and are as likely to confound as to clarify the etiology. Certain clinical and radiographic signs point one toward viral, bacterial or mycoplasmal etiology, but there is so much overlap that they have little value when dealing with one sick child. The physician faces two decisions: antibiotics versus no antibiotics and, if the former course is elected, broad-spectrum versus narrow spectrum antibiotics. The seemingly easy way out of this dilemma would be to give broad-spectrum antibiotics to every child with pneumonia. This course of action is wasteful and exposes the majority who have viral infections to the real risk of superinfection with resistant bacteria and adverse effects of the drug. 
In addition, the bacterial ecology of our society has suffered from past decades of over-prescribing antibiotics to patients with respiratory infections, resulting in resistant bacteria such as the current problem with resistant Streptococcus pneumoniae. Besides doing cultures for bacteria and viruses, several investigators in recent years have used antigen detection methods and antibody studies to define probable etiologic agents in children with community-acquired pneumonia.2–4 However, their best efforts leave 30–40% of cases unexplained. In August 1999 the Food and Drug Administration approved a rapid immunochromatographic test (NOW®; Binax, Inc., Portland, OR) for pneumococcal antigen in urine which has good sensitivity and specificity for diagnosis of pneumococcal disease in adults. It has not been tested in children. The high proportion of normal children colonized with pneumococci may compromise the specificity of the test. In a remarkable study published in 1971, Mimica and colleagues 5 in Chile performed needle aspiration of the lung for bacterial culture in 530 infants and children with acute pneumonia and found positive culture results in 235 (44%). The presumption that the remainder were caused by viruses or mycoplasma is confounded by the fact that 370 of the children had received antibiotics before the lung aspiration was done, so growth of susceptible bacteria could have been inhibited. The prior antibiotics might also explain the large number of isolations of Staphylococcus aureus they found, since it is known from past experience and studies that broad spectrum antibiotics, such as the tetracycline and chloramphenicol which their patients had taken, cause rapid changes in respiratory microflora and frequent overgrowth of staphylococci.

© 2000 Lippincott Williams & Wilkins, Inc. Pediatr Infect Dis J, Volume 19(3), March 2000, pp 251-253
The status of Chlamydia pneumoniae as a lower respiratory pathogen is unsettled. Serologic studies prove that the majority of children develop antibodies to this organism during the school years of ages 5 to 15, but the clinical illness correlates are far from clear. This epidemiology mimics that of Mycoplasma pneumoniae. Because both microorganisms are susceptible in vitro to macrolides, it has been tempting to conclude that school-aged children with possible Mycoplasma or Chlamydia illness should be treated with a macrolide. Tempering this common recommendation is knowledge that, although erythromycin and tetracycline had a modest beneficial effect in controlled studies of M. pneumoniae disease in adults, this has not been shown in children, and controlled studies of erythromycin for C. pneumoniae disease have not been done in adults or children. From the recent studies of community-acquired pneumonia 2–4 and from experience we can make several generalizations about microbial etiology: respiratory viruses are most common overall; among bacteria, pneumococci are most common; Haemophilus influenzae type b used to be as common as pneumococci in infants but it has virtually disappeared in immunized populations; Mycoplasma and Chlamydia infections are equally common in school-aged children and may cause half or more of the cases (keeping in mind the above-expressed reservation about Chlamydia and pneumonia); group A streptococci and Staphylococcus aureus are uncommon but, when they occur, cause severe disease; other bacteria, rickettsia and parasites are rare. When assessing a child with community-acquired pneumonia, I find it useful to attempt categorization according to several factors.

The Age Factor

Adenoviruses and parainfluenza type 3 are common in infancy. Respiratory syncytial virus sometimes causes pneumonia in young infants, but it more typically causes bronchiolitis. Among bacteria, Staphylococcus aureus is most likely to occur in the first 6 months of life.
S. pneumoniae and H. influenzae type b are equally common between 6 months and 2 years of age in unimmunized populations. During school age years, the incidence of pneumonia drops sharply but the proportion of cases possibly attributable to M. pneumoniae and C. pneumoniae is high.

The Epidemiology Factor

Respiratory syncytial virus infection is seasonal in the late fall and winter months and influenza virus infections occur in epidemics during these periods as well. The day care setting is conducive to sharing of respiratory viruses. There are no firm data about day care centers and pneumococcal infection, but almost surely strains are shared freely. Epidemiologic links to hospitals or to recently hospitalized individuals used to be an important risk factor for methicillin-resistant Staphylococcus aureus infection, but MRSA strains are widespread in most communities nowadays.

The Radiographic Feature

An interstitial radiographic picture is characteristic of viral infection, lobar consolidation is the hallmark of pneumococcal disease and bronchopneumonia patterns can be caused by any microbe. The problem with these generalizations is that there are many exceptions.

The Vaccine Factor

Immunization with protein conjugated H. influenzae type b vaccine virtually eliminates that etiologic possibility. The conjugated pneumococcal vaccines currently undergoing field trials appear to decrease the likelihood of pneumococcal pneumonia, but they will not eliminate it because not all antigenic types are included in the vaccine.

The Severity of Illness Factor

This is a non-factor in trying to discriminate among categories of infectious agents and, in my opinion, should not influence one's thinking.
After assessing the patient and pondering the above factors, the physician must make decisions about diagnostic tests that may be indicated, about the need for hospitalization and about the need for antimicrobial therapy. Probably the hardest decision is the one to withhold antibiotic treatment. This is the best course of action when the patient has findings increasing the likelihood of viral infection (pharyngitis, rhinitis, or similar illness in family members), has no respiratory distress and is alert and active. Pneumonia developing after several days of non-specific respiratory illness increases the likelihood of bacterial superinfection of viral disease and probably warrants antibiotic treatment. For many years ambulatory infants and young children with suspected bacterial pneumonia have been treated orally with amoxicillin, amoxicillin-clavulanate or a cephalosporin. The thinking has been that the targeted pathogens are the same as those causing acute otitis media, so it makes sense to use the same rationale in selecting an antibiotic regimen for both conditions. These options and the impact of relatively resistant pneumococci were discussed in depth by a group of experts assembled by the Centers for Disease Control.6 For acute otitis media amoxicillin, in a larger dosage (80–90 mg/kg daily) than previously recommended, was suggested as primary therapy. Amoxicillin-clavulanate, cefuroxime axetil or, alternatively, ceftriaxone given intramuscularly were recommended as backup agents. For patients requiring hospitalization, a parenteral cephalosporin such as cefuroxime or cefazolin is adequate unless staphylococcal infection is suspected, in which case vancomycin is indicated since many community-acquired staphylococci are methicillin resistant. Many community-acquired S. aureus isolates remain susceptible to clindamycin.
(Most such patients have empyema that is beyond the scope of this article; see the article by Campbell and Nataro.1) Chumpa et al 7 of Boston's Children's Hospital reviewed their experience with pneumococcal bacteremia and found that the bacteremic children with pneumonia who were given a parenteral antibiotic at the initial clinic visit were less likely than those given an oral antibiotic to require subsequent hospitalization (0% versus 24%, p = 0.03). This should not be an endorsement for routine use of parenteral antibiotics, since those treated with oral antibiotics fared quite well.

References
1. Campbell JD, et al. Pediatr Infect Dis J 1999;18:725–6.
2. Ruuskanen O, et al. Eur J Clin Microbiol Infect Dis 1992;11:217–23.
3. Block S, et al. Pediatr Infect Dis J 1995;14:471–7.
4. Wubbel L, et al. Pediatr Infect Dis J 1999;18:98–104.
5. Mimica I, et al. Am J Dis Child 1971;122:278–82.
6. Dowell S, et al. Pediatr Infect Dis J 1999;18:1–9.
7. Chumpa A, et al. Pediatr Infect Dis J 1999;18:1081–5.
12263
https://math.stackexchange.com/questions/4105146/prove-that-the-lines-parallel-to-the-angle-bisectors-through-the-midpoints-of-si
Prove that the lines parallel to the angle bisectors through the midpoints of sides of a triangle are concurrent at the Spieker center

Asked Apr 16, 2021; viewed 889 times.

I've been learning about triangle centers, and I have been able to prove that most of them exist. However, this one that I've come across has stumped me a bit, and I haven't been able to find any proofs or hints towards how to prove that this specific point always exists. I believe the point is called the Spieker center. The problem prompt goes a little like this:

Given a triangle ABC, construct lines parallel to the interior angle bisectors of angles A, B, and C, such that they pass through the midpoints of sides BC, CA, and AB respectively. Prove that these lines are concurrent.

Any ideas on how I should approach this proof?

Tags: geometry, euclidean-geometry, triangles

– asked Apr 16, 2021 at 22:41 by Boris Poris

Comment: To confirm: This is indeed called the Spieker center. – Blue, Apr 16, 2021 at 22:45

3 Answers

Answer (6 votes) – Math Lover, answered Apr 17, 2021 at 7:31

The concurrency is evident using the medial triangle property as stated in the answer by cosmo5; see the Wikipedia article on the medial triangle for more on it. Here is a more elementary proof simply using (i) the midpoint theorem, (ii) the alternate interior angles theorem, which states that if two parallel lines are cut by a transversal, the alternate interior angles are congruent, and (iii) the concurrency of the angle bisectors of a triangle.

D, E and F are the midpoints of the sides of △ABC. The line AM is the angle bisector of ∠A, and the line EN is parallel to AM through the midpoint E of side BC. Using the midpoint theorem, ADEF is a parallelogram and ∠DEF = ∠A. Using properties of parallel lines, ∠FEN = ∠FMA = ∠MAD = ∠A/2. So the line EN is the angle bisector of ∠DEF in △DEF. By the same logic, the parallel lines through the midpoints to the other two angle bisectors of △ABC must be the angle bisectors of ∠DFE and ∠EDF. Hence the proof is complete by the concurrency of the angle bisectors of a triangle.

Answer (6 votes) – cosmo5, answered Apr 17, 2021 at 5:29

Rotate the original triangle ABC by 180° about its centroid G and scale it down to half its size. It will coincide with the triangle formed by its midpoints D, E, F. The angle bisectors of ABC go to the angle bisectors of DEF. Since any line rotated by 180°, a half turn, remains parallel to itself, the angle bisectors of DEF are the lines to be constructed in the question. We know that the angle bisectors of any triangle concur at a single point, namely its incenter. Hence the lines through the midpoints and parallel to the angle bisectors of ABC concur at the incenter of DEF. The incenter of DEF is simply the image of the incenter of ABC under our transformation (a half turn about the centroid followed by scaling by half). The proof is complete.

Comment: +1. I got so caught-up in the bisector-parallels generalization that I looked right past the obvious "the Spieker center is the incenter of the medial triangle" approach for this specific case. :) ... Well, perhaps OP will still find the Extended Ceva Theorem helpful in exploring more circle centers.
– Blue, Apr 17, 2021 at 6:01

Comment: @Blue But your answer and the links therein were quite useful. Thank you for writing those previous answers! :) – cosmo5, Apr 17, 2021 at 6:03

Comment: It's worth noting that angle-bisector-ness, per se, isn't required for this argument. The rotate-and-scale transformation guarantees that any trio of cevians through A, B, C transfers to correspondingly-arranged parallel cevians through D, E, F; and, thus, the first trio's point of concurrence (if it exists, which it does in the case of the angle bisectors) transfers to the second's. – Blue, Apr 17, 2021 at 9:15

Comment: That is very nice! So we have a two-sided correspondence: parallel lines through A, D ⟺ corresponding cevians. – cosmo5, Apr 17, 2021 at 10:28

Answer (2 votes) – Blue, answered Apr 17, 2021 at 2:47, edited Apr 17, 2021 at 4:18

I'll exploit a result that I presented in an earlier answer, with different notation:

Extended Ceva's Theorem. Defining ratios of signed lengths

α₊ := |BA₊|/|A₊C|,  β₊ := |CB₊|/|B₊A|,  γ₊ := |AC₊|/|C₊B|,
α₋ := |CA₋|/|A₋B|,  β₋ := |AB₋|/|B₋C|,  γ₋ := |BC₋|/|C₋A|,

the lines A₊B₋, B₊C₋, C₊A₋ concur iff

α₊β₊γ₊ + α₋β₋γ₋ + α₊α₋ + β₊β₋ + γ₊γ₋ = 1.  (⋆)

If A₊, B₊, C₊ are midpoints, then α₊ = β₊ = γ₊ = 1, and (⋆) reduces to

α₋β₋γ₋ + α₋ + β₋ + γ₋ = 0.

However, we'll find that the angle bisector context makes it natural to express the "−" ratios in terms of the "+" ones, so I won't invoke the midpoint property prematurely. Focusing on the angle bisector from A, let A₊B₋ be parallel to that bisector, and let it meet the extended side AB at V. With u, v, θ as labeled in the figure, we can use the Law of Sines on △BA₊V and △CA₊B₋ to write

α₊u/(c+v) = sin(A/2)/sin θ = u/(b−v)  ⟹  v = (α₊b − c)/(α₊ + 1)  ⟹  β₋ := v/(b−v) = (α₊b − c)/(b + c).

Likewise,

γ₋ = (β₊c − a)/(c + a),  α₋ = (γ₊a − b)/(a + b).

Substituting into (⋆), we can write

(1+α₊)(1−β₊γ₊)a + (1+β₊)(1−γ₊α₊)b + (1+γ₊)(1−α₊β₊)c = 0.  (⋆⋆)

This general condition for concurrence of bisector-parallels through A₊, B₊, C₊ is clearly satisfied when α₊ = β₊ = γ₊ = 1 (i.e., when A₊, B₊, C₊ are midpoints). □
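Both arguments above can be sanity-checked numerically (my own sketch, not part of the original answers; the triangle coordinates and side lengths below are arbitrary). The first check builds the three midpoint-parallels directly and confirms that their pairwise intersections coincide; the second confirms the midpoint reduction α₋β₋γ₋ + α₋ + β₋ + γ₋ = 0, where the "−" ratios become (a−b)/(a+b), (b−c)/(b+c), (c−a)/(c+a):

```python
import math

def unit(vx, vy):
    n = math.hypot(vx, vy)
    return (vx / n, vy / n)

def bisector_dir(P, Q, R):
    # Direction of the interior angle bisector at vertex P of triangle PQR:
    # the sum of the unit vectors from P toward the other two vertices.
    u = unit(Q[0] - P[0], Q[1] - P[1])
    w = unit(R[0] - P[0], R[1] - P[1])
    return (u[0] + w[0], u[1] + w[1])

def midpoint(P, Q):
    return ((P[0] + Q[0]) / 2, (P[1] + Q[1]) / 2)

def meet(P1, d1, P2, d2):
    # Intersection of lines P1 + t*d1 and P2 + s*d2 (assumed non-parallel),
    # via Cramer's rule on the 2x2 system t*d1 - s*d2 = P2 - P1.
    det = d1[0] * (-d2[1]) + d2[0] * d1[1]
    t = ((P2[0] - P1[0]) * (-d2[1]) + d2[0] * (P2[1] - P1[1])) / det
    return (P1[0] + t * d1[0], P1[1] + t * d1[1])

# Check 1: the lines through the midpoints, parallel to the opposite
# vertices' angle bisectors, meet in one point (the Spieker center).
A, B, C = (0.0, 0.0), (7.0, 1.0), (2.0, 5.0)
lines = [
    (midpoint(B, C), bisector_dir(A, B, C)),
    (midpoint(C, A), bisector_dir(B, C, A)),
    (midpoint(A, B), bisector_dir(C, A, B)),
]
X = meet(*lines[0], *lines[1])
Y = meet(*lines[0], *lines[2])
print(X, Y)  # the two intersection points agree

# Check 2: with A+, B+, C+ at the midpoints, the "-" ratios reduce as above,
# and the Ceva-type condition (product plus sum vanishes) holds for any a, b, c.
def midpoint_condition(a, b, c):
    am = (a - b) / (a + b)   # alpha-
    bm = (b - c) / (b + c)   # beta-
    gm = (c - a) / (c + a)   # gamma-
    return am * bm * gm + am + bm + gm

for sides in [(3, 4, 5), (2, 2, 3), (5, 7, 11)]:
    print(midpoint_condition(*sides))  # ~0 every time
```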
12264
https://www.youtube.com/watch?v=rgaWcbG4QEk
IB Standard Level: Using the discriminant to determine the intersection of lines and curves Andrew Chambers 6030 subscribers 48 likes Description 6885 views Posted: 7 Jan 2017 Some IB Standard Level (and Additional Maths) style questions where students need to use the discriminant of a quadratic formula to decide whether a curve intersects a line. 3 comments Transcript: Okay. And I'm going to look at some more examples of SL questions where they put together two different topics. So in this particular example, we're going to look at where you have functions put together with using the discriminant to work out solutions of a quadratic. First, it's just worth being able to visualize what's happening. You're probably going to get, if you get this sort of question, some kind of quadratic and some kind of line. And they can have three different situations. You can either have two possible solutions, if they look like the first example; if there's a tangent, you'll get one solution; and if they don't intersect, you basically get no solutions, which is very similar to the idea of the discriminant, you know, the b^2 - 4ac. And it's basically the same idea, but it just adds an extra step that we have to do. So here's a kind of question that they might ask. So here's f of x = x^2 and g of x = x. Well, they intersect. So intersection is basically when they're going to be equal to each other. So that's what we would do on the first step of a question. So if they intersect then they must be equal, and then we would work with this kind of new equation here. Okay. So that's the sort of thing that we're going to do. So let's actually look at some questions on this. Here we go. So y = 2x + k, and that is a tangent to y = 2x^2 - 3x + 4. Find k. Well, let's just go back here a sec. If it's a tangent, there's going to be one solution. So that's the case that we've got here.
Now the first thing we do is say, well, they're going to intersect. So we just put them equal to each other. So 2x + k is equal to 2x^2 - 3x + 4. And we do what we normally do when we get this sort of position, which is we make it into a quadratic equal to zero. So there we go. I've just rearranged everything, made it equal to zero, and brought everything to one side. Now I've got this. Basically, it's very similar to the other kind of questions that you're used to. I've got a quadratic and, well, I want one solution. So, I want one solution to this quadratic here. Well, I now use my b^2 - 4ac. I need that to be equal to zero because that's going to be the one-solution case. And I'd always recommend doing this: write down a, b, and c. a is 2, b is -5, c is 4 - k. Stick all that in. So there we go. Remember your brackets. Brackets around the negative 5, all squared, minus 4, brackets 2, brackets 4 minus k. Use your brackets. And I get this thing here. And then if I just rearrange it, I get k = 7 over 8. And that's my solution. Let's have a look at another one. There we go. f of x = x^2 + 3. g of x is kx + 2. Again, I've got a quadratic and I've got a line graph intersecting at two points. So, if they're intersecting at two points, then I'm going to have two solutions. So, I'm going to have a case with two solutions. Find the values of k. Exactly the same as before: make them equal to each other. They're going to intersect, so this graph is equal to the other graph. Same again, rearrange it to make it into a quadratic equal to zero. So bring the minus kx and the minus 2 over to this side. As before, write down what a, b and c are. a is 1, b is minus k, c is 1. This time I want greater than zero because I want two solutions. So b^2 - 4ac is greater than zero. So I then get this. So that's minus k, all squared, use the brackets, minus 4, brackets 1, brackets 1, greater than zero. Now I just have a little bit more work to do because I end up with k^2 greater than 4.
You may for this last step just visualize what the graph of k squared would look like. It would be this one, and therefore it would be this part and this part of the graph. So k greater than 2 or k less than -2 would give you the answer. If you're not sure which way round these inequalities go, choose a number and see what happens. For example, when k is -3: -3 squared, yeah, that is greater than four. So that is the right way round. Okay. And let's have a look at one last one. So here's again two graphs. This time they do not intersect. Find k. So if they do not intersect, there are no solutions. But nevertheless, we start off just as if they were going to intersect: we still say that they're equal to each other. So we've got this graph equal to this graph, and basically if they don't intersect then this will have no solutions. So if it has no solutions, well, first off, make it into a quadratic equal to zero. If it's got no solutions, I need b^2 - 4ac less than zero. This is the case with no solutions. As before, a is 1, b is -2, c is -3 + k. Exactly the same as before: stick your numbers into the brackets, and I get 4 + 12 take away 4k less than zero, 16 - 4k less than 0, 16 less than 4k. Therefore k is going to be greater than 4. And again, that is my solution.
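The three cases in the transcript can be checked with a short script (my own sketch; the function name is illustrative). Writing the curve as ax^2 + bx + c and the line as mx + q, setting them equal gives ax^2 + (b - m)x + (c - q) = 0, and the sign of the discriminant counts the intersections:

```python
def intersections(a, b, c, m, q):
    """Number of intersections of y = a*x^2 + b*x + c with y = m*x + q."""
    A, B, C = a, b - m, c - q          # quadratic obtained by setting curve = line
    disc = B * B - 4 * A * C           # b^2 - 4ac decides the three cases
    if disc > 0:
        return 2
    if disc == 0:
        return 1
    return 0

# Example 1: y = 2x + 7/8 is tangent to y = 2x^2 - 3x + 4 (one solution)
print(intersections(2, -3, 4, 2, 7 / 8))   # 1
# Example 2: y = 3x + 2 cuts y = x^2 + 3 twice, since k = 3 > 2
print(intersections(1, 0, 3, 3, 2))        # 2
# With k = 1 (between -2 and 2), the same line misses the parabola
print(intersections(1, 0, 3, 1, 2))        # 0
```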
12265
https://pubmed.ncbi.nlm.nih.gov/30851351/
HOPON (Hyperbaric Oxygen for the Prevention of Osteoradionecrosis): A Randomized Controlled Trial of Hyperbaric Oxygen to Prevent Osteoradionecrosis of the Irradiated Mandible After Dentoalveolar Surgery - PubMed
Clinical Trial. Int J Radiat Oncol Biol Phys. 2019 Jul 1;104(3):530-539. doi: 10.1016/j.ijrobp.2019.02.044. Epub 2019 Mar 7.

Richard J Shaw 1, Christopher J Butterworth 2, Paul Silcocks 3, Binyam T Tesfaye 4, Matthew Bickerstaff 3, Richard Jackson 3, Anastios Kanatas 5, Peter Nixon 6, James McCaul 7, Prav Praveen 8, Terry Lowe 9, Manuel Blanco-Guzman 10, Lone Forner 11, Peter Brennan 12, Mike Fardy 13, Richard Parkin 14, Gary Smerdon 15, Ruth Stephenson 16, Tristan Cope 17, Mark Glover 18

Affiliations:
1 University of Liverpool, Liverpool, United Kingdom. Electronic address: rjshaw@liv.ac.uk.
2 Maxillofacial Prosthodontics, Department of Maxillofacial Surgery, University Hospital Aintree, Liverpool, United Kingdom.
3 Cancer Research UK Liverpool Clinical Trials Unit, University of Liverpool, Liverpool, United Kingdom.
4 Cancer Research UK Liverpool Cancer Trials Unit, Department of Molecular and Clinical Cancer Medicine, University of Liverpool, Liverpool, United Kingdom.
5 OMFS Department, Leeds Dental Institute, Leeds, United Kingdom.
6 Restorative Department, Leeds Dental Institute, Leeds, United Kingdom.
7 Regional Maxillofacial Unit, Queen Elizabeth University Hospital, Glasgow, United Kingdom.
8 Maxillofacial Office, Queen Elizabeth Hospital, Birmingham, United Kingdom.
9 Aberdeen Royal Infirmary, Aberdeen, Scotland.
10 Maxillofacial Unit, Musgrove Park Hospital, Taunton and Somerset NHS Foundation Trust, Taunton, United Kingdom.
11 Departments of Anesthesia and Oral and Maxillofacial Surgery, Centre of Head and Orthopedics, Copenhagen University Hospital, Copenhagen, Denmark.
12 Maxillofacial Unit, Queen Alexandra Hospital, Portsmouth, United Kingdom.
13 University Hospital of Wales, Cardiff, Wales, United Kingdom.
14 OMFS Department, ABUHB, Newport, Wales, United Kingdom.
15 DDRC Healthcare, Hyperbaric Medical Centre, Plymouth, United Kingdom.
16 NHS Grampian, Aberdeen Royal Infirmary, Aberdeen, Scotland.
17 North West Recompression Unit, Murrayfield Hospital, Holmwood, United Kingdom.
18 Hyperbaric Medicine Unit, St. Richard's Hospital, Chichester, United Kingdom.

PMID: 30851351 DOI: 10.1016/j.ijrobp.2019.02.044
Abstract

Purpose: Hyperbaric oxygen (HBO) has been advocated in the prevention and treatment of osteoradionecrosis (ORN) of the jaw after head and neck radiation therapy, but supporting evidence is weak. The aim of this randomized trial was to establish the benefit of HBO in the prevention of ORN after high-risk surgical procedures to the irradiated mandible.

Methods and materials: HOPON was a randomized, controlled, phase 3 trial. Participants who required dental extractions or implant placement in the mandible with prior radiation therapy >50 Gy were recruited. Eligible patients were randomly assigned 1:1 to receive or not receive HBO. All patients received chlorhexidine mouthwash and antibiotics. For patients in the HBO arm, oxygen was administered in 30 daily dives at 100% oxygen to a pressure of 2.4 atmospheres absolute for 80 to 90 minutes. The primary outcome measure was the diagnosis of ORN 6 months after surgery, as determined by a blinded central review of clinical photographs and radiographs. The secondary endpoints included grade of ORN, ORN at other time points, acute symptoms, pain, and quality of life.

Results: A total of 144 patients were randomized, and data from 100 patients were analyzed for the primary endpoint. The incidence of ORN at 6 months was 6.4% and 5.7% for the HBO and control groups, respectively (odds ratio, 1.13; 95% confidence interval, 0.14-8.92; P = 1). Patients in the hyperbaric arm had fewer acute symptoms but no significant differences in late pain or quality of life. Dropout was higher in the HBO arm, but the baseline characteristics of the groups that completed the trial were comparable between the 2 arms.

Conclusions: The low incidence of ORN makes recommending HBO for dental extractions or implant placement in the irradiated mandible unnecessary.
These findings are in contrast with a recently published Cochrane review and previous trials reporting rates of ORN (non-HBO) of 14% to 30% and challenge a long-established standard of care.
Copyright © 2019 Elsevier Inc. All rights reserved.
Comment in: Laden G. In Regard to Shaw et al. Int J Radiat Oncol Biol Phys. 2022 Mar 1;112(3):835-836. doi: 10.1016/j.ijrobp.2021.11.015. PMID: 35101199. | Shaw RJ. In Reply to Laden. Int J Radiat Oncol Biol Phys. 2022 Mar 1;112(3):836-837. doi: 10.1016/j.ijrobp.2021.11.011. PMID: 35101200.
Publication types: Clinical Trial, Phase III; Multicenter Study; Randomized Controlled Trial; Research Support, Non-U.S. Gov't
MeSH terms: Anti-Bacterial Agents / therapeutic use; Area Under Curve; Chlorhexidine / therapeutic use; Female; Humans; Hyperbaric Oxygenation / methods; Incidence; Male; Mandible / radiation effects; Mandible / surgery; Middle Aged; Mouthwashes / therapeutic use; Osteoradionecrosis / epidemiology; Osteoradionecrosis / prevention & control; Patient Dropouts / statistics & numerical data; Quality of Life; Tooth Extraction / adverse effects
Substances: Anti-Bacterial Agents; Mouthwashes; Chlorhexidine
Grants and funding: 12122/CRUK_/Cancer Research UK/United Kingdom; C23033/A9397/CRUK_/Cancer Research UK/United Kingdom; C23033/A12122/CRUK_/Cancer Research UK/United Kingdom
12266
https://ask.sagemath.org/question/8115/continued-fraction-expansion-of-quadratic-irrationals/
Continued fraction expansion of quadratic irrationals - ASKSAGE: Sage Q&A Forum
4 Continued fraction expansion of quadratic irrationals
continued_fractions
asked 14 years ago by Menny
Let me emphasize that I am very new to Sage and to computing in general, but I did research as much as I could before asking this question. It is well known that quadratic irrationals have eventually periodic continued fraction expansions. I really tried to find a function that gives me the period of a given quadratic irrational. The best I could find was in this forum: it explains that you can get the pre-period plus the period using GAP. This has two disadvantages: first, you can't get the pure period, i.e., the period without the pre-period, and second, the input is not the number itself but the polynomial it solves. If this polynomial has many positive roots, that is a problem. I tried to look also in PARI and didn't find anything. I did see it written that Sage itself does not have this function. Let me also remark that such an algorithm exists. My question: Is there a way (via packages that are contained in Sage) to find the period of a given quadratic irrational? Thanks a lot! Menny
Comments
It seems that you are correct that this is not the case - yet. In fact, see for something related. Can you point us to some of the algorithms? Also, the current expansion is in pure Python, so rather slow. kcrisman (14 years ago)
I didn't find the algorithm yet, but you can see that it is implemented in Maple: I'll go hunting for the algorithms. Menny (14 years ago)
It's also implemented (for some quadratic irrationals) in Mathematica / Wolfram|Alpha. See: (period is 6) and (period is large, you'd have to count it yourself I guess after clicking "more terms" a bunch of times).
benjaminfjones (14 years ago)
I think what @Menny wanted was something that gives the period without having to count and click more terms :) kcrisman (14 years ago)
I mainly want to be able to work with it through Sage and manipulate the results I'm getting. I can't seem to find the algorithms... I looked in Cohen's books, and asked for the help of Google, and didn't find anything yet. Is there a generic reference for number theory algorithms? Menny (14 years ago)
4 Answers
3 - answered 14 years ago by benjaminfjones
As far as I can tell, there aren't any packages available in Sage beyond the GAP package you mention above. There is a relatively simple algorithm for finding the period and I think it would be worth implementing this in the continued fractions module in Sage. I created a trac ticket for this enhancement: Trac #11345
A colleague of mine who works on diophantine approximation communicated an algorithm to me which I'll paraphrase below. The algorithm takes a specific quadratic irrational and would determine the pre-period (if desired), the periodic sequence, and the length of the period. The algorithm makes use of the following classical results (which I don't have specific references for yet).
Theorem (Galois): Let ζ ∈ Q(√D) be a positive quadratic irrational. Then ζ has a purely periodic continued fraction iff ζ > 1 and the conjugate of ζ under the map √D ↦ −√D is between -1 and 0.
Theorem (Lagrange?): The period of a purely periodic quadratic irrational ζ as above has length at most 2D (and this upper bound is sharp?). I'll need to find a reference for that.
Algorithm (to determine the period of a quadratic irrational number):
INPUT: ζ, a positive quadratic irrational in the form ζ = q + r√D for rational numbers q, r.
Check if ζ is purely periodic (using the theorem above). If so, we know that the period has length at most 2D.
We then determine the period by listing enough terms in the continued fraction approximation. If ζ isn't purely periodic, replace ζ with 1/(ζ − floor(ζ)) and go to step 1. We get the length of the pre-period by counting the number of iterations above, and we have an immediate upper bound, 2D, on how much farther we have to search for the length of the period.
Comments
I've already said this on the Trac ticket, but it might be worth seeing if there is any code from GAP we could use, at least as pseudo-code. Also, how would we check for this being a quadratic surd? Sometimes very complicated things (think Cardano's formula-type stuff) end up being surds upon clever manipulation. kcrisman (14 years ago)
Just as a note, seems to be almost ready... kcrisman (11 years ago)
2 - answered 11 years ago by rws
The issue (among other things) is addressed in Sage by ticket 14567, after implementation of which you will be able to say:
sage: K.<sqrt11> = QuadraticField(11)
sage: cf = CFF(sqrt11); cf
[3; (3, 6)]
sage: len(cf.period())
2
For literature, see also the OEIS database entry:
0 - answered 14 years ago by Menny
Via the reference within this page: I found a more elegant algorithm than the one sketched above:
Algorithm for finding the period of a quadratic irrational
Input: (a, b, c) ∈ Z × N × Z with b not a perfect square. This triple represents (a + √b)/c.
If c does not divide b − a², then multiply everything by |c| and get a new triple that we call (a′, b′, c′).
Set P_0 = a′, Q_0 = c′, d = b′.
For k ≥ 0 set
(a) α_k = (P_k + √d)/Q_k
(b) a_k = ⌊α_k⌋
(c) P_{k+1} = a_k Q_k − P_k
(d) Q_{k+1} = (d − P_{k+1}²)/Q_k
Then α_0 = (a + √b)/c = [a_0, a_1, a_2, ...]
Let k < l be the first pair of integers which satisfy (P_k, Q_k) = (P_l, Q_l); then the period of α_0 is (a_k, …, a_{l−1}).
Hopefully this will help implementing it...
Comments
That's very helpful. Thanks for the reference. I had already implemented a first draft of the algorithm I described above, but I think what you've described here is going to be much faster. If you want to look at what I've got so far, email me. benjaminfjones (14 years ago)
0 - answered 14 years ago by Menny
As a continuation of the reply by Benjamin, let me give some references and more information. Both theorems appear in the book "Continued Fractions" by Andrew Mansfield Rockett and Peter Szüsz. The first is Theorem 3 on page 45, and the second is a remark before Theorem 1 on page 50. The second is not stated correctly above (there should not be any coefficient before the square root):
Theorem (Lagrange Estimate): Let t = (P_0 + √D)/Q_0 with D not a perfect square and P_0, Q_0 integers. Then the length of the period of t is at most 2D.
A better asymptotic is given by the following (which is Theorem 1 on page 50):
Theorem: Let t be as above with the additional assumption that Q_0 divides D − P_0². Then, if L(t) denotes the length of the period of t, L(t) = O(√D log(D)).
I'm not sure what the constant is (it is related to the divisor function)... I didn't read the proof yet!
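The P_k, Q_k recurrence described in the answers above is straightforward to implement directly. The following is an illustrative plain-Python sketch (not the Sage API), assuming the triple (a, b, c) represents (a + √b)/c with b a positive non-square:

```python
import math

def cf_period(a, b, c):
    """Continued fraction of (a + sqrt(b))/c for integers a, b > 0, c != 0,
    with b not a perfect square. Returns (pre_period, period) as lists of
    partial quotients, using the P_k, Q_k recurrence quoted above."""
    if c == 0 or b <= 0 or math.isqrt(b) ** 2 == b:
        raise ValueError("need c != 0 and b a positive non-square")
    # Normalize so that Q0 divides d - P0^2: multiply numerator and
    # denominator by |c|, which scales the radicand by c^2.
    if (b - a * a) % c != 0:
        a, b, c = a * abs(c), b * c * c, c * abs(c)
    P, Q, d = a, c, b
    s = math.isqrt(d)

    def floor_alpha(P, Q):
        # exact floor((P + sqrt(d)) / Q), valid for either sign of Q
        return (P + s) // Q if Q > 0 else (-P - s - 1) // (-Q)

    seen, quotients, k = {}, [], 0
    while (P, Q) not in seen:
        seen[(P, Q)] = k
        ak = floor_alpha(P, Q)
        quotients.append(ak)
        P = ak * Q - P           # P_{k+1} = a_k Q_k - P_k
        Q = (d - P * P) // Q     # Q_{k+1} = (d - P_{k+1}^2)/Q_k, always exact
        k += 1
    start = seen[(P, Q)]
    return quotients[:start], quotients[start:]

# sqrt(11) = [3; 3, 6, 3, 6, ...], matching the Sage session quoted above
print(cf_period(0, 11, 1))   # ([3], [3, 6])
```

Repetition of a state (P_k, Q_k) pins down both the pre-period and the period, because each partial quotient depends only on that pair.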
12267
https://www.k5learning.com/free-math-worksheets/fourth-grade-4/word-problems/decimals
Decimals word problems
Adding and subtracting decimals
These grade 4 math worksheets have word problems involving the addition and subtraction of one-digit decimals. Some questions may i) have 3 terms, ii) involve comparisons, or iii) require conversions of fractions with a denominator of 10 or 100.
Worksheet #1, Worksheet #2, Worksheet #3, Worksheet #4, Worksheet #5, Worksheet #6
Similar: Writing and comparing fractions word problems; Mixed word problems
More word problem worksheets: Explore all of our math word problem worksheets, from kindergarten through grade 5.
12268
https://www.e-education.psu.edu/eme460/node/659
Compound Interest Formulas II
3. Uniform Series Compound-Amount Factor
The third category of problems in Table 1-5 demonstrates the situation where equal amounts of money, A, are invested at each time period for n time periods at an interest rate of i (the given information is A, n, and i), and the future worth (value) of those amounts needs to be calculated. This set of problems can be noted as F/A_{i,n}. The following diagram shows the amounts that occur. Think of it as this example: you are able to deposit A dollars every year (at the end of the year, starting from year 1) in an imaginary bank account that gives you i percent interest, and you can repeat this for n years (depositing A dollars at the end of each year). You want to know how much you will have at the end of the nth year.

| Payment | | A | A | ... | A | A (F = ?) |
| Period | 0 | 1 | 2 | ... | n-1 | n |

Figure 1-4: Uniform Series Compound-Amount Factor, F/A_{i,n}
In this case, utilizing Equation 1-2 can help us calculate the future value of each single investment and then the cumulative future worth of these equal investments.
The future value of the first investment, which occurs at time period 1, equals A(1+i)^(n-1). Note that the first investment occurs in time period 1 (one period after the present time), so it compounds for n-1 periods and the exponent is n-1. And similarly:
Future value of the second investment, occurring at time period 2: A(1+i)^(n-2)
Future value of the third investment, occurring at time period 3: A(1+i)^(n-3)
Future value of the last investment, occurring at time period n: A(1+i)^(n-n) = A
Note that the last payment occurs at the same time as F.
So, the summation of all future values is
F = A(1+i)^(n-1) + A(1+i)^(n-2) + A(1+i)^(n-3) + … + A
By multiplying both sides by (1+i), we will have
F(1+i) = A(1+i)^n + A(1+i)^(n-1) + A(1+i)^(n-2) + … + A(1+i)
By subtracting the first equation from the second one, all the intermediate terms cancel and we will have
F(1+i) − F = A(1+i)^n − A
which becomes:
Fi = A(1+i)^n − A
then
F = A[(1+i)^n − 1]/i      Equation 1-3
Therefore, Equation 1-3 can determine the future value of a uniform series of equal investments as F = A[(1+i)^n − 1]/i, which can also be written in the Table 1-5 notation as F = A * F/A_{i,n}. Then F/A_{i,n} = [(1+i)^n − 1]/i.
The factor [(1+i)^n − 1]/i is called the “Uniform Series Compound-Amount Factor” and is designated by F/A_{i,n}. This factor is used to calculate a future single sum, “F”, that is equivalent to a uniform series of equal end-of-period payments, “A”. Note that n is the number of time periods over which the equal series of payments occurs.
Please review the following video, Uniform Series Compound-Amount Factor (3:42).
Uniform Series Compound Amount Factor
Click for the transcript of "Uniform Series Compound-Amount Factor" video.
PRESENTER: In the third category, equal amounts of money A are invested or received at each time period for n number of time periods. n can be years or months, and the interest rate is i. And the question asks you to calculate the future value of these payments, a single sum of money that is equivalent to all these series of payments A. Here, the given information is A, n, and i. And F is the unknown parameter. These sets of problems can be displayed with the factor F slash A, or F/A. Again, the left side of this slash sign is the unknown parameter F, and the right side is the given variable, which is A. Here, you can see the equation to calculate F from A, i, and n. The mathematical proof of this equation is straightforward, and it is explained in Lesson One.
We can write this equation, using the factor notation, as F equals A multiplied by the factor. This factor is called the uniform series compound-amount factor. And it is used to calculate the future single sum F that is equivalent to a uniform series of equal end-of-period payments A. Let's work on an example to see how this factor can be used. Assume you save $4,000 per year and deposit it, at the end of the year, in an imaginary savings account or some other investment that gives you 6% interest rate per year, compounded annually, for 20 years, starting from year 1 to year 20. And you want to know how much money you will have at the end of the 20th year. First, we draw the time line. The left-hand side is the present time. We don't have anything there. Note that your investment starts from year 1 and runs to year 20. If there is no extra information in the question, and the question says you invest for 20 years, you need to assume your investment starts from year 1. So there is no payment at present time, or year zero. The right-hand side is the future time, which is a single-amount future value, and it is unknown. Your investment takes 20 years, so n equals 20. And above each year, you have to write $4,000, because you have a payment of $4,000 at the end of each year. So A equals $4,000, n, the number of years, is 20, i, the interest rate, is 6%, and F needs to be calculated. And F equals A times the factor F/A. In this factor, i is 6% and n is 20. And we use the equation to calculate F. And we find the answer. So if you invest $4,000 per year for 20 years, with 6% interest rate, you will have about $147,000 at the end of the 20th year.
Credit: Farid Tayari
Example 1-3: Assume you save 4,000 dollars per year and deposit it at the end of the year in an imaginary savings account (or some other investment) that gives you 6% interest rate (per year, compounded annually) for 20 years. How much money will you have at the end of the 20th year?

| Payment | | $4000 | $4000 | ... | $4000 | $4000 (F = ?) |
| Period | 0 | 1 | 2 | ... | 19 | 20 |

So A = $4000, n = 20, i = 6%, F = ?
Please note that n is the number of equal payments. Using Equation 1-3, we will have
F = A * F/A_{i,n} = A[(1+i)^n − 1]/i
F = A * F/A_{6%,20} = 4000 * [(1+0.06)^20 − 1]/0.06
F = 4000 * 36.78559 = 147,142.4
So, you will have 147,142.4 dollars at the end of the 20th year.

Table 1-8: Uniform Series Compound-Amount Factor
| Factor | Name | Formula | Requested variable | Given variables |
| F/A_{i,n} | Uniform Series Compound-Amount Factor | [(1+i)^n − 1]/i | F: future value of a uniform series of equal investments | A: uniform series of equal investments; n: number of time periods; i: interest rate |

4. Sinking-Fund Deposit Factor
The fourth group in Table 1-5 is similar to the third group, but instead of A as given and F as unknown, F is given and A needs to be calculated. This group illustrates the set of problems that ask you to calculate the uniform series of equal payments (or investments), A, to be invested for n time periods at interest rate i so that the accumulated future value of all payments equals F. Such problems are noted as A/F_{i,n} and are displayed in the following graph. Think of it as this example: you are planning to have F dollars in n years, and there is a savings account that can give you i percent interest. You want to know how much you have to deposit every year (at the end of the year, starting from year 1) to be able to have F dollars after n years.

| Payment | | A = ? | A = ? | ... | A = ? | A = ? (F) |
| Period | 0 | 1 | 2 | ... | n-1 | n |

Figure 1-5: Sinking-Fund Deposit Factor, A/F_{i,n}
Equation 1-3 can be rewritten for A (as the unknown) to solve these problems:
A = F {i/[(1+i)^n − 1]}      Equation 1-4
Equation 1-4 can determine the uniform series of equal investments, A, given the cumulative future value, F, the number of investment periods, n, and the interest rate, i. Table 1-5 notes these problems as A = F * A/F_{i,n}. Then A/F_{i,n} = i/[(1+i)^n − 1].
The factor i/[(1+i)^n − 1] is called the “Sinking-Fund Deposit Factor” and is designated by A/F_{i,n}. The factor is used to calculate a uniform series of equal end-of-period payments, A, that is equivalent to a future sum F. Note that n is the number of time periods over which the equal series of payments occurs.
Please watch the following video, Sinking Fund Deposit Factor (4:42).
Sinking Fund Deposit Factor
Click for the transcript of "Sinking Fund Deposit Factor" video.
PRESENTER: The fourth group is similar to the third one, but A is the unknown and F is the given variable. This set of problems asks you to calculate the uniform series of equal payments, A, to be invested for n number of time periods at interest rate i, where the accumulated future value of all payments, or the equivalent future value, is F. This set of problems can be summarized with the factor A over F, or A slash F. The left side of this slash sign is the unknown parameter; here it is A. And the right side is the given variable, which is F. Equation 1-3 for the uniform series compound-amount factor can be rewritten for A as the unknown to solve these problems, which gives Equation 1-4. Equation 1-4 can determine the uniform series of equal investments, A, for accumulated future value F, number of investment periods n, and interest rate i. We can write this equation according to the factor notation: A equals F times the factor A over F. This factor is called the Sinking-Fund Deposit Factor. And it is displayed by A slash F. The factor is used to calculate the uniform series of equal end-of-period payments, A, that are equivalent to a future sum, F. For example, referring to Example 1-3 in the previous video, let's say you plan to have $200,000 after 20 years. And you are offered an investment, which can be the imaginary savings account, that gives you 6% per year compound interest rate.
And you want to know how much money, in equal payments, you need to save each year, or invest (deposit in your account) at the end of each year. So in summary, you want to have $200,000 after 20 years, and you can invest your money with 6% interest rate. The question is, how much do you need to invest per year? Again, the first step is drawing the time line. The left-hand side is the present time. We won't have any payment there, so there is no payment at present time, or time zero. The right-hand side is the future, and you want to have a single amount of $200,000. So you write $200,000 at the 20th year, at the right-hand end of the time line. Note that $200,000 has the same time dimension as the last payment, A; both are in the 20th year. Your investment takes 20 years, so n equals 20. And above each year, you have to write A, which is unknown and needs to be calculated. So F equals $200,000, n, the number of years, is 20, i, the interest rate, is 6%, and A needs to be calculated. We can use the factor notation to summarize the equation. In this factor, i is 6%, n is 20, F is given, and A needs to be calculated. And we calculate the result. So if you want to have $200,000 in 20 years from now with 6% interest rate, you will need to invest equal amounts of $5,437 per year at the end of each year for 20 years, starting from year one.
Credit: Farid Tayari
Example 1-4: Referring to Example 1-3, assume you plan to have 200,000 dollars after 20 years, and you are offered an investment (imaginary savings account) that gives you 6% per year compound interest rate. How much money (equal payments) do you need to save and invest (deposit into your account) at the end of each year?

| Payment | | A = ? | A = ? | ... | A = ? | A = ? (F = 200,000) |
| Period | 0 | 1 | 2 | ... | 19 | 20 |

So F = $200,000, n = 20, i = 6%, A = ?
Using Equation 1-4, we will have
A = F * A/F_{i,n} = F {i/[(1+i)^n − 1]}
A = F * A/F_{6%,20} = 200,000 * 0.06/[(1+0.06)^20 − 1]
A = 200,000 * 0.0271846 = 5,436.9
So, in order to have 200,000 dollars at the 20th year, you have to invest 5,436.9 dollars at the end of each year for 20 years at an annual compound interest rate of 6%.

Table 1-9: Sinking-Fund Deposit Factor
| Factor | Name | Formula | Requested variable | Given variables |
| A/F_{i,n} | Sinking-Fund Deposit Factor | i/[(1+i)^n − 1] | A: uniform series of equal end-of-period payments | F: cumulative future value of investments; n: number of time periods; i: interest rate |

Note that i/[(1+i)^n − 1]
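The two factors above are reciprocals of each other and are easy to check numerically. Here is a minimal Python sketch (function names are illustrative, not part of the course materials) reproducing Examples 1-3 and 1-4:

```python
def fa_factor(i, n):
    """Uniform series compound-amount factor: F/A = [(1+i)^n - 1] / i."""
    return ((1 + i) ** n - 1) / i

def af_factor(i, n):
    """Sinking-fund deposit factor: A/F = i / [(1+i)^n - 1]."""
    return i / ((1 + i) ** n - 1)

# Example 1-3: deposit A = $4,000 at the end of each year, i = 6%, n = 20
F = 4000 * fa_factor(0.06, 20)
print(round(F, 1))    # 147142.4

# Example 1-4: target F = $200,000 after n = 20 years at i = 6%
A = 200000 * af_factor(0.06, 20)
print(round(A, 1))    # 5436.9
```

Note that af_factor(i, n) is simply 1/fa_factor(i, n), mirroring how Equation 1-4 is obtained by rewriting Equation 1-3 for A.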
12269
https://www.98thpercentile.com/blog/exploring-the-properties-of-a-parallelogram
Exploring the Properties of a Parallelogram
ElevatEd Math
January 23, 2025
A geometric form with a long history and many intriguing characteristics is the parallelogram. It is among the most studied quadrilaterals in geometry because it offers a special blend of intricacy and simplicity that makes it both approachable and captivating. This blog examines the main characteristics of a parallelogram, along with its geometric properties, related theorems, and applications to practical problems.
What is a Parallelogram?
A parallelogram is a four-sided polygon with two pairs of opposite sides that are parallel and equal in length. This definition immediately tells us a lot about the shape's symmetry and characteristics. Beyond this simple explanation, however, the parallelogram's parallel sides and internal angles give rise to several fascinating features.
Key Properties of a Parallelogram
The opposite sides are parallel and equal: The most basic characteristic of a parallelogram is that its opposite sides are both parallel and equal in length. This means that since the two sides have the same slope, they will never meet even if extended. In mathematical terms, if the vertices of a parallelogram are designated as A, B, C, and D, then the sides AB and CD are parallel and equal in length, as are the sides AD and BC.
Equal Opposite Angles: Another essential characteristic is the equality of a parallelogram's opposite angles. If we designate the angles of a parallelogram as ∠A, ∠B, ∠C, and ∠D, then ∠A = ∠C and ∠B = ∠D. This follows from the fact that a transversal cutting parallel lines produces equal alternate angles.
Adjacent Angles are Supplementary: In a parallelogram, neighboring angles are supplementary, which means that their sum equals 180°. For all adjacent pairs of angles, for instance, ∠A + ∠B = 180°, ∠B + ∠C = 180°, and so on.
When tackling problems involving angle relationships in parallelograms, this characteristic is crucial. Diagonals bisect one another: One of the most intriguing features of a parallelogram is that its diagonals bisect each other. If you draw diagonals AC and BD in parallelogram ABCD, each diagonal is divided into two equal-length segments at their intersection. This property underpins several theorems about parallelograms and provides a geometric basis for further study. Area of a parallelogram: The area of a parallelogram is the product of its base and height: Area = Base × Height. Special cases: when a parallelogram becomes another quadrilateral: The parallelogram is a flexible shape that, under additional conditions, becomes other well-known quadrilaterals. A parallelogram becomes a rectangle when all four angles are right angles (90°). It becomes a rhombus when all four sides have the same length. A square is the special case that is both a rectangle and a rhombus: all of its sides and angles are equal. Real-World Applications of Parallelograms The properties of parallelograms are not only theoretical; they are applied in a variety of disciplines, including physics, engineering, and architecture. For instance, structural elements such as beams and trusses are designed so that forces are distributed uniformly across parallel sides. The diagonal-bisection property also contributes to the stability of constructions like roofs and bridges. To sum up, the parallelogram is a basic geometric form with many intriguing characteristics that make it essential to both mathematics and real-world uses.
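As a small illustrative sketch (the vertex coordinates below are chosen for the example, not taken from the article), the diagonal-bisection and area properties above can be checked numerically:

```python
# Illustrative check of two parallelogram properties using coordinates.
# Vertices A, B, C, D are an arbitrary example parallelogram.
A = (0.0, 0.0)
B = (5.0, 0.0)
D = (2.0, 3.0)
C = (B[0] + D[0] - A[0], B[1] + D[1] - A[1])  # C = B + (D - A), so ABCD is a parallelogram

# Property: diagonals AC and BD bisect each other (share a common midpoint).
mid_AC = ((A[0] + C[0]) / 2, (A[1] + C[1]) / 2)
mid_BD = ((B[0] + D[0]) / 2, (B[1] + D[1]) / 2)
assert mid_AC == mid_BD  # both midpoints coincide

# Property: area = base * height. With AB on the x-axis, base = |AB| = 5
# and height = vertical distance from AB up to side DC = 3.
base = abs(B[0] - A[0])
height = D[1] - A[1]
print(base * height)  # 15.0
```

The same area also falls out of the cross product of the edge vectors AB and AD, which is the vector-algebra view of "base times height".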
The parallelogram provides insight into the relationships between forms, angles, and vectors, from the fundamental notion of parallel and equal sides to more advanced ideas like diagonals bisecting one another and the formula for area. The parallelogram remains important to our understanding of the world, whether in applied science or pure geometry. FAQs (Frequently Asked Questions): Q1: What is a parallelogram? Ans: A parallelogram is a quadrilateral (four-sided polygon) with two pairs of opposite sides that are both parallel and equal in length. Q2: What makes a parallelogram different from other quadrilaterals? Ans: Unlike other quadrilaterals, a parallelogram has opposite sides that are both parallel and equal. This distinguishes it from trapezoids (only one pair of parallel sides) and other four-sided shapes without parallel sides. Q3: Why do the diagonals of a parallelogram bisect each other? Ans: In a parallelogram, the diagonals cross at their common midpoint because the parallel, equal opposite sides create a symmetry that divides each diagonal into two equal parts. Q4: How are parallelograms used in real-world applications? Ans: Parallelograms are used in structural engineering, design, and physics. For example, the parallelogram law of vector addition helps calculate resultant forces, and stable shapes like beams and trusses rely on the properties of parallelograms for balance and support.
https://www.picmonic.com/api/v3/picmonics/2495/pdf
Negative Predictive Value (NPV) Negative predictive value refers to the probability that a person with a negative test result does not have the tested disease. NPV allows clinicians to explain to patients the likelihood of a negative result being truly negative. The formula to calculate NPV is True Negatives (TN) divided by the sum of True Negatives (TN) and False Negatives (FN), or NPV = TN / (TN + FN). Proportion of Negative Tests that are Truly Negative Proportion of Negative tests that are Truly Negative with Tin A negative test result does not always mean a patient is free of the disease; a false negative may still occur. More false negatives will lower the NPV. Probability that Person with Negative Test is Healthy Probability-spinner showing Negative-tests as Healthy or Diseased This refers to the probability of NOT having a disease out of all the people who tested negative for it. Formula (TN) True Negatives True Negative with Tin A true negative is a person who does not have a disease and tests negative for the disease. TN is the numerator in the formula. Divided by / Divide True negatives (TN) form the numerator, which is divided by the denominator described below. All Negative Test Results All Negative The denominator is the sum of all negative results for a given test. This includes people who have the disease but test negative for it, otherwise known as false negatives. (FN + TN) Fin Plus Tin Add the number of true negatives to the number of false negatives to obtain the denominator in the equation. Considerations Varies Inversely with Prevalence Up-arrow Disease with Down-arrow NPV Prevalence is the proportion of patients who are currently affected by a disease. A common disease has a high prevalence, while a rare disease has a low prevalence. As the prevalence and pretest probability of a disease increase, the negative predictive value decreases.
This inverse relationship is in contrast to the direct relationship shared by positive predictive value and these two factors.
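The NPV formula and its inverse relationship with prevalence can be sketched as follows (a minimal illustration; the function names and example counts are ours, not part of the resource):

```python
def negative_predictive_value(tn: int, fn: int) -> float:
    """NPV = TN / (TN + FN): the proportion of negative test
    results that are truly negative."""
    return tn / (tn + fn)

def npv_from_rates(sensitivity: float, specificity: float, prevalence: float) -> float:
    """NPV computed from test characteristics. Per unit of population:
    TN = (1 - prevalence) * specificity, FN = prevalence * (1 - sensitivity)."""
    tn = (1 - prevalence) * specificity
    fn = prevalence * (1 - sensitivity)
    return tn / (tn + fn)

# Hypothetical counts: 90 true negatives and 10 false negatives.
print(negative_predictive_value(90, 10))  # 0.9

# Same test (90% sensitivity, 90% specificity) at two prevalences:
# NPV falls as prevalence rises, the inverse relationship described above.
print(round(npv_from_rates(0.9, 0.9, 0.01), 3))  # 0.999
print(round(npv_from_rates(0.9, 0.9, 0.50), 3))  # 0.9
```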
https://www.plannedparenthood.org/about-us/newsroom/press-releases/condoms-help-prevent-hpv
Condoms Help Prevent HPV For Immediate Release: Jan. 30, 2014 Planned Parenthood Urges Condom Use to Prevent Cervical Cancer New York, NY — In response to a new study published today by the New England Journal of Medicine (NEJM), Planned Parenthood Federation of America renewed its recommendation for sexually active individuals to use condoms to prevent sexually transmitted infections, including human papilloma virus (HPV), the leading cause of cervical cancer. The NEJM study found that consistent condom usage significantly reduces the risk of HPV transmission. "For sexually active individuals, condoms are the best protection against sexually transmitted infections. This important study confirms that condoms can protect against transmission of HPV, which means condoms continue to be an important option in cervical cancer protection," said Vanessa Cullins, PPFA vice president for medical affairs. "At Planned Parenthood our goal is to keep our patients safe, healthy, and cancer-free — and that includes educating them about correct and consistent condom use if they choose to be sexually active." The new NEJM study on condom usage and HPV transmission supports previous scientific studies that have shown that condoms significantly reduce the risk of transmission of HIV, gonorrhea, chlamydia and the herpes simplex virus as well. The study found that women whose partners used condoms during all instances of sexual intercourse were 70 percent less likely to become infected with HPV than those who used condoms only five percent of the time. "Condoms are an inexpensive and effective birth control method that every man and woman should have in their medicine cabinets," added Cullins. "This study proves that claims by condom-use opponents suggesting that condom use leads to increased numbers of HPV infections are false and alarmist."
In early June, the FDA approved the first vaccine against two types of human papilloma virus (HPV) that cause about 70 percent of cervical cancer cases. Worldwide, cervical cancer is the second leading cause of cancer deaths among women. Each year approximately 10,000 cases of cervical cancer are diagnosed in the United States, and 4,000 American women die from the disease. Source Planned Parenthood Federation of America Contact Erin Kiernon, 202-973-4975 Gustavo Suarez, 212-261-4339 Published May 11, 2014 Planned Parenthood Federation of America, Inc. (PPFA) works to protect and expand access to sexual and reproductive health care and education, and provides support to its member affiliates. Planned Parenthood affiliates are separately incorporated public charities that operate health centers across the U.S. as trusted sources of health care and education for people of all genders in communities across the country. PPFA is tax-exempt under Internal Revenue Code section 501(c)(3) - EIN 13-1644147. Donations are tax-deductible to the fullest extent allowable under the law. © 2025 Planned Parenthood Federation of America Inc.
https://www.sciencedirect.com/science/article/pii/S0196064421003061
Consensus Recommendations on the Treatment of Opioid Use Disorder in the Emergency Department Annals of Emergency Medicine Volume 78, Issue 3, September 2021, Pages 434-442 The practice of emergency medicine/concepts Kathryn Hawk MD, MHS; Jason Hoppe DO; Eric Ketcham MD; Alexis LaPietra DO; Aimee Moulin MD; Lewis Nelson MD; Evan Schwarz MD; Sam Shahid MBBS, MPH; Donald Stader MD; Michael P. Wilson MD; Gail D'Onofrio MD, MS Open access under a Creative Commons license The treatment of opioid use disorder with buprenorphine and methadone reduces morbidity and mortality in patients with opioid use disorder. The initiation of buprenorphine in the emergency department (ED) has been associated with increased rates of outpatient treatment linkage and decreased drug use when compared to patients randomized to receive standard ED referral. As such, the ED has been increasingly recognized as a venue for the identification and initiation of treatment for opioid use disorder, but no formal American College of Emergency Physicians (ACEP) recommendations on the topic have previously been published. The ACEP convened a group of emergency physicians with expertise in clinical research, addiction, toxicology, and administration to review literature and develop consensus recommendations on the treatment of opioid use disorder in the ED.
Based on literature review, clinical experience, and expert consensus, the group recommends that emergency physicians offer to initiate opioid use disorder treatment with buprenorphine in appropriate patients and provide direct linkage to ongoing treatment for patients with untreated opioid use disorder. These consensus recommendations include strategies for opioid use disorder treatment initiation and ED program implementation. They were approved by the ACEP board of directors in January 2021. Introduction In 2019, the National Safety Council announced that for the first time in history, a person in the United States was more likely to die of an unintentional opioid overdose than in a motor vehicle collision.1 After a brief decrease in opioid-associated mortality from 2017 to 2018 of 1.7% (47,600 to 46,802), the US Centers for Disease Control and Prevention (CDC) reported 50,042 deaths in 2019, an increase of 9.4%, with even greater increases in overdose deaths projected due to the coronavirus disease 2019 (COVID-19) pandemic.2 Provisional reporting by the CDC reveals new increases in rates of drug overdose in all US states, with an overall increase in drug overdose deaths of 26.8% and 19 states showing increases of more than 30% between August 2019 and August 2020.2 Increased availability of highly potent illicit fentanyl and fentanyl analogues and the social isolation and treatment interruption associated with the COVID-19 pandemic represent drivers of the worsening opioid crisis, augmenting existing barriers and treatment gaps in the opioid cascade of care, a quality measurement framework that includes treatment engagement, medication initiation, retention, and remission.3, 4, 5, 6 The treatment of opioid use disorder with buprenorphine or methadone has been associated with improved quality of life, reduced drug use, diminished HIV/Hepatitis C transmission, reduced opioid overdose, and decreased all-cause
mortality.7, 8, 9, 10, 11, 12 With only 18% of individuals with opioid use disorder receiving medication for opioid use disorder treatment within the past year,13 opioid overdose remains the leading cause of unintentional death for adults under the age of 50 in the United States, claiming an average of approximately 130 lives every day.14 The Opioid Crisis and Emergency Departments The first 2 decades of this millennium were characterized by dramatic increases in rates of opioid prescription, opioid overdose, and opioid-related utilization of inpatient and emergency department (ED) care.15, 16, 17 As the opioid crisis has worsened, ED visits for opioid-related adverse drug events, complications of injection drug use, and opioid withdrawal have become increasingly common, resulting in ED visits for opioid-related presentations more than doubling between 2010 and 2018.18,19 Patients who survive an opioid overdose are 100 times more likely to die by drug overdose in the following year and 18 times more likely to die by suicide compared to the general population.20 One-year mortality following an ED visit for opioid overdose is 4.7% to 5.5%.12,21,22 Despite the extraordinarily high mortality, only one third of patients seen in the ED for nonfatal overdose received medication for opioid use disorder in the following year.12 Importantly, compared to patients who did not receive medication for opioid use disorder after overdose, those who received buprenorphine had a significant reduction in mortality (adjusted hazard ratio 0.63; confidence interval [CI] 0.46 to 0.87), as did those receiving methadone (adjusted hazard ratio 0.47; CI 0.32 to 0.71).12 Analysis of another cohort of 6,451 commercially insured patients discharged from an ED after nonfatal opioid overdose found that only 16.6% of patients received treatment for opioid use disorder in the 90 following days.23 Emergency physicians have an opportunity to provide evidence-based interventions and improve the care of 
patients with untreated opioid use disorder. Evidence strongly supports the initiation of pharmacotherapy for patients with untreated opioid use disorder at any and all points of contact with the health care system; this has been advocated by the Surgeon General, the National Institute on Drug Abuse, the Substance Abuse and Mental Health Services Administration (SAMHSA), the National Academy of Sciences, and the American College of Emergency Physicians (ACEP).24, 25, 26, 27, 28, 29 The ED is often the only contact individuals with opioid use disorder have with the health care system, and initiating treatment during the visit can make an enormous contribution to improving access to lifesaving care for people with opioid use disorder. Fundamentals: Opioid Use Disorder and Food and Drug Administration (FDA)-Approved Medications Opioid use disorder is a chronic disease characterized by changes in brain function whose pathophysiology, like that of most chronic diseases, is heavily influenced by genetic and environmental factors.30 The Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition criteria for diagnosing opioid use disorder relate to loss of control, physiologic changes, and personal consequences, and the presence of these criteria defines mild (2 to 3 criteria), moderate (4 to 5), or severe (6 or more) disease.31 As with many chronic diseases, pharmacotherapy plays a central, not adjunctive, role in treatment of opioid use disorder. Individuals with moderate to severe opioid use disorder are eligible for initiation of medication for opioid use disorder. There are 3 medications approved by the FDA for the treatment of OUD. In the ED, initiation of treatment is driven by a combination of federal regulation, patient characteristics, and pharmacological properties. 
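The DSM-5 severity thresholds quoted above (mild: 2 to 3 criteria, moderate: 4 to 5, severe: 6 or more) can be sketched as a tiny helper. This is purely illustrative of the stated cutoffs; the function name and labels are our assumptions, and it is in no way a diagnostic tool:

```python
def oud_severity(criteria_met: int) -> str:
    """Map a count of DSM-5 opioid use disorder criteria met to the
    severity labels cited in the text (illustrative only)."""
    if criteria_met >= 6:
        return "severe"
    if criteria_met >= 4:
        return "moderate"
    if criteria_met >= 2:
        return "mild"
    return "criteria for diagnosis not met"

print(oud_severity(5))  # moderate
```

Per the text, individuals in the moderate or severe bands are eligible for initiation of medication for opioid use disorder.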
Naltrexone is a long-acting, competitive μ-opioid receptor antagonist that is used in the treatment of both opioid use disorder and alcohol use disorder.32 Naltrexone results in precipitated opioid withdrawal in patients dependent on opioid agonists, and patients must be abstinent from opioids for at least 7 to 10 days prior to administration by any route.26 Naltrexone does not treat opioid withdrawal and is not as effective at reducing mortality for patients with opioid use disorder as agonist opioid treatment.12 Methadone is a synthetic μ-opioid receptor full agonist used for treatment of chronic pain and for opioid use disorder. Although therapeutic use of methadone is generally safe, rapid dose escalation results in potentially fatal respiratory depression. Additionally, high doses may cause a similar effect, which is enhanced by combination with sedatives.32 Methadone is effective for the management of opioid withdrawal, and, as a full agonist, it does not cause precipitated withdrawal. Physicians can administer methadone in the ED or hospital for the treatment of opioid withdrawal, but the use of methadone for the treatment of opioid use disorder is limited to patients enrolled in regulated opioid treatment programs.33 Buprenorphine, a synthetic partial μ-opioid receptor agonist, is primarily used for the treatment of opioid use disorder, though it is also prescribed for pain. The μ-opioid receptor affinity is sufficiently strong to prevent other opioids from binding, exerting a “blocking effect.” As a partial agonist, buprenorphine has a ceiling effect on respiratory depression, meaning sedation and respiratory depression are diminished, and it has a similar plateau in analgesic efficacy.32 Due to its high affinity and partial agonism at the μ-opioid receptor, buprenorphine can behave as an antagonist by displacing a full agonist bound to the opioid receptor, leading to precipitated opioid withdrawal in opioid-dependent patients.
In patients with abstinence-related withdrawal or withdrawal precipitated by naloxone, buprenorphine provides a sufficient agonist effect to ameliorate the withdrawal syndrome. Consensus Recommendation Scope These recommendations provide evidence-based assistance for emergency physicians treating ED patients with opioid withdrawal and initiating treatment for opioid use disorder with direct linkage to ongoing addiction care. Methods ACEP staff solicited applications from experts in the ACEP Pain Management and Addiction Medicine Section to develop a consensus guideline that was conceived by ACEP and supported through a grant from SAMHSA. ACEP staff reviewed and approved applications and credentials for the 10 responding physicians with expertise in emergency and addiction medicine, medical toxicology, administration, and research from diverse practice settings. Consistent with the patient/population, intervention, comparisons, and outcomes (PICO) framework, experts identified a research question focused on ED practices to improve outcomes of patients with opioid use disorder to guide the literature search and consensus recommendations.34 Recommendations were discussed and reviewed iteratively by all members throughout the process. All members attested to meeting International Committee of Medical Journal Editors guidelines. These consensus recommendations were reviewed and approved by the ACEP board of directors on January 28, 2021. Literature Review A rapid review was conducted to inform the development of an evidence-based consensus guideline. The authors worked with a medical librarian to identify key words and phrases as well as inclusion and exclusion criteria to be applied in database searches.
The librarian then performed searches in the MEDLINE and Scopus databases on July 27, 2020, and August 14, 2020, using the following key words/phrases or variations and combinations of the key words/phrases: addiction treatment, analgesics, opioid, buprenorphine, naloxone drug combination, buprenorphine/naloxone, clonidine, harm reduction, heroin, lofexidine, medication for opioid use disorder, medication-assisted treatment, methadone, naloxone, opiate substitution treatment, opioid addiction, opioid overdose, opioid use disorder, opioid withdrawal, opioid-related disorders, precipitated withdrawal, suboxone, survival analysis, and thiorphan. All searches were limited to studies of adult humans. Additional publications were identified by reviewing the reference lists of selected publications and by consulting with content experts and were used to tailor the literature search. Resulting titles and abstracts were audited independently by 2 reviewers (JH and MW) to determine if the article addressed the PICO elements of treating withdrawal or initiation of treatment for opioid use disorder in ED patients with opioid use disorder or opioid withdrawal with a focus on the outcomes: mortality, morbidity (including accepting treatment for medical care not related to opioid use disorder), and linkage to opioid use disorder treatment. Exclusion criteria included initiation of opioid use disorder treatment in the outpatient setting, even if follow-up measurements included ED visits; economic impact, even if this involved the ED; and take-home naloxone, if this was the only intervention reported. Data collection and processing Multiple reports of single trials were deduplicated by the medical librarian and subsequently exported to Rayyan.35 The 2 primary reviewers (JH and MW) initially reviewed titles and abstracts independently, with subsequent inclusion of articles by consensus. In case of disagreement, the full text of the manuscript was examined. 
Disagreements in which consensus could still not be reached were resolved with a third reviewer (KH). All included manuscripts were subsequently made available to all experts on the consensus panel, who performed their own individual assessments of quality and bias. Rapid literature review results Seven hundred seventy-six articles published between January 1, 1970, and August 14, 2020, were identified in the searches. After the inclusion and exclusion criteria were applied, 60 articles were made available to consensus experts (Table E1). Results and Recommendations Based on the literature review, clinical experience, and expert consensus, we recommend that ED clinicians treat opioid withdrawal and offer buprenorphine with direct linkage to ongoing medication for opioid use disorder treatment for patients with untreated opioid use disorder. There is strong evidence demonstrating reduced morbidity and mortality for patients with opioid use disorder who are treated with opioid agonist treatment outside of the ED setting.7, 8, 9, 10, 11, 12 Initiating buprenorphine in the ED is effective for engaging patients in formal addiction treatment.
In a randomized controlled trial, 78% (89 of 114; 95% CI 70% to 85%) of ED patients with opioid use disorder who received buprenorphine in the ED with referral for ongoing buprenorphine were engaged in formal addiction treatment at 30 days, compared to the 37% (38 of 102; 95% CI 28% to 47%) and 45% (50 of 111; 95% CI 36% to 54%) of patients who received brief intervention with standard or facilitated referral, respectively.36 The buprenorphine group reduced the number of days of illicit opioid use per week from 5.4 days (95% CI 5.1 to 5.7) to 0.9 days (95% CI 0.5 to 1.3), versus reductions from 5.4 days (95% CI 5.1 to 5.7) to 2.3 days (95% CI 1.7 to 3.0) in the referral group and from 5.6 days (95% CI 5.3 to 5.9) to 2.4 days (95% CI 1.8 to 3.0) in the brief intervention group.36 An analysis from the health care perspective using cost-effectiveness acceptability curves found that at all positive willingness-to-pay values, ED-initiated buprenorphine treatment was more cost-effective than brief intervention with standard or facilitated referral.37 Many EDs have instituted buprenorphine programs, which, alongside the broad state-wide implementation by the California Bridge Project,38 have provided evidence of feasibility in ED populations.39, 40, 41 Massachusetts adopted legislation (Chapter 208 of the Acts of 2018) requiring acute care hospitals that provide emergency services to have protocols and the capacity to initiate opioid agonist therapy to patients who present after opioid-related overdoses.42,43 The use of buprenorphine has increased from 12.3 per 100,000 ED visits in 2002 to 2003 to 42.8 per 100,000 ED visits in 2016 to 2017 (odds ratio for linear trend 3.31; 95% CI 1.04 to 10.5).44 Patient Selection Individuals can be identified during their visit by direct history taking, review of electronic health record, physical examination, or screening techniques. 
In light of the widely recognized treatment gap and the high mortality of patients with untreated opioid use disorder, all patients meeting criteria for opioid use disorder who are not currently enrolled in treatment should be offered treatment and assessed for active suicidal ideation.13,18,20 Although concomitant use of buprenorphine and sedatives increases the risk of adverse outcomes, including overdose and death, medication for opioid use disorder should not be denied to patients with this high-risk use. In 2017, an FDA advisory specifically addressed this issue, stating that “buprenorphine should not be withheld from patients taking benzodiazepines or other medications that depress the central nervous system (CNS), despite potential side effects, given the potential harm of untreated opioid use disorder.”45 When initiating buprenorphine, patients should be educated about the risks of concomitant use of benzodiazepines, sedatives, opioid analgesics, and alcohol. Enhancing Patient Motivation to Start Treatment Using nonstigmatizing language and normalizing the process of ED initiation of buprenorphine and referral to ongoing treatment increases the likelihood that an individual will accept treatment. The Brief Negotiation Interview,46 which includes asking permission to discuss the patient's drug use, providing feedback, and enhancing motivation by eliciting the patient's reasons to change and negotiating next steps, is effective. Integration into the flow of the ED and the electronic record with decisional support will improve success.47,48 Protocols Although specific protocols may vary among EDs, most include assessment for (1) moderate to severe opioid use disorder using questions derived from the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition criteria; (2) the degree of opioid withdrawal by the Clinical Opiate Withdrawal Scale; and (3) pregnancy.
Depending on the degree of withdrawal, the patient should be offered treatment with buprenorphine in the ED or provided a prescription for unobserved (home) induction.49,50 Patients should be discharged with a prescription for sufficient buprenorphine until an outpatient appointment to an opioid treatment provider or program (ideally within 1 week).51,52 If no Drug Addiction Treatment Act of 2000 (DATA 2000)-waivered provider is available, a plan for access to medication under the “3-day rule” should be made (further information can be found in the section on ED Administration of Buprenorphine). The inclusion of harm reduction strategies (including overdose education and naloxone distribution) or prescriptions is also an essential component of the ED visit.53 Sample protocols are available online, in EM textbooks, and on the ACEP Emergency Medicine Quality Network Opioid Initiative website.38,49,50,54, 55, 56 In contrast to traditional outpatient induction protocols and FDA labeling,26 ED-based protocols often start with administration of at least 8 mg of buprenorphine for patients with clinical signs of opioid withdrawal, and some protocols include an option for 24 mg or more during the ED visit based on provider experience, buprenorphine and/or specialist consultation, or other factors.38,49,50,54, 55, 56 Referrals Each patient with ED-initiated buprenorphine should be provided with a direct specific referral (when possible, with an appointment time) to a provider that aligns with the patient’s insurance and other preferences. EDs should engage their community stakeholders in protocol development and enhance bidirectional communication to improve the continuity of care of patients with opioid use disorder. As with all chronic diseases, EDs are dependent on local resources. 
When no connection to local treatment providers exists, champions should consider online treatment provider resources such as the SAMHSA treatment finder website,57 which includes information about treatment services offered and payment/insurance. Adding health advocates or navigators to the ED who assist with motivating patients to accept treatment and navigating their paths to treatment may facilitate better outcomes.39,58 Whether these individuals are social workers, counselors, or peers, success depends on their ability to be integrated into the fabric of the ED.59 A process should be in place for reviewing the fidelity of these interventions to ensure that they remain evidence based.

Documentation

When prescribing buprenorphine for opioid use disorder, the diagnosis of opioid use disorder should be included in the patient’s medical record. Adding the diagnosis of opioid use disorder in the ED can expedite the referral process for many outpatient treatment providers and further support insurance preauthorization requests. This diagnosis should be supported by ED documentation.31,60

Implementation Tips

Sustained success is built on including multidisciplinary champions in developing protocols, normalizing the care of individuals with addiction, and developing monitoring and feedback systems for the staff. The inherent nature of ED care means that staff often observe only the negative consequences of untreated addiction rather than the successes that patients can have with evidence-based treatment. Leadership involvement in setting expectations for initiating treatment of opioid use disorder and providing referral for long-term addiction care is critical for departmental practice change.47,61,62 Integrating decision support into the electronic health record is highly effective for streamlining the process of ED-initiated buprenorphine with referral for ongoing treatment and is integral to patient and provider satisfaction.
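For illustration only, the assessment and disposition elements described in the Protocols section could be expressed as simple decision-support logic of the kind discussed here for electronic health record integration. This is a hypothetical sketch, not clinical guidance: the COWS threshold of 8 and every field, function, and message name are illustrative assumptions rather than values taken from any cited protocol.

```python
# Hypothetical sketch of ED decision-support logic for buprenorphine
# initiation. All names and the COWS threshold (8) are illustrative
# assumptions, not drawn from any cited protocol; this is not clinical guidance.
from dataclasses import dataclass


@dataclass
class Assessment:
    meets_dsm5_moderate_to_severe_oud: bool  # DSM-5-derived screening questions
    cows_score: int                          # Clinical Opiate Withdrawal Scale
    pregnant: bool


def plan(a: Assessment) -> list[str]:
    """Return a list of suggested next steps, mirroring the protocol elements
    described in the text (ED induction vs. home induction, harm reduction,
    and referral to ongoing treatment)."""
    if not a.meets_dsm5_moderate_to_severe_oud:
        return ["no OUD pharmacotherapy indicated; offer other resources"]
    steps = []
    if a.pregnant:
        steps.append("involve obstetric/addiction specialist per local policy")
    if a.cows_score >= 8:  # assumed threshold for clinically significant withdrawal
        steps.append("administer buprenorphine in the ED")
    else:
        steps.append("prescribe buprenorphine for unobserved (home) induction")
    steps.append("provide overdose education and naloxone")
    steps.append("refer to outpatient treatment, ideally within 1 week")
    return steps
```

A protocol encoded this way makes each branch point (withdrawal severity, pregnancy, screening result) explicit and auditable, which is one reason embedding such logic in the electronic health record can standardize practice.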
A pilot test of user-centered clinical decision support integrated within the electronic health record more than doubled prescription rates of ED-initiated buprenorphine and naloxone while doubling the number of unique physicians adopting the practice.48,63

DATA 2000 Training/DEA X-waiver

DATA 2000 permitted US physicians to obtain a Drug Enforcement Administration (DEA) DATA waiver (by applying to SAMHSA after completing an approved 8-hour course) to treat patients with opioid use disorder using buprenorphine, the only Schedule III, IV, or V medication that is FDA approved for the treatment of opioid use disorder.64,65 Once SAMHSA approves the physician’s application, the physician receives an X-waiver designation from the DEA. In effect, DATA 2000 enables physicians to prescribe buprenorphine for opioid use disorder to be filled by patients at a pharmacy rather than requiring them to obtain medication from a traditional opioid treatment program. The Comprehensive Addiction and Recovery Act of 2016 allowed qualifying physician assistants and nurse practitioners to obtain X-waivers after completing 24 hours of approved coursework.66 Patient limits are incorporated into both laws, but they refer only to outpatients in longitudinal care and are unlikely to be a limitation for emergency clinicians prescribing only from the ED.
Legislative and regulatory efforts are underway to reduce or eliminate the requirement for training and certification prior to prescribing buprenorphine for patients with opioid use disorder.67,68 On April 28, 2021, the Department of Health and Human Services released new practice guidelines exempting state-licensed, DEA-registered clinicians treating up to 30 patients at one time from completing the 8-hour (or 24-hour) DATA 2000 training previously required to obtain an X-waiver.69 Because patients count toward a clinician’s 30-patient limit only until care is transferred to another clinician or until 30 days after the prescription ends, most EM clinicians can practice under this exemption unless they prescribe outside of traditional EM settings. Importantly, physicians must still apply for an X-waiver by filing a Notice of Intent with SAMHSA, which may take up to 45 days to approve the application.69

ED Administration of Buprenorphine

An emergency physician has historically been able to administer buprenorphine in the ED under the “3-day rule” without having obtained an X-waiver.70 A patient may be discharged and return to the ED repeatedly within 72 hours to receive medication from the same or a different provider. The dose is not specified; however, the medication must be administered to the patient while in the ED. The “3-day rule” was recently modified in the Further Continuing Appropriations Act, 2021, and Other Extensions Act, which was signed into law on December 11, 2020.71 The Act requires the Attorney General (who will delegate this responsibility to the DEA), within 180 days, to revise current regulations governing the 3-day rule to allow practitioners to dispense not more than a 3-day supply of medication to one person, or for one person’s use, at one time for the purpose of initiating maintenance treatment or detoxification treatment (or both) without a DATA 2000 waiver.
Patients with opioid use disorder who are being admitted to the hospital for a medical condition may be initiated and maintained on medication for opioid use disorder while hospitalized without restriction, even without a DATA waiver.

Special Populations

Historically, buprenorphine as a single agent (without naloxone) was considered the preferred formulation for pregnant patients requiring medication for opioid use disorder, although recent investigation has revealed no adverse events with the combined buprenorphine/naloxone product.72 The American College of Obstetricians and Gynecologists recommends the use of either formulation.73 The addition of naloxone to buprenorphine is intended to reduce intravenous use, misuse, diversion, and street value, though the combination product has a higher cost for uninsured patients. Buprenorphine is approved for patients aged 16 years and older and should be considered for adolescents with opioid use disorder.33 It is important to check local regulations and hospital policy before prescribing to adolescents. Buprenorphine can be used safely in geriatric patients with opioid use disorder; however, because it is an opioid, dosing may need to be adjusted, and the patient and family should be educated about the risks of concomitant use of sedating medications.

Stigma and Language

Patients report feeling heavily stigmatized during health care interactions, sometimes leading to distrust and an unwillingness to seek medical care.74-76 Best practices support avoiding stigmatizing and derogatory terms such as “abuse,” “addict,” and being “clean.”74,75 Language should be patient-centered, professional, and objective. For example, person-centered phrases such as “person who injects drugs” and “patient with an opioid use disorder” avoid negative connotations and conflation of the individual with specific behaviors, facilitating the development of a therapeutic alliance with the patient.
Given the evidence supporting a central role of pharmacotherapy in the treatment of opioid use disorder, many professionals are making a concerted effort to move away from the term “medication-assisted treatment,” replacing it with the more accurate “medication for opioid use disorder.”26,27,74 Those who desire to preserve the medication-assisted treatment acronym often use “medication for addiction treatment.”77 The updated terminology conveys that medication, or the option of it, is essential for treating patients with opioid use disorder, whereas the term “medication-assisted treatment” implies that medication is more of an adjunctive therapy.77-79 Stigma is often unwitting, unrecognized, and unaddressed by physicians, and stigmatizing attitudes, language, and behaviors are often solidified during training. Raising awareness of language, attitudes, and behaviors that are not patient centered among all members of the health care team is essential to transforming ED culture and the care of opioid use disorder.

Conclusion

Detecting and offering evidence-based treatments for patients with opioid use disorder is aligned with the goals of emergency medicine to intervene on high-mortality disease processes. The window of opportunity to begin lifesaving pharmacotherapy may be fleeting. The ED’s 24-hour, 365-day accessibility ideally positions it as a cornerstone of care to close the enormous treatment gap for patients with opioid use disorder.61

Acknowledgments

The authors would like to thank Travis Schulz, MLS, AHIP, for assistance in designing and conducting the literature search. The authors would also like to acknowledge the contributions of our patients with opioid use disorder, from whom we continue to learn.

Supplementary Data

Supplemental Table 1 (Word document, 32 KB).

References

1. National Safety Council. For the First Time, We’re More Likely to Die From Accidental Opioid Overdose Than Motor Vehicle Crash. Accessed December 13, 2020.
2. National Center for Health Statistics. Provisional Drug Overdose Death Counts. 2021. Accessed March 24, 2021.
3. Wilson N, Kariisa M, Seth P, et al. Drug and opioid-involved overdose deaths - United States, 2017-2018. MMWR Morb Mortal Wkly Rep. 2020;69:290-297.
4. Volkow ND. Collision of the COVID-19 and addiction epidemics. Ann Intern Med. 2020;173:61-62.
5. Alter A, Yeager C. COVID-19 impact on US national overdose crisis. 2020. Accessed March 24, 2021.
6. Williams AR, Nunes EV, Bisaga A, et al. Development of a Cascade of Care for responding to the opioid epidemic. Am J Drug Alcohol Abuse. 2019;45:1-10.
7. Wakeman SE, Larochelle MR, Ameli O, et al. Comparative effectiveness of different treatment pathways for opioid use disorder. JAMA Netw Open. 2020;3:e1920622.
8. Mattick RP, Breen C, Kimber J, et al. Buprenorphine maintenance versus placebo or methadone maintenance for opioid dependence. Cochrane Database Syst Rev. 2014:CD002207.
9. Mattick RP, Breen C, Kimber J, et al. Methadone maintenance therapy versus no opioid replacement therapy for opioid dependence. Cochrane Database Syst Rev. 2009:CD002209.
10. Marsch LA. The efficacy of methadone maintenance interventions in reducing illicit opiate use, HIV risk behavior and criminality: a meta-analysis. Addiction. 1998;93:515-532.
11. Sordo L, Barrio G, Bravo MJ, et al. Mortality risk during and after opioid substitution treatment: systematic review and meta-analysis of cohort studies. BMJ. 2017;357:j1550.
12. Larochelle MR, Stopka TJ, Xuan Z, et al. Medication for opioid use disorder after nonfatal opioid overdose and mortality. Ann Intern Med. 2018;169:137-145.
13. Substance Abuse and Mental Health Services Administration. Key Substance Use and Mental Health Indicators in the United States: Results from the 2018 National Survey on Drug Use and Health. HHS Publication No. PEP19-5068. 2019. Accessed December 14, 2020.
14. National Institute on Drug Abuse. Opioid Overdose Crisis. Accessed December 5, 2020.
15. Hasegawa K, Brown DF, Tsugawa Y, et al. Epidemiology of emergency department visits for opioid overdose: a population-based study. Mayo Clin Proc. 2014;89:462-471.
16. Tadros A, Layman SM, Davis SM, et al. Emergency visits for prescription opioid poisonings. J Emerg Med. 2015;49:871-877.
17. Weiss AJ, Elixhauser A, Barrett ML, et al. Opioid-Related Inpatient Stays and Emergency Department Visits by State, 2009-2014. Agency for Healthcare Research and Quality; HCUP Statistical Brief #219. 2016. Accessed December 5, 2020.
18. Fingar K, Skinner H, Johann J, et al. Geographic Variation in Substance-Related Inpatient Stays Across States and Counties in the United States, 2013-2015. Agency for Healthcare Research and Quality; HCUP Statistical Brief #245. 2018. Accessed December 14, 2020.
19. Healthcare Cost and Utilization Project. HCUP Fast Stats. Accessed March 9, 2021. www.hcup-us.ahrq.gov/faststats/opioid/opioiduse.jsp
20. Goldman-Mellor S, Olfson M, Lidon-Moyano C, et al. Mortality following nonfatal opioid and sedative/hypnotic drug overdose. Am J Prev Med. 2020;59:59-67.
21. Weiner SG, Baker O, Bernson D, et al. One-year mortality of patients after emergency department treatment for nonfatal opioid overdose. Ann Emerg Med. 2020;75:13-17.
22. Gomes T, Tadrous M, Mamdani MM, et al. The burden of opioid-related mortality in the United States. JAMA Netw Open. 2018;1:e180217.
23. Kilaru AS, Xiong A, Lowenstein M, et al. Incidence of treatment for opioid use disorder following nonfatal overdose in commercially insured patients. JAMA Netw Open. 2020;3:e205852.
24. U.S. Department of Health and Human Services, Office of the Surgeon General. Facing Addiction in America: The Surgeon General’s Spotlight on Opioids. 2018. Accessed December 15, 2020.
25. National Institute on Drug Abuse. Initiating Buprenorphine Treatment in the Emergency Department. Accessed December 10, 2020.
26. Substance Abuse and Mental Health Services Administration. Medications for Opioid Use Disorder. Treatment Improvement Protocol (TIP) Series 63. 2020. Accessed December 16, 2020.
27. National Academies of Sciences, Engineering, and Medicine. Medications for Opioid Use Disorder Save Lives. The National Academies Press; 2019. doi:10.17226/25310
28. American College of Emergency Physicians Clinical Policies Subcommittee (Writing Committee) on Opioids; Hatten BW, Cantrill SV, et al. Clinical policy: critical issues related to opioids in adult patients presenting to the emergency department. Ann Emerg Med. 2020;76:e13-e39.
29. Kaczorowski J, Bilodeau J, Orkin AM, et al. Emergency department-initiated interventions for patients with opioid use disorder: a systematic review. Acad Emerg Med. Published online July 28, 2020. doi:10.1111/acem.14054
30. Wang SC, Chen YC, Lee CH, et al. Opioid addiction, genetic susceptibility, and medical treatments: a review. Int J Mol Sci. 2019;20:4294.
31. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders. 5th ed. American Psychiatric Association; 2013:541.
32. Nelson LS, Olsen D. Opioids. In: Nelson LS, Howland M, Lewin NA, et al, eds. Goldfrank’s Toxicologic Emergencies. 11th ed. McGraw-Hill; 2019. Accessed December 7, 2020.
33. Kampman K, Jarvis M. American Society of Addiction Medicine (ASAM) National Practice Guideline for the Use of Medications in the Treatment of Addiction Involving Opioid Use. J Addict Med. 2015;9:358-367.
34. Schardt C, Adams MB, Owens T, et al. Utilization of the PICO framework to improve searching PubMed for clinical questions. BMC Med Inform Decis Mak. 2007;7:16.
35. Ouzzani M, Hammady H, Fedorowicz Z, et al. Rayyan-a web and mobile app for systematic reviews. Syst Rev. 2016;5:210. doi:10.1186/s13643-016-0384-4
36. D’Onofrio G, O’Connor PG, Pantalon MV, et al. Emergency department-initiated buprenorphine/naloxone treatment for opioid dependence: a randomized clinical trial. JAMA. 2015;313:1636-1644.
37. Busch SH, Fiellin DA, Chawarski MC, et al. Cost-effectiveness of emergency department-initiated treatment for opioid dependence. Addiction. 2017;112:2002-2010.
38. The CA Bridge Program. Accessed October 21, 2020.
39. Bogan C, Jennings L, Haynes L, et al. Implementation of emergency department-initiated buprenorphine for opioid use disorder in a rural southern state. J Subst Abuse Treat. 2020;112S:73-78.
40. Kaucher KA, Caruso EH, Sungar G, et al. Evaluation of an emergency department buprenorphine induction and medication-assisted treatment referral program. Am J Emerg Med. 2020;38:300-304.
41. Srivastava A, Kahan M, Njoroge I, et al. Buprenorphine in the emergency department: randomized clinical controlled trial of clonidine versus buprenorphine for the treatment of opioid withdrawal. Can Fam Physician. 2019;65:e214-e220.
42. The General Court of the Commonwealth of Massachusetts. An Act for Prevention and Access to Appropriate Care and Treatment of Addiction. Session Laws, Acts 2018, Chapter 208. Accessed December 16, 2020.
43. Massachusetts Health and Hospital Association. Guidelines for Medication for Addiction Treatment for Opioid Use Disorder within the Emergency Department. Accessed December 16, 2020.
44. Rhee TG, D’Onofrio G, Fiellin DA. Trends in the use of buprenorphine in US emergency departments, 2002-2017. JAMA Netw Open. 2020;3:e2021209.
45. U.S. Food and Drug Administration. FDA urges caution about withholding opioid addiction medications from patients taking benzodiazepines or CNS depressants: careful medication management can reduce risks. FDA Drug Safety Communication. September 20, 2017. Accessed December 10, 2020.
46. Pantalon MV, Dziura J, Li FY, et al. An interventionist adherence scale for a specialized brief negotiation interview focused on treatment engagement for opioid use disorders. Subst Abus. 2017;38:191-199.
47. Hawk KF, D’Onofrio G, Chawarski MC, et al. Barriers and facilitators to clinician readiness to provide emergency department-initiated buprenorphine. JAMA Netw Open. 2020;3:e204561.
48. Melnick ER, Nath B, Ahmed OM, et al. Progress report on EMBED: a pragmatic trial of user-centered clinical decision support to implement EMergency department-initiated BuprenorphinE for opioid use Disorder. J Psychiatr Brain Sci. 2020;5:e200003.
49. Hawk K, Samuels EA, Weiner SG, et al. Substance Use Disorders. In: Tintinalli JE, Ma O, Yealy DM, et al, eds. Tintinalli’s Emergency Medicine: A Comprehensive Study Guide. 9th ed. McGraw-Hill; 2020. Accessed December 16, 2020.
50. Herring AA, Perrone J, Nelson LS. Managing opioid withdrawal in the emergency department with buprenorphine. Ann Emerg Med. 2019;73:481-487.
51. Lee JD, McNeely J, Grossman E, et al. Clinical case conference: unobserved “home” induction onto buprenorphine. J Addict Med. 2014;8:309-314.
52. Lee JD, Vocci F, Fiellin DA. Unobserved “home” induction onto buprenorphine. J Addict Med. 2014;8:299-308.
53. Adams JM. Increasing naloxone awareness and use: the role of health care practitioners. JAMA. 2018;319:2073-2074.
54. Strayer RJ, Hawk K, Hayes BD, et al. Management of opioid use disorder in the emergency department: a white paper prepared for the American Academy of Emergency Medicine. J Emerg Med. 2020;58:522-546.
55. Yale School of Medicine. ED-Initiated Buprenorphine. Accessed December 7, 2020.
56. American College of Emergency Physicians. E-QUAL Network Opioid Initiative. Accessed December 7, 2020.
57. Substance Abuse and Mental Health Services Administration. Buprenorphine Practitioner Locator. Accessed March 2, 2021.
58. Samuels EA, Bernstein SL, Marshall BDL, et al. Peer navigation and take-home naloxone for opioid overdose emergency department patients: preliminary patient outcomes. J Subst Abuse Treat. 2018;94:29-34.
59. D’Onofrio G, Degutis LC. Integrating Project ASSERT: a screening, intervention, and referral to treatment program for unhealthy alcohol and drug use into an urban emergency department. Acad Emerg Med. 2010;17:903-911.
60. Providers Clinical Support System. How to Prepare for a Visit from the Drug Enforcement Agency (DEA) Regarding Buprenorphine Prescribing. Accessed December 7, 2020.
61. D’Onofrio G, McCormack RP, Hawk K. Emergency departments - a 24/7/365 option for combating the opioid crisis. N Engl J Med. 2018;379:2487-2490.
62. Im DD, Chary A, Condella AL, et al. Emergency department clinicians’ attitudes toward opioid use disorder and emergency department-initiated buprenorphine treatment: a mixed-methods study. West J Emerg Med. 2020;21:261-271.
63. Holland WC, Nath B, Li F, et al. Interrupted time series of user-centered clinical decision support implementation for emergency department-initiated buprenorphine for opioid use disorder. Acad Emerg Med. 2020;27:753-763.
64. 106th United States Congress (1999-2000). Drug Addiction Treatment Act of 2000. H.R.2634.
65. Department of Health and Human Services. Medication Assisted Treatment for Opioid Use Disorders. Federal Register. 2016;81(187).
66. Drug Enforcement Administration. Implementation of the Provision of the Comprehensive Addiction and Recovery Act of 2016 Relating to the Dispensing of Narcotic Drugs for Opioid Use Disorder. Federal Register. 2018;83(15).
67. 116th United States Congress (2019-2020). Mainstreaming Addiction Treatment Act of 2019. H.R.2482. Accessed March 2, 2021.
68. U.S. Department of Health and Human Services. HHS Expands Access to Treatment for Opioid Use Disorder. Accessed March 2, 2021.
69. Practice Guidelines for the Administration of Buprenorphine for Treating Opioid Use Disorder. Federal Register.
70. U.S. Department of Justice, Drug Enforcement Administration. Emergency Narcotic Addiction Treatment. Accessed March 24, 2021.
71. H.R.8900 - Further Continuing Appropriations Act, 2021, and Other Extensions Act. Accessed May 30, 2021.
72. Mullins N, Galvin SL, Ramage M, et al. Buprenorphine and naloxone versus buprenorphine for opioid use disorder in pregnancy: a cohort study. J Addict Med. 2020;14:185-192.
73. Committee Opinion No 711: Opioid Use and Opioid Use Disorder in Pregnancy. Obstet Gynecol. 2017;130:e81-e94.
74. Saitz R, Miller SC, Fiellin DA, et al. Recommended use of terminology in addiction medicine. J Addict Med. Published online May 29, 2020.
75. Zgierska AE, Miller MM, Rabago DP, et al. Language matters: it is time we change how we talk about addiction and its treatment. J Addict Med. Published online May 29, 2020.
76. Paquette CE, Syvertsen JL, Pollini RA. Stigma at every turn: health services experiences among people who inject drugs. Int J Drug Policy. 2018;57:104-110.
77. American Society of Addiction Medicine. Definition of Addiction. 2019. Accessed December 15, 2020.
78. Botticelli MP, Koh HK. Changing the language of addiction. JAMA. 2016;316:1361-1362.
79. Kelly JF, Wakeman SE, Saitz R. Stop talking ‘dirty’: clinicians, language, and quality of care for the leading cause of preventable death in the United States. Am J Med. 2015;128:8-9.
All authors attest to meeting the 4 ICMJE.org authorship criteria: (1) substantial contributions to the conception or design of the work, or the acquisition, analysis, or interpretation of data for the work; (2) drafting the work or revising it critically for important intellectual content; (3) final approval of the version to be published; and (4) agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Funding and support: By Annals policy, all authors are required to disclose any and all commercial, financial, and other relationships in any way related to the subject of this article as per ICMJE conflict of interest guidelines (see www.icmje.org). Funding for this initiative was made possible (in part) by grant no. 3H79TI080816 from SAMHSA. The views expressed in written publications do not reflect the official policies of the Department of Health and Human Services, nor does mention of trade names, commercial practices, or organizations imply endorsement by the US Government.

Publication dates: Received for publication December 13, 2020. Revisions received March 10, 2021, and March 25, 2021. Accepted for publication April 6, 2021. Available online June 23, 2021.

Supervising editor: Donald M. Yealy, MD. Specific detailed information about possible conflict of interest for individual editors is available at

© 2021 by the American College of Emergency Physicians.
12273
https://artofproblemsolving.com/wiki/index.php/2017_AIME_II_Problems/Problem_5?srsltid=AfmBOooYrf4UatN8-4xW3wf6N9VwS_1DMsJdN9WLe9KK7mFXz2saUZI8
Art of Problem Solving 2017 AIME II Problems/Problem 5 - AoPS Wiki

2017 AIME II Problems/Problem 5

Contents
1 Problem
2 Solution 1
3 Solution 2
4 Solution 3
5 Solution 4 (Short Casework)
6 See Also

Problem

A set contains four numbers. The six pairwise sums of distinct elements of the set, in no particular order, are 189, 320, 287, 234, x, and y. Find the greatest possible value of x + y.

Solution 1

Let these four numbers be a, b, c, and d, where a < b < c < d. x + y needs to be maximized, so let x = c + d and y = b + d because these are the two largest pairwise sums. Now x + y = b + c + 2d needs to be maximized. Notice that the six pairwise sums add up to 3(a + b + c + d), so x + y = 3(a + b + c + d) − (189 + 320 + 287 + 234). No matter how the numbers 189, 320, 287, and 234 are assigned to the values a + b, a + c, a + d, and b + c, their total will always be 1030. Therefore we need to maximize a + b + c + d. The maximum value of a + b + c + d is achieved when we let a + d and b + c be 320 and 287, because these are the two largest pairwise sums besides x and y, and (a + d) + (b + c) = a + b + c + d. Therefore, the maximum possible value of x + y = 3(320 + 287) − 1030 = 791.
Solution 2

Let the four numbers be a, b, c, and d, in no particular order. Adding the pairwise sums, we have 3(a + b + c + d) = 189 + 320 + 287 + 234 + x + y, so x + y = 3(a + b + c + d) − 1030. Since we want to maximize x + y, we must maximize a + b + c + d. Of the four sums whose values we know, there must be two sums that add to a + b + c + d. To maximize this value, we choose the highest pairwise sums, 320 and 287. Therefore, a + b + c + d = 607. We can substitute this value into the earlier equation to find that x + y = 3(607) − 1030 = 791.

Solution 3

Note that if a, b, c, d are the elements of the set, then each element appears in exactly three of the six pairwise sums, so the six sums total 3(a + b + c + d). Thus we can assign a + b = 189, a + c = 234, b + c = 287, and a + d = 320, which gives a = 68, b = 121, c = 166, d = 252. Then x + y = (b + d) + (c + d) = 373 + 418 = 791.

Solution 4 (Short Casework)

There are two cases we can consider. Let the elements of our set be denoted a < b < c < d, and say that the largest sums x and y will consist of c + d and b + d. Thus, we want to maximize x + y = b + c + 2d, which means d has to be as large as possible and a has to be as small as possible to maximize x and y. The two smallest known sums are then forced to be a + b = 189 and a + c = 234, so the two cases we look at are:

Case 1: a + d = 320, b + c = 287
Case 2: a + d = 287, b + c = 320

So, the answers for each (after some simple substitution) will be:

Case 1: a = 68, b = 121, c = 166, d = 252, so x + y = 373 + 418 = 791
Case 2: a = 51.5, b = 137.5, c = 182.5, d = 235.5, so x + y = 373 + 418 = 791

Either case attains x + y = 791, so our answer will be 791.

See Also

2017 AIME II (Problems • Answer Key • Resources)
Preceded by Problem 4 • Followed by Problem 6
1 • 2 • 3 • 4 • 5 • 6 • 7 • 8 • 9 • 10 • 11 • 12 • 13 • 14 • 15
All AIME Problems and Solutions

These problems are copyrighted © by the Mathematical Association of America, as part of the American Mathematics Competitions.
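As a sanity check on the solutions above, the maximization can be brute-forced. This sketch assumes the values 189, 320, 287, and 234 from the official 2017 AIME II problem statement (an assumption noted here because the rendered numbers did not survive extraction on this page):

```python
from itertools import combinations

known = [189, 320, 287, 234]

# All six pairwise sums of {a, b, c, d} total 3(a + b + c + d), so
# x + y = 3(a + b + c + d) - sum(known), and a + b + c + d equals the
# sum of some complementary pair of known sums. Try every pair.
best = max(3 * (p + q) - sum(known) for p, q in combinations(known, 2))
print(best)  # 791

# The set {68, 121, 166, 252} shows the bound is attained: four of its
# pairwise sums are the known values, and the remaining two add to 791.
elements = [68, 121, 166, 252]
pair_sums = sorted(a + b for a, b in combinations(elements, 2))
leftover = [s for s in pair_sums if s not in known]
print(leftover, sum(leftover))  # [373, 418] 791
```

Enumerating pairs rather than full assignments works because only the total of the chosen complementary pair matters, which is exactly the observation Solutions 1 and 2 make.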
12274
https://math.stackexchange.com/questions/3740567/a-square-is-cut-into-three-equal-area-regions-by-two-parallel-lines-find-area-of
geometry - A square is cut into three equal area regions by two parallel lines find area of square. - Mathematics Stack Exchange
A square is cut into three equal area regions by two parallel lines find area of square. [closed]

Asked Jun 30, 2020; modified Jun 30, 2020; viewed 532 times.

Closed. This question does not meet Mathematics Stack Exchange guidelines and is not currently accepting answers. Please provide additional context, which ideally explains why the question is relevant to you and our community. Closed 5 years ago.

A square is cut into three equal area regions by two parallel lines that are 1 cm apart, each one passing through exactly one of two diagonally opposed vertices. What is the area of the square?

Tags: geometry, puzzle

asked Jun 30, 2020 at 20:28 by amir bahadory (edited Jun 30, 2020 at 20:31)

Comments:
- Is this a stock image or did you change the problem from 1cm to 6in? – David P, Jun 30, 2020 at 20:31
- @DavidPeterson. No "1 cm" is true. I edited. – amir bahadory, Jun 30, 2020 at 20:32
- What did you try? – David G. Stork, Jun 30, 2020 at 20:36
- @amirbahadory: You must show your own efforts to get answer(s). – Harish Chandra Rajpoot, Jun 30, 2020 at 21:12
- Please read How to ask a good question, amir. – amWhy, Jun 30, 2020 at 21:22

1 Answer (score 5)

Let $AE = x$. By symmetry (equal area etc.) $FC = x$. Let the side of the square be $a$. Then $DE = BF = \sqrt{a^2 + x^2}$.

Area of the parallelogram $FBED$ = $\sqrt{a^2 + x^2} \cdot 1 = \sqrt{a^2 + x^2}$.

Area of $\triangle DAE$ = Area of $\triangle FCB$ = $\frac{ax}{2}$.

$$\frac{ax}{2} = \sqrt{a^2 + x^2} \implies x^2 = \frac{4a^2}{a^2 - 4}.$$

And the area of the square being thrice the area of the triangle gives us

$$a^2 = 3 \cdot \frac{ax}{2} \implies 2a = 3x.$$

Solve these to get $a^2 = 13$ (the area of the square).

answered Jun 30, 2020 at 20:56 by Anurag A (edited Jun 30, 2020 at 21:05)
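The accepted answer's algebra checks out numerically. A minimal sketch under the answer's notation (side $a$, offset $x = AE$, and the 1 cm gap between the parallel lines):

```python
import math

# From the answer: 2a = 3x together with x^2 = 4a^2/(a^2 - 4)
# forces a^2 = 13. Verify that a = sqrt(13), x = 2a/3 satisfy
# everything the answer claims.
a = math.sqrt(13)
x = 2 * a / 3

# The two corner triangles each have area ax/2, one third of the square.
assert math.isclose(a * x / 2, a**2 / 3)

# The middle strip is a parallelogram with base sqrt(a^2 + x^2) and
# height 1 (the lines are 1 cm apart); its area is also one third.
assert math.isclose(math.sqrt(a**2 + x**2) * 1, a**2 / 3)

print(a**2)  # area of the square, approximately 13 in floating point
```

Both assertions pass only for $a^2 = 13$, matching the answer's conclusion.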
12275
https://study.com/academy/practice/quiz-worksheet-system-of-linear-equations.html
Quiz & Worksheet - System of Linear Equations | Study.com
Math Courses • Algebra II: High School • Linear Equation | Definition, System & Examples

Instructions: Choose an answer and hit 'next'. You will receive your score and answers at the end.

1. Which of the following statements are FALSE?
- A system of linear equations can sometimes be used to solve a problem when there is more than one unknown.
- If a set of values for the variables in a system of linear equations makes just one equation in the system true, then that set of values is a solution to the whole system.
- A solution to a system of linear equations must make all of the equations in the system true.
- None of these statements are false.

Worksheet

1. Is the set of values r = 3, s = 7, and t = 1 a solution to the following system of linear equations?
- Yes, they are a solution to the system.
- No, they are not a solution to the system.
- The system has no solution.
- They are only a solution to one of the equations in the system.

2. Which of the following sets of values is a solution to the following system of linear equations?
- x = 4, y = 8
- x = 2, y = 6
- x = 1, y = 1
- x = 5, y = 4

About This Quiz & Worksheet

This combination of printable quiz and worksheet provides a way to test yourself about systems of linear equations. Practice problems will involve writing and solving linear equations, answering true and false questions, and determining solution sets.

Quiz & Worksheet Goals

These quiz questions will focus on your ability to:
- Identify linear equations
- Write a linear equation based on a word problem
- Solve linear equations

Skills Practiced

During this assessment, you will demonstrate your capability in:
- Problem solving - use acquired knowledge to solve practice problems involving systems of linear equations
- Reading comprehension - ensure that you draw the most important information about systems of linear equations from the related lesson
- Critical thinking - apply relevant concepts to examine information about identifying a linear equation in a different light
- Information recall - access the knowledge you've gained regarding systems of linear equations

Additional Learning

By reviewing the associated lesson called System of Linear Equations: Definition & Examples, you can learn more about the subject.
Objectives covered by this lesson comprise:
- Define linear equation, solution, and system of linear equations
- Relate linear equation problem solving to real-world situations
- Solve given practice problems to determine solutions of a system of linear equations
- Know how to determine whether an equation is linear or not
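The worksheet's core skill, deciding whether a set of values solves every equation in a system, is mechanical to state in code. The system below is made up for illustration, since the worksheet's actual equations appear only as images in the original page:

```python
# Decide whether a candidate assignment satisfies every equation of a
# linear system. Each equation is a (coefficients, constant) pair,
# meaning sum(coeff * value) == constant. The example system is
# hypothetical, not the worksheet's own.
def is_solution(system, values):
    return all(
        sum(coeff * values[var] for var, coeff in coeffs.items()) == constant
        for coeffs, constant in system
    )

system = [
    ({"x": 1, "y": 1}, 10),   # x + y = 10
    ({"x": 2, "y": -1}, 2),   # 2x - y = 2
]

print(is_solution(system, {"x": 4, "y": 6}))  # True: both equations hold
print(is_solution(system, {"x": 5, "y": 4}))  # False: 5 + 4 != 10
```

This mirrors the quiz's true/false point: a candidate that satisfies only one equation is not a solution to the system.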
Money QuizProportion | Definition, Formula & Types QuizCalculations with Ratios and Proportions QuizPercents: Definition, Application & Examples QuizHow to Solve Word Problems That Use Percents QuizSimple Interest Problems | Definition, Formula & Examples QuizCompounding Interest | Formula, Types & Examples QuizTaxes & Discounts: Calculations & Examples QuizHow to Solve Problems with Time QuizDistance Equations | Formula, Calculation & Examples Quiz Chapter 21 Algebra II: Statistics Measures of Central Tendency | Definition, Formula & Examples QuizMeasures of Dispersion and Skewness QuizNormal Distribution | Curve, Table & Examples QuizRegression Analysis: Definition & Examples QuizOrganizing and Understanding Data with Tables & Schedules QuizPie Chart vs. Bar Graph | Overview, Uses & Examples Quiz Chapter 22 Algebra II: Trigonometry Transforming sin & cos Graphs | Graphing sin and cosine Functions QuizGraphing Tangent Functions | Period, Phase & Amplitude QuizUnit Circle Quadrants | Converting, Solving & Memorizing QuizTrig Functions using the Unit Circle | Formula & Examples QuizSpecial Right Triangles | Definition, Types & Examples QuizLaw of Sines Formula & Examples QuizLaw of Cosines | Definition & Equation QuizDouble Angle Formula | Sin, Cos & Tan QuizRadians to Degree Formula & Examples QuizHow to Solve Trigonometric Equations for X QuizPrevious lesson Trig Identities | Formula, List & Properties 7:11 minTrig Identities | Formula, List & Properties Quiz Explore our library of over 88,000 lessons Search Browse Browse by subject College Courses Business English Foreign Language History Humanities Math Science Social Science See All College Courses High School Courses AP Common Core GED High School See All High School Courses Other Courses College & Career Guidance Courses College Placement Exams Entrance Exams General Test Prep K-8 Courses Skills Courses Teacher Certification Exams See All Other Courses Study.com is an online platform offering affordable 
courses and study materials for K-12, college, and professional development. It enables flexible, self-paced learning. Plans Study Help Test Preparation College Credit Teacher Resources Working Scholars® Online Tutoring About us Blog Careers Teach for Us Press Center Ambassador Scholarships Support FAQ Site Feedback Terms of Use Privacy Policy DMCA Notice ADA Compliance Honor Code for Students Mobile Apps Contact us by phone at (877) 266-4919, or by mail at 100 View Street #202, Mountain View, CA 94041. © Copyright 2025 Study.com. All other trademarks and copyrights are the property of their respective owners. All rights reserved. Support ×
12276
https://www.mathportal.org/calculators/popular-problems/complex-division.php?ch1=&combo1=1&combo2=1&val1=1&val2=1&val3=2&val4=2
Divide 1+i by 2+2i (mathportal.org)

Question
Divide the complex numbers: $\dfrac{1+i}{2+2i}$

Answer
$\dfrac{1+i}{2+2i} = \dfrac{1}{2}$

Explanation
Step 1: Determine the conjugate of the denominator (to find the conjugate, just change the sign of the imaginary part). In this example, the conjugate of $2+2i$ is $2-2i$.

Step 2: Multiply both the numerator and denominator by the conjugate:
$$\frac{1+i}{2+2i} = \frac{1+i}{2+2i}\cdot\frac{2-2i}{2-2i} = \frac{(1+i)(2-2i)}{(2+2i)(2-2i)}$$

Step 3: Simplify the numerator and denominator (use $i^2 = -1$):
$$(1+i)(2-2i) = 1\cdot 2 + 1\cdot(-2i) + i\cdot 2 + i\cdot(-2i) = 2 - 2i + 2i - 2(-1) = 4$$
$$(2+2i)(2-2i) = 2\cdot 2 + 2\cdot(-2i) + 2i\cdot 2 + 2i\cdot(-2i) = 4 - 4i + 4i - 4(-1) = 8$$

Step 4: Separate the real and imaginary parts:
$$\frac{1+i}{2+2i} = \frac{4}{8} = \frac{4}{8} + \frac{0}{8}i = \frac{1}{2}$$

This page was created using the Complex Division Calculator.

About the Author: this website's owner is mathematician Miloš Petrović, who designed the site and wrote all the calculators, lessons, and formulas.
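The same division can be replayed with Python's built-in complex type (a quick sketch using the numbers from the example above; variable names are my own):

```python
# Verify the worked example (1+i)/(2+2i) = 1/2 using Python complex numbers.
num = 1 + 1j
den = 2 + 2j

# Multiply numerator and denominator by the conjugate of the denominator,
# exactly as in Step 2 of the explanation.
conj = den.conjugate()           # 2 - 2j
top = num * conj                 # (1+i)(2-2i) = 4
bottom = (den * conj).real       # (2+2i)(2-2i) = 8, purely real

print(top / bottom)              # (0.5+0j)
print(num / den == 0.5)          # True
```

Python's `/` operator on complex numbers performs this conjugate trick internally, so `num / den` gives the same result directly.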
12277
https://math.stackexchange.com/questions/4277961/how-does-this-substitution-work-x-arctanu
integration - How does this substitution work, x=arctan(u)? - Mathematics Stack Exchange
How does this substitution work, x = arctan(u)?
Asked 3 years, 11 months ago. Modified 3 years, 11 months ago. Viewed 192 times.

I was trying to solve an integral and got the following results:
$$\int \frac{1}{\sin(2x)}\,dx = \int \frac{1}{2\sin(x)\cos(x)}\,dx = \frac{1}{2}\int \frac{\sin x}{\sin^2(x)\cos(x)}\,dx$$
With the substitution $u = \cos(x)$, $du = -\sin(x)\,dx$ and the identity $\sin^2 x = 1 - \cos^2 x$:
$$-\frac{1}{2}\int \frac{1}{u(1-u^2)}\,du = -\frac{1}{2}\int \frac{1}{-u(u+1)(u-1)}\,du = \frac{1}{2}\left[-\int \frac{du}{u} + \frac{1}{2}\int \frac{du}{u+1} + \frac{1}{2}\int \frac{du}{u-1}\right]$$
$$= -\frac{1}{2}\ln|\cos x| + \frac{1}{4}\ln|\cos x + 1| + \frac{1}{4}\ln|\cos x - 1|$$

The textbook gave me a different solution method, which is also understandable, using a similar technique:
$$\int \frac{\sin(2x)}{1-\cos^2(2x)}\,dx$$
With $y = \cos(2x)$, $dy = -2\sin(2x)\,dx$:
$$\frac{1}{2}\int \frac{dy}{y^2-1} = \frac{1}{4}\int \left[\frac{1}{y-1} - \frac{1}{y+1}\right]dy = \frac{1}{4}\ln\left|\frac{\cos(2x)-1}{\cos(2x)+1}\right|$$

However, when I tried to solve it with Symbolab, it turned out disgustingly simple:
$$\int \frac{dx}{\sin(2x)} = \frac{1}{2}\int \frac{1}{\cos(x)\sin(x)}\,dx = \frac{1}{2}\int \frac{\sec(x)}{\sin(x)}\,dx$$
and then, with $x = \arctan(u)$:
$$\frac{1}{2}\int \frac{1}{u}\,du = \frac{1}{2}\ln|u| = \frac{1}{2}\ln|\tan(x)|$$

And I really can't understand this substitution part. Can someone explain how this works?
Tagged: integration. Edited Oct 16, 2021 at 1:45; asked Oct 16, 2021 at 1:12 by Anonymous.

1 Answer

I'd personally rather write it as $u = \tan x$, but: divide the numerator and denominator by $\cos x$ (or multiply by $\sec x$) to get
$$\int \frac{\sec^2 x}{\tan x}\,dx$$
from which it is now apparent.

answered Oct 16, 2021 at 1:49 by Paco Adajar

Comment: "Ah yes! Thanks for highlighting the missing step, now it totally makes sense." (Anonymous, Oct 16, 2021 at 1:53)
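A quick numerical sanity check, using only the standard library (function names are my own), that Symbolab's and the textbook's antiderivatives both differentiate back to $1/\sin(2x)$, and so differ only by constants on $(0, \pi/2)$:

```python
import math

def integrand(x):
    # The original integrand 1/sin(2x).
    return 1 / math.sin(2 * x)

def F_symbolab(x):
    # (1/2) ln|tan x|
    return 0.5 * math.log(abs(math.tan(x)))

def F_textbook(x):
    # (1/4) ln|(cos 2x - 1)/(cos 2x + 1)|
    return 0.25 * math.log(abs((math.cos(2 * x) - 1) / (math.cos(2 * x) + 1)))

def deriv(f, x, h=1e-6):
    # Central finite difference approximation of f'(x).
    return (f(x + h) - f(x - h)) / (2 * h)

x = 0.7
assert abs(deriv(F_symbolab, x) - integrand(x)) < 1e-5
assert abs(deriv(F_textbook, x) - integrand(x)) < 1e-5
```

Both assertions pass, which is the expected outcome when two correct antiderivatives of the same function differ only by a constant.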
12278
https://fiveable.me/key-terms/formal-logic-i/biconditional-statement
Biconditional Statement - (Formal Logic I) - Vocab, Definition, Explanations | Fiveable

Definition
A biconditional statement is a logical assertion that connects two statements with the phrase 'if and only if,' indicating that the two statements are equivalent: if one statement is true, the other must also be true, and vice versa. This type of statement is crucial in logic because it establishes a strong relationship between conditions, allowing for clearer arguments and proofs.

5 Must Know Facts For Your Next Test
- In a biconditional statement, both components must either be true or false together; it has the same truth value in both directions.
- Biconditional statements can be represented symbolically as P ↔ Q, where P and Q are the individual statements being connected.
- They are often used in definitions, such as 'a triangle is equilateral if and only if all its sides are equal.'
- Biconditionals are crucial for constructing valid indirect proofs, since showing one part to be true or false determines the other accordingly.
- When working with complex arguments, recognizing biconditional relationships can simplify logical reasoning and clarify the structure of arguments.

Review Questions

How does a biconditional statement differ from a conditional statement in logical reasoning?
A biconditional statement establishes a two-way relationship between two propositions, while a conditional statement expresses a one-way relationship. In a biconditional, the truth of one proposition guarantees the truth of the other, represented as 'P if and only if Q.'
In contrast, a conditional states 'if P, then Q,' meaning that Q can be true even if P is false unless explicitly stated otherwise.

In what ways do biconditional statements facilitate indirect proof methods in complex logical arguments?
Biconditional statements facilitate indirect proof methods by allowing one to assume either part of the biconditional to derive conclusions about the other part. For example, if you need to prove that P leads to Q (or vice versa), establishing P ↔ Q lets you use both directions interchangeably. This flexibility enhances the power of indirect proofs, since proving one side of the biconditional effectively proves the other side as well.

Evaluate the importance of recognizing biconditional relationships when analyzing complex logical structures and how this impacts overall argument validity.
Recognizing biconditional relationships when analyzing complex logical structures is vital because it clarifies how different statements relate to each other. Understanding these connections helps in validating arguments and determining which premises are necessary for conclusions. By establishing strong connections through biconditionals, one can avoid logical fallacies and enhance argument coherence, ultimately strengthening the logical foundation on which conclusions rest.

Related terms
Conditional Statement: A statement of the form 'if P, then Q,' where P is the hypothesis and Q is the conclusion.
Conjunction: A compound statement formed by joining two statements with 'and,' which is true only if both individual statements are true.
Disjunction: A compound statement formed by joining two statements with 'or,' which is true if at least one of the individual statements is true.
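The two-way versus one-way relationship described above can be made concrete with a small truth-table sketch (the function names here are illustrative, not from the source):

```python
# Truth table comparing the one-way conditional P -> Q with the
# two-way biconditional P <-> Q.
def conditional(p, q):
    # "if P, then Q": false only when P is true and Q is false.
    return (not p) or q

def biconditional(p, q):
    # "P if and only if Q": true exactly when P and Q share a truth value.
    return p == q

print("P     Q     P->Q  P<->Q")
for p in (True, False):
    for q in (True, False):
        print(p, q, conditional(p, q), biconditional(p, q))
```

Note the one row where the two differ in the same direction: when P is false and Q is true, the conditional holds but the biconditional does not.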
Study Content & Tools Study GuidesPractice QuestionsGlossaryScore Calculators Company Get $$ for referralsPricingTestimonialsFAQsEmail us Resources AP ClassesAP Classroom every AP exam is fiveable history 🌎 ap world history🇺🇸 ap us history🇪🇺 ap european history social science ✊🏿 ap african american studies🗳️ ap comparative government🚜 ap human geography💶 ap macroeconomics🤑 ap microeconomics🧠 ap psychology👩🏾‍⚖️ ap us government english & capstone ✍🏽 ap english language📚 ap english literature🔍 ap research💬 ap seminar arts 🎨 ap art & design🖼️ ap art history🎵 ap music theory science 🧬 ap biology🧪 ap chemistry♻️ ap environmental science🎡 ap physics 1🧲 ap physics 2💡 ap physics c: e&m⚙️ ap physics c: mechanics math & computer science 🧮 ap calculus ab♾️ ap calculus bc📊 ap statistics💻 ap computer science a⌨️ ap computer science p world languages 🇨🇳 ap chinese🇫🇷 ap french🇩🇪 ap german🇮🇹 ap italian🇯🇵 ap japanese🏛️ ap latin🇪🇸 ap spanish language💃🏽 ap spanish literature go beyond AP high school exams ✏️ PSAT🎓 Digital SAT🎒 ACT honors classes 🍬 honors algebra II🐇 honors biology👩🏽‍🔬 honors chemistry💲 honors economics⚾️ honors physics📏 honors pre-calculus📊 honors statistics🗳️ honors us government🇺🇸 honors us history🌎 honors world history college classes 👩🏽‍🎤 arts👔 business🎤 communications🏗️ engineering📓 humanities➗ math🧑🏽‍🔬 science💶 social science RefundsTermsPrivacyCCPA © 2025 Fiveable Inc. All rights reserved. AP® and SAT® are trademarks registered by the College Board, which is not affiliated with, and does not endorse this website. 
12279
https://askfilo.com/user-question-answers-physics/to-prove-h-c-1240-ev-nm-36323930333631
Question asked by Filo student
To Prove: hc = 1240 eV·nm.
Views: 5,422 students. Updated on: Dec 10, 2023

Text solution (Verified)

Step 1: To prove the equation hc = 1240 eV·nm, we first need to find the value of hc.
Step 2: The value of hc is the product of Planck's constant h and the speed of light c.
Step 3: Planck's constant is h = 6.63 × 10⁻³⁴ J·s.
Step 4: The speed of light is c = 3.0 × 10⁸ m/s.
Step 5: Multiply the values of h and c from steps 3 and 4:
Step 6: hc = (6.63 × 10⁻³⁴ J·s)(3.0 × 10⁸ m/s).
Step 7: Calculating the product gives
Step 8: hc = 1.989 × 10⁻²⁵ J·m.
Step 9: Now convert hc from joule-metres to electron-volt-nanometres.
Step 10: Since 1 eV is equal to 1.6 × 10⁻¹⁹ J, we can use the conversion factor (1 eV)/(1.6 × 10⁻¹⁹ J).
Step 11: Similarly, since 1 m is equal to 1.0 × 10⁹ nm, we can use the conversion factor (1.0 × 10⁹ nm)/(1 m).
Step 12: Multiply the value of hc from step 8 by both conversion factors from steps 10 and 11:
Step 13: hc = (1.989 × 10⁻²⁵ J·m) × (1 eV / 1.6 × 10⁻¹⁹ J) × (1.0 × 10⁹ nm / 1 m).
Step 14: Carrying out the arithmetic gives hc ≈ 1243 eV·nm.
Step 15: To three significant figures, hc ≈ 1240 eV·nm.
Step 16: Therefore, the relation hc = 1240 eV·nm has been verified.
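The arithmetic in the steps above can be replayed in a few lines of Python (variable names are my own):

```python
# Recompute h*c in eV*nm using the constants from the solution above.
h = 6.63e-34        # Planck's constant, J*s (step 3)
c = 3.0e8           # speed of light, m/s (step 4)
eV = 1.6e-19        # 1 eV in joules (step 10)
nm_per_m = 1.0e9    # nanometres per metre (step 11)

hc_J_m = h * c                       # 1.989e-25 J*m (step 8)
hc_eV_nm = hc_J_m / eV * nm_per_m    # apply both conversion factors
print(round(hc_eV_nm))               # 1243, i.e. 1240 to three significant figures
```

The small discrepancy (1243 vs. 1240) comes from the rounded input constants; with the full CODATA values the product is closer to 1239.8 eV·nm.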
© Copyright Filo EdTech INC. 2025
12280
https://it.vikidia.org/wiki/Dilatazione_termica
Thermal expansion - Vikidia, the free encyclopedia for ages 8 to 13

Thermal expansion
From Vikidia, the free encyclopedia for ages 8 to 13.

Thermal expansion is the property of materials to increase in volume when they are heated. When a material or body is heated, the heat it receives increases the agitation of its particles, which tend to move farther apart, and as a result its volume increases. Thermal expansion occurs in solids, liquids and gases.

Contents
1 Solids
2 Linear expansion
3 Surface expansion
4 Expansion of liquids
5 Expansion of gases
6 Water: a special case
7 Examples
8 Notes
9 Bibliography
10 External links

Solids
In solid bodies three types of expansion occur: volumetric expansion, surface expansion and linear expansion. In solids the expansion is smaller, because their particles are fixed relative to one another.

Linear expansion
In linear expansion the body expands in length or in height. For example, the expansion joints in the photograph move closer together in summer, since the road tends to expand, and widen again in winter.

Surface expansion
Surface expansion is when the surface of the object expands.

Expansion of liquids
In some liquids thermal expansion is significant and clearly visible, for example in alcohol.

Expansion of gases
Gases increase in volume when heated; their molecules are free to move, compared with those of liquids.

Water: a special case
Water behaves differently from other liquids: between 0 °C and 4 °C it contracts when heated, and only above 4 °C does it expand like other liquids.
Examples
If a body is heated it expands, because the agitation of its particles increases; if a body cools, the agitation of its particles decreases and the body contracts.

Bibliography
Scienze Focus, by Luigi Leopardi, Chiara Cateni, Francesca Bolognani, and Massimo Temporelli. Published 2015.

External links
Portale:Chimica, Portale:Fisica. Category: Termodinamica.

This page was last modified on 21 Jan 2022 at 10:26. Content is available under the Creative Commons Attribution-Share Alike 3.0 license unless otherwise noted.
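The linear expansion described above follows the standard formula ΔL = α·L₀·ΔT, which the article does not state explicitly; the coefficient for steel used below is a typical textbook figure, introduced here as an assumption. A minimal sketch:

```python
# Linear thermal expansion: delta_L = alpha * L0 * delta_T.
# alpha is the linear expansion coefficient in 1/degC; the steel value
# below (~1.2e-5 /degC) is a typical textbook figure, not from the article.
def linear_expansion(length_m, alpha_per_c, delta_t_c):
    """Return the change in length (m) of a bar heated by delta_t_c degrees."""
    return alpha_per_c * length_m * delta_t_c

# A 30 m steel rail heated by 40 degC grows about 14 mm, which is why
# road and rail joints leave a gap that closes in summer.
growth_mm = linear_expansion(30.0, 1.2e-5, 40.0) * 1000
print(round(growth_mm, 1))
```

This is why the joints in the photograph narrow in summer: the road itself gets longer by a little over a centimetre per 30 m span.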
12281
https://www.wyzant.com/resources/answers/425753/how_do_i_determine_if_a_line_pass_through_the_origin
Slope Intercept Form

Shade W. asked • 12/02/17
How do I determine if a line passes through the origin? Equation: y = 2x - 1

2 Answers By Expert Tutors

Arturo O. answered • 12/02/17 (Experienced Physics Teacher for Physics Tutoring)
You can test whether x = 0, y = 0 is a solution. If yes, then the line passes through the origin. If no, then it does not pass through the origin.

Carol H. answered • 12/02/17 (Experienced Mathematics Tutor w/ Master's Degree in Math)
The slope-intercept form is y = mx + b, where b is the y-intercept. In the equation y = 2x - 1, the y-intercept is -1, so this line does not pass through the origin. By contrast, an equation like y = 4x has no "b" term; its y-intercept is zero, and that line passes through the origin.
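The origin test from both answers reduces to checking the y-intercept. A short sketch (the helper function name is hypothetical, not from the original page):

```python
def passes_through_origin(m, b):
    """A line y = m*x + b passes through the origin iff (0, 0) satisfies it,
    i.e. iff substituting x = 0 gives y = 0, i.e. iff b == 0."""
    return b == 0

print(passes_through_origin(2, -1))  # y = 2x - 1 -> False
print(passes_through_origin(4, 0))   # y = 4x     -> True
```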
12282
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Map%3A_Organic_Chemistry_(Wade)_Complete_and_Semesters_I_and_II
Map: Organic Chemistry (Wade), Complete and Semesters I and II - Chemistry LibreTexts
Wed, 28 Dec 2022 17:52:11 GMT · Page ID 424270 · Joshua Halpern
Map: Organic Chemistry (Wade), Complete and Semesters I and II
Last updated: Dec 28, 2022

Map: Organic Chemistry (Wade)
Welcome to organic chemistry! This text has been written for students. It emphasizes the practical details and skills needed to master this challenging subject. Learning organic chemistry is brain yoga! Our brains become strong and flexible with practice.
This is a map covering all of the material in Wade's Organic Chemistry. It is over 1400 pages and cannot be printed at the LibreTexts Bookstore.

Front Matter
1: Introduction and Review
2: Structure and Properties of Organic Molecules
3: Functional Groups and Nomenclature
4: Structure and Stereochemistry of Alkanes
5: An Introduction to Organic Reactions using Free Radical Halogenation of Alkanes
6: Stereochemistry at Tetrahedral Centers
7: Alkyl Halides - Nucleophilic Substitution and Elimination
8: Structure and Synthesis of Alkenes
9: Reactions of Alkenes
10: Alkynes
11: Infrared Spectroscopy and Mass Spectrometry
12: Nuclear Magnetic Resonance Spectroscopy
13: Structure and Synthesis of Alcohols
14: Reactions of Alcohols
15: Ethers, Epoxides and Thioethers
16: Conjugated Systems, Orbital Symmetry, and Ultraviolet Spectroscopy
17: Aromatic Compounds
18: Reactions of Aromatic Compounds
19: Ketones and Aldehydes
20: Amines
21: Carboxylic Acids
22: Carboxylic Acid Derivatives and Nitriles
23: Alpha Substitutions and Condensations of Carbonyl Compounds
24: Carbohydrates
25: Amino Acids, Peptides, and Proteins
26: Lipids
27: Nucleic Acids
Back Matter

Map: Organic Chemistry II (Wade)
Welcome to organic chemistry! This text has been written for students. It emphasizes the practical details and skills needed to master this challenging subject. Learning organic chemistry is brain yoga! Our brains become strong and flexible with practice. This textmap is for Semester II and can be printed at the online LibreTexts Bookstore.
Front Matter
13: Structure and Synthesis of Alcohols
14: Reactions of Alcohols
15: Ethers, Epoxides and Thioethers
16: Conjugated Systems, Orbital Symmetry, and Ultraviolet Spectroscopy
17: Aromatic Compounds
18: Reactions of Aromatic Compounds
19: Ketones and Aldehydes
20: Amines
21: Carboxylic Acids
22: Carboxylic Acid Derivatives and Nitriles
23: Alpha Substitutions and Condensations of Carbonyl Compounds
24: Carbohydrates
25: Amino Acids, Peptides, and Proteins
26: Lipids
27: Nucleic Acids
Back Matter

Map: Organic Chemistry I (Wade)
Welcome to organic chemistry! This covers first-semester material up to IR and NMR analysis. The book is about 740 pages long and can be printed at the LibreTexts online bookstore.

Front Matter
1: Introduction and Review
2: Structure and Properties of Organic Molecules
3: Functional Groups and Nomenclature
4: Structure and Stereochemistry of Alkanes
5: An Introduction to Organic Reactions using Free Radical Halogenation of Alkanes
6: Stereochemistry at Tetrahedral Centers
7: Alkyl Halides - Nucleophilic Substitution and Elimination
8: Structure and Synthesis of Alkenes
9: Reactions of Alkenes
10: Alkynes
11: Infrared Spectroscopy and Mass Spectrometry
12: Nuclear Magnetic Resonance Spectroscopy
Back Matter

Map: Organic Chemistry (Wade), Complete and Semesters I and II is shared under a not declared license and was authored, remixed, and/or curated by LibreTexts.

© Copyright 2025 Chemistry LibreTexts. The LibreTexts libraries are powered by NICE CXone Expert and are supported by the Department of Education Open Textbook Pilot Project, the UC Davis Office of the Provost, the UC Davis Library, the California State University Affordable Learning Solutions Program, and Merlot.
We also acknowledge previous National Science Foundation support under grant numbers 1246120, 1525057, and 1413739. Privacy Policy. Terms & Conditions. Accessibility Statement. For more information contact us at info@libretexts.org.
12283
https://arxiv.org/pdf/2302.14790
Sequential Quadratic Optimization for Stochastic Optimization with Deterministic Nonlinear Inequality and Equality Constraints

Frank E. Curtis and Daniel P. Robinson, Department of Industrial and Systems Engineering, Lehigh University
Baoyu Zhou, Booth School of Business, The University of Chicago

COR@L Technical Report 23T-017
arXiv:2302.14790v1 [math.OC] 28 Feb 2023
March 1, 2023

Abstract

A sequential quadratic optimization algorithm for minimizing an objective function defined by an expectation subject to nonlinear inequality and equality constraints is proposed, analyzed, and tested. The context of interest is when it is tractable to evaluate constraint function and derivative values in each iteration, but it is intractable to evaluate the objective function or its derivatives in any iteration, and instead an algorithm can only make use of stochastic objective gradient estimates. Under loose assumptions, including that the gradient estimates are unbiased, the algorithm is proved to possess convergence guarantees in expectation. The results of numerical experiments are presented to demonstrate that the proposed algorithm can outperform an alternative approach that relies on the ability to compute more accurate gradient estimates.

1 Introduction

We propose a sequential quadratic optimization (commonly known as SQP) algorithm for minimizing an objective function defined by an expectation subject to nonlinear inequality and equality constraints.
Such optimization problems arise in a plethora of application areas, including, but not limited to, machine learning, network optimization, resource allocation, portfolio optimization, risk-averse partial-differential-equation-constrained optimization, maximum-likelihood estimation, and multi-stage optimization. The design and analysis of deterministic algorithms for solving continuous optimization problems involving inequality and equality constraints has been a well-studied topic for decades. Numerous types of such algorithms, such as penalty methods, interior-point methods, and SQP methods, have been designed to solve such problems. Penalty methods are based on the idea of using unconstrained optimization algorithms to minimize a weighted sum (determined by a penalty parameter) of the objective and a measure of constraint violation; e.g., see [11, 19, 45] for algorithms that make use of nondifferentiable (exact) penalty functions and see [15, 14, 21, 46] for algorithms that make use of differentiable (exact) penalty functions. While they are able to offer convergence guarantees from remote starting points, the numerical performance of penalty methods often suffers from ill-conditioning of the penalty functions and/or sensitivity of the algorithm's performance to the particular scheme employed for updating the penalty parameter. Interior-point methods are designed to use barrier functions to guide the algorithm along a central path through the interior of the feasible region (or, at least, the interior of a set defined by bounds on a subset of the variables) to a solution [9, 10, 29, 30, 43, 44]. Such algorithms have been shown to be very effective in practice, which is why many state-of-the-art software packages for continuous nonlinear optimization are built on interior-point methods; see, e.g., [10, 42].

(Author e-mails: frank.e.curtis@lehigh.edu, daniel.p.robinson@lehigh.edu, baoyu.zhou@chicagobooth.edu)
Overall, both penalty and interior-point methods involve the use of additional objective terms to handle the presence of inequality constraints. Alternatively, in this paper, we present, analyze, and demonstrate the numerical performance of an SQP method for solving continuous nonlinear optimization problems. The SQP paradigm is based on the idea of, at each iterate, solving a subproblem (or subproblems) defined based on a local linearization of the constraint function and a local quadratic approximation of the objective. Unlike in the deterministic setting, for which numerous SQP algorithms have been proposed (see, e.g., [18, 20, 24, 34]), there have been few stochastic algorithms proposed for the setting of solving optimization problems with nonlinear constraints. That said, in the past few years, a couple of classes of stochastic SQP methods have been designed for optimization subject to nonlinear equality constraints. For example, the article proposes an SQP algorithm that uses stochastic objective gradient estimates for solving such problems that employs an adaptive step size policy based on Lipschitz constants (or estimates of them). For an alternative setting in which one is willing to compute objective value estimates as well, and to refine objective function and gradient estimates within a given iteration until probabilistic conditions of accuracy are satisfied, the article proposes a line-search stochastic SQP method. There have subsequently been multiple extensions of the methods in and , as well as work on different, but related algorithmic strategies—still for the setting of only nonlinear equality constraints. 
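For context, the SQP subproblem that this paradigm solves at an iterate can be sketched as follows. This is a standard textbook form written in notation consistent with the paper's later sections (gradient estimate g_k, constraint value c_k, SPD matrix H_k), not a display copied from the paper:

```latex
% Generic SQP subproblem at iterate x_k for problem (1) (a sketch):
% linearize the constraints, model the objective with a quadratic,
% and solve for the search direction d.
\min_{d \in \mathbb{R}^n} \; g_k^T d + \tfrac{1}{2} d^T H_k d
\quad \text{s.t.} \quad c_k + \nabla c(x_k)^T d = 0, \qquad x_k + d \ge 0.
```

In the stochastic setting studied here, g_k is a stochastic estimate of ∇f(x_k), which is why (as the next paragraphs explain) the computed direction is a biased estimate of the direction that the true gradient would produce once inequality constraints are present.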
There has been work on relaxing constraint qualifications, allowing matrix-free and inexact solves of the arising linear systems, using a trust-region methodology, incorporating noisy (potentially biased) function and gradient estimates [5, 35], employing variance-reduction strategies [2, 4], considering sketch-and-project techniques, and analyzing the worst-case complexity of the method proposed in . Unlike the setting of equality constraints only, to our knowledge there has been very little work on the design and analysis of stochastic algorithms for optimization subject to nonlinear (nonconvex) inequality and equality constraints. Three exceptions are the active-set line-search SQP algorithms proposed in and (very recently) in , and the momentum-based augmented Lagrangian method (a penalty method) proposed in . We expect that our proposed SQP algorithm will perform well in comparison to a stochastic-gradient-based penalty method. We demonstrate with numerical experiments that our approach can outperform the algorithm proposed in . We remark in passing that interior-point methods often outperform SQP methods in the deterministic setting, but as far as we are aware there exists no interior-point method designed for the stochastic setting that we consider.

1.1 Contributions

In this paper, we build on the algorithmic strategy and analysis in to propose and analyze an adaptive stochastic SQP algorithm for solving nonlinear optimization problems subject to (deterministic) inequality and equality constraints. This work involves significant advancements beyond that in , which are necessary since, unlike in the setting of only having equality constraints, the presence of inequality constraints automatically guarantees that, at a given iterate, the search direction computed in a stochastic SQP method will be a biased estimate of the "true" search direction, i.e., the one that would be computed if the actual gradient of the objective function were available.
This necessitates a distinct change in the design of the algorithm as well as distinct alterations to the convergence analysis, since the analysis in relies heavily on the search directions being (conditionally) unbiased estimators of their "true" counterparts. The algorithm from the literature that can be seen as the nearest alternative approach is the algorithm in . However, there are substantial differences between the algorithm and analysis in and those presented in this paper. Like in for the equality-only case, the algorithm in is designed for the setting in which one is willing to refine function and gradient estimates within an iteration until probabilistic conditions of accuracy are satisfied, and in this manner the analysis of that algorithm offers guarantees that are relatively closer to those offered for a deterministic algorithm. By contrast, the algorithm in this paper, like the algorithm in , is designed to allow the stochastic gradient estimates to be potentially much less accurate, and in such a context we are satisfied with offering convergence guarantees in expectation. We compare the numerical performance of our proposed algorithm with that in to demonstrate that there are settings in which our proposed approach has advantages in practice.

1.2 Notation

We use R to denote the set of real numbers, R̄ to denote the set of extended-real numbers (i.e., R̄ := R ∪ {−∞, ∞}), and R≥a (resp., R>a) to denote the set of real numbers greater than or equal to (resp., greater than) a ∈ R. We append a superscript to such a set to denote the space of vectors or matrices whose elements are restricted to the indicated set; e.g., we use Rn to denote the set of n-dimensional real vectors and Rm×n to denote the set of m-by-n-dimensional real matrices. We use N := {1, 2, ...} to denote the set of positive integers and, given n ∈ N, we use [n] := {1, ..., n} to denote the set of positive integers less than or equal to n.
Given (a, b) ∈ Rn × Rn, we write a ⊥ b to mean, with ai and bi denoting the ith elements of a and b, respectively, that ai = 0 and/or bi = 0 for all i ∈ [n]. Given real symmetric matrices A ∈ Rn×n and B ∈ Rn×n, we write A ⪰ B (resp., A ≻ B) to indicate that A − B is positive semidefinite (resp., positive definite). Given H ∈ Rn×n with H ≻ 0 and a ∈ Rn, we denote the norm ‖a‖H := √(aᵀHa). Our problem of interest is defined with respect to a variable x ∈ Rn and the algorithm that we propose and analyze is iterative, meaning that, in any run, it generates an iterate sequence that we denote as {xk} with xk ∈ Rn for all generated k ∈ N, i.e., {xk} ⊂ Rn. (We use such notation throughout the paper when the elements of a sequence are contained within a given set. We say "for all generated k ∈ N" since our proposed algorithm might terminate finitely. Whether a subscript is being used to indicate the element of a vector or the index number of a sequence is always made clear by the context. The ith element of an iterate xk is denoted [xk]i.) We use subscripts similarly to denote other quantities corresponding to each iteration of the algorithm; e.g., we introduce a merit parameter denoted as τ ∈ R>0 whose value in iteration k ∈ N is denoted as τk ∈ R>0, and corresponding to a constraint function c (see problem (1) below) we denote its value at xk as ck := c(xk). The iteration-dependent quantities mentioned in the previous paragraph, and additional ones introduced in the description of our algorithm, represent realizations of the random variables in a stochastic process generated by the algorithm. Specifically, the behavior of our algorithm is dictated by prescribed initial conditions and a sequence of stochastic objective gradient estimators that we denote by {Gk}.
After proving preliminary results that hold for every run of the algorithm, we present our ultimate convergence theory in terms of a filtration defined in terms of σ-algebras dependent on the initial conditions of the algorithm and {Gk}.

1.3 Organization

A statement of our problem of interest and preliminary assumptions about its objective and constraint functions, as well as about user-defined quantities in our proposed algorithm, are stated in Section 2. A description of our proposed algorithm is provided in Section 3. Convergence-in-expectation of the algorithm is proved under reasonable assumptions in Section 4. The results of numerical experiments are presented in Section 5 and concluding remarks are given in Section 6.

2 Setting

We formulate our problem of interest as

    min_{x ∈ Rn} f(x)  subject to (s.t.)  c(x) = 0 and x ≥ 0,  with f(x) = Eω[F(x, ω)],   (1)

where f : Rn → R and c : Rn → Rm are continuously differentiable, ω is a random variable with associated probability space (Ω, F, P), F : Rn × Ω → R, and Eω[·] denotes expectation taken with respect to the distribution of ω. Our algorithm and analysis extend easily to the setting in which the nonnegativity constraint in (1) is generalized to l ≤ x ≤ u for some (l, u) ∈ Rn × Rn with li ≤ ui for all i ∈ [n]; we merely consider nonnegativity in (1) for the sake of notational simplicity. It is also worth mentioning that any smooth constrained optimization problem can be reformulated as (1) (or at least as such a problem with generalized bound constraints); e.g., inequality constraints cI(x) ≤ 0, where cI : Rn → RmI is continuously differentiable, can be reformulated to fit into the form of (1) through the incorporation of slack variables, say s ∈ RmI, to have the constraints cI(x) + s = 0 and s ≥ 0. We make the following assumption throughout the remainder of the paper pertaining to the functions in problem (1) and our proposed algorithm.
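The slack-variable reformulation just described can be written out explicitly. This display is a sketch consistent with the paper's notation, not copied from the paper:

```latex
% Reformulating inequality constraints c_I(x) <= 0 via slack variables s,
% as described in the text, yields a problem of the form (1) with bounds:
\min_{x \in \mathbb{R}^n,\; s \in \mathbb{R}^{m_I}} \; f(x)
\quad \text{s.t.} \quad c(x) = 0, \quad c_I(x) + s = 0, \quad x \ge 0, \; s \ge 0.
```

Each inequality c_I(x) ≤ 0 holds exactly when some s ≥ 0 satisfies c_I(x) + s = 0, so the reformulated problem has the same feasible x as the original while involving only equality constraints and nonnegativity bounds.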
As seen in the following section, our algorithm seeks feasibility and stationarity with respect to (1) by generating an iterate sequence that stays feasible with respect to the bound constraints, meaning that, in any run of the algorithm, xk ∈ Rn≥0 for all generated k ∈ N.

Assumption 1. Let X ⊂ Rn be an open convex set that almost-surely contains the iterate sequence {xk} ⊂ Rn≥0 generated in any realization of a run of the algorithm. The objective function f : Rn → R is continuously differentiable and bounded below over X and the objective gradient function ∇f : Rn → Rn is Lipschitz continuous and bounded in norm over X. Similarly, for all i ∈ [m], the constraint function ci : Rn → R is continuously differentiable and bounded over X and the constraint gradient function ∇ci : Rn → Rn is Lipschitz continuous and bounded in norm over X. Finally, the constraint Jacobian ∇cᵀ : Rn → Rm×n has full row rank over X.

Under Assumption 1, there exist finf ∈ R and a tuple of positive constants (κ∇f, κc, κ∇c, L, Γ) ∈ R>0 × R>0 × R>0 × R>0 × R>0 such that for all x ∈ X one has

    f(x) ≥ finf,  ‖∇f(x)‖2 ≤ κ∇f,  ‖c(x)‖2 ≤ κc,  and  ‖∇c(x)‖2 ≤ κ∇c,   (2)

and for all (x, x̄) ∈ X × X one has

    ‖∇f(x) − ∇f(x̄)‖2 ≤ L‖x − x̄‖2  and  ‖∇c(x)ᵀ − ∇c(x̄)ᵀ‖2 ≤ Γ‖x − x̄‖2.   (3)

In addition, due to the continuous differentiability of the objective and constraint functions and the full row rank of the constraint Jacobians, it follows that at any (local) minimizer of (1), call it x ∈ Rn, there exist y ∈ Rm and z ∈ Rn such that the following Karush-Kuhn-Tucker (KKT) conditions are satisfied:

    ∇f(x) + ∇c(x)y − z = 0,  c(x) = 0,  0 ≤ x ⊥ z ≥ 0.   (4)

We refer to any x ∈ Rn such that there exists (y, z) ∈ Rm × Rn satisfying (4) as a first-order stationary point (or KKT point) with respect to (1).
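A quick numerical sanity check of the KKT conditions (4) on a toy instance of problem (1). The problem data here are invented for illustration and do not come from the paper:

```python
# Toy instance of problem (1): min (x1-1)^2 + (x2-1)^2  s.t.  x1 + x2 = 1, x >= 0.
# By symmetry the minimizer is x = (0.5, 0.5) with multipliers y = 1, z = (0, 0).

def kkt_residuals(x, y, z):
    """Residuals of the KKT system (4) for this toy problem."""
    grad_f = [2.0 * (xi - 1.0) for xi in x]            # objective gradient
    grad_c = [1.0, 1.0]                                 # gradient of c(x) = x1 + x2 - 1
    stationarity = [grad_f[i] + grad_c[i] * y - z[i] for i in range(2)]
    feasibility = x[0] + x[1] - 1.0                     # c(x)
    complementarity = [x[i] * z[i] for i in range(2)]   # 0 <= x  perp  z >= 0
    return stationarity, feasibility, complementarity

stat, feas, comp = kkt_residuals([0.5, 0.5], 1.0, [0.0, 0.0])
print(stat, feas, comp)  # all zero: the KKT conditions hold at (0.5, 0.5)
```

At this point both bounds are inactive (x > 0), so complementarity forces z = 0 and stationarity pins down the equality multiplier y = 1.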
Since our algorithm generates iterates that are feasible with respect to the bound constraints, but not necessarily with respect to the equality constraints, we need to account for the possible existence of points that are infeasible for (1), but are stationary with respect to the minimization of a constraint violation measure over $\mathbb{R}^n_{\geq 0}$. We refer to a point that is infeasible for (1) as an infeasible stationary point if it is stationary with respect to the minimization of $\tfrac{1}{2}\|c(x)\|_2^2$ subject to $x \in \mathbb{R}^n_{\geq 0}$, meaning
\[ 0 \leq x \perp \nabla c(x) c(x) \geq 0. \tag{5} \]
Each iteration of our algorithm requires a stochastic estimate of the gradient of the objective at the current iterate. In a given run at iteration $k \in \mathbb{N}$, the realization of the iterate and gradient estimate is $(x_k, g_k)$, which later in our analysis we denote as a realization of the pair of random variables $(X_k, G_k)$. (See Section 4.3 for a complete description of a stochastic process that we analyze.) With respect to the gradient estimators, we make Assumption 2 below. For the prescribed (i.e., not random) sequence $\{\rho_k\} \subset \mathbb{R}_{>0}$ referenced in the assumption, we state precise conditions that it must satisfy in Section 4.3. In the assumption and throughout the remainder of the paper, we use $\mathbb{E}_k[\cdot]$ to denote expectation taken with respect to the distribution of $\omega$ conditioned on a trace σ-algebra of an event $E$, denoted by $\mathcal{F}_k$; see Section 4.3.

Assumption 2. For a prescribed $\{\rho_k\} \subset \mathbb{R}_{>0}$, one finds for all $k \in \mathbb{N}$ that
\[ \mathbb{E}_k[G_k] = \nabla f(X_k) \quad \text{and} \quad \mathbb{E}_k[\|G_k - \nabla f(X_k)\|_2^2] \leq \rho_k. \tag{6} \]
One might relax the latter condition in (6) and obtain guarantees that are similar to those that we prove; see, e.g., the related literature. We employ (6) for simplicity, since it is sufficient for demonstrating the guarantees that our algorithmic approach can offer.
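Condition (5) is straightforward to verify numerically at a candidate point. The following is a minimal, illustrative checker (pure Python; the function name and interface are hypothetical, not from the paper):

```python
def is_infeasible_stationary(x, c, jac_T, tol=1e-10):
    """Check condition (5): 0 <= x  perp  grad-c(x) c(x) >= 0.
    x: list of n floats; c: list of m constraint values c(x);
    jac_T: m-by-n nested list representing grad-c(x)^T."""
    n, m = len(x), len(c)
    # g = grad-c(x) c(x), i.e., the gradient of 0.5*||c(x)||_2^2,
    # computed here as (grad-c(x)^T)^T c(x).
    g = [sum(jac_T[i][j] * c[i] for i in range(m)) for j in range(n)]
    for xj, gj in zip(x, g):
        if xj < -tol or gj < -tol:   # nonnegativity of x and of the gradient
            return False
        if abs(xj * gj) > tol:       # complementarity: x_j * g_j = 0
            return False
    return True
```

For instance, with $n = 2$, $m = 1$, $c(x) = 0.5$, and $\nabla c(x)^T = [1, 0]$, the point $x = (0, 1)$ satisfies (5), since the zero component of $x$ pairs with a positive gradient component and vice versa.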
Each iteration of our algorithm also makes use of a symmetric positive definite (SPD) matrix, denoted $H_k \in \mathbb{R}^{n \times n}$ for iteration $k \in \mathbb{N}$, to define a quadratic term in the subproblem that is solved to compute the search direction. For simplicity, we assume that the sequence $\{H_k\}$ is prescribed; e.g., one may consider $H_k = I$ for all $k \in \mathbb{N}$. More generally, one could consider a more sophisticated scheme, such as setting, for all $k \in \mathbb{N}$, the matrix $H_k$ as a stochastic estimate of the Hessian of the objective function and/or a Lagrangian function, as long as it is sufficiently positive definite and bounded and the choice is made to be conditionally uncorrelated with the stochastic gradient estimate. However, since considering such a loose requirement would only obfuscate our analysis without adding significant value, we assume for simplicity that $\{H_k\}$ is prescribed and merely satisfies the following.

Assumption 3. There exists $(\kappa_H, \zeta) \in \mathbb{R}_{>0} \times \mathbb{R}_{>0}$ with $\kappa_H \geq \zeta$ such that, for all $k \in \mathbb{N}$, the SPD matrix $H_k \in \mathbb{R}^{n \times n}$ satisfies $\zeta I \preceq H_k \preceq \kappa_H I$.

Observe from Assumption 3 that we are not assuming that accurate second-order information is being used by the algorithm. Hence, our convergence guarantees are of the type that may be expected for a first-order-type algorithm, although in situations when it is computationally tractable, one might find better performance if $H_k$ incorporates some (approximate) second-order derivative information.

3 Algorithm

In this section, we present our proposed algorithm. We state the algorithm in terms of a particular realization of it (e.g., denoting the iterate for each $k \in \mathbb{N}$ as $x_k$), although our subsequent analysis (starting in Section 4.3) is written in terms of the stochastic process that the algorithm defines. Each iteration of our algorithm proceeds as follows.
First, given the current iterate $x_k \in \mathbb{R}^n_{\geq 0}$, the algorithm computes a direction whose purpose is to determine the progress that can be made in terms of reducing a measure of violation of a linearization of the equality constraints subject to the bound constraints. This is done in a manner that regularizes the component of the direction that lies in the null space of the constraint Jacobian. Specifically, the iteration commences by computing a direction $v_k := u_k + \nabla c(x_k) w_k \in \mathbb{R}^n$, where $u_k \in \mathrm{Null}(\nabla c(x_k)^T)$ and $\nabla c(x_k) w_k \in \mathrm{Range}(\nabla c(x_k))$, by solving the quadratic optimization subproblem
\[ \min_{u \in \mathbb{R}^n,\, w \in \mathbb{R}^m} \ \tfrac{1}{2}\|c_k + \nabla c(x_k)^T \nabla c(x_k) w\|_2^2 + \tfrac{1}{2}\mu_k\|u\|_2^2 \quad \text{s.t.} \ \nabla c(x_k)^T u = 0 \ \text{and} \ x_k + u + \nabla c(x_k) w \geq 0, \tag{7} \]
where $\mu_k \in \mathbb{R}_{>0}$ is a user-prescribed parameter. Observe that since $x_k \in \mathbb{R}^n_{\geq 0}$, this subproblem is always feasible, and by construction it is convex. Generally, the solution of (7) might not be unique, but in our setting it is unique since $\nabla c(x_k)^T$ has full row rank. In our analysis, we show that the solution of subproblem (7) is given by $(u_k, w_k) = (0, 0)$ if and only if the current iterate $x_k$ is stationary with respect to the minimization of $\tfrac{1}{2}\|c(x)\|_2^2$ over $x \in \mathbb{R}^n_{\geq 0}$. This means, e.g., that if $c_k \neq 0$, but the solution of (7) is $(u_k, w_k) = (0, 0)$—which, by the Fundamental Theorem of Linear Algebra, occurs if and only if $v_k = u_k + \nabla c(x_k) w_k = 0$—then it is reasonable to terminate since $x_k$ is an infeasible stationary point (see (5)), as in our algorithm.

After computing $v_k \in \mathbb{R}^n$ by solving (7) and generating a stochastic objective gradient estimate $g_k \in \mathbb{R}^n$ (see Assumption 2), the algorithm next computes a search direction $d_k \in \mathbb{R}^n$ by solving the quadratic optimization subproblem
\[ \min_{d \in \mathbb{R}^n} \ g_k^T d + \tfrac{1}{2} d^T H_k d \quad \text{s.t.} \ \nabla c(x_k)^T d = \nabla c(x_k)^T v_k \ \text{and} \ x_k + d \geq 0. \tag{8} \]
By construction, this subproblem is feasible, and under Assumption 3 it is convex.
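To make subproblem (8) concrete, consider its simplest special case: $H_k = I$ and no equality constraints ($m = 0$, so $v_k = 0$ and the linearized-feasibility constraint is vacuous). Then (8) reduces to $\min_d \ g_k^T d + \tfrac{1}{2}\|d\|_2^2$ subject to $x_k + d \geq 0$, which separates by component and has the closed-form solution $d_j = \max(-g_j, -x_j)$. A minimal sketch under these assumptions (the function name is illustrative):

```python
def solve_d_special_case(x, g):
    """Solve subproblem (8) in the special case H_k = I and m = 0:
        min_d  g^T d + 0.5*||d||_2^2   s.t.  x + d >= 0.
    The problem separates: each d_j minimizes g_j*d_j + 0.5*d_j^2
    over d_j >= -x_j, whose minimizer is the clamp d_j = max(-g_j, -x_j)."""
    return [max(-gj, -xj) for xj, gj in zip(x, g)]
```

For example, $x = (1, 0)$ and $g = (2, -3)$ give $d = (-1, 3)$: the first component is clamped at the bound $x_1 + d_1 = 0$, while the second takes the unconstrained step $-g_2$. This is exactly the Euclidean projection of $-g$ onto the feasible set, consistent with the projection viewpoint developed later in the analysis.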
The search direction $d_k$ is designed to achieve the same progress toward linearized feasibility within the nonnegative orthant that is achieved by $v_k$, while within the null space of $\nabla c(x_k)^T$ and the nonnegative orthant it aims to minimize a (stochastically estimated) local quadratic approximation of the objective function at $x_k$.

The remainder of the $k$th iteration proceeds in a similar manner as in [1, 12]. In particular, with the $\ell_2$-norm merit function in mind, namely, $\phi : \mathbb{R}^n \times \mathbb{R}_{>0} \to \mathbb{R}$ defined by $\phi(x, \tau) = \tau f(x) + \|c(x)\|_2$, the algorithm next sets a value for the merit parameter $\tau_k \in \mathbb{R}_{>0}$. This is done by considering a local model of this merit function, namely, $l : \mathbb{R}^n \times \mathbb{R}_{>0} \times \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ defined by $l(x, \tau, g, d) = \tau(f(x) + g^T d) + \|c(x) + \nabla c(x)^T d\|_2$, and in particular the reduction in this model defined for all $k \in \mathbb{N}$ by
\[ \Delta l(x_k, \tau_k, g_k, d_k) := l(x_k, \tau_k, g_k, 0) - l(x_k, \tau_k, g_k, d_k) = -\tau_k g_k^T d_k + \|c_k\|_2 - \|c_k + \nabla c(x_k)^T d_k\|_2, \tag{9} \]
and setting $\tau_k$ such that this reduction is sufficiently large. Specifically, if $d_k \neq 0$, then with user-prescribed $(\epsilon_\tau, \sigma) \in (0,1) \times (0,1)$, the algorithm first sets
\[ \tau_k^{\rm trial} \leftarrow \begin{cases} \infty & \text{if } g_k^T d_k + \tfrac{1}{2} d_k^T H_k d_k \leq 0, \\[4pt] \dfrac{(1 - \sigma)(\|c_k\|_2 - \|c_k + \nabla c(x_k)^T d_k\|_2)}{g_k^T d_k + \tfrac{1}{2} d_k^T H_k d_k} & \text{otherwise}, \end{cases} \tag{10} \]
then sets the merit parameter value as
\[ \tau_k \leftarrow \begin{cases} \tau_{k-1} & \text{if } \tau_{k-1} \leq \tau_k^{\rm trial}, \\ \min\{(1 - \epsilon_\tau)\tau_{k-1}, \tau_k^{\rm trial}\} & \text{otherwise}. \end{cases} \tag{11} \]
(The value $\tau_0 \in \mathbb{R}_{>0}$ is also prescribed by the user.) On the other hand, if $d_k = 0$, then the algorithm simply sets $\tau_k^{\rm trial} \leftarrow \infty$ and $\tau_k \leftarrow \tau_{k-1}$. We show in our analysis (see Lemma 8) that this procedure for setting $\tau_k$ ensures that $\Delta l(x_k, \tau_k, g_k, d_k)$ is sufficiently large relative to the squared norm of the search direction and the improvement offered toward linearized feasibility.

For use in the step size procedure, the algorithm next sets a value $\xi_k \in \mathbb{R}_{>0}$ (referred to as the ratio parameter) that acts as an estimate of a lower bound for the ratio between the model reduction and a multiple of the squared norm of the search direction.
Specifically, if $d_k \neq 0$, it sets $\xi_k^{\rm trial} \leftarrow \Delta l(x_k, \tau_k, g_k, d_k) / (\tau_k \|d_k\|_2^2)$, then
\[ \xi_k \leftarrow \begin{cases} \xi_{k-1} & \text{if } \xi_{k-1} \leq \xi_k^{\rm trial}, \\ \min\{(1 - \epsilon_\xi)\xi_{k-1}, \xi_k^{\rm trial}\} & \text{otherwise}, \end{cases} \tag{12} \]
where $(\xi_0, \epsilon_\xi) \in \mathbb{R}_{>0} \times (0, 1)$ are user-prescribed parameters; see [1, 12] for further motivation. On the other hand, if $d_k = 0$, then it sets $\xi_k^{\rm trial} \leftarrow \infty$ and $\xi_k \leftarrow \xi_{k-1}$.

The step size selection procedure, which for all $k \in \mathbb{N}$ chooses the step size $\alpha_k \in \mathbb{R}_{>0}$, can now be summarized as follows. First, suppose that $d_k \neq 0$. With user-prescribed $\eta \in (0, 1)$, $\theta \in \mathbb{R}_{>0}$, and $\{\beta_k\}$ with $\beta_k \in (0, 1]$ for all $k \in \mathbb{N}$ such that
\[ \alpha_k^{\min} \leftarrow \frac{2(1 - \eta)\beta_k \xi_k \tau_k}{\tau_k L + \Gamma} \in (0, 1] \ \text{for all } k \in \mathbb{N}, \tag{13} \]
and with the strongly convex function $\varphi_k : \mathbb{R}_{\geq 0} \to \mathbb{R}$ defined by
\[ \varphi_k(\alpha) = (\eta - 1)\alpha\beta_k \Delta l(x_k, \tau_k, g_k, d_k) + \|c_k + \alpha\nabla c(x_k)^T d_k\|_2 - \|c_k\|_2 + \alpha(\|c_k\|_2 - \|c_k + \nabla c(x_k)^T d_k\|_2) + \tfrac{1}{2}(\tau_k L + \Gamma)\alpha^2\|d_k\|_2^2, \tag{14} \]
the algorithm sets the values
\[ \alpha_k^\varphi \leftarrow \max\{\alpha \in \mathbb{R}_{\geq 0} : \varphi_k(\alpha) \leq 0\} \quad \text{and} \quad \alpha_k^{\max} \leftarrow \min\{1, \alpha_k^\varphi, \alpha_k^{\min} + \theta\beta_k\}. \tag{15} \]
The algorithm then chooses the step size $\alpha_k$ as any value in $[\alpha_k^{\min}, \alpha_k^{\max}]$. Second, if $d_k = 0$, then the algorithm simply sets all step size values to 1. A complete statement of our algorithm is given as Algorithm 1.
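The updates (10), (11), and (13) are simple scalar computations once the subproblem quantities are known. The following hedged sketch assumes the decrease factor in (11) is a constant written here as `eps_tau`; all names are illustrative, not from any released code for this work:

```python
import math

def update_tau(tau_prev, gd, dHd, c_norm, c_lin_norm, sigma=0.5, eps_tau=0.1):
    """Merit parameter update (10)-(11).
    gd = g_k^T d_k; dHd = d_k^T H_k d_k; c_norm = ||c_k||_2;
    c_lin_norm = ||c_k + grad-c(x_k)^T d_k||_2."""
    denom = gd + 0.5 * dHd
    if denom <= 0.0:
        tau_trial = math.inf                      # first case of (10)
    else:
        tau_trial = (1.0 - sigma) * (c_norm - c_lin_norm) / denom
    if tau_prev <= tau_trial:                     # (11): keep tau if small enough
        return tau_prev
    return min((1.0 - eps_tau) * tau_prev, tau_trial)

def alpha_min(beta, xi, tau, L, Gamma, eta=0.5):
    """Lower step size bound (13): 2(1-eta) beta xi tau / (tau L + Gamma)."""
    return 2.0 * (1.0 - eta) * beta * xi * tau / (tau * L + Gamma)
```

Note that `update_tau` never increases the merit parameter, consistent with the monotone nonincrease of $\{\tau_k\}$ established in the analysis.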
Algorithm 1 Stochastic SQP

Require: $x_1 \in \mathbb{R}^n_{\geq 0}$; $\{\mu_k\} \subset \mathbb{R}_{>0}$; $\{H_k\} \subset \mathbb{R}^{n \times n}$ satisfying Assumption 3; $\tau_0 \in \mathbb{R}_{>0}$; $\xi_0 \in \mathbb{R}_{>0}$; $\{\sigma, \eta, \epsilon_\tau, \epsilon_\xi\} \subset (0, 1)$; $\{\beta_k\} \subset (0, 1]$ satisfying (13); $\theta \in \mathbb{R}_{>0}$; $\{\rho_k\} \subset \mathbb{R}_{>0}$; Lipschitz constants $L \in \mathbb{R}_{>0}$ and $\Gamma \in \mathbb{R}_{>0}$ (see (3))

1: for $k \in \mathbb{N}$ do
2:   compute $v_k \in \mathbb{R}^n$ by solving (7)
3:   if $c_k \neq 0$ and $v_k = 0$ then terminate and return $x_k$ (infeasible stationary)
4:   compute $g_k \in \mathbb{R}^n$ (recall Assumption 2)
5:   compute $d_k \in \mathbb{R}^n$ by solving (8)
6:   if $d_k = 0$ then
7:     set $\tau_k^{\rm trial} \leftarrow \infty$, $\tau_k \leftarrow \tau_{k-1}$, $\xi_k^{\rm trial} \leftarrow \infty$, and $\xi_k \leftarrow \xi_{k-1}$
8:     set $\alpha_k^{\min} \leftarrow 1$, $\alpha_k^\varphi \leftarrow 1$, $\alpha_k^{\max} \leftarrow 1$, and $\alpha_k \leftarrow 1$
9:   else
10:    set $\tau_k^{\rm trial}$ by (10) and $\tau_k$ by (11)
11:    set $\xi_k^{\rm trial}$ and $\xi_k$ by (12)
12:    set $\alpha_k^{\min}$ by (13) and both $\alpha_k^\varphi$ and $\alpha_k^{\max}$ by (15)
13:    choose $\alpha_k \in [\alpha_k^{\min}, \alpha_k^{\max}]$
14:  end if
15:  set $x_{k+1} \leftarrow x_k + \alpha_k d_k$
16: end for

4 Analysis

In this section, we provide theoretical results for Algorithm 1. We begin by introducing common assumptions under which one can establish stationarity measures for problem (1) that are defined by solutions of (7) and/or (8). These stationarity measures allow us to connect our convergence guarantees for Algorithm 1 with stationarity conditions for (1). Then, under Assumptions 1 and 3, we prove generally applicable results pertaining to the behavior of algorithmic quantities in any run of the algorithm. These results reveal that the algorithm is well defined in the sense that any run will either terminate and return an infeasible stationary point or generate an infinite sequence of iterates. We then consider convergence properties of the algorithm in the event that the (monotonically nonincreasing) merit parameter sequence eventually produces values that are sufficiently small, yet bounded away from zero, which, as shown in our analysis, means that the sequence ultimately becomes constant at a sufficiently small value. This analysis, which includes our main convergence results for the algorithm, is provided under Assumption 4 stated on page 16.
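Before turning to the analysis, the control flow of Algorithm 1 can be summarized in code. In the structural sketch below, the subproblem solves (7) and (8) are abstracted behind user-supplied callables, and the $\tau$/$\xi$/$\alpha$ updates of lines 10 through 13 are collapsed into a fixed placeholder step size; the interface is entirely hypothetical:

```python
def stochastic_sqp(x1, solve_v, solve_d, sample_gradient, constraints,
                   max_iter=100):
    """Structural sketch of Algorithm 1 (lines 1-16).
    solve_v(x, c) and solve_d(x, g, v) stand in for subproblems (7)/(8);
    sample_gradient(x) plays the role of the estimator in Assumption 2."""
    x = list(x1)
    for _ in range(max_iter):
        ck = constraints(x)
        vk = solve_v(x, ck)                        # line 2: subproblem (7)
        if any(c != 0 for c in ck) and all(v == 0 for v in vk):
            return x, "infeasible_stationary"      # line 3
        gk = sample_gradient(x)                    # line 4: Assumption 2
        dk = solve_d(x, gk, vk)                    # line 5: subproblem (8)
        if all(d == 0 for d in dk):
            alpha = 1.0                            # lines 7-8
        else:
            alpha = 0.5                            # placeholder for lines 10-13
        x = [xj + alpha * dj for xj, dj in zip(x, dk)]   # line 15
    return x, "max_iter"
```

Because the bound constraints of (8) ensure $x_k + d_k \geq 0$ and $\alpha_k \leq 1$, the loop preserves iterate nonnegativity whenever the supplied `solve_d` does.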
We follow this analysis with a section of theoretical results related to the occurrence of the event in Assumption 4. As in the analysis for the equality-constraints-only setting, this discussion illuminates the fact that while the event in Assumption 4 is not always guaranteed to occur due to the looseness of our assumptions about the properties of the stochastic gradient estimates, the event represents likely behavior in practice, which shows that our convergence results for the algorithm are meaningful for real-world situations. We conclude this section with a discussion of the behavior of the algorithm in the deterministic setting, i.e., when the true gradient of the objective is employed in all iterations. This discussion is meant to provide confidence to a user that our algorithm is based on one that has state-of-the-art convergence properties under common assumptions in the deterministic setting.

4.1 Subproblems and Stationarity Measures

We begin by showing that subproblem (7) yields a zero solution if and only if the point defining the subproblem is feasible for problem (1) or an infeasible stationary point.

Lemma 1. Suppose that Assumption 1 holds, $x \in \mathcal{X} \cap \mathbb{R}^n_{\geq 0}$, and, given $\mu \in \mathbb{R}_{>0}$, consider the quadratic optimization problem (recall (7))
\[ \min_{u \in \mathbb{R}^n,\, w \in \mathbb{R}^m} \ \tfrac{1}{2}\|c(x) + \nabla c(x)^T \nabla c(x) w\|_2^2 + \tfrac{1}{2}\mu\|u\|_2^2 \quad \text{s.t.} \ \nabla c(x)^T u = 0 \ \text{and} \ x + u + \nabla c(x) w \geq 0. \tag{16} \]
Then, the unique optimal solution of problem (16) is $(u, w) = (0, 0)$ if and only if $x$ is feasible for problem (1) or an infeasible stationary point (i.e., it satisfies (5)), whereas $(u, w) \neq (0, 0)$ if and only if $\|c(x)\|_2 > \|c(x) + \nabla c(x)^T \nabla c(x) w\|_2$.

Proof. Suppose the conditions of the lemma hold and let $(u, w)$ be the unique optimal solution of (16). Since $x \in \mathbb{R}^n_{\geq 0}$, it follows that $(0, 0)$ is feasible for (16).
In addition, necessary and sufficient optimality conditions for (16) are that, corresponding to $(u, w) \in \mathbb{R}^n \times \mathbb{R}^m$, there exists $(\gamma, \delta) \in \mathbb{R}^m \times \mathbb{R}^n$ with
\[ \begin{aligned} &\nabla c(x)^T \nabla c(x) c(x) + \nabla c(x)^T \nabla c(x) \nabla c(x)^T \nabla c(x) w - \nabla c(x)^T \delta = 0, \\ &\mu u + \nabla c(x)\gamma - \delta = 0, \quad \nabla c(x)^T u = 0, \quad \text{and} \quad 0 \leq \delta \perp x + u + \nabla c(x) w \geq 0. \end{aligned} \tag{17} \]
If $(u, w) = (0, 0)$, then it follows from (17) that
\[ \nabla c(x)^T \nabla c(x) c(x) - \nabla c(x)^T \delta = 0, \quad \nabla c(x)\gamma - \delta = 0, \quad \text{and} \quad 0 \leq \delta \perp x \geq 0. \tag{18} \]
Since $\nabla c(x)^T$ has full row rank, (18) implies $\gamma = (\nabla c(x)^T \nabla c(x))^{-1}\nabla c(x)^T \delta = c(x)$, $\delta = \nabla c(x) c(x)$, and $0 \leq \nabla c(x) c(x) \perp x \geq 0$, which from (5) means that $x$ is either feasible or an infeasible stationary point, as desired. On the other hand, if $x$ is either feasible or an infeasible stationary point, meaning $0 \leq \nabla c(x) c(x) \perp x \geq 0$, then $u = 0$, $w = 0$, $\gamma = c(x)$, and $\delta = \nabla c(x) c(x)$ satisfy (17), and this solution (i.e., $(u, w) = (0, 0)$) is unique since the objective of (16) is strongly convex.

Now let us show that the unique optimal solution of (16) is $(u, w) \neq (0, 0)$ if and only if $\|c(x)\|_2 > \|c(x) + \nabla c(x)^T \nabla c(x) w\|_2$. If $\|c(x)\|_2 > \|c(x) + \nabla c(x)^T \nabla c(x) w\|_2$, then $w \neq 0$ follows trivially, giving the desired conclusion. To prove the reverse implication, let us consider two cases. First, if $u \neq 0$, then, since $(0, 0)$ is feasible for (16),
\[ \tfrac{1}{2}\|c(x)\|_2^2 \geq \tfrac{1}{2}\|c(x) + \nabla c(x)^T \nabla c(x) w\|_2^2 + \tfrac{1}{2}\mu\|u\|_2^2 > \tfrac{1}{2}\|c(x) + \nabla c(x)^T \nabla c(x) w\|_2^2, \]
as desired. Second, if $u = 0$ and $w \neq 0$, then $w$ is the minimizer of the strongly convex objective $\tfrac{1}{2}\|c(x) + \nabla c(x)^T \nabla c(x) w\|_2^2$ subject to $x + \nabla c(x) w \geq 0$. Since $0$ is feasible for this problem, $w \neq 0$ means that $\tfrac{1}{2}\|c(x)\|_2^2 > \tfrac{1}{2}\|c(x) + \nabla c(x)^T \nabla c(x) w\|_2^2$, as desired.

We now show that, under common assumptions and given $x_k \in \mathbb{R}^n_{\geq 0}$, the quantity $\|v_k\|_2^2$, where $v_k \in \mathbb{R}^n$ solves subproblem (7), represents a stationarity measure with respect to the problem to minimize $\tfrac{1}{2}\|c(x)\|_2^2$ subject to $x \in \mathbb{R}^n_{\geq 0}$. (The assumption in the lemma that $\mu_k = \mu \in \mathbb{R}_{>0}$ for all $k \in \mathbb{N}$ could be relaxed; see Remark 1 at the end of this subsection.
We consider this case for the sake of brevity.)

Lemma 2. Suppose that Assumption 1 holds and there exists an infinite $\mathcal{S} \subseteq \mathbb{N}$ such that for some sequence $\{x_k\} \subset \mathcal{X} \cap \mathbb{R}^n_{\geq 0}$ one finds $\{x_k\}_{k \in \mathcal{S}} \to x_*$ for some $x_* \in \mathcal{X} \cap \mathbb{R}^n_{\geq 0}$ where, with $\mathcal{A}(x) := \{i \in [n] : x_i = 0\}$, $I_{\mathcal{A}(x)}$ denoting the matrix composed of the rows of $I \in \mathbb{R}^{n \times n}$ corresponding to indices in $\mathcal{A}(x)$, and $\nabla c(x)_{\mathcal{A}(x)}$ denoting the matrix composed of the rows of $\nabla c(x)$ corresponding to indices in $\mathcal{A}(x)$, one finds that (i) $[\nabla c(x_*) c(x_*)]_i > 0$ for all $i \in \mathcal{A}(x_*)$ and (ii) the following matrix has full row rank:
\[ \begin{bmatrix} 0 & \nabla c(x_*)^T \\ \nabla c(x_*)_{\mathcal{A}(x_*)} & I_{\mathcal{A}(x_*)} \end{bmatrix}. \]
Then, with $\mu_k = \mu \in \mathbb{R}_{>0}$ for all $k \in \mathbb{N}$, and with $(u_k, w_k)$ solving subproblem (7) and $v_k := u_k + \nabla c(x_k) w_k$ for all $k \in \mathbb{N}$, it follows that $x_*$ satisfies the stationarity conditions (5) if and only if $\{v_k\}_{k \in \mathcal{S}} \to 0$.

Proof. Let $\mathcal{A}_* := \mathcal{A}(x_*)$ and $j(x) := \nabla c(x)^T$, and consider the linear system
\[ \begin{bmatrix} j(x) j(x)^T j(x) j(x)^T & 0 & 0 & -j(x) I_{\mathcal{A}_*}^T \\ 0 & \mu I & j(x)^T & -I_{\mathcal{A}_*}^T \\ 0 & j(x) & 0 & 0 \\ j(x)^T_{\mathcal{A}_*} & I_{\mathcal{A}_*} & 0 & 0 \end{bmatrix} \begin{bmatrix} w \\ u \\ \gamma \\ \delta_{\mathcal{A}_*} \end{bmatrix} = \begin{bmatrix} -j(x) j(x)^T c(x) \\ 0 \\ 0 \\ -x_{\mathcal{A}_*} \end{bmatrix}. \]
Since, under the conditions of the lemma, the matrix in this linear system is nonsingular when $x = x_*$ (e.g., this follows from [39, Theorem 1.5.1] and (ii)), it follows that there exists an open ball $\mathcal{B}_*$ centered at $x_*$ such that, for each $x \in \mathcal{B}_* \cap \mathcal{X} \cap \mathbb{R}^n_{\geq 0}$, this linear system has a unique solution, call it $(w(x), u(x), \gamma(x), \delta_{\mathcal{A}_*}(x))$, and—due to continuity of the left-hand-side matrix and right-hand-side vector with respect to $x$—this solution varies continuously over $\mathcal{B}_* \cap \mathcal{X} \cap \mathbb{R}^n_{\geq 0}$. If $x_*$ satisfies (5), then it follows that $(0, 0, c(x_*), [j(x_*)^T c(x_*)]_{\mathcal{A}_*})$ (with $[j(x_*)^T c(x_*)]_{\mathcal{A}_*} > 0$) is the unique solution of the system at $x = x_*$, and for all $x \in \mathcal{B}_* \cap \mathcal{X} \cap \mathbb{R}^n_{\geq 0}$ the solution of the system in conjunction with $\delta_i = 0$ for all $i \notin \mathcal{A}_*$ satisfies (17), meaning that the components $(u(x), w(x))$ represent the unique optimal solution of problem (16). Hence, with respect to the quantities in the lemma and using Assumption 1, one finds that $\{v_k\}_{k \in \mathcal{S}} \to 0$, as desired.
To prove the reverse inclusion, suppose that $\{v_k\}_{k \in \mathcal{S}} \to 0$, from which it follows by the Fundamental Theorem of Linear Algebra and (ii) that $\{(u_k, w_k)\}_{k \in \mathcal{S}} \to 0$. For all $k \in \mathcal{S}$, let $(u_k, w_k, \gamma_k, \delta_k)$ be a primal-dual optimal solution of (7) (satisfying optimality conditions of the form in (17)). One finds under the conditions of the lemma that, for all sufficiently large $k \in \mathcal{S}$, this solution has $[\delta_k]_i = 0$ for all $i \notin \mathcal{A}(x_*)$, whereas $(u_k, w_k, \gamma_k, [\delta_k]_{\mathcal{A}_*})$ solves the linear system above at $x = x_k$. Since, by the arguments above, this solution varies continuously within $\mathcal{B}_* \cap \mathcal{X} \cap \mathbb{R}^n_{\geq 0}$, the fact that $\{x_k\}_{k \in \mathcal{S}} \to x_*$ implies that $x_*$ satisfies (5), as desired.

In fact, under the conditions of the prior lemma, the quantity $\|c_k\|_2 - \|c_k + \nabla c(x_k)^T v_k\|_2$ also represents a stationarity measure for the problem to minimize $\tfrac{1}{2}\|c(x)\|_2^2$ subject to $x \in \mathbb{R}^n_{\geq 0}$. This is shown in the following lemma.

Lemma 3. Suppose that Assumption 1 holds, $\mu_k = \mu \in \mathbb{R}_{>0}$ for all $k \in \mathbb{N}$, and there exist $\lambda \in \mathbb{R}_{>0}$ and an infinite $\mathcal{S}_\lambda \subseteq \mathbb{N}$ such that for some $\{x_k\} \subset \mathcal{X} \cap \mathbb{R}^n_{\geq 0}$ one finds $\nabla c(x_k)^T \nabla c(x_k) \succeq \lambda I$ for all $k \in \mathcal{S}_\lambda$. Then, there exists $\kappa_{v,2} \in \mathbb{R}_{>0}$ such that
\[ \|c_k\|_2 - \|c_k + \nabla c(x_k)^T v_k\|_2 \geq \kappa_{v,2}\|v_k\|_2^2 \ \text{for all } k \in \mathcal{S}_\lambda, \tag{19} \]
where $v_k = u_k + \nabla c(x_k) w_k$ with $(u_k, w_k)$ being the unique optimal solution of (7). Consequently, under the conditions of Lemma 2 with $\mathcal{S}$ defined in that lemma, if $\mathcal{S}_\lambda$, defined as all sufficiently large indices in $\mathcal{S}$, satisfies the conditions above, then it follows that $\{v_k\}_{k \in \mathcal{S}} \to 0$ if and only if $\{\|c_k\|_2 - \|c_k + \nabla c(x_k)^T v_k\|_2\}_{k \in \mathcal{S}} \to 0$.

Proof. Consider arbitrary $k \in \mathcal{S}_\lambda$. Under the stated conditions with $j_k := \nabla c(x_k)^T$, Lemma 1 implies $\|c_k + j_k v_k\|_2 \leq \|c_k\|_2$. Hence, by Assumption 1,
\[ \|c_k\|_2^2 - \|c_k + j_k v_k\|_2^2 = (\|c_k\|_2 + \|c_k + j_k v_k\|_2)(\|c_k\|_2 - \|c_k + j_k v_k\|_2) \leq 2\|c_k\|_2(\|c_k\|_2 - \|c_k + j_k v_k\|_2) \leq 2\kappa_c(\|c_k\|_2 - \|c_k + j_k v_k\|_2). \tag{20} \]
If $v_k = 0$, then (19) follows trivially.
Hence, we may proceed under the assumption that $v_k \neq 0$, which by $v_k = u_k + j_k^T w_k$ and the Fundamental Theorem of Linear Algebra means that $u_k \neq 0$ and/or $w_k \neq 0$. If $w_k = 0$, then it follows by construction of (7) that $u_k = 0$ as well. Hence, we may conclude from $v_k \neq 0$ that, in fact, $w_k \neq 0$. Since $(u_k, w_k)$ is the unique optimal solution of (7), it follows that $\alpha^*_k = 1$ is the optimal solution of the strongly convex quadratic optimization problem
\[ \min_{\alpha \in [0,1]} \ \tfrac{1}{2}\|c_k + \alpha j_k j_k^T w_k\|_2^2 + \tfrac{1}{2}\mu_k\|\alpha u_k\|_2^2, \tag{21} \]
which further implies (since an optimality condition of (21) is that the derivative of its objective function with respect to $\alpha$ is less than or equal to zero at $\alpha^*_k = 1$) that $-c_k^T j_k j_k^T w_k \geq \|j_k j_k^T w_k\|_2^2 + \mu_k\|u_k\|_2^2$. Consequently, one finds
\[ \|c_k\|_2^2 - \|c_k + j_k v_k\|_2^2 = \|c_k\|_2^2 - \|c_k + j_k j_k^T w_k\|_2^2 = -2 c_k^T j_k j_k^T w_k - \|j_k j_k^T w_k\|_2^2 \geq \|j_k j_k^T w_k\|_2^2 + 2\mu_k\|u_k\|_2^2. \tag{22} \]
With (20) and (22), it follows from Assumption 1, the conditions of the lemma, and since the Cauchy–Schwarz inequality implies $\|w_k\|_2 \geq \|j_k^T w_k\|_2/\|j_k^T\|_2$ that
\[ \begin{aligned} \|c_k\|_2 - \|c_k + j_k v_k\|_2 &\geq (2\kappa_c)^{-1}(\|c_k\|_2^2 - \|c_k + j_k v_k\|_2^2) \geq (2\kappa_c)^{-1}(\|j_k j_k^T w_k\|_2^2 + 2\mu_k\|u_k\|_2^2) \\ &\geq (2\kappa_c)^{-1}(\lambda^2\|w_k\|_2^2 + 2\mu_k\|u_k\|_2^2) \geq (2\kappa_c)^{-1}\left(\tfrac{\lambda^2}{\kappa_{\nabla c}^2}\|j_k^T w_k\|_2^2 + 2\mu_k\|u_k\|_2^2\right) \\ &\geq (2\kappa_c)^{-1}\min\left\{\tfrac{\lambda^2}{\kappa_{\nabla c}^2}, 2\mu_k\right\}(\|j_k^T w_k\|_2^2 + \|u_k\|_2^2) = (2\kappa_c)^{-1}\min\left\{\tfrac{\lambda^2}{\kappa_{\nabla c}^2}, 2\mu\right\}\|v_k\|_2^2 =: \kappa_{v,2}\|v_k\|_2^2, \end{aligned} \]
which gives (19), as desired.

Next, we show that if the point defining subproblem (8) is not an infeasible stationary point for problem (1), then the subproblem with $g_k = \nabla f(x_k)$ yields a zero solution if and only if the point defining the subproblem is stationary for (1).

Lemma 4. Suppose that Assumption 1 holds and, with respect to $x \in \mathcal{X} \cap \mathbb{R}^n_{\geq 0}$, one finds $c(x) = 0$. Given $H \in \mathbb{R}^{n \times n}$ with $H \succ 0$, consider (recall (8))
\[ \min_{d \in \mathbb{R}^n} \ \nabla f(x)^T d + \tfrac{1}{2} d^T H d \quad \text{s.t.} \ c(x) + \nabla c(x)^T d = 0 \ \text{and} \ x + d \geq 0. \tag{23} \]
Then, one finds that the optimal solution of problem (23) is $d = 0$ if and only if $x$ is a KKT point (i.e., first-order stationary point) for problem (1).

Proof.
Suppose the conditions of the lemma hold and let $d$ be the optimal solution of (23). Since $x \in \mathbb{R}^n_{\geq 0}$ and $c(x) = 0$, it follows that the zero vector is feasible for (23). In addition, necessary and sufficient optimality conditions for subproblem (23) are that, corresponding to $d \in \mathbb{R}^n$, there exist $y \in \mathbb{R}^m$ and $z \in \mathbb{R}^n$ such that
\[ \nabla f(x) + H d + \nabla c(x) y - z = 0, \quad \nabla c(x)^T d = 0, \quad \text{and} \quad 0 \leq x + d \perp z \geq 0. \tag{24} \]
If $d = 0$, then since $c(x) = 0$ it follows that $(x, y, z)$ satisfies (4), as desired. On the other hand, if $x$ is a KKT point for (1), then there exist $y \in \mathbb{R}^m$ and $z \in \mathbb{R}^n$ such that $(x, y, z)$ satisfies (4), which in turn means that $d = 0$ along with $(y, z)$ satisfies (24), and this solution is unique since the objective of (23) is strongly convex.

We conclude this subsection by showing that, under common assumptions and given $x_k \in \mathbb{R}^n_{\geq 0}$, the quantity $\|d_k\|_2^2$, where $d_k \in \mathbb{R}^n$ solves subproblem (8) with $g_k = \nabla f(x_k)$, represents a stationarity measure with respect to (1). (The assumption in the lemma that $H_k = H$ for some $H \succ 0$ for all $k \in \mathbb{N}$ could be relaxed; see Remark 1 at the end of this subsection. We consider this case for the sake of brevity.)

Lemma 5. Suppose that Assumption 1 holds and there exists an infinite $\mathcal{S} \subseteq \mathbb{N}$ such that for some sequence $\{x_k\} \subset \mathcal{X} \cap \mathbb{R}^n_{\geq 0}$ one finds $\{x_k\}_{k \in \mathcal{S}} \to x_*$ for some $x_* \in \mathcal{X} \cap \mathbb{R}^n_{\geq 0}$ with $c(x_*) = 0$ and, with the notation of Lemma 2, one finds that (i) $-\nabla f(x_*) = \nabla c(x_*) y - I_{\mathcal{A}(x_*)}^T z_{\mathcal{A}(x_*)}$ for some $(y, z_{\mathcal{A}(x_*)}) \in \mathbb{R}^m \times \mathbb{R}^{|\mathcal{A}(x_*)|}_{\geq 0}$ and (ii) the following matrix has full row rank:
\[ \begin{bmatrix} \nabla c(x_*)^T \\ I_{\mathcal{A}(x_*)} \end{bmatrix}. \]
Then, with $H_k = H$ for some $H \succ 0$ for all $k \in \mathbb{N}$, and with $d_k$ solving (8) with $g_k = \nabla f(x_k)$ for all $k \in \mathbb{N}$, $x_*$ satisfies (4) if and only if $\{\|d_k\|_2^2\}_{k \in \mathcal{S}} \to 0$.

Proof. Letting $\mathcal{A}_* := \mathcal{A}(x_*)$ and considering the linear system of equations
\[ \begin{bmatrix} H & \nabla c(x) & -I_{\mathcal{A}_*}^T \\ \nabla c(x)^T & 0 & 0 \\ I_{\mathcal{A}_*} & 0 & 0 \end{bmatrix} \begin{bmatrix} d \\ y \\ z_{\mathcal{A}_*} \end{bmatrix} = \begin{bmatrix} -\nabla f(x) \\ 0 \\ -x_{\mathcal{A}_*} \end{bmatrix}, \]
the proof follows under the conditions of the lemma using the same line of deduction as the proof of Lemma 2, which we omit for the sake of brevity.

Remark 1.
One might relax the condition in Lemma 2 that $\mu_k = \mu$ for all $k \in \mathbb{N}$, and similarly relax the condition in Lemma 5 that $H_k = H \succ 0$ for all $k \in \mathbb{N}$, such as by requiring merely that $\{\mu_k\}_{k \in \mathcal{S}}$ and $\{H_k\}_{k \in \mathcal{S}}$ have bounded subsequences that converge to some $\mu \in \mathbb{R}_{>0}$ and $H \succ 0$, respectively. In these cases, the "if and only if" statements would be replaced by "if" statements, which in fact is all that is needed for our subsequent analysis and discussions. Nevertheless, for brevity in the proofs, we provide the conditions that offer the stronger conclusions in these lemmas.

4.2 General Algorithm Behavior

We now prove generally applicable results that hold for every run of Algorithm 1. Our initial results in this section presume that iteration $k \in \mathbb{N}$ is reached and certain properties hold with respect to algorithmic quantities (e.g., $x_k \in \mathbb{R}^n_{\geq 0}$), although we ultimately prove in Lemma 13 that, in fact, these facts are guaranteed, i.e., they hold in any run for any generated $k \in \mathbb{N}$. It is worthwhile to emphasize that the results in this section merely require that $g_k \in \mathbb{R}^n$ for all $k \in \mathbb{N}$, which means, for example, that Assumption 2 is not needed in this section. All results that depend on the properties and effects of the stochastic gradient estimates are found in the subsequent subsection, i.e., Section 4.3.

Our first lemma follows directly from Lemma 1, so it is stated without proof.

Lemma 6. Suppose that Assumption 1 holds. Then, in any run of the algorithm such that iteration $k \in \mathbb{N}$ is reached and $x_k \in \mathbb{R}^n_{\geq 0}$, it holds that $v_k = 0$ if and only if $x_k$ satisfies (5), i.e., $x_k$ is either feasible or an infeasible stationary point, whereas $v_k \neq 0$ if and only if $\|c_k\|_2 > \|c_k + \nabla c(x_k)^T v_k\|_2$.

Our next result shows that, in any iteration in which the current iterate $x_k$ is in the nonnegative orthant and $\tau_{k-1} > 0$, the merit parameter is either kept at the same value or decreased, and, if it is decreased, then it is decreased below a constant fraction of its former value.
As in other SQP methods with such a feature, this ensures that if the merit parameter sequence does not vanish (i.e., its limiting value is nonzero), then it eventually remains at a constant positive value; see Lemma 13.

Lemma 7. Suppose that Assumption 1 holds. In any run of the algorithm such that line 4 of iteration $k \in \mathbb{N}$ is reached, $x_k \in \mathbb{R}^n_{\geq 0}$, and $\tau_{k-1} \in \mathbb{R}_{>0}$, it holds that $0 < \tau_k \leq \tau_{k-1}$, where if $\tau_k < \tau_{k-1}$, then $\tau_k \leq (1 - \epsilon_\tau)\tau_{k-1}$.

Proof. Consider an arbitrary run in which line 4 of iteration $k \in \mathbb{N}$ is reached, $x_k \in \mathbb{R}^n_{\geq 0}$, and $\tau_{k-1} \in \mathbb{R}_{>0}$. Let us show that $0 < \tau_k \leq \tau_{k-1}$, in which case the fact that $\tau_k < \tau_{k-1}$ implies $\tau_k \leq (1 - \epsilon_\tau)\tau_{k-1}$ follows from (11). Toward this end, let us next show that $\tau_k^{\rm trial} > 0$. By the constraints of (8), (10), and Lemma 6, one finds that $\tau_k^{\rm trial} > 0$ whenever $\|c_k\|_2 - \|c_k + \nabla c(x_k)^T v_k\|_2 > 0$. Hence, to show that one always finds $\tau_k^{\rm trial} > 0$, all that remains is to consider the case when $\|c_k\|_2 - \|c_k + \nabla c(x_k)^T v_k\|_2 = 0$. In this case, it follows from Lemma 6 that $v_k = 0$, meaning that $d = 0$ is feasible for (8). This, in turn, means that $g_k^T d_k + \tfrac{1}{2} d_k^T H_k d_k \leq 0$, so by (10) one finds that $\tau_k^{\rm trial} = \infty > 0$. Since it has been shown that $\tau_k^{\rm trial} > 0$, the fact that $0 < \tau_k \leq \tau_{k-1}$ now follows directly from (11), completing the proof.

We now show that the model reduction offered by the computed search direction satisfies a lower bound with the properties stated in our algorithm development.

Lemma 8. Suppose that Assumptions 1 and 3 hold. In any run of the algorithm such that line 4 is reached in iteration $k \in \mathbb{N}$, $x_k \in \mathbb{R}^n_{\geq 0}$, and $\tau_k \in \mathbb{R}_{>0}$, one finds with $\zeta$ from Assumption 3 that
\[ \Delta l(x_k, \tau_k, g_k, d_k) \geq \tfrac{1}{2}\tau_k\zeta\|d_k\|_2^2 + \sigma(\|c_k\|_2 - \|c_k + \nabla c(x_k)^T d_k\|_2), \tag{25} \]
and, if $d_k \neq 0$, then $\Delta l(x_k, \tau_k, g_k, d_k) > 0$.

Proof. Consider an arbitrary run in which line 4 of iteration $k \in \mathbb{N}$ is reached, $x_k \in \mathbb{R}^n_{\geq 0}$, and $\tau_k \in \mathbb{R}_{>0}$. By (9) and Assumption 3, (25) is implied by
\[ (1 - \sigma)(\|c_k\|_2 - \|c_k + \nabla c(x_k)^T d_k\|_2) \geq \tau_k(g_k^T d_k + \tfrac{1}{2} d_k^T H_k d_k). \tag{26} \]
If $g_k^T d_k + \tfrac{1}{2} d_k^T H_k d_k \leq 0$, then (26) holds due to Lemma 6 and the fact that (8) ensures $\nabla c(x_k)^T v_k = \nabla c(x_k)^T d_k$. On the other hand, if $g_k^T d_k + \tfrac{1}{2} d_k^T H_k d_k > 0$, then one finds by the update of the merit parameter, namely, (10) and (11), that
\[ \tau_k \leq \tau_k^{\rm trial} = \frac{(1 - \sigma)(\|c_k\|_2 - \|c_k + \nabla c(x_k)^T d_k\|_2)}{g_k^T d_k + \tfrac{1}{2} d_k^T H_k d_k}, \]
from which (26) follows again. Finally, that $d_k \neq 0$ implies $\Delta l(x_k, \tau_k, g_k, d_k) > 0$ follows from (25), $\tau_k \in \mathbb{R}_{>0}$, and since $\zeta \in \mathbb{R}_{>0}$ in Assumption 3.

Our next result is that, under the same conditions as our previous lemmas and under the assumption that $\xi_{k-1} \in \mathbb{R}_{>0}$, the ratio parameter is either kept at the same value or decreased, and, like the merit parameter, if it is decreased, then it is decreased at least below a constant fraction of its previous value.

Lemma 9. Suppose that Assumptions 1 and 3 hold. In any run of the algorithm such that line 4 is reached in iteration $k \in \mathbb{N}$, $x_k \in \mathbb{R}^n_{\geq 0}$, $\tau_k \in \mathbb{R}_{>0}$, and $\xi_{k-1} \in \mathbb{R}_{>0}$, it holds that $0 < \xi_k \leq \xi_{k-1}$, where if $\xi_k < \xi_{k-1}$, then $\xi_k \leq (1 - \epsilon_\xi)\xi_{k-1}$.

Proof. Consider an arbitrary run in which line 4 of iteration $k \in \mathbb{N}$ is reached, $x_k \in \mathbb{R}^n_{\geq 0}$, $\tau_k \in \mathbb{R}_{>0}$, and $\xi_{k-1} \in \mathbb{R}_{>0}$. Let us show that $0 < \xi_k \leq \xi_{k-1}$, in which case the fact that $\xi_k < \xi_{k-1}$ implies $\xi_k \leq (1 - \epsilon_\xi)\xi_{k-1}$ follows from (12). Toward this end, observe that if $d_k = 0$, then the algorithm sets $\xi_k \leftarrow \xi_{k-1} > 0$, which is consistent with the desired conclusion. On the other hand, if $d_k \neq 0$, then by (12), $\tau_k \in \mathbb{R}_{>0}$, Lemma 6, the fact that (8) ensures $\nabla c(x_k)^T v_k = \nabla c(x_k)^T d_k$, and Lemma 8,
\[ \xi_k^{\rm trial} = \frac{\Delta l(x_k, \tau_k, g_k, d_k)}{\tau_k\|d_k\|_2^2} \geq \frac{\tfrac{1}{2}\tau_k\zeta\|d_k\|_2^2}{\tau_k\|d_k\|_2^2} = \tfrac{1}{2}\zeta > 0. \tag{27} \]
Hence, by (12), the desired conclusion follows.

Next, we prove bounds for the step size computed in the algorithm.

Lemma 10. Suppose that Assumptions 1 and 3 hold. In any run of the algorithm such that line 4 is reached in iteration $k \in \mathbb{N}$, $x_k \in \mathbb{R}^n_{\geq 0}$, $\tau_k \in \mathbb{R}_{>0}$, and $\xi_k \in \mathbb{R}_{>0}$, it holds that $0 < \alpha_k^{\min} \leq \alpha_k^{\max} \leq \min\{1, \alpha_k^\varphi\}$ and, so, $x_{k+1} \in \mathbb{R}^n_{\geq 0}$.

Proof.
Consider an arbitrary run of the algorithm in which line 4 of iteration $k \in \mathbb{N}$ is reached, $x_k \in \mathbb{R}^n_{\geq 0}$, $\tau_k \in \mathbb{R}_{>0}$, and $\xi_k \in \mathbb{R}_{>0}$. Let us show that $0 < \alpha_k^{\min} \leq \alpha_k^{\max} \leq 1$, in which case the fact that $x_{k+1} \in \mathbb{R}^n_{\geq 0}$ follows from $x_k \in \mathbb{R}^n_{\geq 0}$, the fact that the constraints of (8) ensure that $x_k + d_k \in \mathbb{R}^n_{\geq 0}$, and since the step size has $\alpha_k \in [\alpha_k^{\min}, \alpha_k^{\max}] \subset (0, 1]$. Toward this end, observe that if $d_k = 0$, then the algorithm yields $\alpha_k = \alpha_k^{\min} = \alpha_k^{\max} = \alpha_k^\varphi = 1$, so the conclusion follows trivially. Hence, let us assume $d_k \neq 0$. Observe that from (13), the algorithm uses $\alpha_k^{\min}$ with
\[ 0 < \alpha_k^{\min} = \frac{2(1 - \eta)\beta_k\xi_k\tau_k}{\tau_k L + \Gamma} \leq 1. \tag{28} \]
Now observing (15), which shows $\alpha_k^{\max} \leq \min\{1, \alpha_k^\varphi\}$, one finds that all that remains is to prove that $\alpha_k^{\min} \leq \alpha_k^\varphi$. For this purpose, let us introduce
\[ \alpha_k^{\rm suff} := \min\left\{1, \frac{2(1 - \eta)\beta_k\Delta l(x_k, \tau_k, g_k, d_k)}{(\tau_k L + \Gamma)\|d_k\|_2^2}\right\}, \]
where $\alpha_k^{\rm suff} \in (0, 1]$ follows by $\beta_k \in (0, 1]$, Lemma 8, and $d_k \neq 0$. To show that $\alpha_k^{\min} \leq \alpha_k^\varphi$, our aim is to show that $\alpha_k^{\min} \leq \alpha_k^{\rm suff} \leq \alpha_k^\varphi$. First, from (12), one finds
\[ \alpha_k^{\min} = \frac{2(1 - \eta)\beta_k\xi_k\tau_k}{\tau_k L + \Gamma} \leq \frac{2(1 - \eta)\beta_k\xi_k^{\rm trial}\tau_k}{\tau_k L + \Gamma} = \frac{2(1 - \eta)\beta_k\Delta l(x_k, \tau_k, g_k, d_k)}{(\tau_k L + \Gamma)\|d_k\|_2^2}. \tag{29} \]
Combining (28) and (29), one finds that $\alpha_k^{\min} \leq \alpha_k^{\rm suff}$, as desired. Now, toward proving that $\alpha_k^{\rm suff} \leq \alpha_k^\varphi$, let us first show that $\varphi_k(\alpha_k^{\rm suff}) \leq 0$. From the triangle inequality, the fact that $\alpha_k^{\rm suff} \in (0, 1]$, and (14), it follows that
\[ \begin{aligned} \varphi_k(\alpha_k^{\rm suff}) &= (\eta - 1)\alpha_k^{\rm suff}\beta_k\Delta l(x_k, \tau_k, g_k, d_k) + \|c_k + \alpha_k^{\rm suff}\nabla c(x_k)^T d_k\|_2 - \|c_k\|_2 \\ &\quad + \alpha_k^{\rm suff}(\|c_k\|_2 - \|c_k + \nabla c(x_k)^T d_k\|_2) + \tfrac{1}{2}(\tau_k L + \Gamma)(\alpha_k^{\rm suff})^2\|d_k\|_2^2 \\ &\leq (\eta - 1)\alpha_k^{\rm suff}\beta_k\Delta l(x_k, \tau_k, g_k, d_k) + (1 - \alpha_k^{\rm suff})\|c_k\|_2 + \alpha_k^{\rm suff}\|c_k + \nabla c(x_k)^T d_k\|_2 - \|c_k\|_2 \\ &\quad + \alpha_k^{\rm suff}(\|c_k\|_2 - \|c_k + \nabla c(x_k)^T d_k\|_2) + \tfrac{1}{2}(\tau_k L + \Gamma)(\alpha_k^{\rm suff})^2\|d_k\|_2^2 \\ &= (\eta - 1)\alpha_k^{\rm suff}\beta_k\Delta l(x_k, \tau_k, g_k, d_k) + \tfrac{1}{2}(\tau_k L + \Gamma)(\alpha_k^{\rm suff})^2\|d_k\|_2^2 \\ &\leq (\eta - 1)\alpha_k^{\rm suff}\beta_k\Delta l(x_k, \tau_k, g_k, d_k) + \tfrac{1}{2}\alpha_k^{\rm suff}(\tau_k L + \Gamma)\|d_k\|_2^2\left(\frac{2(1 - \eta)\beta_k\Delta l(x_k, \tau_k, g_k, d_k)}{(\tau_k L + \Gamma)\|d_k\|_2^2}\right) = 0. \end{aligned} \]
Therefore, by (15), it follows that $\alpha_k^{\rm suff} \leq \alpha_k^\varphi$.

Our next lemma shows an upper bound on the change in the merit function. In the lemma and throughout the rest of the paper, for any $k \in \mathbb{N}$ such that line 4 is reached, we let $d_k^{\rm true} \in \mathbb{R}^n$ denote the solution of (8) when $g_k$ is replaced by $\nabla f(x_k)$.

Lemma 11. Suppose that Assumptions 1 and 3 hold. In any run of the algorithm such that line 4 is reached in iteration $k \in \mathbb{N}$, $x_k \in \mathbb{R}^n_{\geq 0}$, $\tau_k \in \mathbb{R}_{>0}$, and $\alpha_k \in (0, \alpha_k^\varphi]$, it holds that
\[ \phi(x_k + \alpha_k d_k, \tau_k) - \phi(x_k, \tau_k) \leq -\alpha_k\Delta l(x_k, \tau_k, \nabla f(x_k), d_k^{\rm true}) + \alpha_k\tau_k\nabla f(x_k)^T(d_k - d_k^{\rm true}) + (1 - \eta)\alpha_k\beta_k\Delta l(x_k, \tau_k, g_k, d_k). \]
Proof. Consider an arbitrary run of the algorithm in which line 4 of iteration $k \in \mathbb{N}$ is reached, $x_k \in \mathbb{R}^n_{\geq 0}$, $\tau_k \in \mathbb{R}_{>0}$, and $\alpha_k \in (0, \alpha_k^\varphi]$. By Assumption 1 (which led to (3)), (8) (which implies $c_k + \nabla c(x_k)^T d_k = c_k + \nabla c(x_k)^T d_k^{\rm true}$), (9), (14), and the fact that $0 < \alpha_k \leq \alpha_k^\varphi$ (which means $\varphi_k(\alpha_k) \leq 0$), it follows that
\[ \begin{aligned} \phi(x_k + \alpha_k d_k, \tau_k) - \phi(x_k, \tau_k) &= \tau_k(f(x_k + \alpha_k d_k) - f_k) + \|c(x_k + \alpha_k d_k)\|_2 - \|c_k\|_2 \\ &\leq \alpha_k\tau_k\nabla f(x_k)^T d_k + \|c_k + \alpha_k\nabla c(x_k)^T d_k\|_2 - \|c_k\|_2 + \tfrac{1}{2}(\tau_k L + \Gamma)\alpha_k^2\|d_k\|_2^2 \\ &= -\alpha_k\Delta l(x_k, \tau_k, \nabla f(x_k), d_k^{\rm true}) + \alpha_k\tau_k\nabla f(x_k)^T(d_k - d_k^{\rm true}) + \|c_k + \alpha_k\nabla c(x_k)^T d_k\|_2 - \|c_k\|_2 \\ &\quad + \alpha_k(\|c_k\|_2 - \|c_k + \nabla c(x_k)^T d_k\|_2) + \tfrac{1}{2}(\tau_k L + \Gamma)\alpha_k^2\|d_k\|_2^2 \\ &\leq -\alpha_k\Delta l(x_k, \tau_k, \nabla f(x_k), d_k^{\rm true}) + \alpha_k\tau_k\nabla f(x_k)^T(d_k - d_k^{\rm true}) + (1 - \eta)\alpha_k\beta_k\Delta l(x_k, \tau_k, g_k, d_k), \end{aligned} \]
which shows the desired conclusion.

We now show that each search direction—and, similarly, the search direction that would be computed if the true gradient of the objective function were used in place of the stochastic gradient estimate—can be viewed as a projection of the unconstrained minimizer of the objective of (8) onto the feasible region of (8).

Lemma 12. Suppose that Assumptions 1 and 3 hold.
In any run of the algorithm such that line 4 is reached in iteration $k \in \mathbb{N}$ and $x_k \in \mathbb{R}^n_{\geq 0}$, with
\[ \mathcal{D}_k := \{d \in \mathbb{R}^n : \nabla c(x_k)^T(d - v_k) = 0, \ x_k + d \geq 0\} \quad \text{and} \quad \mathrm{Proj}_k(\bar{d}) := \arg\min_{d \in \mathcal{D}_k} \|d - \bar{d}\|_{H_k}, \]
it holds that $d_k = \mathrm{Proj}_k(-H_k^{-1} g_k)$ and $d_k^{\rm true} = \mathrm{Proj}_k(-H_k^{-1}\nabla f(x_k))$.

Proof. Consider an arbitrary run of the algorithm in which line 4 of iteration $k \in \mathbb{N}$ is reached and $x_k \in \mathbb{R}^n_{\geq 0}$. The desired conclusion follows from the facts that $\mathcal{D}_k$ is convex and, under Assumption 3, $H_k$ is SPD; in particular, one finds that
\[ d_k = \arg\min_{d \in \mathcal{D}_k} \ g_k^T d + \tfrac{1}{2} d^T H_k d = \arg\min_{d \in \mathcal{D}_k} \ \tfrac{1}{2}\|d + H_k^{-1} g_k\|_{H_k}^2 = \mathrm{Proj}_k(-H_k^{-1} g_k), \]
and similarly with respect to $d_k^{\rm true}$ with $g_k$ replaced by $\nabla f(x_k)$.

We are now prepared to prove the following lemma, which shows that the algorithm is well defined and either terminates finitely with an infeasible stationary point or generates an infinite sequence of iterates with certain critical properties of the simultaneously generated algorithmic sequences. The lemma also reveals that the monotonically nonincreasing merit parameter sequence either vanishes or ultimately remains constant, and it reveals that the monotonically nonincreasing ratio parameter sequence ultimately remains constant at a value that is greater than or equal to a positive real number that is defined uniformly across all runs of the algorithm.

Lemma 13. Suppose that Assumptions 1 and 3 hold.
In any run, either the algorithm terminates finitely with an infeasible stationary point or it performs an infinite number of iterations such that, for all k ∈ N, it holds that

(a) xk ∈ Rn_≥0,
(b) vk = 0 if and only if xk satisfies (5),
(c) vk ≠ 0 if and only if ‖ck‖_2 > ‖ck + ∇c(xk)^T vk‖_2,
(d) 0 < τk ≤ τk−1 < ∞,
(e) τk < τk−1 if and only if τk ≤ (1 − ετ)τk−1,
(f) (25) holds,
(g) dk ≠ 0 if and only if Δl(xk, τk, gk, dk) > 0,
(h) 0 < ξk ≤ ξk−1 < ∞,
(i) ξk < ξk−1 if and only if ξk ≤ (1 − εξ)ξk−1, and
(j) 0 < α^min_k ≤ α^max_k ≤ min{1, α^ϕ_k}.

In addition, in any run that does not terminate finitely, it holds that

(k) either {τk} ↘ 0 or there exist kτ ∈ N and τmin ∈ R>0 such that τk = τmin for all k ∈ N with k ≥ kτ, and
(l) there exist kξ ∈ N and ξmin ∈ R>0 with ξmin ≥ (1/2)ζ(1 − εξ) such that ξk = ξmin for all k ∈ N with k ≥ kξ.

Proof. Given the initialization of the algorithm, statements (a)–(j) follow by induction from Lemmas 6–10. Statement (k) follows from statements (d) and (e). Finally, to prove statement (l), consider arbitrary k ∈ N in a run that does not terminate finitely and note that if dk = 0, then ξ^trial_k ← ∞, and if dk ≠ 0, then ξ^trial_k satisfies (27), meaning that ξ^trial_k ≥ (1/2)ζ. Consequently, by (12), ξk < ξk−1 only if ξk−1 > (1/2)ζ. This, along with statements (h) and (i), leads to the conclusion.

4.3 Convergence Guarantees

We now turn to prove convergence results under Assumption 4 below.
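To make the projection viewpoint of Lemma 12 concrete: in the special case of no equality constraints (m = 0) and Hk = I, the feasible set of subproblem (8) reduces to {d : xk + d ≥ 0} and the projection of the unconstrained minimizer −gk is just componentwise clipping. The following Python sketch illustrates this simplified case only; it is our simplification, and the general case (m > 0, general SPD Hk) requires a QP solver.

```python
# Special case of Lemma 12 with m = 0 and H_k = I: the search direction d_k
# is the Euclidean projection of -g_k onto {d : x_k + d >= 0}, i.e.,
# d_k = max(-g_i, -x_i) componentwise.
def search_direction(x, g):
    return [max(-gi, -xi) for xi, gi in zip(x, g)]

x = [0.5, 2.0]           # current iterate, feasible for the bounds
g = [1.0, -3.0]          # (stochastic) gradient estimate
d = search_direction(x, g)
assert d == [-0.5, 3.0]  # first coordinate hits the bound x_1 + d_1 = 0
assert all(xi + di >= 0 for xi, di in zip(x, d))
```

In this special case the bound xk + dk ≥ 0 stays satisfied by construction, matching statement (a) of Lemma 13.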
Recalling the role of 12 ζ(1 − ξ ) ∈ R>0 in Lemma 13( l), the assumption focuses on the following event for some ( kmin , τ min , f sup ) ∈ N × R>0 × R,where for all generated k ∈ N we denote τ true ,trial k as the value of τ trial k that would be computed in iteration k if (8) were solved with ∇f (xk) in place of gk: E(kmin , τ min , f sup ):= {An infinite number of iterations are performed, f (xkmin ) ≤ fsup , and there exist k′ ∈ N with k′ ≤ kmin , τ ′ ∈ R>0 with τ ′ ≥ τmin ,and ξ′ ∈ R>0 with ξ′ ≥ 12 ζ(1 − ξ ) such that τk = τ ′ ≤ τ true ,trial k and ξk = ξ′ for all k ∈ N with k ≥ k′}. (30) The following assumption is made in this subsection. We present a discussion and supporting theoretical results about this assumption in Section 4.4. Assumption 4. For some (kmin , τ min , f sup ) ∈ N × R>0 × R, the event E := E(kmin , τ min , f sup ) occurs and, conditioned on the occurrence of E, Assumption 1 holds (with the same constants as previously presented in (2) and (3)) . It is not a shortcoming of our analysis that Assumption 4, through the definition of E, assumes that ( i)an infinite number of iterations are performed, ( ii ) the objective value is bounded above in iteration kmin ,and ( iii ) {ξk} ultimately becomes a constant sequence with value at least 12 ζ(1 − ξ ) ∈ R>0. After all: ( i)Lemma 13 shows that the only alternative to an infinite number of iterations being performed is that the algorithm terminates finitely with an infeasible stationary point, in which case there is nothing else to prove; (ii ) fsup ∈ R can be arbitrarily large and knowledge of it is not required by the algorithm, so assuming that it exists is a very loose requirement; and ( iii ) Lemma 13( l) shows that, in any run that does not terminate finitely, {ξk} is monotonically nonincreasing and bounded below by 12 ζ(1 − ξ ) ∈ R>0, which is a constant, i.e., it is not run-dependent. 
Overall, the only important restriction of our analysis in this section is the fact that E includes the requirement that {τk} ultimately becomes constant at a value at least τmin that is sufficiently small relative to {τ true ,trial k }. This restriction is the subject of Section 4.4. For the remainder of this subsection, we consider the stochastic process corresponding to the statement of Algorithm 1. Specifically, the sequence {(xk, v k, g k, d k, d true k , τ trial k , τ true ,trial k , τ k, ξ trial k , ξ k, α min k , α ϕk , α max k , α k)} generated in any run can be viewed as a realization of the stochastic process {(Xk, V k, G k, D k, D true k , T trial k , T true ,trial k , Tk, Ξtrial k , Ξk, Amin k , Aϕk , Amax k , Ak)}. Let G1 denote the σ-algebra defined by the initial conditions of the algorithm and, for all k ∈ N with k ≥ 2, let Gk denote the σ-algebra generated by the initial conditions and the random variables {G1, . . . , G k−1}.Then, with respect to the event E in Assumption 4, denote the trace σ-algebra of E on Gk as Fk := Gk ∩ E for all k ∈ N. It follows that {F k} is a filtration, and we proceed in our analysis under Assumptions 2, 3, and 4 (which subsumes Assumption 1) with the definitions Pk[·] := Pω [·|F k] and Ek[·] := Eω [·|F k]16 (where Pω denotes probability taken with respect to the distribution of ω). We also define, with respect to E, the random variables K′ ≤ kmin , T ′ ≥ τmin , and Ξ ′ ≥ 12 ζ(1 − ξ ), which for a given run of the algorithm have the realized values k′, τ ′, and ξ′, respectively, defined in (30). Conditioned on E, one has in any run that τmin ≤ T ′ ≤ τ0 and 12 ζ(1 − ξ ) ≤ Ξ′ ≤ ξ0, (31) and one has that T ′ and Ξ ′ are Fk-measurable for k = kmin ≥ K′.Our next lemma shows upper bounds on the norm of the difference between the computed search direction and the search direction that would be computed with the true gradient of the objective. 
(The conclusion of this lemma and the following one would hold even without assuming that the event E occurs, but in each result we condition on Fk := Gk ∩ E for use in our ultimate results under E.) Lemma 14. Suppose that Assumptions 2, 3, and 4 hold. For all k ∈ N, ‖Dk − Dtrue k ‖2 ≤ ζ−1‖Gk − ∇ f (Xk)‖2 and Ek[‖Dk − Dtrue k ‖2] ≤ ζ−1Ek[‖Gk − ∇ f (Xk)‖2] ≤ ζ−1√ρk. Proof. Consider arbitrary k ∈ N under the stated conditions. Lemma 12 and the obtuse angle lemma for projections [6, Proposition 1.1.9] imply (Dk − Dtrue k )T Hk(−H−1 k ∇f (Xk) − Dtrue k ) ≤ 0and (Dtrue k − Dk)T Hk(−H−1 k Gk − Dk) ≤ 0. Summing these inequalities yields 0 ≥ (Dk − Dtrue k )T Hk(−H−1 k ∇f (Xk) − Dtrue k ) + ( Dtrue k − Dk)T Hk(−H−1 k Gk − Dk)= ‖Dk − Dtrue k ‖2 Hk − (Dk − Dtrue k )T (∇f (Xk) − Gk). Hence, by the Cauchy–Schwarz inequality, it follows that ‖Dk − Dtrue k ‖2 Hk ≤ (Dk − Dtrue k )T (∇f (Xk) − Gk) ≤ ‖ Dk − Dtrue k ‖2‖∇ f (Xk) − Gk‖2, which shows under Assumption 3 that ‖Dk − Dtrue k ‖2 ≤ ζ−1‖Gk − ∇ f (Xk)‖2, as desired. Then, from this inequality, Assumption 2, and Jensen’s inequality, one has Ek[‖Gk − ∇ f (Xk)‖2] ≤ √ Ek[‖Gk − ∇ f (Xk)‖22] ≤ √ρk, from which the remainder of the conclusion follows. We now show an upper bound on the expected difference between inner products involving the true and stochastic gradients and the true and stochastic directions. Lemma 15. Suppose that Assumptions 2, 3, and 4 hold. For all k ≥ kmin , |Ek[GTk Dk − ∇ f (Xk)T Dtrue k ]| ≤ ζ−1(ρk + κ∇f √ρk) and Ek[∆ l(Xk, Tk, G k, D k)] − ∆l(Xk, T ′, ∇f (Xk), D true k ) ≤ T ′ζ−1(ρk + κ∇f √ρk). Proof. Consider arbitrary k ≥ kmin under the stated conditions. 
From the triangle and Cauchy–Schwarz inequalities and Lemma 14, it holds that

|Ek[G_k^T Dk − ∇f(Xk)^T D^true_k]|
= |Ek[(Gk − ∇f(Xk))^T D^true_k + (Gk − ∇f(Xk))^T (Dk − D^true_k) + ∇f(Xk)^T (Dk − D^true_k)]|
= |Ek[(Gk − ∇f(Xk))^T (Dk − D^true_k)] + Ek[∇f(Xk)^T (Dk − D^true_k)]|
≤ Ek[‖Gk − ∇f(Xk)‖_2 ‖Dk − D^true_k‖_2] + ‖∇f(Xk)‖_2 Ek[‖Dk − D^true_k‖_2]
≤ ζ^{-1} Ek[‖Gk − ∇f(Xk)‖_2^2] + ζ^{-1} κ_{∇f} Ek[‖Gk − ∇f(Xk)‖_2]
≤ ζ^{-1} ρk + ζ^{-1} κ_{∇f} √ρk,

where the second equality uses the fact that Ek[Gk] = ∇f(Xk) and D^true_k is Fk-measurable, so the first term has zero conditional expectation; this gives the first result. Then, for k ≥ kmin, (9) and the equation above give

Ek[Δl(Xk, Tk, Gk, Dk)] − Δl(Xk, T′, ∇f(Xk), D^true_k) = T′ Ek[∇f(Xk)^T D^true_k − G_k^T Dk] ≤ T′ ζ^{-1}(ρk + κ_{∇f} √ρk),

which completes the proof.

Our next lemma shows a lower bound on the true model reduction. In the lemma and our subsequent results, we define Jk := ∇c(Xk)^T for the sake of brevity.

Lemma 16. Suppose that Assumptions 2, 3, and 4 hold. For all k ≥ kmin,

Δl(Xk, Tk, ∇f(Xk), D^true_k) ≥ (1/2) T′ ζ ‖D^true_k‖_2^2 + σ(‖c(Xk)‖_2 − ‖c(Xk) + Jk D^true_k‖_2) ≥ 0.

Proof. Consider arbitrary k ≥ kmin under the stated conditions. By (9), the fact that Tk = T′, and Assumption 3, the first desired conclusion is implied by

(1 − σ)(‖c(Xk)‖_2 − ‖c(Xk) + Jk D^true_k‖_2) ≥ T′(∇f(Xk)^T D^true_k + (1/2)(D^true_k)^T Hk D^true_k).

If ∇f(Xk)^T D^true_k + (1/2)(D^true_k)^T Hk D^true_k ≤ 0, then the above holds due to Lemma 13 and the fact that Jk D^true_k = Jk Vk; else, ∇f(Xk)^T D^true_k + (1/2)(D^true_k)^T Hk D^true_k > 0, in which case one finds from the conditions of the lemma, (10), and (11) that

Tk = T′ ≤ T^true,trial_k = (1 − σ)(‖c(Xk)‖_2 − ‖c(Xk) + Jk D^true_k‖_2) / (∇f(Xk)^T D^true_k + (1/2)(D^true_k)^T Hk D^true_k),

from which the displayed inequality above follows again. Finally, the remaining desired conclusion follows from the first conclusion, Lemma 13, and Jk D^true_k = Jk Vk.

Next, we prove a critical upper bound on the expected value of the second term on the right-hand side of the upper bound proved in Lemma 11.

Lemma 17. Suppose that Assumptions 2, 3, and 4 hold.
For all k ≥ kmin,

Ek[Ak Tk ∇f(Xk)^T (Dk − D^true_k)] ≤ (2(1 − η)Ξ′T′/(T′L + Γ) + θ) βk T′ κ_{∇f} ζ^{-1} √ρk.

Proof. For arbitrary k ≥ kmin under the stated conditions, (13) and (15) yield

A^min_k = βk A′ and A^max_k ≤ A^min_k + θβk, where A′ = 2(1 − η)Ξ′T′/(T′L + Γ). (32)

Letting Pk denote the event that ∇f(Xk)^T (Dk − D^true_k) ≥ 0 and letting P^c_k denote the event that ∇f(Xk)^T (Dk − D^true_k) < 0, the law of total expectation and the fact that T′ and Ξ′ are Fk-measurable for k ≥ kmin show that

Ek[Ak Tk ∇f(Xk)^T (Dk − D^true_k)]
= Pk[Pk] · Ek[Ak T′ ∇f(Xk)^T (Dk − D^true_k) | Pk] + Pk[P^c_k] · Ek[Ak T′ ∇f(Xk)^T (Dk − D^true_k) | P^c_k]
≤ (A^min_k + θβk) T′ Pk[Pk] · Ek[∇f(Xk)^T (Dk − D^true_k) | Pk] + A^min_k T′ Pk[P^c_k] · Ek[∇f(Xk)^T (Dk − D^true_k) | P^c_k]
= A^min_k T′ Ek[∇f(Xk)^T (Dk − D^true_k)] + θβk T′ Pk[Pk] · Ek[∇f(Xk)^T (Dk − D^true_k) | Pk].

The Cauchy–Schwarz inequality and law of total expectation show that

Pk[Pk] · Ek[∇f(Xk)^T (Dk − D^true_k) | Pk]
≤ Pk[Pk] · Ek[‖∇f(Xk)‖_2 ‖Dk − D^true_k‖_2 | Pk]
= Ek[‖∇f(Xk)‖_2 ‖Dk − D^true_k‖_2] − Pk[P^c_k] · Ek[‖∇f(Xk)‖_2 ‖Dk − D^true_k‖_2 | P^c_k]
≤ Ek[‖∇f(Xk)‖_2 ‖Dk − D^true_k‖_2],

so from above, the Cauchy–Schwarz inequality, Assumption 4, and Lemma 14,

Ek[Ak Tk ∇f(Xk)^T (Dk − D^true_k)] ≤ (A^min_k + θβk) T′ ‖∇f(Xk)‖_2 Ek[‖Dk − D^true_k‖_2] ≤ (2(1 − η)Ξ′T′/(T′L + Γ) + θ) βk T′ κ_{∇f} ζ^{-1} √ρk,

which gives the desired conclusion.

We now present, as a lemma, results pertaining to the asymptotic behavior of the model reductions generated by the algorithm. In the subsequent theorem after the lemma, these results will be translated in terms of quantities that, as seen in Section 4.1, can be connected to stationarity measures related to problem (1).
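The quantities in (32) can be made concrete with a small Python sketch that computes A′ = 2(1 − η)Ξ′T′/(T′L + Γ) and the induced step-size interval [A^min_k, A^min_k + θβk]. All numeric values below are illustrative placeholders, not values taken from the paper.

```python
# Sketch of the step-size interval in (32): A_min = beta * A_prime and
# A_max <= A_min + theta * beta, with A_prime = 2(1-eta)*xi*tau/(tau*L + Gamma).
# Inputs are placeholder values for illustration only.
def a_prime(eta, xi, tau, L, Gamma):
    return 2.0 * (1.0 - eta) * xi * tau / (tau * L + Gamma)

def step_size_interval(beta, eta, xi, tau, L, Gamma, theta):
    a_min = beta * a_prime(eta, xi, tau, L, Gamma)
    a_max = a_min + theta * beta
    return a_min, a_max

lo, hi = step_size_interval(beta=0.01, eta=0.5, xi=1.0, tau=0.1,
                            L=10.0, Gamma=1.0, theta=1e4)
assert 0.0 < lo < hi
# in the algorithm, the accepted step size alpha_k additionally respects the
# cap min(1, alpha_phi_k), per Lemma 13(j)
```

Note that both endpoints of the interval shrink proportionally with βk, which is what allows the θβk term to be absorbed into the βk-order bounds of Lemmas 17 and 18.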
We remark that the conditions of the lemma can be satisfied in a run-dependent manner if, every time the merit or ratio parameter is decreased, say in iteration k̂ ∈ N, the sequence {βk} is “restarted” such that, with α′ = 2(1 − η)ξ_k̂ τ_k̂/(τ_k̂ L + Γ) and some (run-independent) ψ ∈ (0, 1], one chooses βk = β = ψα′/(2(1 − η)(α′ + θ)) for part (a) of the lemma and βk = (1/(k − k̂ + 1)) · ψα′/(2(1 − η)(α′ + θ)) for part (b); such a scheme has been described in prior work as well. Notice that in this situation, β and {βk}_{k≥k̂} in parts (a) and (b), respectively, are random variables, but, importantly, they are Fk-measurable for k ≥ kmin. Alternatively, one could choose {βk} using the same formulas, but with ξmin and τmin in place of ξ_k̂ and τ_k̂, respectively, in the formula for α′, in which case the choices are run-independent. The downside of relying on this latter situation is that it requires knowledge of ξmin and τmin, which would not typically be known a priori. Hence, we analyze the former scheme, but use run-dependent bounds that, under E, are defined with respect to ξmin and τmin (even though these values are unknown).

We also remark that for case (a) in the following lemma, the sequence {ρk}, which bounds the expected squared error in the stochastic gradient estimates, can be a constant sequence. However, for case (b), the relationship between {ρk} and {βk} means that the expected squared error in the gradient estimates must vanish as k → ∞. This requirement, which is stronger than the corresponding requirement for the equality-constraints-only case, is needed to overcome the fact that in the presence of bound constraints the search directions can be biased estimates of their true counterparts.

Lemma 18. Under Assumptions 2, 3, and 4, suppose that {ρk} is chosen such that there exists ι ∈ R>0 with ρk ≤ ιβ_k^2 for all k ∈ N with k ≥ kmin, and define

α′_min = 2(1 − η)ξmin τmin/(τmin L + Γ), α′_max = 2(1 − η)ξ0 τ0/(τ0 L + Γ), and ρ′_max = (α′_max + θ)τ0 ζ^{-1}(κ_{∇f} √ι + (1 − η)(ι + κ_{∇f} √ι)).
Then, with A′ defined in (32) and E[·|E] denoting total expectation over all realizations of the algorithm conditioned on event E, the following statements hold true.

(a) If βk = β = ψA′/(2(1 − η)(A′ + θ)) for some ψ ∈ (0, 1] for all k ≥ kmin, then

lim sup_{k→∞} E[(1/k) Σ_{j=kmin}^{kmin+k−1} Δl(Xj, T′, ∇f(Xj), D^true_j) | E] ≤ ψ(α′_max)^2(α′_min + θ)ρ′_max / (2(1 − η)(1 − ψ/2)(α′_min)^2(α′_max + θ)^2).

(b) If Σ_{k=kmin}^∞ βk = ∞, Σ_{k=kmin}^∞ β_k^2 < ∞, and βk ≤ ψA′/(2(1 − η)(A′ + θ)) for some ψ ∈ (0, 1] for all k ≥ kmin, it holds that

E[(1/Σ_{j=kmin}^{kmin+k−1} βj) Σ_{j=kmin}^{kmin+k−1} βj Δl(Xj, T′, ∇f(Xj), D^true_j) | E] → 0 as k → ∞.

Proof. For arbitrary k ≥ kmin under the stated conditions, it follows from Lemma 11, Lemma 16 (which shows Δl(Xk, Tk, ∇f(Xk), D^true_k) ≥ 0), (32), the fact that Ak ≥ A^min_k = A′βk, Lemma 17, the fact that Ak ≤ A^max_k ≤ A^min_k + θβk = (A′ + θ)βk, Lemma 14, Lemma 15, and βk ∈ (0, 1] that

Ek[φ(X_{k+1}, Tk) − φ(Xk, Tk)] = Ek[φ(Xk + Ak Dk, Tk) − φ(Xk, Tk)]
≤ Ek[−Ak Δl(Xk, Tk, ∇f(Xk), D^true_k) + Ak Tk ∇f(Xk)^T (Dk − D^true_k) + (1 − η)Ak βk Δl(Xk, Tk, Gk, Dk)]
≤ −A′βk Δl(Xk, T′, ∇f(Xk), D^true_k) + (A′ + θ)βk T′ κ_{∇f} ζ^{-1} √ρk + (1 − η)(A′ + θ)β_k^2 (Δl(Xk, T′, ∇f(Xk), D^true_k) + T′ζ^{-1}(ρk + κ_{∇f} √ρk))
≤ −A′βk Δl(Xk, T′, ∇f(Xk), D^true_k) + (A′ + θ)β_k^2 T′ κ_{∇f} ζ^{-1} √ι + (1 − η)(A′ + θ)β_k^2 (Δl(Xk, T′, ∇f(Xk), D^true_k) + T′ζ^{-1}(ιβ_k^2 + κ_{∇f} √ι βk))
≤ −βk(A′ − (1 − η)(A′ + θ)βk) Δl(Xk, T′, ∇f(Xk), D^true_k) + R′β_k^2,

where R′ = (A′ + θ)T′ζ^{-1}(κ_{∇f} √ι + (1 − η)(ι + κ_{∇f} √ι)). Now, from Assumption 4 (which subsumes Assumption 1), there exists φmin ∈ R such that φ(Xk, T′) ≥ φmin for all k ≥ kmin. One also finds that α′_min ≤ A′ ≤ α′_max due to the monotonicity of 2(1 − η)Ξ′τ/(τL + Γ) with respect to τ.
Therefore, under part (a) of the lemma, in which case one finds for k ≥ kmin that

ψα′_min/(2(1 − η)(α′_min + θ)) ≤ β ≤ ψα′_max/(2(1 − η)(α′_max + θ)),

it follows from above that

Ek[φ(X_{k+1}, Tk) − φ(Xk, Tk)] ≤ −(ψ(1 − ψ/2)(α′_min)^2/(2(1 − η)(α′_min + θ))) Δl(Xk, T′, ∇f(Xk), D^true_k) + ρ′_max (ψα′_max/(2(1 − η)(α′_max + θ)))^2,

so by taking total expectation conditioned on the event E one finds

φmin − E[φ(X_{kmin}, T′)|E] ≤ E[φ(X_{kmin+k}, T′) − φ(X_{kmin}, T′)|E] = E[Σ_{j=kmin}^{kmin+k−1} (φ(X_{j+1}, T′) − φ(Xj, T′)) | E]
≤ −(ψ(1 − ψ/2)(α′_min)^2/(2(1 − η)(α′_min + θ))) E[Σ_{j=kmin}^{kmin+k−1} Δl(Xj, T′, ∇f(Xj), D^true_j) | E] + kρ′_max (ψα′_max/(2(1 − η)(α′_max + θ)))^2.

Rearranging terms, observing that E[φ(X_{kmin}, T′)|E] is bounded above under Assumption 4, and considering the limit superior as k → ∞, the conclusion of part (a) follows. On the other hand, under the conditions of part (b), it follows in a similar manner that, for any k ∈ N, one finds

φmin − E[φ(X_{kmin}, T′)|E] ≤ E[φ(X_{kmin+k}, T′) − φ(X_{kmin}, T′)|E] = E[Σ_{j=kmin}^{kmin+k−1} (φ(X_{j+1}, T′) − φ(Xj, T′)) | E]
≤ E[Σ_{j=kmin}^{kmin+k−1} (−βj(A′ − (1 − η)(A′ + θ)βj) Δl(Xj, T′, ∇f(Xj), D^true_j) + R′β_j^2) | E].

Taking limits as k → ∞, the conclusion of part (b) follows.

We now present our main convergence theorem for Algorithm 1, which is essentially a translation of Lemma 18 from results about model reductions to results about quantities connected to measures of stationarity for problem (1).

Theorem 1. Suppose the conditions of Lemma 18 hold.
Then, the following statements hold true.

(a) Under the conditions of Lemma 18(a), there exists C ∈ R>0 such that

lim sup_{k→∞} E[(1/k) Σ_{j=kmin}^{kmin+k−1} ((1/2)T′ζ‖D^true_j‖_2^2 + σ(‖c(Xj)‖_2 − ‖c(Xj) + Jj D^true_j‖_2)) | E] ≤ C.

(b) Under the conditions of Lemma 18(b), with Bk := Σ_{j=kmin}^{kmin+k−1} βj,

E[(1/Bk) Σ_{j=kmin}^{kmin+k−1} βj ((1/2)T′ζ‖D^true_j‖_2^2 + σ(‖c(Xj)‖_2 − ‖c(Xj) + Jj D^true_j‖_2)) | E] → 0 as k → ∞,

which further implies lim inf_{k→∞} E[‖D^true_k‖_2^2 + (‖c(Xk)‖_2 − ‖c(Xk) + Jk D^true_k‖_2) | E] = 0.

Proof. The desired conclusions follow from Lemmas 16 and 18.

One might be able to strengthen the conclusion in Theorem 1(b), say to an almost-sure convergence guarantee. However, we are satisfied with Theorem 1(b), which is sufficient for revealing the favorable properties of Algorithm 1 under Assumptions 2, 3, and 4. Theorem 1(a) shows under Assumptions 2, 3, and 4 that if the latter condition in (6) holds with ρk = ρ for some ρ ∈ R>0 for all k ∈ N and {βk} = {β} is chosen as a (sufficiently small) constant sequence, then the limit superior of the expectation of the average of quantities connected to stationarity measures for problem (1) is bounded above by a constant proportional to β. Intuitively, this shows that the iterates generated by the algorithm ultimately remain in a region in which these stationarity measures are small. On the other hand, Theorem 1(b) shows under Assumption 4 that if {ρk} and {βk} vanish with ρk = O(β_k^2), then a subsequence of iterates exists over which the expected values of these stationarity measures vanish. As seen in Lemma 3, if there exists a subsequence of iterates, say indexed by S ⊆ N, that converges to a point satisfying certain regularity conditions, then {‖ck‖_2 − ‖ck + ∇c(xk)^T vk‖_2}_{k∈S} → 0 means that the limit point is stationary with respect to the problem to minimize (1/2)‖c(x)‖_2^2 subject to x ∈ Rn_≥0.
Similarly, as seen in Lemma 5, if there exists such a subsequence and the limit point is feasible with respect to problem (1), then {dtrue k }k∈S → 0 means that the limit point is stationary with respect to (1). These situations are not guaranteed to occur, but this discussion shows that Theorem 1 is meaningful. 4.4 Non-vanishing Merit Parameter Our main convergence result in the previous section, namely, Theorem 1, requires Assumption 4, which in turn requires that the merit parameter sequence ultimately becomes a sufficiently small, positive constant sequence. (Recall the discussion after Assumption 4.) To show that this corresponds to a realistic event for practical purposes, we next show conditions under which one finds that the merit parameter would not vanish. We begin by showing a generally applicable result about the solution of (7). It is related to that in Lemma 3, but is stronger due to an additional assumption. 21 Lemma 19. Suppose the conditions of Lemma 3 hold and there exists κw ∈ [0 , 1) such that for all generated k ∈ N in any run of the algorithm one has ‖ck + ∇c(xk)T vk‖2 ≤ κw‖ck‖2. Then, there exists κv ∈ R>0 such that, in any run of the algorithm such that iteration k ∈ N is reached, one finds ‖ck‖2 − ‖ ck + ∇c(xk)T vk‖2 ≥ κv ‖vk‖2. (33) Proof. Consider an arbitrary run of the algorithm in which the conditions of the lemma hold and iteration k ∈ N is reached. If ck = 0, then it follows by construction of (7) that vk = 0, in which case (33) follows trivially. Hence, we may proceed under the assumption that ck 6 = 0, which by the conditions of the lemma, Assumption 1 (see (2)), and the triangle inequality gives κ∇c‖vk‖2 ≥ ‖∇ c(xk)T vk‖2 ≥ ‖ ck‖2 − ‖ ck + ∇c(xk)T vk‖2 ≥ (1 − κw)‖ck‖2. 
Consequently, from (21), (22), and a similar derivation as in Lemma 3, one finds

2‖ck‖_2(‖ck‖_2 − ‖ck + ∇c(xk)^T vk‖_2) ≥ ‖ck‖_2^2 − ‖ck + ∇c(xk)^T vk‖_2^2 ≥ min{λ^2/κ_{∇c}^2, 2μ}‖vk‖_2^2 ≥ min{λ^2/κ_{∇c}^2, 2μ}((1 − κw)/κ_{∇c})‖ck‖_2‖vk‖_2,

from which the desired conclusion in (33) follows.

We now show that, under common conditions and when the norm of the stochastic gradient estimate is bounded uniformly, the denominator of the formula for τ^trial_k in (10) is bounded proportionally to ‖vk‖_2.

Lemma 20. Suppose that Assumptions 1 and 3 hold, and that there exists (λ, μ, κg) ∈ R>0 × R>0 × R>0 such that for all generated k ∈ N in any run of the algorithm one has ∇c(xk)^T ∇c(xk) ⪰ λI, μk ≥ μ, and ‖gk‖_2 ≤ κg. Then, there exists κ_{g,H} ∈ R>0 such that, in any run such that iteration k ∈ N is reached, one finds g_k^T dk + (1/2)d_k^T Hk dk ≤ κ_{g,H}‖vk‖_2.

Proof. Consider an arbitrary run in which the conditions of the lemma hold and iteration k ∈ N is reached. By Lemma 13, (u, w) = (0, 0) is feasible for (7), so

max{(1/2)‖ck + ∇c(xk)^T ∇c(xk)wk‖_2^2, (1/2)μk‖uk‖_2^2} ≤ (1/2)‖ck + ∇c(xk)^T ∇c(xk)wk‖_2^2 + (1/2)μk‖uk‖_2^2 ≤ (1/2)‖ck‖_2^2.

Since (1/2)‖ck + ∇c(xk)^T ∇c(xk)wk‖_2^2 ≤ (1/2)‖ck‖_2^2, it follows that

‖∇c(xk)^T ∇c(xk)wk‖_2^2 ≤ −2c_k^T ∇c(xk)^T ∇c(xk)wk ≤ 2‖ck‖_2‖∇c(xk)^T ∇c(xk)wk‖_2,

which along with Assumption 1 (see (2)) shows that

‖∇c(xk)wk‖_2 ≤ κ_{∇c}‖wk‖_2 ≤ (κ_{∇c}/λ)‖∇c(xk)^T ∇c(xk)wk‖_2 ≤ 2(κ_{∇c}/λ)‖ck‖_2 ≤ 2(κ_{∇c}/λ)κc.

On the other hand, since (1/2)μk‖uk‖_2^2 ≤ (1/2)‖ck‖_2^2, it follows under Assumption 1 that ‖uk‖_2 ≤ (1/√μk)‖ck‖_2 ≤ (1/√μ)κc. Therefore, overall, it follows that

‖vk‖_2 = √(‖∇c(xk)wk‖_2^2 + ‖uk‖_2^2) ≤ (√(4(κ_{∇c}/λ)^2 + 1/μ))κc.

Now, since vk = ∇c(xk)wk + uk is a feasible solution of (8) while dk is the optimal solution of (8), it follows under the conditions of the lemma that

g_k^T dk + (1/2)d_k^T Hk dk ≤ g_k^T vk + (1/2)v_k^T Hk vk ≤ κg‖vk‖_2 + (1/2)κH‖vk‖_2^2 ≤ (κg + (1/2)κH(√(4(κ_{∇c}/λ)^2 + 1/μ))κc)‖vk‖_2,

which leads to the desired conclusion.

We now prove conditions under which the merit parameter does not vanish.

Theorem 2.
Suppose that Assumptions 1 and 3 hold, and that there exists (λ, μ, κg, κw) ∈ R>0 × R>0 × R>0 × [0, 1) such that for all generated k ∈ N in any run of the algorithm one has ∇c(xk)^T ∇c(xk) ⪰ λI, μk ≥ μ, ‖gk‖_2 ≤ κg, and ‖ck + ∇c(xk)^T vk‖_2 ≤ κw‖ck‖_2. Then, in any run that does not terminate finitely, the latter event in Lemma 13(k) occurs (i.e., {τk} does not vanish) with τmin ≥ ((1 − σ)κv/κ_{g,H})(1 − ετ).

Proof. Consider arbitrary k ∈ N in a run that does not terminate finitely and note that if dk = 0 or g_k^T dk + (1/2)d_k^T Hk dk ≤ 0, then τ^trial_k ← ∞, and otherwise τ^trial_k is set by (10). Hence, under the conditions of the theorem and by Lemmas 19–20,

τ^trial_k ≥ (1 − σ)(‖ck‖_2 − ‖ck + ∇c(xk)^T dk‖_2)/(g_k^T dk + (1/2)d_k^T Hk dk) = (1 − σ)(‖ck‖_2 − ‖ck + ∇c(xk)^T vk‖_2)/(g_k^T dk + (1/2)d_k^T Hk dk) ≥ (1 − σ)κv/κ_{g,H} =: τ*.

Consequently, by the merit parameter update in (11), τk < τk−1 only if τk−1 > τ*. This, along with Lemma 13(d)–(e), leads to the conclusion.

Since ∇f is bounded in norm over the set X in Assumption 1, Theorem 2 shows that, amongst the other stated conditions, if ‖gk − ∇f(xk)‖_2 is bounded uniformly over all k ∈ N in any run, then the merit parameter sequence always remains bounded below by a positive number. Under such conditions, the only potentially poor behavior of the merit parameter sequence is that, in a given run, it ultimately remains constant at a value that is too large. We claim that, under certain assumptions about the distribution of the stochastic gradient estimates, this behavior can be shown to occur with probability zero. (We do not prove such a result here, but refer the interested reader to the corresponding result for the equality-constraints-only setting, in which case the behavior of the merit parameter is similar.) On the other hand, if ‖gk − ∇f(xk)‖_2 is not bounded uniformly in this manner, then it is possible for the merit parameter sequence to vanish unnecessarily. This issue is one that should be noted by a user of the algorithm.
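The mechanism behind Theorem 2 can be illustrated with a small sketch of the trial merit parameter computation around (10)–(11). The update rule here is our paraphrase of the behavior described in the surrounding text (decrease only when the previous value exceeds the trial value), not the paper's exact formula, so treat it as an assumption-laden illustration.

```python
# Sketch of the trial merit parameter: tau_trial is infinite when the model
# curvature term g^T d + 0.5 d^T H d is nonpositive, and otherwise equals
# (1 - sigma) * (||c|| - ||c + J d||) / (g^T d + 0.5 d^T H d).
# The update below is our paraphrase of (11): tau is left alone unless it
# exceeds the trial value, in which case it is cut strictly below it.
import math

def tau_trial(sigma, c_norm, lin_norm, gTd, dHd):
    denom = gTd + 0.5 * dHd
    if denom <= 0.0:
        return math.inf
    return (1.0 - sigma) * (c_norm - lin_norm) / denom

def update_tau(tau_prev, trial, eps_tau):
    if tau_prev <= trial:
        return tau_prev            # no decrease needed
    return (1.0 - eps_tau) * trial # decrease by at least a factor of trial

t = tau_trial(sigma=0.1, c_norm=1.0, lin_norm=0.2, gTd=0.5, dHd=1.0)
assert abs(t - 0.72) < 1e-12           # (1-0.1)*(1.0-0.2)/(0.5+0.5)
assert update_tau(0.1, t, 1e-2) == 0.1 # already small: unchanged
assert update_tau(1.0, t, 1e-2) < t    # too large: cut below the trial value
```

In the notation of Theorem 2, Lemmas 19 and 20 guarantee that the numerator and denominator of `tau_trial` keep it above the run-independent threshold τ* = (1 − σ)κv/κ_{g,H}, which is what prevents {τk} from vanishing.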
In particular, if in a run of the algorithm one chooses μk ≥ μ for some μ ∈ R>0 for all k ∈ N and one finds for some (λ, κw) ∈ R>0 × [0, 1) that the generated k ∈ N yield ∇c(xk)^T ∇c(xk) ⪰ λI and ‖ck + ∇c(xk)^T vk‖_2 ≤ κw‖ck‖_2, yet τk has become exceedingly small, then Theorem 2 shows that this must be due to the stochastic gradient estimates tending to become significantly large in norm, in which case the performance of the algorithm may improve with more accurate stochastic gradient estimates.

4.5 Deterministic Algorithm

We conclude this section with a statement of a convergence result that we claim to hold for Algorithm 1 if it were to be run with gk = ∇f(xk) for all k ∈ N. Due to space considerations, we do not provide a proof of the result, although we offer the proposition for the reader's reference and claim that it holds from results proved in this paper for the stochastic setting as well as other similar results for SQP methods for deterministic continuous nonlinear optimization.

Proposition 1. Suppose Assumptions 1 and 3 hold and Algorithm 1 is run with gk = ∇f(xk) for all k ∈ N. If for all large k ∈ N there exists κw ∈ [0, 1) such that ‖ck + ∇c(xk)^T vk‖_2 ≤ κw‖ck‖_2, then {xk} ⊂ Rn_≥0, {τk} is bounded away from zero, and, with yk ∈ Rm and zk ∈ Rn_≥0 defined as the optimal multipliers corresponding to the solution of subproblem (8) for all k ∈ N, it follows that

‖(∇f(xk) + ∇c(xk)yk − zk, ck, x_k^T zk)‖ → 0.

Otherwise, {xk} ⊂ Rn_≥0, {min{∇c(xk)ck, 0}} → 0, and {|x_k^T ∇c(xk)ck|} → 0, and if the sequence {τk} is bounded away from zero, then

{‖(∇f(xk) + ∇c(xk)yk − zk, x_k^T zk)‖} → 0.

5 Numerical Results

In this section, we provide results demonstrating the performance of a MATLAB implementation of Algorithm 1 when solving a subset of problems from CUTEst, where Gurobi is used to solve the arising subproblems.
The purpose of these experiments is to compare this performance against that of the Julia implementation provided by the authors of [31, Algorithm 1]. From all inequality-constrained problems in CUTEst, we selected those such that (i) m ≤ n ≤ 1000, (ii) f(xk) ≥ −10^20 for all k ∈ N in all runs of our algorithm, and (iii) Gurobi did not report any errors. This resulted in a set of 323 test problems.

For each test problem, both codes used the same initial iterate and generated stochastic gradient estimates in the same manner. Specifically, for all k ∈ N in each run, the codes set gk ∼ N(∇f(xk), εg(I + ee^T)), where e is the all-ones vector and εg ∈ {10^−8, 10^−4, 10^−2, 10^−1} was fixed for each run (see below). If a problem had only inequality constraints, i.e., m = 0, then our code explicitly computed α^ϕ_k (as defined in (15)) and set αk ← α^max_k for all k ∈ N. Otherwise, the code set αk ← min{1, (1.1)^{tk} α^min_k, α^min_k + θβk}, where tk ← max{t ∈ N : ϕk((1.1)^t α^min_k) ≤ 0}. This guarantees that αk ∈ [α^min_k, α^max_k] for all k ∈ N. The other user-defined parameters of Algorithm 1 were selected as σ = τ0 = 0.1, η = 0.5, ξ0 = 1, ετ = εξ = 10^−2, θ = 10^4, μk = max{10^−8, 10^−4‖ck‖_2^2}, βk = 1, and Hk = I for all k ∈ N. The Lipschitz constants L and Γ were estimated every 100 iterations by differences of stochastic gradients at ten samples around the current iterate. Meanwhile, we ran the Julia code for [31, Algorithm 1] with the AdapGD option and its default parameter settings as described in [31, Section 4]. Each code terminated as soon as 10^4 stochastic gradient samples were evaluated or a 12-hour CPU time limit was reached.

Let FeasErr(x) be the ∞-norm constraint violation at x and let KKTErr(x, y, z) be the ∞-norm violation of the KKT conditions (recall (4)) at a primal-dual iterate (x, y, z). Each run of Algorithm 1 generates {xk} ⊂ Rn.
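A draw from N(∇f(xk), εg(I + ee^T)) can be generated without factoring the covariance matrix: if z ~ N(0, I) and the scalar w ~ N(0, 1) are independent, then z + we has covariance I + ee^T. The following Python sketch uses this identity; the sampling trick is ours and is not a detail taken from the paper's code.

```python
# Sketch: sample g ~ N(grad_f, eps_g * (I + e e^T)) via g = grad_f +
# sqrt(eps_g) * (z + w*e), where z ~ N(0, I) componentwise and w ~ N(0, 1)
# is an independent shared scalar. Cov(z + w*e) = I + e e^T as required.
import random

def stochastic_gradient(grad_f, eps_g, rng=random):
    w = rng.gauss(0.0, 1.0)          # shared component along e
    scale = eps_g ** 0.5
    return [gi + scale * (rng.gauss(0.0, 1.0) + w) for gi in grad_f]

random.seed(0)
g = stochastic_gradient([1.0, -2.0, 0.5], 1e-8)
assert len(g) == 3
assert all(abs(g[i] - t) < 1e-2 for i, t in enumerate([1.0, -2.0, 0.5]))
```

For the small εg values used in the experiments, the perturbation is tiny, which is reflected in the tolerance of the final assertion.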
For each k ∈ N, let y^true_k ∈ Rm and z^true_k ∈ Rn denote the optimal Lagrange multipliers corresponding to the equality and inequality constraints when (8) is solved with gk = ∇f(xk). For each run of Algorithm 1, we determined the best iterate as x_{kbest}, where

kbest = argmin_{k∈N} FeasErr(xk) if FeasErr(xk) > 10^−4 for all k ∈ N, and otherwise kbest = argmin_{k∈N} {KKTErr(xk, y^true_k, z^true_k) : FeasErr(xk) ≤ 10^−4}.

We determined the best iterate in a run of [31, Algorithm 1] using the same formula with the sequence of iterates and Lagrange multiplier estimates that are computed as part of the algorithm. Our results for four noise levels, provided in Figure 1 below, are presented in terms of FeasErr(x_{kbest}) as the feasibility error and KKTErr(x_{kbest}, y^true_{kbest}, z^true_{kbest}) as the KKT error for each run of both algorithms.

Since the Julia code for [31, Algorithm 1] is only set up to solve CUTEst problems without simple bound constraints, the results in Figure 1 are presented in two parts. For the 57 problems for which both algorithms were set up to run, the first two box plots show the best feasibility and KKT errors achieved by both codes, where each problem is run five times (since the behaviors of the algorithms are stochastic). In the third box plot, we report the best feasibility and KKT errors obtained by our MATLAB code on the remaining 266 (= 323 − 57) problems, again with five runs for each problem.

Overall, one finds that the performance of our algorithm is comparatively good in this experimental set-up. The best feasibility and KKT errors are relatively low for our algorithm, although the errors increase with the noise level, as may be expected. Experiments with diminishing step sizes also showed favorable performance for our algorithm; these results are omitted due to page limit restrictions.

6 Conclusion

We have proposed, analyzed, and tested an algorithm for solving continuous optimization problems.
The algorithm requires that constraint function and derivative values can be computed in each iteration, but does not require exact objective function and derivative values; rather, the algorithm merely requires that a stochastic objective gradient estimate is computed to satisfy relatively loose assumptions in each iteration. The theoretical convergence guarantees of the algorithm require knowledge of Lipschitz constants for the objective gradient and constraint Jacobian, although in practice these constants can be estimated. Our numerical experiments show that our proposed algorithm can outperform an alternative algorithm that relies on the ability to compute more accurate gradient estimates. We have provided comments throughout the paper on how the assumptions that are required for our theoretical convergence guarantees might be loosened further.

Figure 1: Box plots comparing the best feasibility errors (left) and KKT errors (middle) of a MATLAB implementation of Algorithm 1 ("Stochastic SQP") and the Julia implementation provided by the authors of [31, Algorithm 1] ("Active-set SQP") when solving 57 CUTEst problems without simple bound constraints. Box plots of the best feasibility and KKT errors (combined, right) of the implementation of Algorithm 1 when solving the other 266 CUTEst problems from the test set.

Acknowledgements

The authors are grateful to Sen Na for providing consultation about the Julia implementation provided by the authors of [31, Algorithm 1]. This material is based upon work supported by the U.S. NSF under award CCF-2139735 and by the Office of Naval Research under award N00014-21-1-2532.

References

- A. S. Berahas, F. E. Curtis, D. Robinson, and B. Zhou. Sequential quadratic optimization for nonlinear equality constrained stochastic optimization. SIAM Journal on Optimization, 31(2):1352–1379, 2021.
- Albert S. Berahas, Raghu Bollapragada, and Baoyu Zhou.
An adaptive sampling sequential quadratic programming method for equality constrained stochastic optimization. arXiv preprint arXiv:2206.00712, 2022.
- Albert S. Berahas, Frank E. Curtis, Michael J. O'Neill, and Daniel P. Robinson. A stochastic sequential quadratic optimization algorithm for nonlinear equality constrained optimization with rank-deficient Jacobians. arXiv preprint arXiv:2106.13015, 2021.
- Albert S. Berahas, Jiahao Shi, Zihong Yi, and Baoyu Zhou. Accelerating stochastic sequential quadratic programming for equality constrained optimization using predictive variance reduction. arXiv preprint arXiv:2204.04161, 2022.
- Albert S. Berahas, Miaolan Xie, and Baoyu Zhou. A sequential quadratic programming method with high probability complexity bounds for nonlinear equality constrained stochastic optimization. arXiv preprint arXiv:2301.00477, 2023.
- Dimitri Bertsekas. Convex Optimization Theory, volume 1. Athena Scientific, 2009.
- Dimitri P. Bertsekas. Network Optimization: Continuous and Discrete Models, volume 8. Athena Scientific, 1998.
- Dimitri P. Bertsekas and John N. Tsitsiklis. Gradient convergence in gradient methods with errors. SIAM Journal on Optimization, 10(3):627–642, 2000.
- Richard H. Byrd, Jean Charles Gilbert, and Jorge Nocedal. A trust region method based on interior point techniques for nonlinear programming. Mathematical Programming, 89(1):149–185, 2000.
- Richard H. Byrd, Mary E. Hribar, and Jorge Nocedal. An interior point algorithm for large-scale nonlinear programming. SIAM Journal on Optimization, 9(4):877–900, 1999.
- Andrew R. Conn. Constrained optimization using a nondifferentiable penalty function. SIAM Journal on Numerical Analysis, 10(4):760–784, 1973.
- F. E. Curtis, D. P. Robinson, and B. Zhou. Inexact sequential quadratic optimization for minimizing a stochastic objective function subject to deterministic nonlinear equality constraints. arXiv preprint arXiv:2107.03512, 2021.
- Frank E. Curtis, Michael J. O'Neill, and Daniel P. Robinson.
Worst-case complexity of an SQP method for nonlinear equality constrained stochastic optimization. arXiv preprint arXiv:2112.14799, 2021.

G Di Pillo and L Grippo. Exact penalty functions in constrained optimization. SIAM Journal on Control and Optimization, 27(6):1333-1360, 1989.

Gianni Di Pillo and Luigi Grippo. A continuously differentiable exact penalty function for nonlinear programming problems with inequality constraints. SIAM Journal on Control and Optimization, 23(1):72-84, 1985.

I. I. Dikin. Iterative solution of problems of linear and quadratic programming. In Doklady Akademii Nauk, volume 174, pages 747-748. Russian Academy of Sciences, 1967.

Yuchen Fang, Sen Na, Michael W. Mahoney, and Mladen Kolar. Fully stochastic trust-region sequential quadratic programming for equality-constrained optimization problems. arXiv preprint arXiv:2211.15943, 2022.

Roger Fletcher. An exact penalty function for nonlinear programming with inequalities. Mathematical Programming, 5(1):129-150, 1973.

Roger Fletcher. Practical Methods of Optimization. John Wiley & Sons, 2013.

Philip E Gill, Walter Murray, and Michael A Saunders. SNOPT: An SQP algorithm for large-scale constrained optimization. SIAM Review, 47(1):99-131, 2005.

Torkel Glad and Elijah Polak. A multiplier method with automatic limitation of penalty growth. Mathematical Programming, 17(1):140-155, 1979.

Nicholas IM Gould, Dominique Orban, and Philippe L Toint. CUTEst: a constrained and unconstrained testing environment with safe threads for mathematical optimization. Computational Optimization and Applications, 60(3):545-557, 2015.

Gurobi Optimization, LLC. Gurobi Optimizer Reference Manual, 2023.

Shih-Ping Han. Superlinearly convergent variable metric algorithms for general nonlinear programming problems. Mathematical Programming, 11(1):263-282, 1976.

Richard J Hathaway. A constrained formulation of maximum-likelihood estimation for normal mixture distributions. The Annals of Statistics, 13(2):795-800, 1985.
Toshihide Ibaraki and Naoki Katoh. Resource Allocation Problems: Algorithmic Approaches. MIT Press, 1988.

Drew P Kouri and Thomas M Surowiec. Risk-averse PDE-constrained optimization using the conditional value-at-risk. SIAM Journal on Optimization, 26(1):365-396, 2016.

Guanghui Lan. First-order and Stochastic Optimization Methods for Machine Learning. Springer, 2020.

L Lasdon, A Waren, and R Rice. An interior penalty method for inequality constrained optimal control problems. IEEE Transactions on Automatic Control, 12(4):388-395, 1967.

Robert McGill. Optimum control, inequality state constraints, and the generalized Newton-Raphson algorithm. Journal of the Society for Industrial and Applied Mathematics, Series A: Control, 3(2):291-298, 1965.

Sen Na, Mihai Anitescu, and Mladen Kolar. Inequality constrained stochastic nonlinear optimization via active-set sequential quadratic programming. arXiv preprint arXiv:2109.11502, 2021.

Sen Na, Mihai Anitescu, and Mladen Kolar. An adaptive stochastic sequential quadratic programming with differentiable exact augmented Lagrangians. Mathematical Programming, pages 1-71, 2022.

Sen Na and Michael W Mahoney. Asymptotic convergence rate and statistical inference for stochastic sequential quadratic programming. arXiv preprint arXiv:2205.13687, 2022.

Jorge Nocedal and Stephen Wright. Numerical Optimization. Springer, 2006.

Figen Oztoprak, Richard Byrd, and Jorge Nocedal. Constrained optimization in the presence of noise. arXiv preprint arXiv:2110.04355, 2021.

Vivak Patel and Shushu Zhang. Stochastic gradient descent on nonconvex functions with general noise models. arXiv preprint arXiv:2104.00423, 2021.

Andre F Perold. Large-scale portfolio optimization. Management Science, 30(10):1143-1160, 1984.

Songqiang Qiu and Vyacheslav Kungurtsev. A sequential quadratic programming method for optimization with stochastic objective functions, deterministic inequality constraints and robust subproblems.
arXiv preprint arXiv:2302.07947, 2023.

Daniel P. Robinson. Primal-dual methods for nonlinear optimization. PhD thesis, University of California, San Diego, 2007.

Alexander Shapiro, Darinka Dentcheva, and Andrzej Ruszczynski. Lectures on Stochastic Programming: Modeling and Theory. SIAM, 2021.

Qiankun Shi, Xiao Wang, and Hao Wang. A momentum-based linearized augmented Lagrangian method for nonconvex constrained stochastic optimization. 2022.

A. Wächter and L. T. Biegler. On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Mathematical Programming, 106(1):25-57, 2006.

Stephen J Wright. Primal-Dual Interior-Point Methods. SIAM, 1997.

Hiroshi Yamashita. A globally convergent primal-dual interior point method for constrained optimization. Optimization Methods and Software, 10(2):443-469, 1998.

Willard I Zangwill. Non-linear programming via penalty functions. Management Science, 13(5):344-358, 1967.

Victor M. Zavala and Mihai Anitescu. Scalable nonlinear programming via exact differentiable penalty functions and trust-region Newton methods. SIAM Journal on Optimization, 24(1):528-558, 2014.
12284
https://brainly.in/question/53608587
Find the general solution of 2cos(2x + π/4) = √2 and hence find the solutions in (−2π, 2π) - Brainly.in

Asked by millumidhlaj100, 20.09.2022, Math, Secondary School.

Answer by VishalKumar274, step-by-step explanation:

We already know that the values of sin x and cos x repeat after an interval of 2π, and the values of tan x repeat after an interval of π. If the equation involves a variable 0 ≤ x < 2π, then the solutions are called principal solutions. A general solution is one which involves the integer n and gives all solutions of a trigonometric equation. The character Z is used to denote the set of integers.

Principal Solutions of Trigonometric Equations

Let's look at this example to help us understand principal solutions:

Example 1: Find the principal solutions of the equation sin x = √3/2.

Solution: We know that sin(π/3) = √3/2. Also, sin(2π/3) = sin(π − π/3). Now, we know that sin(π − x) = sin x. Hence, sin(2π/3) = sin(π/3) = √3/2. Therefore, the principal solutions of sin x = √3/2 are x = π/3 and x = 2π/3.
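The answer above illustrates principal solutions but never returns to the equation that was actually asked. A direct solution, sketched here using the standard identity cos θ = cos α ⟹ θ = 2nπ ± α (this derivation is not part of the posted answer):

```latex
\begin{aligned}
2\cos\left(2x + \tfrac{\pi}{4}\right) = \sqrt{2}
  &\;\Longrightarrow\; \cos\left(2x + \tfrac{\pi}{4}\right) = \tfrac{1}{\sqrt{2}} = \cos\tfrac{\pi}{4} \\
  &\;\Longrightarrow\; 2x + \tfrac{\pi}{4} = 2n\pi \pm \tfrac{\pi}{4}, \quad n \in \mathbb{Z} \\
  &\;\Longrightarrow\; x = n\pi \;\text{ or }\; x = n\pi - \tfrac{\pi}{4}
\end{aligned}
```

Restricting to the open interval $(-2\pi, 2\pi)$ gives $x \in \{-\tfrac{5\pi}{4},\; -\pi,\; -\tfrac{\pi}{4},\; 0,\; \tfrac{3\pi}{4},\; \pi,\; \tfrac{7\pi}{4}\}$.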
12285
https://dsp.stackexchange.com/questions/62893/efficient-magnitude-comparison-for-complex-numbers
Efficient Magnitude Comparison for Complex Numbers

Asked Dec 28, 2019; viewed 4k times.

Is there a more efficient algorithm (or what is the most efficient known algorithm) to select the larger of two complex numbers given as $I + jQ$ without having to compute the squared magnitude $I^2 + Q^2$? In particular, binary arithmetic solutions that do not require multipliers? This would be for a binary arithmetic solution using AND, NAND, OR, NOR, XOR, XNOR, INV, and shifts and adds, without simply replacing required multiplication steps with their shift-and-add equivalents (or what would be equivalent in processing steps). Also assume the number is represented with rectangular fixed-point (bounded-integer) coordinates (I, Q).

I am aware of magnitude estimators for complex numbers such as $\max(I,Q) + \min(I,Q)/2$, and more accurate variants at the expense of multiplying coefficients, but they all have a finite error. I have considered using the CORDIC rotator to rotate each to the real axis and then being able to compare real numbers. This solution also has finite error, but I can choose the number of iterations in the CORDIC such that the error is less than $e$ for any $e$ that I choose within my available numeric precision. For that reason this solution would be acceptable. Are there other solutions that would be more efficient than the CORDIC (which requires time via iterations to achieve accuracy)?

Determining Best Answer

There was incredible work done by the participants (and we still welcome competing options if anyone has other ideas).
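As a side note on the finite error the question mentions: the $\max + \min/2$ estimator is easy to bound numerically. A quick Python 3 check (illustration only, not taken from any of the answers; `mag_est` is a hypothetical helper name):

```python
import math

def mag_est(i, q):
    # max(|I|,|Q|) + min(|I|,|Q|)/2: multiplier-free magnitude estimate
    a, b = abs(i), abs(q)
    if b > a:
        a, b = b, a
    return a + b / 2

# Sweep unit vectors over the first quadrant and record the worst error;
# the true magnitude is exactly 1, so the error is |estimate - 1|.
worst = 0.0
for k in range(10001):
    theta = (math.pi / 2) * k / 10000
    worst = max(worst, abs(mag_est(math.cos(theta), math.sin(theta)) - 1.0))

print(worst)  # about 0.118: the estimate peaks at sqrt(5)/2 where tan(theta) = 1/2
```

This irreducible error of roughly 11.8% is exactly why the question asks for a method whose error can be driven below any chosen $e$.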
I posted my proposed approach to ranking the algorithms and welcome debate on the ranking approach before I dive in: Best Approach to Rank Complex Magnitude Comparison Problem

Tags: algorithms, complex, numerical-algorithms

Asked Dec 28, 2019 by Dan Boschen; edited Jan 2, 2020.

Comments:

I'm sure you're aware, but of course the square root is a strictly monotonic function on the non-negative reals, and thus can be omitted for comparison purposes. – Marcus Müller, Dec 28, 2019

Are more efficient ways to calculate $I^2 + Q^2$ OK? – Olli Niemitalo, Dec 29, 2019

I'm not sure about the required accuracy and the cost of multiplications, but I'd say that for most practical applications the alpha_max_plus_beta_min algorithm is a good choice. – Matt L., Dec 29, 2019

Excellent name reference, Matt! I have been looking for this since I saw this $0.96a + 0.4b$ approximation in an old "engineer formulae" book. – Laurent Duval, Dec 29, 2019

@MattL. This was my reference in my comment "I am aware of magnitude estimators". My issue with those (to my understanding) is that I cannot, with only 2 factors, reduce the error to less than $e$ for any given $e$. I am looking for a solution that I can use for any given precision, at the expense of more operations the higher the desired precision. Is my understanding correct on this limitation for the alpha max beta min?
(Thanks for naming it) – Dan Boschen, Dec 29, 2019

9 Answers

You mention in a comment that your target platform is a custom IC. That makes the optimization very different from trying to optimize for an already existing CPU. On a custom IC (and to a lesser extent, on an FPGA), we can take full advantage of the simplicity of bitwise operations. In addition, to reduce the area it is not only important to reduce the operations we execute, but to execute as many operations as possible using the same piece of logic. Logic descriptions in a language such as VHDL are converted to a logic gate netlist by a synthesis tool, which can do most of the low-level optimization for us. What we need to do is to choose an architecture that achieves the goal we want to optimize for; I'll show several alternatives below.

Single cycle computation: lowest latency

If you want to get a result in a single cycle, this all basically boils down to a large combinatorial logic function. That's exactly what synthesis tools are great at solving, so you could just try to throw the basic equation at the synthesizer:

    if I1*I1 + Q1*Q1 > I2*I2 + Q2*Q2 then ...

and see what it gives. Many synthesizers have attributes that you can use to force them to perform logic-gate-level optimization instead of using multiplier and adder macros. This will still take quite a bit of area, but is likely to be the smallest-area single cycle solution there is. There is a significant number of optimizations that the synthesizer can do, such as exploiting the symmetry in $x \cdot x$ as opposed to a generic $x \cdot y$ multiplier.

Pipelined computation: maximum throughput

Pipelining the single cycle computation will increase maximum clock speed and thus throughput, but it will not reduce the area needed.
For basic pipelining, you could compute N bits of each magnitude on each pipeline level, starting with the least significant bits. In the simplest case, you could do it in two halves:

    -- Second pipeline stage
    if m1(N downto N/2) > m2(N downto N/2) then ...

    -- First pipeline stage
    m1 := I1*I1 + Q1*Q1;
    m2 := I2*I2 + Q2*Q2;
    if m1(N/2-1 downto 0) > m2(N/2-1 downto 0) then ...

Why start with the least significant bits? Because they have the shortest logic equations, making them faster to compute. The result for the most significant bits is fed into a comparator only on the second clock cycle, giving it more time to proceed through the combinatorial logic.

For more than 2 stages of pipeline, the carry would have to be passed between the stages separately. This would eliminate the long ripple-carry chain that would normally limit the clock rate of a multiplication. Starting the computation with the most significant bits could allow early termination, but in a pipelined configuration that is hard to take advantage of, as it would just cause a pipeline bubble.

Serialized computation, LSB first: small area

Serializing the computation will reduce the area needed, but each value will take multiple cycles to process before the next computation can be started. The smallest-area option is to compute 2 least significant magnitude bits on each clock cycle. On the next cycle, the I and Q values are shifted right by 1 bit, causing the squared magnitude to shift by 2 bits. This way only a 2x2-bit multiplier is needed, which takes very little chip area. Starting with the least significant bits allows easy handling of the carry in the additions: just store the carry bit in a register and add it on the next cycle. To avoid storing the full magnitude values, the comparison can also be serialized, remembering the LSB result and overriding it by the MSB result if the MSB bits differ.
    m1 := I1(1 downto 0) * I1(1 downto 0) + Q1(1 downto 0) * Q1(1 downto 0) + m1(3 downto 2);
    m2 := I2(1 downto 0) * I2(1 downto 0) + Q2(1 downto 0) * Q2(1 downto 0) + m2(3 downto 2);
    I1 := shift_right(I1, 1); Q1 := shift_right(Q1, 1);
    I2 := shift_right(I2, 1); Q2 := shift_right(Q2, 1);
    if m1 > m2 then
        result := 1;
    elif m1 < m2 then
        result := 0;
    else
        -- keep result from LSBs
    end if;

Serialized computation, MSB first: small area, early termination

If we modify the serialized computation to process input data starting with the most significant bit, we can terminate as soon as we find a difference. This does cause a small complication in the carry logic: the upper-most bits could be changed by the carry from the lower bits. However, the effect this has on the comparison result can be predicted. We only get to the lower bits if the upper bits of each magnitude are equal. Then if one magnitude has a carry and the other does not, that value is necessarily larger. If both magnitudes have equal carry, the upper bits will remain equal also.

    m1 := I1(N downto N-1) * I1(N downto N-1) + Q1(N downto N-1) * Q1(N downto N-1);
    m2 := I2(N downto N-1) * I2(N downto N-1) + Q2(N downto N-1) * Q2(N downto N-1);
    if m1 > m2 then
        -- Computation finished, (I1,Q1) is larger
    elif m1 < m2 then
        -- Computation finished, (I2,Q2) is larger
    else
        -- Continue with next bits
        I1 := shift_left(I1, 1); Q1 := shift_left(Q1, 1);
        I2 := shift_left(I2, 1); Q2 := shift_left(Q2, 1);
    end if;

It is likely that the MSB-first and LSB-first serialized solutions will have approximately equal area, but the MSB-first will take fewer clock cycles on average.

In each of these code examples, I rely on the synthesizer to optimize the multiplication on the logic gate level into binary operations. The width of the operands should be selected so that the computations do not overflow.

EDIT: After thinking about this overnight, I'm no longer so sure that squaring a number can be fully serialized or done just 2 bits at a time.
So it is possible the serialized implementations would have to resort to an N-bit accumulator after all.

(answered Dec 29, 2019, edited Dec 30, 2019, by jpa)

PROLOGUE

My answer to this question is in two parts since it is so long and there is a natural cleavage. This answer can be seen as the main body and the other answer as appendices. Consider it a rough draft for a future blog article.

Answer 1:
    Prologue (you are here)
    Latest Results
    Source code listing
    Mathematical justification for preliminary checks

Answer 2:
    Primary determination probability analysis
    Explanation of the lossless adaptive CORDIC algorithm
    Small angle solution

This turned out to be a way more in-depth and interesting problem than it first appeared. The answer given here is original and novel. I, too, am very interested if it, or parts of it, exist in any canon.

The process can be summarized like this:

1. An initial primary determination is made. This catches about 80% of cases very efficiently.
2. The points are moved to difference/sum space and a one-pass series of conditions is tested. This catches all but about 1% of cases. Still quite efficient.
3. The resultant difference/sum pair is moved back to IQ space, and a custom CORDIC approach is attempted.

So far, the results are 100% accurate. There is one possible condition which may be a weakness, for which I am now testing candidates to turn into a strength. When it is ready, it will be incorporated in the code in this answer, and an explanation added to the other answer.

The code has been updated. It now reports exit location counts. The location points are commented in the function definition.
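For readers skimming ahead, step 1 of the summary can be paraphrased in isolation like this. A Python 3 sketch of the same test that appears inside CompareMags in the listing below (the function name here is illustrative, not from the listing):

```python
def primary_determination(i1, q1, i2, q2):
    # Fold both points into the first octant (a >= b >= 0); the magnitude
    # is unchanged by taking absolute values and swapping coordinates.
    a1, b1 = abs(i1), abs(q1)
    if b1 > a1:
        a1, b1 = b1, a1
    a2, b2 = abs(i2), abs(q2)
    if b2 > a2:
        a2, b2 = b2, a2
    # If one point wins on both a and a+b, its magnitude is larger.
    if a1 > a2 and a1 + b1 >= a2 + b2:
        return 1
    if a1 < a2 and a1 + b1 <= a2 + b2:
        return -1
    if a1 == a2:                      # then the b values decide outright
        return (b1 > b2) - (b1 < b2)
    return None                       # ambiguous: needs the finer tests

print(primary_determination(3, 1, 2, 1))   # 1
print(primary_determination(5, 4, 6, 1))   # None: 41 vs 37, the two tests disagree
```

The `None` case is exactly the roughly 20% of inputs that fall through to the difference/sum tests.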
The latest results:

    Count:    1048576
    Sure:     100.0
    Correct:  100.0
    Presumed: 0.0
    Actually: -1
    Faulty:   0.0
    High:     1.0
    Low:      1.0

    1   904736   86.28
    2     1192   86.40
    3     7236   87.09
    4    13032   88.33
    5   108024   98.63
    6     8460   99.44

Here are the results if my algorithm is not allowed to go into the CORDIC routine, but assumes the answer is zero (an 8.4% correct assumption). The overall correct rate is 99.49% (100 - 0.51 faulty).

    Count:    1048576
    Sure:     99.437713623
    Correct:  100.0
    Presumed: 0.562286376953
    Actually: 8.41248303935
    Faulty:   0.514984130859
    High:     1.05125
    Low:      0.951248513674

    1   904736   86.28
    2     1192   86.40
    3     7236   87.09
    4    13032   88.33
    5   108024   98.63
    6     8460   99.44

Okay, I've added an integer interpretation of Olli's algorithm. I would really appreciate somebody double-checking my translation into Python. It is located at the end of the source code. Here are the results:

    Count:    1048576
    Correct:  94.8579788208

    1   841216   80.22   (Partial) Primary Determination
    2    78184   87.68   1st CORDIC
    3   105432   97.74   2nd
    4    10812   98.77   3rd
    5        0   98.77   4th
    6    12932  100.00   Terminating Guess

Next, I added the "=" to the primary slope line comparisons. This is the version I left in my code. The results improved. To test it yourself, simply change the function being called in the main() routine.

    Correct: 95.8566665649

    1   861056   82.12
    2   103920   92.03
    3    83600  100.00

Here is a Python listing for what I have so far. You can play around with it to your heart's content. If anybody notices any bugs, please let me know.
import array as arr

#================================================
def Main():

        #---- Initialize the Counters

        theCount = 0
        theWrongCount = 0

        thePresumedZeroCount = 0
        theCloseButNotZeroCount = 0

        theTestExits = arr.array( "i", [ 0, 0, 0, 0, 0, 0, 0 ] )

        #---- Test on a Swept Area

        theLimit = 32
        theLimitSquared = theLimit * theLimit

        theWorstHigh = 1.0
        theWorstLow  = 1.0

        for i1 in range( theLimit ):
          ii1 = i1 * i1
          for q1 in range( theLimit ):
            m1 = ii1 + q1 * q1
            for i2 in range( theLimit ):
              ii2 = i2 * i2
              for q2 in range( theLimit ):
                m2 = ii2 + q2 * q2

                D = m1 - m2

                theCount += 1

                c, t = CompareMags( i1, q1, i2, q2 )

                if t <= 6:
                   theTestExits[t] += 1

                if c == 2:
                   thePresumedZeroCount += 1
                   if D != 0:
                      theCloseButNotZeroCount += 1

                      Q = float( m1 ) / float( m2 )
                      if Q > 1.0:
                         if theWorstHigh < Q:
                            theWorstHigh = Q
                      else:
                         if theWorstLow > Q:
                            theWorstLow = Q

                      print "%2d %2d %2d %2d %10.6f" % ( i1, q1, i2, q2, Q )

                elif c == 1:
                   if D <= 0:
                      theWrongCount += 1
                      print "Wrong Less ", i1, q1, i2, q2, D, c
                elif c == 0:
                   if D != 0:
                      theWrongCount += 1
                      print "Wrong Equal", i1, q1, i2, q2, D, c
                elif c == -1:
                   if D >= 0:
                      theWrongCount += 1
                      print "Wrong Great", i1, q1, i2, q2, D, c
                else:
                   theWrongCount += 1
                   print "Invalid c value:", i1, q1, i2, q2, D, c

        #---- Calculate the Results

        theSureCount = ( theCount - thePresumedZeroCount )
        theSurePercent = 100.0 * theSureCount / theCount

        theCorrectPercent = 100.0 * ( theSureCount - theWrongCount ) \
                          / theSureCount

        if thePresumedZeroCount > 0:
           theCorrectPresumptionPercent = 100.0 * ( thePresumedZeroCount - theCloseButNotZeroCount ) \
                                        / thePresumedZeroCount
        else:
           theCorrectPresumptionPercent = -1

        theFaultyPercent = 100.0 * theCloseButNotZeroCount / theCount

        #---- Report the Results

        print
        print "   Count:", theCount
        print
        print "    Sure:", theSurePercent
        print " Correct:", theCorrectPercent
        print
        print "Presumed:", 100 - theSurePercent
        print "Actually:", theCorrectPresumptionPercent
        print
        print "  Faulty:", theFaultyPercent
        print
        print "    High:", theWorstHigh
        print "     Low:", theWorstLow
        print

        #---- Report The Cutoff Values

        pct = 0.0
        f = 100.0 / theCount

        for t in range( 1, 7 ):
          pct += f * theTestExits[t]
          print "%d %8d %6.2f" % ( t, theTestExits[t], pct )

        print

#================================================
def CompareMags( I1, Q1, I2, Q2 ):

# This function compares the magnitudes of two
# integer points and returns a comparison result value
#
# Returns ( c, t )
#
#   c  Comparison
#
#      -1  | (I1,Q1) | <  | (I2,Q2) |
#       0  | (I1,Q1) | =  | (I2,Q2) |
#       1  | (I1,Q1) | >  | (I2,Q2) |
#       2  | (I1,Q1) | ~=~ | (I2,Q2) |
#
#   t  Exit Test
#
#       1  Primary Determination
#       2  D/S Centers are aligned
#       3  Obvious Answers
#       4  Trivial Matching Gaps
#       5  Opposite Gap Sign Cases
#       6  Same Gap Sign Cases
#      10  Small Angle + Count
#      20  CORDIC + Count
#
# It does not matter if the arguments represent vectors
# or complex numbers.  Nor does it matter if the calling
# routine considers the integers as fixed point values.

        #---- Ensure the Points are in the First Quadrant WLOG

        a1 = abs( I1 )
        b1 = abs( Q1 )

        a2 = abs( I2 )
        b2 = abs( Q2 )

        #---- Ensure they are in the Lower Half (First Octant) WLOG

        if b1 > a1:
           a1, b1 = b1, a1

        if b2 > a2:
           a2, b2 = b2, a2

        #---- Primary Determination

        if a1 > a2:
           if a1 + b1 >= a2 + b2:
              return 1, 1
           else:
              thePresumedResult = 1
              da = a1 - a2
              sa = a1 + a2
              db = b2 - b1
              sb = b2 + b1
        elif a1 < a2:
           if a1 + b1 <= a2 + b2:
              return -1, 1
           else:
              thePresumedResult = -1
              da = a2 - a1
              sa = a2 + a1
              db = b1 - b2
              sb = b1 + b2
        else:
           if b1 > b2:
              return 1, 1
           elif b1 < b2:
              return -1, 1
           else:
              return 0, 1

        #---- Bring Factors into 1/2 to 1 Ratio Range

        db, sb = sb, db

        while da < sa:
              da += da
              sb += sb
              if sb > db:
                 db, sb = sb, db

        #---- Ensure the [b] Factors are Both Even or Odd

        if ( ( sb + db ) & 1 ) > 0:
           da += da
           sa += sa
           db += db
           sb += sb

        #---- Calculate Arithmetic Mean and Radius of [b] Factors

        p = ( db + sb ) >> 1
        r = ( db - sb ) >> 1

        #---- Calculate the Gaps from the [b] mean and [a] values

        g = da - p
        h = p - sa

        #---- If the mean of [b] is centered in (the mean of) [a]

        if g == h:
           if g == r:
              return 0, 2
           elif g > r:
              return -thePresumedResult, 2
           else:
              return thePresumedResult, 2

        #---- Weed Out the Obvious Answers

        if g > h:
           if r > g and r > h:
              return thePresumedResult, 3
        else:
           if r < g and r < h:
              return -thePresumedResult, 3

        #---- Calculate Relative Gaps

        vg = g - r
        vh = h - r

        #---- Handle the Trivial Matching Gaps

        if vg == 0:
           if vh > 0:
              return -thePresumedResult, 4
           else:
              return thePresumedResult, 4

        if vh == 0:
           if vg > 0:
              return thePresumedResult, 4
           else:
              return -thePresumedResult, 4

        #---- Handle the Gaps with Opposite Sign Cases

        if vg < 0:
           if vh > 0:
              return -thePresumedResult, 5
        else:
           if vh < 0:
              return thePresumedResult, 5

        #---- Handle the Gaps with the Same Sign (using numerators)

        theSum = da + sa

        if g < h:
           theBound = ( p << 4 ) - p
           theMid   = theSum << 3
           if theBound > theMid:
              return -thePresumedResult, 6
        else:
           theBound = ( theSum << 4 ) - theSum
           theMid   = p << 5
           if theBound > theMid:
              return thePresumedResult, 6

        #---- Return to IQ Space under XY Names

        x1 = theSum
        x2 = da - sa

        y2 = db + sb
        y1 = db - sb

        #---- Ensure Points are in Lower First Quadrant (First Octant)

        if x1 < y1:
           x1, y1 = y1, x1

        if x2 < y2:
           x2, y2 = y2, x2

        #---- Variation of Olli's CORDIC to Finish

        for theTryLimit in range( 10 ):
            c, x1, y1, x2, y2 = Iteration( x1, y1, x2, y2, thePresumedResult )
            if c != 2:
               break
            if theTryLimit > 3:
               print "Many tries needed!", theTryLimit, x1, y1, x2, y2

        return c, 20

#================================================
def Iteration( x1, y1, x2, y2, argPresumedResult ):

        #---- Try to reduce the Magnitudes

        while ( x1 & 1 ) == 0 and \
              ( y1 & 1 ) == 0 and \
              ( x2 & 1 ) == 0 and \
              ( y2 & 1 ) == 0:
              x1 >>= 1
              y1 >>= 1
              x2 >>= 1
              y2 >>= 1

        #---- Set the Perpendicular Values (clockwise to downward)

        dx1 =  y1
        dy1 = -x1

        dx2 =  y2
        dy2 = -x2

        sdy = dy1 + dy2

        #---- Allocate the Arrays for Length Storage

        wx1 = arr.array( "i" )
        wy1 = arr.array( "i" )

        wx2 = arr.array( "i" )
        wy2 = arr.array( "i" )

        #---- Locate the Search Range

        thePreviousValue = x1 + x2   # Guaranteed Big Enough

        for theTries in range( 10 ):
            wx1.append( x1 )
            wy1.append( y1 )

            wx2.append( x2 )
            wy2.append( y2 )

            if x1 > 0x10000000 or x2 > 0x10000000:
               print "Danger, Will Robinson!"
               break

            theValue = abs( y1 + y2 + sdy )
            if theValue > thePreviousValue:
               break

            thePreviousValue = theValue

            x1 += x1
            y1 += y1
            x2 += x2
            y2 += y2

        #---- Prepare for the Search

        theTop = len( wx1 ) - 1
        thePivot = theTop - 1

        x1 = wx1[thePivot]
        y1 = wy1[thePivot]

        x2 = wx2[thePivot]
        y2 = wy2[thePivot]

        theValue = abs( y1 + y2 + sdy )

        #---- Binary Search

        while thePivot > 0:
              thePivot -= 1

              uy1 = y1 + wy1[thePivot]
              uy2 = y2 + wy2[thePivot]
              theUpperValue = abs( uy1 + uy2 + sdy )

              ly1 = y1 - wy1[thePivot]
              ly2 = y2 - wy2[thePivot]
              theLowerValue = abs( ly1 + ly2 + sdy )

              if theUpperValue < theLowerValue:
                 if theUpperValue < theValue:
                    x1 += wx1[thePivot]
                    x2 += wx2[thePivot]
                    y1 = uy1
                    y2 = uy2
                    theValue = theUpperValue
              else:
                 if theLowerValue < theValue:
                    x1 -= wx1[thePivot]
                    x2 -= wx2[thePivot]
                    y1 = ly1
                    y2 = ly2
                    theValue = theLowerValue

        #---- Apply the Rotation

        x1 += dx1
        y1 += dy1

        x2 += dx2
        y2 += dy2

        #---- Bounce Points Below the Axis to Above

        if y1 < 0:
           y1 = -y1

        if y2 < 0:
           y2 = -y2

        #---- Comparison Determination

        c = 2

        if x1 > x2:
           if x1 + y1 >= x2 + y2:
              c = argPresumedResult
        elif x1 < x2:
           if x1 + y1 <= x2 + y2:
              c = -argPresumedResult
        else:
           if y1 > y2:
              c = argPresumedResult
           elif y1 < y2:
              c = -argPresumedResult
           else:
              c = 0

        #---- Exit

        return c, x1, y1, x2, y2

#================================================
def MyVersionOfOllis( I1, Q1, I2, Q2 ):

# Returns ( c, t )
#
#   c  Comparison
#
#      -1  | (I1,Q1) | < | (I2,Q2) |
#       1  | (I1,Q1) | > | (I2,Q2) |
#
#   t  Exit Test
#
#       1  (Partial) Primary Determination
#       2  CORDIC Loop + 1
#       6  Terminating Guess

        #---- Set Extent Parameter

        maxIterations = 4

        #---- Ensure the Points are in the First Quadrant WLOG

        I1 = abs( I1 )
        Q1 = abs( Q1 )

        I2 = abs( I2 )
        Q2 = abs( Q2 )

        #---- Ensure they are in the Lower Half (First Octant) WLOG

        if Q1 > I1:
           I1, Q1 = Q1, I1

        if Q2 > I2:
           I2, Q2 = Q2, I2

        #---- (Partial) Primary Determination

        if I1 < I2 and I1 + Q1 <= I2 + Q2:
           return -1, 1

        if I1 > I2 and I1 + Q1 >= I2 + Q2:
           return 1, 1

        #---- CORDIC Loop

        Q1pow1 = Q1 >> 1
        I1pow1 = I1 >> 1
        Q2pow1 = Q2 >> 1
        I2pow1 = I2 >> 1

        Q1pow2 = Q1 >> 3
        I1pow2 = I1 >> 3
        Q2pow2 = Q2 >> 3
        I2pow2 = I2 >> 3

        for n in range( 1, maxIterations + 1 ):
            newI1 = I1 + Q1pow1
            newQ1 = Q1 - I1pow1

            newI2 = I2 + Q2pow1
            newQ2 = Q2 - I2pow1

            I1 = newI1
            Q1 = abs( newQ1 )

            I2 = newI2
            Q2 = abs( newQ2 )

            if I1 <= I2 - I2pow2:
               return -1, 1 + n

            if I2 <= I1 - I1pow2:
               return 1, 1 + n

            Q1pow1 >>= 1
            I1pow1 >>= 1
            Q2pow1 >>= 1
            I2pow1 >>= 1

            Q1pow2 >>= 2
            I1pow2 >>= 2
            Q2pow2 >>= 2
            I2pow2 >>= 2

        #---- Terminating Guess

        Q1pow1 <<= 1
        Q2pow1 <<= 1

        if I1 + Q1pow1 < I2 + Q2pow1:
           return -1, 6
        else:
           return 1, 6

#================================================
Main()

You want to avoid multiplications. For comparison purposes, not only do you not have to take the square roots, but you can also work with the absolute values.

Let

$$ \begin{aligned} a_1 &= | I_1 | \\ b_1 &= | Q_1 | \\ a_2 &= | I_2 | \\ b_2 &= | Q_2 | \\ \end{aligned} $$

Note that for $a, b \ge 0$:

$$ (a+b)^2 \ge a^2 + b^2 $$

Therefore $a_1 > a_2 + b_2$ means that

$$ a_1^2 + b_1^2 \ge a_1^2 > ( a_2 + b_2 )^2 \ge a_2^2 + b_2^2 $$

$$ a_1^2 + b_1^2 > a_2^2 + b_2^2 $$

This is true for $b_1$ as well. It also works in the other direction, which leads to this logic: (The previous pseudo-code has been functionally replaced by the Python listing above.)

Depending on your distribution of values, this may save a lot. However, if all the values are expected to be close, you are better off buckling down and evaluating the Else clause from the get-go. You can optimize slightly by not calculating s1 unless it is needed. This is off the top of my head, so I can't tell you if it is the best. Depending on the range of values, a lookup table might also work, but the memory fetches might be more expensive than the calculations.
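The inequality chain just shown already yields a multiplication-free early-out on its own. A small standalone Python 3 sketch (`quick_compare` is a hypothetical helper, separate from the listing above):

```python
def quick_compare(i1, q1, i2, q2):
    # Early-out comparison using only adds and compares.  Relies on
    # max(a,b)^2 <= a^2 + b^2 <= (a+b)^2 for a, b >= 0: if one component
    # of z1 alone exceeds a2 + b2, then |z1| > |z2| is forced.
    a1, b1 = abs(i1), abs(q1)
    a2, b2 = abs(i2), abs(q2)
    if a1 > a2 + b2 or b1 > a2 + b2:
        return 1
    if a2 > a1 + b1 or b2 > a1 + b1:
        return -1
    return None  # too close to call without a finer test

print(quick_compare(5, 1, 1, 1))   # 1
print(quick_compare(3, 4, 4, 3))   # None (the magnitudes are actually equal)
```

Whenever a verdict is returned, it agrees with the exact squared magnitudes; only the `None` cases need the heavier machinery.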
This should run more efficiently: (The previous pseudo-code has been functionally replaced by the Python listing above.)

A little more logic:

$$ \begin{aligned} ( a_1^2 + b_1^2 ) - ( a_2^2 + b_2^2 ) &= ( a_1^2 - a_2^2 ) + ( b_1^2 - b_2^2 ) \\ &= (a_1-a_2)(a_1+a_2) + (b_1-b_2)(b_1+b_2) \\ \end{aligned} $$

When $a_1 > a_2$ (and $a_1 \ge b_1$ and $a_2 \ge b_2$ as in the code), using $a_1 + a_2 \ge b_1 + b_2$:

$$ (a_1-a_2)(a_1+a_2) + (b_1-b_2)(b_1+b_2) \ge (a_1-a_2)(b_1+b_2) + (b_1-b_2)(b_1+b_2) = \big[ (a_1+b_1)-(a_2+b_2) \big] (b_1+b_2) $$

So if $a_1+b_1 > a_2+b_2$ then

$$ ( a_1^2 + b_1^2 ) - ( a_2^2 + b_2^2 ) > 0 $$

meaning 1 is bigger. The reverse is true for $a_1 < a_2$. The code has been modified. This leaves the Needs Determining cases really small. Still tinkering....

Giving up for tonight. Notice that the comparisons of $b$ values after the comparison of $a$ values are actually incorporated in the sum comparisons that follow. I left them in the code as they save two sums. So, you are gambling an if to maybe save an if and two sums. Assembly language thinking.

I'm not seeing how to do it without a "multiply". I put that in quotes because I am now trying to come up with some sort of partial multiplication scheme that only has to go far enough to make a determination. It will be iterative for sure. Perhaps CORDIC equivalent.

Okay, I think I got it mostly. I'm going to show the $a_1 > a_2$ case. The less-than case works the same, only your conclusion is opposite.

Let

$$ \begin{aligned} d_a &= a_1 - a_2 \\ s_a &= a_1 + a_2 \\ d_b &= b_2 - b_1 \\ s_b &= b_2 + b_1 \\ \end{aligned} $$

All these values will be greater than zero in the "Needs Determining" case. Observe:

$$ \begin{aligned} D &= (a_1^2 + b_1^2) - (a_2^2 + b_2^2) \\ &= (a_1^2 - a_2^2) + ( b_1^2 - b_2^2) \\ &= (a_1 - a_2)(a_1 + a_2) + (b_1 - b_2)(b_1 + b_2) \\ &= (a_1 - a_2)(a_1 + a_2) - (b_2 - b_1)(b_1 + b_2) \\ &= d_a s_a - d_b s_b \end{aligned} $$

Now, if $D=0$ then 1 and 2 are equal. If $D>0$ then 1 is bigger. Otherwise, 2 is bigger.
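The $D = d_a s_a - d_b s_b$ factoring can be spot-checked mechanically. A Python 3 sanity check (it uses multiplies, but only to verify the algebra that the multiplication-free tests later exploit; this snippet is illustrative, not part of the answer's listing):

```python
import random

random.seed(1)
for _ in range(10000):
    # Random first-octant-style points with a1 > a2, matching the case shown
    a2 = random.randint(1, 99)
    a1 = random.randint(a2 + 1, 200)
    b1 = random.randint(0, a1)
    b2 = random.randint(0, a2)
    da, sa = a1 - a2, a1 + a2
    db, sb = b2 - b1, b2 + b1
    D = da * sa - db * sb
    # The factored form equals the difference of squared magnitudes exactly
    assert D == (a1 * a1 + b1 * b1) - (a2 * a2 + b2 * b2)
print("D = da*sa - db*sb matches the difference of squared magnitudes")
```

The identity holds for any integers, signs included, since it is pure algebra; the sign of $D$ is what the rest of the answer determines without computing the products in full.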
Here is the "CORDIC" portion of the trick:

Swap db, sb                         # d is now the larger quantity

While da < sa
    da <<= 1
    sb <<= 1
    If sb > db Then Swap db, sb     # s is the smaller quantity
EndWhile

When this loop is complete, the following is true: $D$ has been multiplied by some power of 2, leaving the sign indication preserved, and

$$ 2 s_a > d_a \ge s_a > d_a / 2 $$

$$ 2 s_b > d_b \ge s_b > d_b / 2 $$

In words, the $d$ will be larger than the $s$, and they will be within a factor of two of each other.

Since we are working with integers, the next step requires that $d_b$ and $s_b$ be both even or both odd.

If ( (db + sb) & 1 ) > 0 Then
    da <<= 1
    sa <<= 1
    db <<= 1
    sb <<= 1
EndIf

This will multiply the $D$ value by 4, but again, the sign indication is preserved. Let

$$ \begin{aligned} p &= (d_b + s_b) >> 1 \\ r &= (d_b - s_b) >> 1 \end{aligned} $$

A little thinking shows:

$$ 0 \le r < p/3 $$

The $p/3$ would be if $d_b = 2 s_b$. Let

$$ \begin{aligned} g &= d_a - p \\ h &= p - s_a \end{aligned} $$

Plug these into the $D$ equation that may have been doubled a few times.

$$ \begin{aligned} D 2^k &= (p+g)(p-h) - (p+r)(p-r) \\ &= [p^2 + (g-h)p - gh] - [p^2-r^2] \\ &= (g-h)p + [r^2- gh] \end{aligned} $$

If $g=h$ then it is a simple determination: If $r=g$ they are equal. If $r>g$ then 1 is bigger, otherwise 2 is bigger. Let

$$ \begin{aligned} v_g &= g - r \\ v_h &= h - r \end{aligned} $$

Evaluate the two terms on the RHS of the $D2^k$ equation.

$$ \begin{aligned} r^2 - gh &= r^2 - (r+v_g)(r+v_h) \\ &= -v_g v_h - ( v_g + v_h ) r \end{aligned} $$

and

$$ g - h = v_g - v_h $$

Plug them in.

$$ \begin{aligned} D 2^k &= (g-h)p + [r^2- gh] \\ &= (v_g - v_h)p - v_g v_h - ( v_g + v_h ) r \\ &= v_g(p-r) - v_h(p+r) - v_g v_h \\ &= v_g s_b - v_h d_b - \left( \frac{v_h v_g}{2} + \frac{v_h v_g}{2} \right) \\ &= v_g \left( s_b - \frac{v_h}{2} \right) - v_h \left( d_b + \frac{v_g}{2} \right) \end{aligned} $$

Multiply both sides by 2 to get rid of the fraction.
$$ \begin{aligned} D 2^{k+1} &= v_g(2s_b-v_h) - v_h(2d_b+v_g) \end{aligned} $$

If either $v_g$ or $v_h$ is zero, the sign determination of $D$ becomes trivial. Likewise, if $v_g$ and $v_h$ have opposite signs, the sign determination of $D$ is also trivial. Still working on the last sliver......

So very close.

theHandledPercent   98.6582716049
theCorrectPercent  100.0

Will continue later....... Anybody is welcome to find the correct handling logic for the same sign case.

Another day, another big step. The original sign determining equation can be factored like this:

$$ \begin{aligned} D &= d_a s_a - d_b s_b \\ &= \left( \sqrt{d_a s_a} - \sqrt{d_b s_b} \right)\left( \sqrt{d_a s_a} + \sqrt{d_b s_b} \right) \end{aligned} $$

The sum factor will always be positive, so it doesn't influence the sign of $D$. The difference factor is the difference of the two geometric means. A geometric mean can be approximated by the arithmetic mean. This is the working principle behind the "alpha max plus beta min" algorithm. The arithmetic mean is also an upper bound on the geometric mean.

Because the $s$ values are bounded below by $d/2$, a rough lower bound can be established for the geometric mean without much calculation. With $d \ge s \ge d/2 > 0$, the ratio $x = s/d$ lies in $[1/2, 1]$, and

$$ \frac{\sqrt{ds}}{(d+s)/2} = \frac{2\sqrt{x}}{1+x} $$

is increasing on $(0, 1]$, so it is smallest at $x = 1/2$:

$$ \sqrt{ds} \ge \frac{2\sqrt{1/2}}{1+1/2} \cdot \frac{d+s}{2} = \frac{2}{3} \sqrt{2} \cdot \frac{d+s}{2} \approx 0.9428 \cdot \frac{d+s}{2} > \frac{15}{16} \cdot \frac{d+s}{2} $$

If the upper bound of b's geometric mean (its arithmetic mean) is less than the lower bound of a's geometric mean ($15/16$ of its arithmetic mean), then b must be smaller than a, and vice versa. This takes care of a lot of the previously unhandled cases. The results are now:

theHandledPercent   99.52
theCorrectPercent  100.0

The source code above has been updated.
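The $15/16$ bound is easy to probe numerically. This is a scratch check of mine, assuming $d \ge s \ge d/2 > 0$ as established by the normalization loop:

```python
import math

def bounds_hold():
    # For d >= s >= d/2 > 0:
    #   (15/16) * (d+s)/2  <  sqrt(d*s)  <=  (d+s)/2
    # i.e. the geometric mean sits between 15/16 of the
    # arithmetic mean and the arithmetic mean itself.
    for d in (1.0, 2.0, 7.0, 1000.0):
        for k in range(101):
            s = d / 2 + (d / 2) * k / 100  # sweep s from d/2 up to d
            am = (d + s) / 2
            gm = math.sqrt(d * s)
            if not ((15 / 16) * am < gm <= am):
                return False
    return True
```

The worst case is $s = d/2$, where the ratio is $2\sqrt{2}/3 \approx 0.9428$, comfortably above $15/16 = 0.9375$.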
Those that remain unhandled are "too close to call". They will likely require a lookup table to resolve. Stay tuned.....

Hey Dan,

Well, I would shorten it, but none of it is superfluous. Except maybe the first part, but that is what got the ball rolling. So, a top-posted summary would be nearly as long. I do intend to write a blog article instead. This has been a fascinating exercise and much deeper than I initially thought.

I did trim my side note about Olli's slope bound.

You should really be studying the code to understand how few operations actually have to be done. The math in the narrative is simply justification for the operations.

The true "winner" should be the algorithm that is most efficient. A true test would be both approaches programmed on the same platform and tested there. As it is right now, I can tell you that mine (coded in C) will leave his in the dust, simply because I am prototyping with integers and he is using floats with a lot of expensive operations.

My thoughts at this point are that the remaining 0.5% of cases I'm not handling are best approached with a CORDIC iterative approach. I am going to try to implement a variation of Olli's approach, including his ingenious varying slope, in integers. The "too close to call" category should be very close indeed.

I agree with you that Olli does excellent work. I've learned a lot from him.

Finally, at the end, aren't we all winners?

Dan,

Your faith in the CORDIC is inspiring. I have implemented a lossless CORDIC different from Olli's, though it might be equivalent. So far, I have not found your assertion that it is the ultimate solution to be true. I am not going to post the code yet, because there ought to be one more test that cinches it.

I've changed the testing a little bit to be more comparable to Olli's. I am limiting the region to a quarter circle (equivalent to a full circle) and scoring things differently.
Return Code   Meaning
    -1        First Smaller For Sure
     0        Equal For Sure
     1        First Larger For Sure
     2        Presumed Zero

The last category could also be called "CORDIC Inconclusive". I recommend for Olli to count that the same. Here are my current results:

Count:    538756
Sure:      99.9161030225
Correct:  100.0
Presumed:   0.0838969774815
Zero:      87.610619469
Faulty:     0.0103943157942
High:       1.00950118765
Low:        0.990588235294

Out of all the cases, 99.92% were determined for sure, and all the determinations were correct. Out of the 0.08% of cases that were presumed zero, 87.6% actually were. This means that only 0.01% of the answers were faulty, that is, presumed zero erroneously. For those that were, the quotient (I1^2+Q1^2)/(I2^2+Q2^2) was calculated. The high and low values are shown. Take the square root to get the actual ratio of magnitudes.

Roughly 83% of cases are caught by the primary determination and don't need any further processing. That saves a lot of work. The CORDIC is needed in about 0.7% of the cases. (Was 0.5% in the previous testing.)

C O M P L E T E   A N D   U T T E R   S U C C E S S   H A S   B E E N   A C H I E V E D !!!!!!!!!!!

Count:    8300161
Sure:     100.0
Correct:  100.0
Presumed:   0.0
Zero:      -1
Faulty:     0.0
High:       1.0
Low:        1.0

You can't do better than that, and I am pretty sure you can't do it any faster. Or not by much, anyway. I have changed all the "X <<= 1" statements to "X += X" because this is way faster on an 8088. (Sly grin)

The code will stay in this answer and has been updated. Further explanations are forthcoming in my other answer. This one is long enough as it is, and I can't end it on a nicer note than this.

edited Jan 2, 2020 at 1:30; answered Dec 29, 2019 at 1:54 by Cedron Dawg

@DanBoschen, Let me think about it. The CORDIC seems expensive, even with shifts and adds. Is this on a specialized processor with unavailable multiplies?
– Cedron Dawg, Dec 29, 2019 at 2:09

You are closer to the metal than I have ever been. I've added some more cases to narrow it down; I'll keep thinking about it. Interesting challenge.

– Cedron Dawg, Dec 29, 2019 at 2:17

It can be restructured to be more efficient. I already have; will post soon. I'm still trying to find a slick trick for the last part.

– Cedron Dawg, Dec 29, 2019 at 2:35

@DanBoschen A diagram is easier to comprehend, but I can't draw them as well as you can. Imagine the first eighth of a circle in the first quadrant, containing (a1,b1). In the a1 > a2 case, it means the second value has to be to the left, ruling out all points to the right. Now, draw a 45 degree downward sloping line through (a1,b1); that represents the sum comparison. All second values below this line are smaller than the first value. Then we get to the "CORDIC" portion of my answer. Upon further reflection, the b's comparison following the a's comparison isn't really beneficial.

– Cedron Dawg, Dec 29, 2019 at 16:37

@DanBoschen Matt asked the same thing. I answered under his answer for the initial comparisons. I'm thinking I may have 100% now; still working on it.

– Cedron Dawg, Dec 29, 2019 at 17:18

1. Logarithms and exponents to avoid multiplication

To completely avoid multiplication, you could use $\log$ and $\exp$ tables and calculate:

$$I^2 + Q^2 = \exp\!\big(2\log(I)\big) + \exp\!\big(2\log(Q)\big).\tag{1}$$

Because $\log(0) = -\infty,$ you'd need to check for and calculate such special cases separately.
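To make (1) concrete, here is a floating-point sketch; in a real fixed-point design the `log`/`exp` calls would be table lookups, and the function name is mine, not from the answer:

```python
import math

def squared_magnitude_via_logs(i, q):
    # Sketch of eq. (1): evaluate I^2 + Q^2 using only log/exp
    # (no multiply), with the log(0) = -infinity cases handled
    # separately by simply skipping zero components.
    total = 0.0
    for x in (i, q):
        if x != 0:
            total += math.exp(2 * math.log(abs(x)))
    return total
```

In a table-driven implementation the doubling `2 * log(x)` would be a shift if base-2 logs and fixed point are used.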
If for some reason accessing the $\exp$ table is much more expensive than accessing the $\log$ table, then it may be more efficient to compare the logarithms of the squared magnitudes:

$$\newcommand{\spaceship}{\mkern-1mu<\mkern-3mu=\mkern-3mu>} \begin{eqnarray}I_1^2 + Q_1^2&\spaceship&I_2^2 + Q_2^2\\ \Leftrightarrow\quad\log(I_1^2 + Q_1^2)&\spaceship&\log(I_2^2 + Q_2^2),\end{eqnarray}\tag{2}$$

each calculated by (see: Gaussian logarithm):

$$\log(I^2 + Q^2) = 2\log(I) + \log\!\Big(1 + \exp\!\big(2\log(Q) - 2\log(I)\big)\Big).\tag{3}$$

For any of the above formulas, you can use any base shared by $\log$ and $\exp$, with the base $2$ being the easiest for binary numbers. To calculate $\log_2(a)$:

$$\log_2(a) = \operatorname{floor}\!\big(\log_2(a)\big) + \log_2\left(\frac{a}{2^{\displaystyle\operatorname{floor}\!\big(\log_2(a)\big)}}\right),\tag{4}$$

where $\operatorname{floor}$ simply returns the integer part of its argument. The first term can be calculated by counting the leading zeros of the binary representation of $a$ and by adding a constant that depends on the chosen representation. In the second term, the division by an integer power of 2 can be calculated by a binary shift, and the resulting argument of $\log_2$ is always in range $[1, 2)$, which is easy to tabulate.

Similarly, for $2^a$ we have:

$$2^{\displaystyle a} = 2^{\displaystyle a - \operatorname{floor}(a)} \times 2^{\displaystyle\operatorname{floor}(a)}.\tag{5}$$

The multiplication by an integer power of 2 can be calculated by a binary shift. The first exponent is always in range $[0, 1)$, which is easy to tabulate.

2.
Reducing the number of multiplications

There are four multiplications in the basic magnitude comparison calculation, but this can be reduced to two multiplications in two alternative ways:

$$\begin{array}{rrcl}&I_1^2 + Q_1^2&\spaceship&I_2^2 + Q_2^2\\ \Leftrightarrow&I_1^2 - I_2^2&\spaceship&Q_2^2 - Q_1^2\\ \Leftrightarrow&(I_1 + I_2)(I_1 - I_2)&\spaceship&(Q_2 + Q_1)(Q_2 - Q_1)\\ \Leftrightarrow&I_1^2 - Q_2^2&\spaceship&I_2^2 - Q_1^2\\ \Leftrightarrow&(I_1 + Q_2)(I_1 - Q_2)&\spaceship&(I_2 + Q_1)(I_2 - Q_1).\end{array}\tag{6}$$

If you read the $\Leftrightarrow$ as equal signs, then you can also read $\spaceship$ as the 3-way comparison "spaceship operator", for example $123 \spaceship 456 = -1$, $123 \spaceship 123 = 0$ and $456 \spaceship 123 = 1$.

Also @CedronDawg and @MattL. came up with the multiplication reduction trick, and much of the following or similar ideas can also be found in their answers and in the comments.

3. CORDIC

CORDIC (COordinate Rotation DIgital Computer) algorithms work by carrying out approximate rotations of the points about the origin, with each iteration roughly halving the absolute value of the rotation angle. Here is one such algorithm for the magnitude comparison problem. It does not detect magnitudes being equal, which has a vanishingly small probability for random inputs, and in closely equal cases may also give an erroneous result due to the limited precision of the arithmetic.

Let the algorithm start with points $(I_1, Q_1)$ and $(I_2, Q_2)$ in the first octant such that the points have the same magnitudes as the inputs $(I_1, Q_1)$ and $(I_2, Q_2)$:

$$\begin{gather}(I_1, Q_1) = \begin{cases} (|Q_1|, |I_1|)&\text{if }|I_1| < |Q_1|,\\ (|I_1|, |Q_1|)&\text{otherwise.} \end{cases}\\ (I_2, Q_2) = \begin{cases} (|Q_2|, |I_2|)&\text{if }|I_2| < |Q_2|,\\ (|I_2|, |Q_2|)&\text{otherwise.} \end{cases}\end{gather}\tag{7}$$

Figure 1. The starting point is $(I_1, Q_1)$ (blue) and $(I_2, Q_2)$ (red), each in the first octant (pink).
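The first factorization in (6) can be sketched as a two-multiplication three-way comparison. This is an illustrative helper of mine, not code from the answer:

```python
def spaceship_two_mults(i1, q1, i2, q2):
    # Sign of (I1^2 + Q1^2) - (I2^2 + Q2^2) using two multiplications:
    #   (I1 + I2)(I1 - I2)  <=>  (Q2 + Q1)(Q2 - Q1)
    # per the first factored form in eq. (6).
    lhs = (i1 + i2) * (i1 - i2)
    rhs = (q2 + q1) * (q2 - q1)
    return (lhs > rhs) - (lhs < rhs)   # -1, 0, or 1
```

Because the identity rearranges squares, it works without taking absolute values first.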
The angle of each complex number is bounded by $0 \le \operatorname{atan2}(Q[n], I[n]) \le \pi/4$. CORDIC pseudo-rotations reduce the upper bound $\pi/4$ further (see Figs. 2 & 4) so that at iteration $n$ the algorithm can terminate with a sure answer if any of the following conditions is met:

If $I_1[n] < I_2[n]$ and $I_1[n] + Q_1[n]/2^n < I_2[n] + Q_2[n]/2^n$, then the magnitude of the second complex number is larger.

If $I_1[n] > I_2[n]$ and $I_1[n] + Q_1[n]/2^n > I_2[n] + Q_2[n]/2^n$, then the magnitude of the first complex number is larger.

The conditions are checked already before any CORDIC pseudo-rotations on the $0$th iteration. For iterations $n>0$ the conditions are a generalization (Fig. 4) of @CedronDawg's suggestion for $n=0$. If the algorithm does not terminate at the sure answer condition checks, it continues to the next iteration with the pseudo-rotation:

$$\begin{eqnarray}I_1[n] &=& I_1[n-1] + Q_1[n-1]/2^n,\\ Q_1[n] &=& \big|Q_1[n-1] - I_1[n-1]/2^n\big|,\\ I_2[n] &=& I_2[n-1] + Q_2[n-1]/2^n,\\ Q_2[n] &=& \big|Q_2[n-1] - I_2[n-1]/2^n\big|.\end{eqnarray}\tag{8}$$

Figure 2. First iteration of the CORDIC algorithm: First the points are pseudo-rotated by -26.56505117 degrees, approximating -22.5 degrees, to bring the possible point locations (pink) closer to the positive real axis. Then the points that fell below the real axis are mirrored back to the nonnegative-$Q$ side. On the first iteration, this has a side effect of multiplying the magnitude of each point by $\sqrt{17}/4 \approx 1.030776406$, and on successive iterations by factors approaching 1. That is no problem for magnitude comparison, because the magnitudes of both complex numbers are always multiplied by the same factor. Each successive round approximately halves the rotation angle.

Figure 3.
The first condition from the more complex sure answer condition set 2 reports that the magnitude of the second complex number is larger than the first if the first complex number is on the left side of both of the lines that intersect at the second complex number and are perpendicular to the two boundaries of the possible locations (pink/purple) of the complex points. Due to CORDIC pseudo-rotations, the upper boundary has slope $2^{-n}$, making an exact condition check feasible. Only a small portion of the "pizza slice" bounded by the dashed circle is outside the area of detection.

If the input points are distributed uniformly within the complex plane unit circle, then the initial sure answer condition checks terminate the algorithm with an answer in 81 % of cases according to uniform random sampling. Otherwise, the sure answer condition checks are redone after the first CORDIC iteration, increasing the termination probability to 94 %. After two iterations the probability is 95 %, after three 98 %, etc. The performance as a function of the maximum number of allowed iterations is summarized in Fig. 4.

If the maximum iteration count is exceeded, an "unsure" guess answer is made by comparing the $I$ components of the results of partially computed final pseudo-rotations that center the possible point locations about the real axis:

$$I_1[n] + Q_1[n]/2^{n+1} \spaceship I_2[n] + Q_2[n]/2^{n+1}.\tag{9}$$

Figure 4. Performance of the algorithm for $10^7$ random pairs of points uniformly and independently distributed within the unit circle, using the sure answer condition set 1. The plot shows the maximum absolute difference of magnitudes encountered that resulted in a wrong answer, and the frequencies of guesses (unsure answers), wrong answers, and sure answers that were wrong, which were never encountered.
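One pseudo-rotation step of (8), written out in floating point, makes it easy to confirm the key property that both points are scaled by the same factor, so the ordering of magnitudes is preserved. This is a scratch demo of mine, not the integer implementation given later:

```python
import math

def pseudo_rotate(i, q, n):
    # One CORDIC pseudo-rotation per eq. (8); the mirroring back to
    # nonnegative Q is the abs() on the new Q component.
    return i + q / 2**n, abs(q - i / 2**n)

def mag(i, q):
    return math.hypot(i, q)
```

Since both points get the identical scale factor at each iteration, the ratio of their magnitudes is invariant under the pseudo-rotation.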
Skipping sure answer condition checks

It would also be possible to run just a predefined number of CORDIC iterations without the sure answer condition checks and to make just the guess at the end, saving two comparisons per iteration compared to the simple sure answer condition set 1. Also, there is no harm in skipping some of the sure answer condition checks from sets 2 and 3, as the condition will be met also at the following iterations.

Unused ideas

I also came up with this sure answer condition set that was seemingly simpler but was actually more complex because it did not allow re-use of parts of the calculation:

If $I_1[n] < I_2[n] - I_2[n]/2^{2n+1}$, then the magnitude of the second complex number is larger.

If $I_1[n] > I_2[n] + I_2[n]/2^{2n+1}$, then the magnitude of the first complex number is larger.

Here $I_2[n] - I_2[n]/2^{2n+1}$ is a simple-to-calculate lower bound for $2^n I_2[n]/\sqrt{2^{2n} + 1}$ (calculated by solving $y = 2^{-n}x,$ $x^2 + y^2 = I_2[n]^2$), which is the least upper bound for $I_1[n]$ as a function of $I_2[n]$ and $n$ when the magnitude of the second complex number is larger. The approximation does not work very well for low $n$.

Figure 5. Same as fig. 4 but for the above alternative sure answer condition set.

I also initially tried this sure answer condition set that simply checked whether one of the complex numbers was to the left of and below the other:

If $I_1[n] < I_2[n]$ and $Q_1[n] < Q_2[n]$, then the magnitude of the second complex number is larger.

If $I_1[n] > I_2[n]$ and $Q_1[n] > Q_2[n]$, then the magnitude of the first complex number is larger.

The mirroring about the real axis seems to shuffle the $Q$ coordinates of the points so that the condition will be met for a large portion of complex number pairs with a small number of iterations. However, the number of iterations needed is typically about twice that required by the other sure answer condition sets.

Figure 6. Same as figs.
4 and 5 but for the above sure answer condition set.

Integer input CORDIC

The CORDIC algorithm of the previous section was prototyped using floating point numbers and tested with random input. For integer or, equivalently, fixed point inputs and small bit depths, it is possible to exhaustively test all input combinations and encounter problematic ones that become vanishingly rare in the limit of an infinite input bit depth.

For inputs with 2s complement real and imaginary components of a certain bit depth, if we mirror the numbers to the first octant while retaining the magnitude, we get a set of complex numbers. In this set, some complex numbers have the same magnitude as other complex numbers in the set (Fig. 7). A CORDIC algorithm may not be able to determine that such numbers are of equal magnitude. Pairs of complex numbers from continuous probability distributions have zero probability of being of equal magnitude. If efficiency is important and if the inputs to the algorithm are reals quantized to integers, then it would make sense to allow the magnitude comparison algorithm to also return a false unequal for differences smaller than the quantization step, or half the quantization step, or something like that. Probably a possible magnitude equality for integer inputs is only due to their quantization.

Figure 7. First octant complex numbers with integer $I$ and $Q$ components less than or equal to $2^7$, colored by the count of complex numbers from the same set that have the same magnitude. Light gray: unique magnitude; darker: more complex numbers have the same magnitude. Highlighted in red is one of the largest subsets of the complex numbers that share the same magnitude, in this case $\sqrt{5525}$. The magnitude for subsets of any size is rarely an integer.

An integer implementation should start with a shift of the inputs to the left, to give a few fractional part bits in a fixed point representation (Fig. 8).
The implementation will also need one bit of extra headroom in the integer part for the iterated $I_1[n]$, $Q_1[n]$, $I_2[n]$, $Q_2[n]$ components. Addition results in some comparison calculations may need a further headroom of one bit. Divisions by powers of 2 are done by right shifts. I have not looked into rounding right shifts, which might reduce the need for fractional part bits. The maximum number of iterations needed to reach an error level of half the input quantization step (possibly giving a wrong answer for smaller magnitude differences) was also tested extensively (Fig. 8), but with no guarantees that all the worst cases would have been covered.

Figure 8. Integer implementation parameters for input bit depth $b$ when it is required that the algorithm returns the right answer for magnitude differences larger than half the precision of the input numbers.

Another unused idea

It is possible to use counting of leading zeros, which is equivalent to $\operatorname{floor}\!\big(\log_2(x)\big)$, combined with other binary manipulations, to determine if the algorithm can skip forward directly to a later smaller CORDIC pseudo-rotation (Fig. 9). Pseudocode:

if (Q > 0) {
  diff = clz(Q) - clz(I);
  n = diff;
  if (I >= Q << diff) n++;
  if (I >= Q << (diff + 1)) n++;
  // Start at iteration n.
} else {
  // No need to iterate as we are on the real line.
}

The smaller n for the two complex numbers would need to be chosen, as it is not possible to pseudo-rotate the complex numbers individually due to the iteration-dependent magnitude multiplication factor. The trick can be repeated multiple times. In the end I think this trick is too complicated for the current problem, but perhaps it might find use elsewhere.

Figure 9. Output from a binary trick to determine the needed CORDIC pseudo-rotation (see source code at the end) for a complex number. For a pair of complex numbers, the larger rotation would need to be chosen.
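The leading-zero pseudocode maps directly onto Python's `int.bit_length()`: on a fixed word size, `clz(Q) - clz(I)` equals `I.bit_length() - Q.bit_length()`. A sketch for a single first-octant point with $I \ge Q \ge 1$ (my translation, under those assumptions):

```python
def skip_to_iteration(i, q):
    # Mirrors the clz pseudocode above for a first-octant point
    # (i >= q >= 1); returns None when the point is on the real line.
    if q <= 0:
        return None  # no rotation needed
    diff = i.bit_length() - q.bit_length()  # == clz(q) - clz(i)
    n = diff
    if i >= q << diff:
        n += 1
    if i >= q << (diff + 1):
        n += 1
    return n
```

For a pair of points, the smaller of the two results would be used, as the narrative notes.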
C++ listing: floating point CORDIC magnitude comparison algorithm and testing

// -- compile-command: "g++ --std=c++11 -O3 cordic.cpp -o cordic" --
#include <cstdio>
#include <cmath>
#include <functional>
#include <random>
#include <chrono>

std::default_random_engine gen(std::chrono::system_clock::now().time_since_epoch().count());
std::uniform_real_distribution<double> uni(-1.0, 1.0);

// Returns sign of I1^2 + Q1^2 - (I2^2 + Q2^2). The sign is one of -1, 0, 1.
// sure is set to true if the answer is certain, false if there is uncertainty
using magnitude_compare = std::function<int(double I1, double Q1, double I2, double Q2, int maxIterations, bool &sure)>;

int magnitude_compare_reference(double I1, double Q1, double I2, double Q2) {
  double mag1 = I1*I1 + Q1*Q1;
  double mag2 = I2*I2 + Q2*Q2;
  if (mag1 < mag2) {
    return -1;
  } else if (mag1 > mag2) {
    return 1;
  } else {
    return 0;
  }
}

// This algorithm never detects equal magnitude
int magnitude_compare_cordic_olli(double I1, double Q1, double I2, double Q2, int maxIterations, bool &sure) {
  I1 = fabs(I1);
  Q1 = fabs(Q1);
  I2 = fabs(I2);
  Q2 = fabs(Q2);
  if (Q1 > I1) std::swap(I1, Q1);
  if (Q2 > I2) std::swap(I2, Q2);
  sure = true;
  // if (I1 < I2 && Q1 < Q2) { // Set 1
  if (I1 < I2 && I1 + Q1 < I2 + Q2) { // Set 2 / @CedronDawg
  // if (I1 < I2 - I2/2) { // Set 3
    return -1;
  }
  // if (I1 > I2 && Q1 > Q2) { // Set 1
  if (I1 > I2 && I1 + Q1 > I2 + Q2) { // Set 2 / @CedronDawg
  // if (I1 > I2 + I2/2) { // Set 3
    return 1;
  }
  int n;
  for (n = 1; n <= maxIterations; n++) {
    double newI1 = I1 + Q1*pow(2, -n);
    double newQ1 = Q1 - I1*pow(2, -n);
    double newI2 = I2 + Q2*pow(2, -n);
    double newQ2 = Q2 - I2*pow(2, -n);
    I1 = newI1;
    Q1 = fabs(newQ1);
    I2 = newI2;
    Q2 = fabs(newQ2);
    // if (I1 < I2 && Q1 < Q2) { // Set 1
    if (I1 < I2 && I1 + Q1*pow(2, -n) < I2 + Q2*pow(2, -n)) { // Set 2
    // if (I1 < I2 - I2*pow(2, -2*n - 1)) { // Set 3
      return -1;
    }
    // if (I1 > I2 && Q1 > Q2) { // Set 1
    if (I2 < I1 && I2 + Q2*pow(2, -n) < I1 + Q1*pow(2, -n)) { // Set 2
    // if (I2 < I1 - I1*pow(2, -2*n - 1)) { // Set 3
      return 1;
    }
  }
  n--;
  sure = false;
  if (I1 + Q1*pow(2, -n - 1) < I2 + Q2*pow(2, -n - 1)) {
    return -1;
  } else {
    return 1;
  }
}

void test(magnitude_compare algorithm, int maxIterations, int numSamples) {
  int numSure = 0;
  int numWrong = 0;
  int numSureWrong = 0;
  double maxFailedMagDiff = 0;
  for (int sample = 0; sample < numSamples; sample++) {
    // Random points within the unit circle
    double I1, Q1, I2, Q2;
    do {
      I1 = uni(gen);
      Q1 = uni(gen);
    } while (I1*I1 + Q1*Q1 > 1);
    do {
      I2 = uni(gen);
      Q2 = uni(gen);
    } while (I2*I2 + Q2*Q2 > 1);
    bool sure;
    int result = algorithm(I1, Q1, I2, Q2, maxIterations, sure);
    int referenceResult = magnitude_compare_reference(I1, Q1, I2, Q2);
    if (sure) {
      numSure++;
      if (result != referenceResult) {
        numSureWrong++;
      }
    }
    if (result != referenceResult) {
      numWrong++;
      double magDiff = fabs(sqrt(I1*I1 + Q1*Q1) - sqrt(I2*I2 + Q2*Q2));
      if (magDiff > maxFailedMagDiff) {
        maxFailedMagDiff = magDiff;
      }
    }
  }
  printf("%d,", maxIterations);
  printf("%.7f,", (numSamples-numSure)/(double)numSamples);
  printf("%.7f,", numWrong/(double)numSamples);
  printf("%.7f,", numSureWrong/(double)numSamples);
  printf("%.10f\n", maxFailedMagDiff);
}

int main() {
  int numSamples = 10000000;
  printf("Algorithm: CORDIC @OlliNiemitalo\n");
  printf("max iterations,frequency unsure answer,frequency wrong answer,frequency sure answer is wrong,max magnitude difference for wrong answer\n");
  for (int maxIterations = 0; maxIterations < 17; maxIterations++) {
    test(magnitude_compare_cordic_olli, maxIterations, numSamples);
  }
  return 0;
}

p5.js listing for Figs. 7 & 8

This program, which can be run in editor.p5js.org, draws figure 7 or 8 depending on what part is un/commented.
function setup() {
  let stride = 4;
  let labelStride = 8;
  let leftMargin = 30;
  let rightMargin = 20;
  let bottomMargin = 20;
  let topMargin = 30;
  let maxInt = 128;
  let canvasWidth = leftMargin + maxInt*stride + rightMargin;
  let canvasHeight = topMargin + maxInt*stride + bottomMargin;
  createCanvas(canvasWidth, canvasHeight);
  background(255);
  textAlign(LEFT, CENTER);
  text('Q', leftMargin + stride, topMargin + labelStride*stride/2)
  textAlign(CENTER, CENTER);
  text('I', canvasWidth - rightMargin/2, canvasHeight - bottomMargin)
  textAlign(RIGHT, CENTER);
  for (let Q = 0; Q <= maxInt; Q += labelStride) {
    text(str(Q), leftMargin - stride, canvasHeight - bottomMargin - Q*stride);
    line(leftMargin, canvasHeight - bottomMargin - Q*stride, canvasWidth - rightMargin, canvasHeight - bottomMargin - Q*stride);
  }
  textAlign(CENTER, TOP);
  for (let I = 0; I <= maxInt; I += labelStride) {
    text(str(I), leftMargin + I*stride, canvasHeight - bottomMargin + stride);
    line(leftMargin + I*stride, topMargin, leftMargin + I*stride, canvasHeight - bottomMargin);
  }
  let counts = new Array(maxInt*maxInt*2 + 1).fill(0);
  let maxCount = 0;
  let peakSq = 0;
  for (let Q = 0; Q <= maxInt; Q++) {
    for (let I = Q; I <= maxInt; I++) {
      counts[I*I + Q*Q]++;
      if (counts[I*I + Q*Q] > maxCount) {
        maxCount = counts[I*I + Q*Q];
        peakSq = I*I + Q*Q;
      }
    }
  }
  for (let Q = 0; Q <= maxInt; Q++) {
    for (let I = Q; I <= maxInt; I++) {
      strokeWeight(stride - 1);
      // Plot 7
      if (I*I + Q*Q == peakSq) {
        strokeWeight(stride + 1);
        stroke(255, 0, 0);
      } else {
        stroke(255 - 32 - (255 - 32)*(counts[I*I + Q*Q] - 1)/(maxCount - 1));
      }
      /* // Plot 8
      let diff = Math.clz32(Q) - Math.clz32(I);
      let iter = diff + (I >= Q << diff) + (I >= Q << diff + 1);
      if (Q) stroke(255 - iter*32);
      else stroke(0); */
      point(leftMargin + I*stride, canvasHeight - bottomMargin - Q*stride);
    }
  }
}

C++ listing: integer input CORDIC algorithm

// -- compile-command: "g++ --std=c++11 -O3 intcordic.cpp -o intcordic" --
#include <cstdint>
#include <cstdlib>
#include <utility>

const int maxIterations[] = {
  0, 0, 0, 0, 1, 2, 3, 3,
  4, 5, 5, 6, 7, 7, 8, 8,
  8, 9, 9, 10, 10, 11, 11, 12,
  12, 13, 13, 14, 14, 15,
  -1, -1, // -1 is a placeholder
  -1
};

const int numFractionalBits[] = {
  0, 0, 1, 2, 2, 2, 2, 3,
  3, 3, 3, 3, 3, 3, 3, 4,
  4, 4, 4, 4, 4, 4, 4, 4,
  4, 4, 4, 4, 4, 5,
  -1, -1, // -1 is a placeholder
  -1
};

struct MagnitudeCompareResult {
  int cmp; // One of: -1, 0, 1
  double cost; // For now: number of iterations
};

MagnitudeCompareResult magnitude_compare_cordic_olli(int32_t I1, int32_t Q1, int32_t I2, int32_t Q2, int b) {
  double cost = 0;
  int n = 0;
  int64_t i1 = abs((int64_t)I1) << numFractionalBits[b];
  int64_t q1 = abs((int64_t)Q1) << numFractionalBits[b];
  int64_t i2 = abs((int64_t)I2) << numFractionalBits[b];
  int64_t q2 = abs((int64_t)Q2) << numFractionalBits[b];
  if (q1 > i1) {
    std::swap(i1, q1);
  }
  if (q2 > i2) {
    std::swap(i2, q2);
  }
  if (i1 < i2 && i1 + q1 < i2 + q2) {
    return {-1, cost};
  }
  if (i1 > i2 && i1 + q1 > i2 + q2) {
    return {1, cost};
  }
  for (n = 1; n <= maxIterations[b]; n++) {
    cost++;
    int64_t newi1 = i1 + (q1>>n);
    int64_t newq1 = q1 - (i1>>n);
    int64_t newi2 = i2 + (q2>>n);
    int64_t newq2 = q2 - (i2>>n);
    i1 = newi1;
    q1 = abs(newq1);
    i2 = newi2;
    q2 = abs(newq2);
    if (i1 < i2 && i1 + (q1>>n) < i2 + (q2>>n)) {
      return {-1, cost};
    }
    if (i2 < i1 && i2 + (q2>>n) < i1 + (q1>>n)) {
      return {1, cost};
    }
  }
  if (i1 + (q1>>(n + 1)) < i2 + (q2>>(n + 1))) {
    return {-1, cost};
  } else {
    return {1, cost};
  }
}

edited Jun 12 at 7:05; answered Dec 29, 2019 at 13:50 by Olli Niemitalo

For sure. "Efficiency" is part of the title. The sum test I am using supersedes your early-out tests of "I1 < I2 && Q1 < Q2" and reduces the number of cases you have to do the CORDIC on considerably. Simply change to "I1 < I2 && I1 + Q1 < I2 + Q2". Your test rules out all points to the left of and below 2 in your first diagram. My test also includes all points above 2 which are to the left of a diagonal line with slope -1 passing through 2.
The rightmost point (largest I) defines the pizza slice.

– Cedron Dawg, Dec 30, 2019 at 13:10

You mean the lines should get steeper as your rightmost point approaches the real axis: -1, -2, -4, etc. The line cuts a secant on the circle. One point is your rightmost point; the ideal other point is the intersection of the angle where you can guarantee the other point will be below. A slope of -1 is the bound, reached when the rightmost point approaches the I=Q line (where both points are initially guaranteed to be below). I'm afraid calculating the slope is more involved than the problem we are trying to solve. You may have a bound estimate there that works.

– Cedron Dawg, Dec 30, 2019 at 18:43

@OlliNiemitalo Nice work Olli, I am updating my question with the proposed ranking approach that I would like to debate/come to agreement on before I dive in.

– Dan Boschen, Jan 1, 2020 at 22:05

@DanBoschen I have added a version of Olli's algorithm to my test program included in my first answer. It should save Dan some work, but please Olli, check it for accuracy. Don't shoot the messenger!

– Cedron Dawg, Jan 2, 2020 at 0:54

@OlliNiemitalo See my update of proposed scoring---will leave it up for debate for a few days.

– Dan Boschen, Jan 2, 2020 at 3:33

Given two complex numbers $z_1=a_1+jb_1$ and $z_2=a_2+jb_2$ you want to check the validity of

$$a_1^2+b_1^2>a_2^2+b_2^2\tag{1}$$

This is equivalent to

$$(a_1+a_2)(a_1-a_2)+(b_1+b_2)(b_1-b_2)>0\tag{2}$$

I've also seen this inequality in Cedron Dawg's answer, but I'm not sure how it is used in his code.
Note that the validity of $(2)$ can be checked without any multiplications if the signs of both terms on the left-hand side of $(2)$ are equal. If the real and imaginary parts have the same distribution, then this will be true in $50$ % of all cases. Note that we can change the signs of both real and imaginary parts to make them all non-negative without changing the magnitude of the complex numbers. Consequently, the sign check in $(2)$ reduces to checking if the signs of $a_1-a_2$ and $b_1-b_2$ are equal. Obviously, if the real and imaginary parts of one complex number are both greater in magnitude than the magnitudes of the real and imaginary parts of the other complex number then the magnitude of the first complex number is guaranteed to be greater than the magnitude of the other complex number. If we cannot make a decision without multiplications based on $(2)$, we can use the same trick on the following inequality: $$(a_1+b_2)(a_1-b_2)+(b_1+a_2)(b_1-a_2)>0\tag{3}$$ which is equivalent to $(1)$. Again, if the signs of both terms on the left-hand side of $(3)$ are equal, we can take a decision without using multiplications. After catching $50$ % of the cases using $(2)$ based on sign checks only, we catch another $1/6$ (why?) of the cases using sign checks according to $(3)$. This leaves us with $1/3$ of the cases for which we cannot take a decision without multiplications based on simple sign checks in Eqs $(2)$ and $(3)$. For these remaining cases I do not yet have a simpler solution than either using any of the known methods for approximating the magnitude of a complex number, or using Eq. $(2)$ or $(3)$ with the two necessary multiplications. The following Octave/Matlab code shows a simple implementation without using any approximation. 
If the real and imaginary parts of both complex numbers have the same distribution, then $2/3$ of all cases can be dealt with without any multiplication, and in $1/3$ of the cases we need two multiplications, i.e., on average we need $0.67$ multiplications per comparison.

    % given:  two complex numbers z1 and z2
    % result: r=0    |z1| = |z2|
    %         r=1    |z1| > |z2|
    %         r=2    |z1| < |z2|

    a1 = abs(real(z1)); b1 = abs(imag(z1));
    a2 = abs(real(z2)); b2 = abs(imag(z2));

    if ( a1 < b1 ), tmp = a1; a1 = b1; b1 = tmp; endif
    if ( a2 < b2 ), tmp = a2; a2 = b2; b2 = tmp; endif

    swap = 0;
    if ( a2 > a1 )
        swap = 1;
        tmp = a1; a1 = a2; a2 = tmp;
        tmp = b1; b1 = b2; b2 = tmp;
    endif

    if ( b1 > b2 )
        if ( swap )
            r = 2;
        else
            r = 1;
        endif
    else
        tmp1 = ( a1 + a2 ) * ( a1 - a2 );
        tmp2 = ( b1 + b2 ) * ( b2 - b1 );
        if ( tmp1 == tmp2 )
            r = 0;
        elseif ( tmp1 > tmp2 )
            r = 1;
        else
            r = 2;
        endif
    endif

(Cedron Insert) Hey Matt, I've formatted your code a bit and added a few comments so it can be compared to mine. Changed some functionality too. Upgraded your pizza slicer, should take you to 80%ish without multiplies. Made the multiplication comparison swap aware using an old dog trick.
Ced

    % given:  two complex numbers z1 and z2
    % result: r=0    |z1| = |z2|
    %         r=1    |z1| > |z2|
    %         r=2    |z1| < |z2|

    %---- Take the absolute values (WLOG) Move to First Quadrant

    a1 = abs(real(z1)); b1 = abs(imag(z1));
    a2 = abs(real(z2)); b2 = abs(imag(z2));

    %---- Ensure the a is bigger than b (WLOG) Move to Lower Half

    if ( a1 < b1 ), tmp = a1; a1 = b1; b1 = tmp; endif
    if ( a2 < b2 ), tmp = a2; a2 = b2; b2 = tmp; endif

    %---- Ensure the first value is rightmost

    swap = 0;
    if ( a2 > a1 )
        swap = 1;
        tmp = a1; a1 = a2; a2 = tmp;
        tmp = b1; b1 = b2; b2 = tmp;
    endif

    %---- Primary determination

    if ( a1 + b1 > a2 + b2 )
        r = 1 + swap;
    else
        tmp1 = ( a1 + a2 ) * ( a1 - a2 );
        tmp2 = ( b1 + b2 ) * ( b2 - b1 );
        if ( tmp1 == tmp2 )
            r = 0;
        elseif ( tmp1 > tmp2 )
            r = 1 + swap;
        else
            r = 2 - swap;
        endif
    endif

edited Jan 2, 2020 at 13:47 by Cedron Dawg, answered Dec 29, 2019 at 15:56 by Matt L. $\endgroup$ $\begingroup$ You can also swap a and b WLOG. This saves some comparisons later on in my answer. I may have plugged the hole, not sure, but I think my $v_g$ and $v_h$ can't have the same sign in the "Needs Determination" case. Working on that now, you'd probably see it immediately. $\endgroup$ – Cedron Dawg, Dec 29, 2019 at 16:42

$\begingroup$ @CedronDawg: Yes, that's how you arrive at Eq. $(3)$ from Eq. $(2)$ by swapping $a$ and $b$ of one of the two complex numbers. My problem is that I'm still left with $1/3$ of the cases for which no decision can be made. $\endgroup$ – Matt L., Dec 29, 2019 at 16:45

$\begingroup$ That's not what I meant, you just did a different association. I am saying you can assume $a_1 \ge b_1$ and $a_2 \ge b_2$ WLOG.
$\endgroup$ – Cedron Dawg, Dec 29, 2019 at 16:49

$\begingroup$ @CedronDawg: That's right, I understood that. In your method, what percentage of cases remain undetermined? $\endgroup$ – Matt L., Dec 29, 2019 at 16:52

$\begingroup$ 0% if $a_1=b_1$ up to a half if $b_1=0$. (Check out my diagram comment about the $a_1>a_2$ case to Dan under my answer) For an arbitrary $(a_1,b_1)$, draw the vertical line up to the $a=b$ line, and the 45 degree angle line up to the $a=b$ line. This forms a triangle, containing an arc of a circle. For second values below the slanted line, 1 is bigger (they are all inside the circle). For second values inside the circle, above the line, 1 is bigger. If the second point is outside the circle, then it is bigger, but can't be to the right. $\endgroup$ – Cedron Dawg, Dec 29, 2019 at 17:06

$\begingroup$ I'm putting this as a separate answer because my other answer is already too long, and this is an independent topic but still very pertinent to the OP question. Please start with the other answer.

A lot of fuss has been made about the effectiveness of the initial "early-out" test, which I have been calling the Primary Determination.

The base approach looks like:

    If x1 > x2 Then
        If y1 > y2 Then

The secant approach looks like:

    If x1 > x2 Then
        If x1 + y1 >= x2 + y2 Then

[VERY IMPORTANT EDIT: The points on the diagonal line are also on the "pizza slice" so an equal sign should be used in the sum comparison. This becomes significant in exact integer cases.]

So, what do two extra sums buy you? It turns out a lot.

First the base approach. Imagine a point (x,y) in the lower half (below the x=y line) of the first quadrant. That is $x\ge 0$, $y\ge 0$, and $x\ge y$. Let this represent the rightmost point without loss of generality.
The other point then has to fall at or to the left of this point, or within a triangle formed by drawing a vertical line through (x,y) up to the diagonal. The area of this triangle is then:

$$ A_{full} = \frac{1}{2} x^2 $$

The base approach will succeed for all points in the full triangle below a horizontal line passing through (x,y). The area of this region is:

$$ A_{base} = xy - \frac{1}{2} y^2 $$

The probability of success at this point can be defined as:

$$ p(x,y) = \frac{A_{base}}{A_{full}} = \frac{xy - \frac{1}{2} y^2}{\frac{1}{2} x^2} = 2 \frac{y}{x} - \left( \frac{y}{x} \right)^2 $$

A quick sanity check shows that if the point is on the x-axis the probability is zero, and if the point is on the diagonal the probability is one.

This can also be easily expressed in polar coordinates. Let $ \tan(\theta) = y/x $.

$$ p(\theta) = 2 \tan(\theta) - \tan^2(\theta) $$

Again, $p(0) = 0$ and $p(\pi/4) = 1$.

To evaluate the average, we need to know the shape of the sampling area. If it is a square, then we can calculate the average from a single vertical trace without loss of generality. Likewise, if it is circular we can calculate the average from a single arc trace.

The square case is easier. WLOG select $x=1$, then the calculation becomes:

$$ \begin{aligned} \bar p &= \frac{1}{1} \int_0^1 p(1,y) dy \\ &= \int_0^1 2y - y^2 dy \\ &= \left[ y^2 - \frac{1}{3}y^3 \right]_0^1 \\ &= \left[ 1 - \frac{1}{3} \right] - [ 0 - 0 ] \\ &= \frac{2}{3} \\ &\approx 0.67 \end{aligned} $$

The circle case is a little tougher.
$$ \begin{aligned} \bar p &= \frac{1}{\pi/4} \int_0^{\pi/4} p(\theta) d\theta \\ &= \frac{1}{\pi/4} \int_0^{\pi/4} 2 \tan(\theta) - \tan^2(\theta) d\theta \\ &= \frac{1}{\pi/4} \int_0^{\pi/4} 2 \frac{\sin(\theta)}{\cos(\theta)} - \frac{\sin^2(\theta)}{\cos^2(\theta)} d\theta \\ &= \frac{1}{\pi/4} \int_0^{\pi/4} 2 \frac{\sin(\theta)}{\cos(\theta)} - \frac{1-\cos^2(\theta)}{\cos^2(\theta)} d\theta \\ &= \frac{1}{\pi/4} \int_0^{\pi/4} 2 \frac{\sin(\theta)}{\cos(\theta)} - \frac{1}{\cos^2(\theta)} + 1 \; d\theta \\ &= \frac{1}{\pi/4} \left[ -2 \ln(\cos(\theta)) - \tan(\theta) + \theta \right]_0^{\pi/4} \\ &= \frac{1}{\pi/4} \left[ \left( -2 \ln(\cos(\pi/4)) - \tan(\pi/4) + \pi/4 \right) - ( 0 - 0 + 0 ) \right] \\ &= \frac{1}{\pi/4} \left( \ln(2) - 1 + \pi/4 \right) \\ &\approx 0.61 \end{aligned} $$

Compare this to the secant approach. Draw a line from (x,y) to the origin. This forms the lower triangle. Now draw a line with a -1 slope up to the diagonal. This forms the upper triangle.

$$ A_{lower} = \frac{1}{2} xy $$

$$ A_{upper} = \frac{1}{4} ( x^2 - y^2 ) $$

The probability definition is now:

$$ \begin{aligned} p(x,y) &= \frac{ A_{lower} + A_{upper} }{A_{full}} \\ &= \frac{\frac{1}{2} xy + \frac{1}{4} ( x^2 - y^2 )}{\frac{1}{2} x^2} \\ &= \frac{1}{2} + \frac{y}{x} - \frac{1}{2} \left( \frac{y}{x} \right)^2 \end{aligned} $$

The same sanity check as above yields a range of one half to one as expected. Note that it can also be put into polar form like before for the circle case.

The average probability for the square case can now be calculated like above.

$$ \begin{aligned} \bar p &= \frac{1}{1} \int_0^1 p(1,y) dy \\ &= \int_0^1 \frac{1}{2} + y - \frac{1}{2} y^2 dy \\ &= \left[ \frac{1}{2} y + \frac{1}{2} y^2 - \frac{1}{6}y^3 \right]_0^1 \\ &= \left[ \frac{1}{2} + \frac{1}{2} - \frac{1}{6} \right] - [ 0 + 0 - 0 ] \\ &= \frac{5}{6} \\ &\approx 0.83 \end{aligned} $$

Some may look at this and say "Big deal, you caught maybe 20 percent more cases."
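Both square-case averages are easy to sanity-check numerically. A small Monte Carlo sketch (an editorial addition, not from the answer): two points are drawn uniformly in the first octant of the unit square, the rightmost one is identified, and the base and secant tests are tallied. The tallies land near $2/3$ and $5/6$:

```python
# Editorial sketch: Monte Carlo check of the two square-case averages above.
# Base test succeeds when the leftmost point is also the lower one; the
# secant test succeeds when it lies below the slope -1 line through the
# rightmost point.
import random

random.seed(2)
N = 200_000
base = secant = 0
for _ in range(N):
    pts = []
    for _ in range(2):
        x, y = random.random(), random.random()
        if y > x:
            x, y = y, x          # fold the unit square into the octant x >= y
        pts.append((x, y))
    (x1, y1), (x2, y2) = pts
    if x2 > x1:                  # make point 1 the rightmost
        (x1, y1), (x2, y2) = (x2, y2), (x1, y1)
    if y2 <= y1:
        base += 1                # resolved by the base test
    if x2 + y2 <= x1 + y1:
        secant += 1              # resolved by the secant test
print(base / N, secant / N)      # expect roughly 0.667 and 0.833
```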
Look at it the other way, you've reduced the number of cases that need further processing by one half. That's a bargain for two sums. The polar version of the square case is left as an exercise for the reader. Have fun. [The astute reader might say, "Hmmmm... 1/2 + 0.61/2 is about 0.8-ish. What's the big deal?"] I will be explaining my lossless CORDIC in a while...... In my implementation, the end CORDIC routine does not get called until the other tests are exhausted. (The working Python code can be found in my other answer.) This cuts the cases down that need to be handled to fewer than 1%. This is not an excuse to get lazy though and go brute force. Since these are the trouble cases, it can be safely assumed that both magnitudes are roughly equal or equal. In fact, with small integer arguments, the equals case is most prevalent. The goal of Olli's approach, and what Dan has articulated, is to use CORDIC to rotate the points down to the real axis so they can be compared along a single dimension. This is not necessary. What suffices is to align the points along the same angle where the simple determination tests that failed earlier are more likely to succeed. To achieve this, it is desired to rotate the points so one point falls below the axis at the same angle the other point is above the axis. Then when the mirror bounce is done, the two points will be aligned at the same angle above the axis. Because the magnitudes are near equal, this is the same as rotating so the points are the same distance above and below the axis after rotation. Another way to define it is to say the midpoint of the two points should fall as close to the axis as possible. It is also very important not to perform any truncation. The rotations have to be lossless or wrong results are possible. This limits the kind of rotations we can do. How can this be done? For each point, a rotation arm value is calculated. 
It is actually easier to understand in vector terms, as vector addition and complex number addition are mathematically equivalent. For each point (as a vector) an orthogonal vector is created that is the same length and points in the downward direction. When this vector is added to the point vector, the result is guaranteed to fall below the axis for both points since both are below the I=Q diagonal. What we would like to do is to shorten these vectors to just the right length so one point is above the axis and the other below at the same distance. However, shortening the vector can generally not be done without truncation. So what is the slick trick? Lengthen the point vectors instead. As long as it is done proportionally, it will have no effect on the outcome. The measure to use is the distance of the midpoint to the axis. This is to be minimized. The distance is the absolute value of the midpoint since the target is zero. A division (or shift) can be saved by simply minimizing the absolute value of the sum of the imaginary parts.

The brute force solution goes like this: Keep the original point vectors as step vectors. Figure out where the rotated points would end up vertically (a horizontal calculation is unnecessary) at each step. Take the next step by adding the step vectors to the point vectors. Measure the value again. As soon as it starts increasing, you have found the minimum and are done searching. Apply the rotation. Flip the below point in the mirror. Do a comparison.

In the vast majority of cases, one rotation is all that is needed. The nice thing is that the equal cases get caught right away. How can a brute force search be eliminated? Here comes the next slick trick. Instead of adding the step vector at each step, double the point vectors. Yep, a classic O(log2) improvement. As soon as the value starts increasing, you know you have reached the end of the range of possibilities. Along the way, you cleverly save the point vectors.
These now serve as power of two multiples of your step vector. Anybody smell a binary search here? Yep, simply start at the next to the last point which is halfway through your range. Pick the next largest step vector and look to either side. If a smaller value is found, move to it. Pick the next largest step vector. Repeat till you get down to the unit step vector. You will now have the same unit multiple you would have found with a brute search. A word of caution: If the two points are at really shallow angles, it could take a lot of multiples until the rotated points straddle the axis. Overflow is possible. It would probably be wise to use long integers here if possible. There is a hack overflow check in place, but this warrants further investigation. This is an "ideal case" in the other scenarios, so there should be an alternative check that can be applied when this situation occurs. Likely employing Olli's idea of using a steeper cutoff line. Still working on that..... I am currently developing and testing small angle solutions. Please be patient.... 
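The doubling-then-binary-search idea can be isolated into a one-dimensional sketch. Rotating both points by $n$ unit steps leaves the sum of the imaginary parts at $S_q - n S_i$, with $S_i = i_1 + i_2$ and $S_q = q_1 + q_2$, so the search reduces to minimizing $|S_q - n S_i|$ over integers $n$ using only adds, shifts and compares. This is an illustrative implementation with invented names, not Cedron's actual code:

```python
# Editorial sketch (invented names): the O(log n) search described above.
# Find the integer n minimizing |Sq - n*Si| with no multiply or divide.
def best_multiple(Si, Sq):
    """Si > 0, Sq >= 0: integer n >= 0 minimizing abs(Sq - n*Si)."""
    # Doubling phase: save power-of-two multiples of Si until one reaches Sq.
    saved = [(1, Si)]                      # (n, n*Si) pairs, n a power of two
    while saved[-1][1] < Sq:
        n, nSi = saved[-1]
        saved.append((n << 1, nSi << 1))
    # Binary-search phase: descend through the saved multiples, keeping
    # acc as the largest n whose accumulated multiple stays below Sq.
    acc, accSi = 0, 0
    for n, nSi in reversed(saved):
        if accSi + nSi < Sq:
            acc, accSi = acc + n, accSi + nSi
    # Now accSi < Sq <= accSi + Si; pick the nearer of acc and acc + 1.
    return acc + 1 if (Sq - accSi) + (Sq - accSi) > Si else acc

print(best_multiple(3, 10))   # closest multiple of 3 to 10 is 9, so n = 3
```

The saved power-of-two multiples play the role of the saved doubled point vectors in the answer's description; only their sums are ever formed, so the search stays lossless.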
edited Jan 1, 2020 at 15:36, answered Dec 31, 2019 at 11:43 by Cedron Dawg $\endgroup$ $\begingroup$ The Sigma Delta Argument Test

I came up with my own solution with the premise of resolving maximum vector magnitude (including equality) by testing the angle for quadrature between the sum and difference of the two vectors:

For the sum $\Sigma$ and difference $\Delta$ of $z_1$ and $z_2$ given as (which is a 2 point DFT)

$\Sigma = z_1 + z_2$

$\Delta = z_1 - z_2$

The angle $\phi$ between $\Sigma$ and $\Delta$ (as given by the argument of the complex conjugate product: $\arg(\Sigma\cdot \Delta^*)$) has the following properties (see derivation at bottom of this answer):

For $|z_1| < |z_2|$, $|\phi| > \pi/2$

For $|z_1| = |z_2|$, $|\phi| = \pi/2$

For $|z_1| > |z_2|$, $|\phi| < \pi/2$

Given the convenience of $\pi/2$ boundaries we never need to compute the argument! The significance of this approach is that a polar coordinate threshold of constant radius is converted to a rectangular coordinate threshold as a linear line going through the origin, providing for a simpler algorithm with no truncation errors.

The efficiency in this approach comes down to computing the sum and difference for the two vectors and then being able to efficiently test whether the phase between them is greater than or less than $\pi/2$. If multipliers were allowed this would be easily resolved by evaluating the real part of the complex conjugate result, thus the complete algorithm is first introduced using a multiplier, and then, to meet the objectives of the question, the approach with no multipliers follows.
If Multiplier Can Be Used

First to introduce a baseline algorithm allowing for multipliers:

1) Step 1: Sum $z_1 = I_1+jQ_1$, $z_2 = I_2+jQ_2$:

$\Sigma = I_{\Sigma} + jQ_{\Sigma} = (I_1+I_2) + j(Q_1+Q_2)$

$\Delta = I_{\Delta} + jQ_{\Delta} = (I_1-I_2) + j(Q_1-Q_2)$

2) Step 2: Compute the real part of the complex conjugate product $\Sigma\cdot\Delta^*$. This is the dot product and the MSB of the result (the sign bit) is the binary answer directly!

$q = I_{\Sigma}I_{\Delta}+Q_{\Sigma}Q_{\Delta}$

3) Step 3: For a ternary result test q:

$q<0 \rightarrow |z_2|>|z_1|$

$q=0 \rightarrow |z_2|=|z_1|$

$q>0 \rightarrow |z_2|<|z_1|$

So this approach provides a binary > or < result with 2 real multipliers and 5 real sums, resulting in a savings compared to comparing to squared magnitudes which requires 4 real multipliers and 3 real adds. This on its own is not notable as a similar mathematical reduction could be directly achieved as the equations are similar (as already pointed out by @Cedron, @MattL, @Olli in their answers), but included to show its relation to the Sigma Delta Argument Test: The magnitude test directly in similar form is to compare $I^2+Q^2$:

$$q = (I_1I_1+Q_1Q_1)-(I_2I_2+Q_2Q_2)$$

Which can be rewritten as follows to reduce multipliers (or reordered to directly match the equations above):

$$q = (I_1+Q_2)(I_1-Q_2)-(I_2+Q_1)(I_2-Q_1)$$

The No Multiplier Solution

The no multiplier solution is done by efficiently determining the location of an arbitrary complex point on a plane that is bisected by a line that crosses through the origin. With this approach, the objective is simplified to determining if the point is above or to the left of the line, below or to the right of the line, or on the line. This test can be visualized by rotating $\Delta$ by $-\pi/2$ radians ($\Delta/j$) which then changes the test for the boundary between $\Sigma$ and $\Delta/j$ to be $0$ and $\pi$.
This rotation is done by swapping I and Q and then changing the sign on I: $-j(I+jQ) = Q-jI$. This is simply incorporated into a modified equation for $\Delta$ such that no further processing steps are needed:

$$\Delta/j = (Q_1-Q_2) + j(I_2-I_1)$$

In this case, the test is to see if the point given by $\Delta/j$ lies above the line $y = mx$ where $m$ is the ratio of the imaginary and real terms of $\Sigma$ (where y is the imaginary axis denoted by Q, and x is the real axis denoted by I). The four quadrants denoted with Q1 to Q4 are rotationally invariant to the test, so I will refer to Q1 as whatever quadrant $\Sigma$ is in, as shown in the graphic below. Q2 and Q4 are trivial; if $\Delta/j$ is in either of these quadrants a decision can be easily made. When $\Delta/j$ is in Q3, the test is the negative of when $\Delta/j$ is in Q1, so the algorithm is now down to the most efficient way to determine if $\Delta/j$ is above the y=mx dashed line, below the dashed line, or on the dashed line when both $\Delta/j$ and $\Sigma$ are in Q1.

The approaches used to efficiently determine if $\Delta/j$ is above or below the line that goes through the origin and $\Sigma$ are as follows:

Consider starting with $Z_s = \Sigma$ and $Z_d = \Delta/j$. $Z_s$ is forced to be in Q1 by taking the absolute values of $I_1$, $I_2$, $Q_1$ and $Q_2$ before computing $Z_s$ and $Z_d$.

If $Z_d$ is in Q3, it is moved to Q1 by negating it: $Z_d = -\Delta/j$. This would cause it to fall on the opposite side of the test line, so a flag is set to invert the output solution.

If $Z_d$ is in Q2 or Q4, the determination is clear: If in Q2, $Z_d$ must be above the line always so $|z_1|<|z_2|$. If in Q4, $Z_d$ must be below the line always so $|z_1|>|z_2|$.

Otherwise $Z_d$ is in Q1, and it can be resolved only once $Z_d$ lands in a new Q2 or Q4 as given by moving the origin to $Z_s$. This is accomplished by growing $Z_d$ through bit shifting until it is beyond the $I_s$ or $Q_s$ boundaries.
This ensures rapid $2^n$ growth and that the result will not exceed $2Q_s$ or $2I_s$. $Z_s$ is subtracted and the test is repeated. By subtracting $Z_s$, the new vector given by $Z_d' = Z_d-Z_s$ will rotate either toward the Q axis or the I axis (also at rate $2^n$), eventually landing in the area that would be Q2 or Q4 once it is again grown and $Z_s$ subtracted.

Example Python Code of the Algorithm

    def CompareMag(I1, Q1, I2, Q2, b = 16):
        '''
        Given Z1 = I1 + jQ1, Z2 = I2 + jQ2
        I1, I2, Q1, Q2 are b-bit signed integers
        returns:  -2: |Z1| < |Z2|
                   0: |Z1| = |Z2|
                  +2: |Z1| > |Z2|
        '''
        iters = b + 2          # max iterations
        inv = 0                # output inversion offset

        #---- ensure Zs is in Q1
        I1 = abs(I1); Q1 = abs(Q1)
        I2 = abs(I2); Q2 = abs(Q2)

        #----
        # For speed boost insert optional PD algo here
        #----

        #---- sum and difference  Zs = Is + jQs, Zd = Id + jQd
        Is = I1 + I2; Qs = Q1 + Q2
        Id = Q1 - Q2; Qd = I2 - I1     # rotate Zd by -j

        #---- if Zd is in Q3, invert Zd and invert result
        if Id < 0 and Qd <= 0:
            Id, Qd = -Id, -Qd
            inv = -4                   # reverse output +2 -> -2 or -2 -> +2

        while iters > 0:
            #---- can resolve if Zd is in Q2, Q4 or origin, otherwise iterate
            if Id < 0:
                return -2 - inv        # Zd in Q2 so |Z1| < |Z2| (inverted: +2)
            if Qd < 0:
                return 2 + inv         # Zd in Q4 so |Z1| > |Z2| (inverted: -2)
            if Id == 0 and Qd == 0:
                return 0               # |Z1| = |Z2|
            while Id < Is and Qd < Qs: # grow Zd until Id > Is or Qd > Qs
                Id <<= 1; Qd <<= 1
            Id = Id - Is; Qd = Qd - Qs # move origin to Zs
            iters -= 1

        return 0                       # not rotating, so |Z1| = |Z2|

Speed Boost

Cedron's Primary Determination Algorithm (with similar variants in Matt's and Olli's code) provides a substantial speed improvement by resolving a majority of the cases (up to 90%) prior to doing the sum and difference computations. Further detailed profiling is needed to resolve if this variant is the fastest, as we get different results on different machines (statistically all very close).
```
#----------
# Insert the following in code above at
# "For speed boost insert optional PD algo here"

#---- Ensure they are in the Lower Half (First Octant)   (CEDRON ALGO)
if Q1 > I1:
    I1, Q1 = Q1, I1
if Q2 > I2:
    I2, Q2 = Q2, I2
#---- Primary Determination   (CEDRON ALGO)
if I1 > I2:
    if I1 + Q1 >= I2 + Q2:
        return 2
elif I1 < I2:
    if I1 + Q1 <= I2 + Q2:
        return -2
else:
    if Q1 > Q2:
        return 2
    elif Q1 < Q2:
        return -2
    else:
        return 0
#----------
```

Mathematical Derivation

Here is the derivation on how the sum and difference leads to an angle test and provides for the more detailed mathematical relationship (to help with sensitivity testing etc.): consider

$$z_1 = A_1e^{j\phi_1}$$

$$z_2 = A_2e^{j\phi_2}$$

Where $A_1$ and $A_2$ are positive real quantities representing the magnitude of $z_1$ and $z_2$ and $\phi_1$ and $\phi_2$ are the phase in radians. Divide both by $z_1$ to have an expression for $z_2$ relative to $z_1$:

$$z_1' = \frac{z_1}{z_1} = 1$$

$$z_2' = \frac{z_2}{z_1} = \frac{A_2}{A_1}e^{j(\phi_2-\phi_1)} = Ke^{j\phi}$$

Such that if $K>1$ then $z_2>z_1$. The sum and the difference of the $z_1'$ and $z_2'$ would be:

$$\Sigma = z_1' + z_2' = 1 + Ke^{j\phi}$$

$$\Delta = z_1' - z_2' = 1 - Ke^{j\phi}$$

The complex conjugate multiplication of two vectors provides for the angle difference between the two; for example: Given

$$v_1= V_1e^{j\theta_1}$$

$$v_2= V_2e^{j\theta_2}$$

The complex conjugate product is:

$$v_1v_2^* = V_1e^{j\theta_1}V_2e^{-j\theta_2}= V_1V_2e^{j(\theta_1-\theta_2)}$$

So the complex conjugate product of $\Sigma$ and $\Delta$ with a result $Ae^{j\theta}$ is:

$$ \begin{aligned} Ae^{j\theta} &= \Sigma \cdot \Delta^* \\ &= (1+Ke^{j\phi})(1-Ke^{j\phi})^* \\ &= (1+Ke^{j\phi})(1-Ke^{-j\phi}) \\ &= 1 + K(2j\sin(\phi)) - K^2 \\ &= (1 - K^2) + j2K\sin(\phi) \\ \end{aligned} $$

Noting that the above reduces to $2j\sin(\phi)$ when $K = 1$, and when $K < 1$ the real component is always positive and when $K > 1$ the real component is always negative, such that:

for $K < 1$, $|\theta| < \pi/2$

for $K = 1$, $|\theta| = \pi/2$

for $K > 1$, $|\theta| > \pi/2$

Below shows the results of a quick simulation to demonstrate the result summarized above, where a uniformly random selection of complex $z_1$, $z_2$ pairs is plotted in the upper plot as red and blue dots, along with the resulting mapping to the angle between the sum and difference of $z_1$ and $z_2$.

edited Jan 20, 2020 at 14:14, answered Jan 4, 2020 at 6:34 by Dan Boschen $\endgroup$ $\begingroup$ This is an unprecedented (for me) third answer to a question. It is a completely new answer so it does not belong in the other two.

Dan (in question): max(I,Q) + min(I,Q)/2

Laurent Duval (in question comments): 0.96a + 0.4b

a concerned citizen (in question comments): |a1| + |b1| > |a2| + |b2|

By convention, I am going to use $(x,y)$ as the point instead of $(I,Q)$ or $(a,b)$. For most people this will likely make it seem like a distance measure rather than a complex number magnitude. That doesn't matter; they are equivalent. I'm thinking this algorithm will be of more use in distance applications (at least by me) than complex number evaluation.

All those formulas can be seen as level curve formulas for a tilted plane. The level of each of the two input points can be used as a proxy for magnitude and directly compared.

$$ L(x,y) = ax + by $$

The three formulas above can now be stated as:

$$ \begin{aligned} L_{DB} &= 1.0 x + 0.5 y \\ L_{LD} &= 0.96 x + 0.4 y \\ L_{CC} &= 1.0 x + 1.0 y \\ \end{aligned} $$

Drum roll please....... The best fit answer (criteria coming) turns out to be:

$$ \begin{aligned} L &\approx 0.939 x + 0.417 y \\ &\approx 0.94 x + 0.42 y \\ &\approx (15/16) x + (107/256) y \\ &= [ 240 x + 107 y]/256 \\ &= [ (256-16) x + (128-16-4-1) y]/256 \\ &= [ (x<<8) - (x<<4) \\ &+ (y<<7) - (y<<4) - (y<<2) - y ] >> 8 \\ \end{aligned} $$

This closely matches L.D.'s formula.
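For reference, the worst-case deviation of each of these level curves from a true circle over the first octant is easy to compute (an editorial sketch; note that the best-fit coefficients above target a least-squares criterion per arc, not the minimax error over the whole octant):

```python
# Editorial sketch: worst-case relative deviation of a level curve
# L(x,y) = a*x + b*y from the unit circle over the first octant.
# On the unit circle x = cos(t), y = sin(t), the error is a*cos(t)+b*sin(t)-1.
import math

def worst_error(a, b, steps=10000):
    worst = 0.0
    for k in range(steps + 1):
        t = (math.pi / 4) * k / steps          # angle within the octant
        worst = max(worst, abs(a * math.cos(t) + b * math.sin(t) - 1.0))
    return worst

for name, a, b in [("DB ", 1.0, 0.5), ("LD ", 0.96, 0.4),
                   ("CC ", 1.0, 1.0), ("fit", 0.939, 0.417)]:
    print(name, round(worst_error(a, b), 4))
```

The single-plane formulas are stuck with errors on the order of a few percent or worse, which is exactly the motivation for partitioning the octant into arcs.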
Those old engineers probably used a slide rule or something. Or maybe different criteria for best fit.

But it got me thinking. If you look at the level curve of $L=1$, these lines are trying to approximate the unit circle. That was the breakthrough. If we can partition the unit circle into smaller arcs, and find a best fit line for each arc, the corresponding level function could be found and used as a comparator for points within that arc span.

We can't partition angles easily, but we can find distances up the $x=1$ line without difficulty. A line passing through the origin can be defined by $y=mx$. That means it hits the $x=1$ line at a height of $m$. So for a particular $m$: if $y>mx$ the point is above the line, if $y=mx$ it is on the line, and if $y<mx$ it is below the line. To partition the circle into four arcs, the values of {0,1/4,1/2,3/4,1} can be used for $m$. Comparing $y$ to $mx$ becomes possible with binary shifts, additions, and subtractions. For example:

$$ \begin{aligned} y &< \frac{3}{4}x \\ 4y &< 3x \\ (y<<2) &< (x<<1) + x \\ \end{aligned} $$

In a similar manner, the best fit line segment to approximate an arc can also be implemented with some shifts, adds, and subtracts. The explanation of how to best do that is forthcoming.

The test routine called "DanBeastFour" uses four arcs.
The resulting estimate quality can be judged by this table of values:

    Deg  Degrees
    Rad  Radians
    X,Y  Float
    x,y  Integer
    R    Radius of Integer as Float
    r    Returned Estimate as Integer
    r/R  Accuracy Metric

    Deg  Rad        X        Y        x     y        R      r     r/R
      0 0.00 (10000.00,    0.00) (10000,    0) 10000.00  9921 0.99210
      1 0.02 ( 9998.48,  174.52) ( 9998,  175)  9999.53  9943 0.99435
      2 0.03 ( 9993.91,  348.99) ( 9994,  349) 10000.09  9962 0.99619
      3 0.05 ( 9986.30,  523.36) ( 9986,  523)  9999.69  9977 0.99773
      4 0.07 ( 9975.64,  697.56) ( 9976,  698) 10000.39  9990 0.99896
      5 0.09 ( 9961.95,  871.56) ( 9962,  872) 10000.09  9999 0.99989
      6 0.10 ( 9945.22, 1045.28) ( 9945, 1045)  9999.75 10006 1.00062
      7 0.12 ( 9925.46, 1218.69) ( 9925, 1219)  9999.58 10009 1.00094
      8 0.14 ( 9902.68, 1391.73) ( 9903, 1392) 10000.35 10010 1.00096
      9 0.16 ( 9876.88, 1564.34) ( 9877, 1564) 10000.06 10007 1.00069
     10 0.17 ( 9848.08, 1736.48) ( 9848, 1736)  9999.84 10001 1.00012
     11 0.19 ( 9816.27, 1908.09) ( 9816, 1908)  9999.72  9992 0.99923
     12 0.21 ( 9781.48, 2079.12) ( 9781, 2079)  9999.51  9980 0.99805
     13 0.23 ( 9743.70, 2249.51) ( 9744, 2250) 10000.40  9966 0.99656
     14 0.24 ( 9702.96, 2419.22) ( 9703, 2419)  9999.99  9948 0.99480
     15 0.26 ( 9659.26, 2588.19) ( 9659, 2588)  9999.70  9965 0.99653
     16 0.28 ( 9612.62, 2756.37) ( 9613, 2756) 10000.27  9981 0.99807
     17 0.30 ( 9563.05, 2923.72) ( 9563, 2924) 10000.04  9993 0.99930
     18 0.31 ( 9510.57, 3090.17) ( 9511, 3090) 10000.36 10002 1.00016
     19 0.33 ( 9455.19, 3255.68) ( 9455, 3256)  9999.93 10008 1.00081
     20 0.35 ( 9396.93, 3420.20) ( 9397, 3420) 10000.00 10012 1.00120
     21 0.37 ( 9335.80, 3583.68) ( 9336, 3584) 10000.30 10012 1.00117
     22 0.38 ( 9271.84, 3746.07) ( 9272, 3746) 10000.12 10009 1.00089
     23 0.40 ( 9205.05, 3907.31) ( 9205, 3907)  9999.83 10003 1.00032
     24 0.42 ( 9135.45, 4067.37) ( 9135, 4067)  9999.44  9993 0.99936
     25 0.44 ( 9063.08, 4226.18) ( 9063, 4226)  9999.85  9982 0.99821
     26 0.45 ( 8987.94, 4383.71) ( 8988, 4384) 10000.18  9967 0.99668
     27 0.47 ( 8910.07, 4539.90) ( 8910, 4540)  9999.98  9981 0.99810
     28 0.49 ( 8829.48, 4694.72) ( 8829, 4695)  9999.71  9994 0.99943
     29 0.51 ( 8746.20, 4848.10) ( 8746, 4848)  9999.78 10004 1.00042
     30 0.52 ( 8660.25, 5000.00) ( 8660, 5000)  9999.78 10011 1.00112
     31 0.54 ( 8571.67, 5150.38) ( 8572, 5150) 10000.08 10015 1.00149
     32 0.56 ( 8480.48, 5299.19) ( 8480, 5299)  9999.49 10015 1.00155
     33 0.58 ( 8386.71, 5446.39) ( 8387, 5446) 10000.03 10013 1.00130
     34 0.59 ( 8290.38, 5591.93) ( 8290, 5592)  9999.73 10008 1.00083
     35 0.61 ( 8191.52, 5735.76) ( 8192, 5736) 10000.53 10000 0.99995
     36 0.63 ( 8090.17, 5877.85) ( 8090, 5878)  9999.95  9988 0.99881
     37 0.65 ( 7986.36, 6018.15) ( 7986, 6018)  9999.63 10001 1.00014
     38 0.66 ( 7880.11, 6156.61) ( 7880, 6157) 10000.15 10012 1.00118
     39 0.68 ( 7771.46, 6293.20) ( 7771, 6293)  9999.51 10018 1.00185
     40 0.70 ( 7660.44, 6427.88) ( 7660, 6428)  9999.74 10023 1.00233
     41 0.72 ( 7547.10, 6560.59) ( 7547, 6561) 10000.20 10024 1.00238
     42 0.73 ( 7431.45, 6691.31) ( 7431, 6691)  9999.46 10022 1.00225
     43 0.75 ( 7313.54, 6819.98) ( 7314, 6820) 10000.35 10018 1.00176
     44 0.77 ( 7193.40, 6946.58) ( 7193, 6947) 10000.00 10009 1.00090
     45 0.79 ( 7071.07, 7071.07) ( 7071, 7071)  9999.90  9998 0.99981

Not too shabby for a beast.

Here is a Python code sample for DanBeast_2_8, fka DanBeastFour.
    if yN + yN < xN:                      # 2 y < x
        if (yN << 2) < xN:                # 4 y < x
            LN = (xN << 8) - xN - xN \
               + (yN << 5) + (yN << 1)
            # = ___ x + ___ y
            # y/x = 0.00 to 0.25
        else:
            LN = (xN << 8) - (xN << 4) \
               + (yN << 6) + (yN << 5) - (yN << 2) - yN - yN
            # = ___ x + ___ y
            # y/x = 0.25 to 0.50
    else:
        if (yN << 2) < xN + xN + xN:      # 4 y < 3 x
            LN = (xN << 8) - (xN << 5) - (xN << 2) - xN - xN \
               + (yN << 7) + (yN << 3) - yN
            # = 218 x + 135 y (See Table h=8)
            # y/x = 0.50 to 0.75
        else:
            LN = (xN << 7) + (xN << 6) + xN + xN \
               + (yN << 7) + (yN << 5) + (yN << 3)
            # = ___ x + ___ y
            # y/x = 0.75 to 1.00

    # DN = LN >> 8

And a look at some numbers:

    Arc for: y/x = 0.50 to 0.75
    Best fit using linear regression: y = -1.615 x + 1.897
    Comparison level function LN = 0.851 x + 0.527 y
    LN_2^8 ~=~ 218 x + 135 y

     h   2^h    a 2^h   a2h     diff  diff/2^h     b 2^h   b2h     diff  diff/2^h
     0     1    0.851     1   0.1486  0.148647     0.527     1   0.4728  0.472787
     1     2    1.703     2   0.2973  0.148647     1.054     1   0.0544  0.027213
     2     4    3.405     3   0.4054  0.101353     2.109     2   0.1089  0.027213
     3     8    6.811     7   0.1892  0.023647     4.218     4   0.2177  0.027213
     4    16   13.622    14   0.3784  0.023647     8.435     8   0.4354  0.027213
     5    32   27.243    27   0.2433  0.007603    16.871    17   0.1292  0.004037
     6    64   54.487    54   0.4866  0.007603    33.742    34   0.2584  0.004037
     7   128  108.973   109   0.0268  0.000210    67.483    67   0.4832  0.003775
    -8-  256  217.946   218   0.0537  0.000210   134.966   135   0.0336  0.000131
     9   512  435.893   436   0.1073  0.000210   269.933   270   0.0671  0.000131

The diff/2^h is the unit error in the distance.

There are two best fittings done. The first is the best fit line segment to the arc. The second is the best fit integer representation of the comparison level function. There is no point in trying to carry the precision of one any further than the other. The error produced by the first is a function of the arc's start and end angles. (Now, that should be just arc length, shouldn't it?) The error of the second can be selected to match any requirement, like the table.

So, when you want to select which DanBeast is right for your application you need to provide two parameters, d and h.
The first is the if-tree depth (d). This defines the number of arc partitions (2^d) and the height bound for maximum precision. At run time, each level costs an extra if statement.

The second parameter is how much precision (h) you want in the coefficients (a, b). Higher precision costs more shifts and adds at run time. Expect about two shifts and two add/subtracts per step. Within the input variables there has to be headroom of at least (h+1) bits that are zeros to allow for the shifts, adds, and subtracts. The plus one is sign bit clearance, YMMV.

Clearly some of those old engineers were sharp with their paper and pencils and maybe slide rules or log tables (DIY). The equation provided by L.D. is closest to the best fit answer in the link provided by Dan.

Linear regression on $ y = mx + c $ is not the best best fit to use. It's kind of a hack. The best way to do it is with a least squares integral in polar coordinates. I don't have time for that now. LR on $ x = (1/m) y - (c/m) $ will make a better best fit, but still a hack. Since the next step is an integer best fit, it doesn't matter much.

The best best fit is expected to be centered on each arc. If you look at the table of numbers above, estimate the first arc ending at about 11 degrees, and look for the peak and end values of the accuracy metric. You don't have to be a carpenter to see that that bubble isn't level. Close enough for now, but that's why we test.

Thanks Dan for the bounty and for putting it on the answer I preferred. I'm going to pledge half of it forward to the winner of the horse race that isn't one of my entries. Right now Olli is the default winner because his routine is already incorporated and he has an answer I can lay the bounty on.

Comment on Dan's solution and a suggestive question: A different perspective from Linear Algebra.
$$ \begin{bmatrix} S \\ D \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} z_1 \\ z_2 \end{bmatrix} = \sqrt{2} \begin{bmatrix} \frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} \\ \frac{\sqrt{2}}{2} & -\frac{\sqrt{2}}{2} \end{bmatrix} \begin{bmatrix} z_1 \\ z_2 \end{bmatrix} $$

Search on "rotation matrix".

An Olli cordic rotation can also be expressed as a linear transform. For example:

$$ \begin{bmatrix} I_1[n+1] \\ Q_1[n+1] \\ I_2[n+1] \\ Q_2[n+1] \end{bmatrix} = \begin{bmatrix} 1 & 2^{-k} & 0 & 0 \\ -2^{-k} & 1 & 0 & 0 \\ 0 & 0 & 1 & 2^{-k} \\ 0 & 0 & -2^{-k} & 1 \end{bmatrix} \begin{bmatrix} I_1[n] \\ Q_1[n] \\ I_2[n] \\ Q_2[n] \end{bmatrix} $$

Can you smear that center matrix somehow to make the numbers work together to make it converge faster?

The result determiner can be expressed as:

$$ \begin{aligned} D &= \begin{bmatrix} I_1 \\ Q_1 \\ I_2 \\ Q_2 \end{bmatrix}^T \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{bmatrix} \begin{bmatrix} I_1 \\ Q_1 \\ I_2 \\ Q_2 \end{bmatrix} \\ &= I_1^2 + Q_1^2 - I_2^2 - Q_2^2 \end{aligned} $$

If you blur your eyes a bit, you should see something that looks like this:

$$ x[n+1] = A \cdot x[n] $$

and

$$ D = x^T \cdot V \cdot x $$

What happens when the linear transform (A) has an output vector that is in the same direction as the input vector? Check it out:

$$ A \cdot x = \lambda x $$

Plug it in:

$$ x[n+1] = \lambda x[n] $$

With a little recursion:

$$ x[n+p] = \lambda^p x[n] $$

Tada, a vector problem has been reduced to a scalar problem with an exponential solution. These kinds of vectors are given a special name. They are called Eigenvectors, and the multiplier values ($\lambda$) are called Eigenvalues. You have probably heard of them. This is why they are important. They form basis spaces for solutions to multidimensional problems.

Rock on.

More coming on DanBeasts later.
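As a small numpy sketch of the eigenvalue point above (my own illustration, with an assumed step size k = 3): advancing one 2x2 block of the cordic-style transform p times is the same as scaling each eigen-coordinate by $\lambda^p$.

```python
import numpy as np

# One 2x2 block of the cordic-style rotation above; k = 3 is an arbitrary choice.
k = 3
A = np.array([[1.0, 2.0**-k],
              [-2.0**-k, 1.0]])

lam, V = np.linalg.eig(A)            # eigenvalues are 1 +/- 1j * 2^-k

x = np.array([1.0, 0.0])
p = 5

# Apply A five times directly...
direct = np.linalg.matrix_power(A, p) @ x

# ...or decompose x in the eigenbasis, scale each coordinate by lambda^p, map back.
c = np.linalg.solve(V, x)
via_eig = (V @ (lam**p * c)).real

print(np.allclose(direct, via_eig))   # True
```

Note that $|\lambda|^2 = 1 + 2^{-2k}$, which is exactly the per-step gain of the unscaled cordic rotation.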
These are "DanBeast_4_9" distance estimates:

```
 0 0.00 (10000.00,    0.00) (10000,    0) 10000.00 10000 1.00000
 1 0.02 ( 9998.48,  174.52) ( 9998,  175)  9999.53 10003 1.00035
 2 0.03 ( 9993.91,  348.99) ( 9994,  349) 10000.09 10004 1.00039
 3 0.05 ( 9986.30,  523.36) ( 9986,  523)  9999.69 10002 1.00023
 4 0.07 ( 9975.64,  697.56) ( 9976,  698) 10000.39 10002 1.00016
 5 0.09 ( 9961.95,  871.56) ( 9962,  872) 10000.09 10004 1.00039
 6 0.10 ( 9945.22, 1045.28) ( 9945, 1045)  9999.75 10004 1.00042
 7 0.12 ( 9925.46, 1218.69) ( 9925, 1219)  9999.58 10000 1.00004
 8 0.14 ( 9902.68, 1391.73) ( 9903, 1392) 10000.35 10001 1.00006
 9 0.16 ( 9876.88, 1564.34) ( 9877, 1564) 10000.06 10002 1.00019
10 0.17 ( 9848.08, 1736.48) ( 9848, 1736)  9999.84 10000 1.00002
11 0.19 ( 9816.27, 1908.09) ( 9816, 1908)  9999.72  9992 0.99923
```

For integer applications, I don't see any more need than that.

This is the code:

```
#====================================================================
def DanBeast_4_9( x, y ):

    if (y+y) < x:
        if (y<<2) < x:
            if (y<<3) < x:
                if (y<<4) < x:
                    L = (x<<9) + (y<<4)
                else:
                    L = (x<<9) - (x+x) + (y<<5) + (y<<4)
            else:
                if (y<<4) < (x+x) + x:
                    L = (x<<9) - (x<<2) - (x+x) + (y<<6) + (y<<4) - y
                else:
                    L = (x<<9) - (x<<3) - (x<<2) + (y<<7) - (y<<4) - (y+y) - y
        else:
            if (y<<3) < (x+x) + x:
                if (y<<4) < (x<<2) + x:
                    L = (x<<9) - (x<<4) - (x+x) - x + (y<<7) + (y<<3) + (y+y) + y
                else:
                    L = (x<<9) - (x<<5) + (x<<2) + (y<<7) + (y<<5) + (y<<2) + (y+y)
            else:
                if (y<<4) < (x<<3) - x:
                    L = (x<<9) - (x<<5) - (x<<2) - (x+x) + (y<<8) - (y<<6) + y
                else:
                    L = (x<<9) - (x<<5) - (x<<4) + (y<<8) - (y<<5) - (y<<3) + y
    else:
        if (y<<2) < (x+x) + x:
            if (y<<3) < (x<<2) + x:
                if (y<<4) < (x<<3) + x:
                    L = (x<<9) - (x<<6) + (x<<2) + (y<<8) - (y<<4)
                else:
                    L = (x<<9) - (x<<6) - (x<<3) + (y<<8) + (y<<2) + y
            else:
                if (y<<4) < (x<<3) + (x+x) + x:
                    L = (x<<9) - (x<<6) - (x<<4) - (x<<2) + (y<<8) + (y<<5) - (y<<3) + y
                else:
                    L = (x<<9) - (x<<6) - (x<<5) + (y<<8) + (y<<5) + (y<<3) + (y+y) + y
        else:
            if (y<<3) < (x<<3) - x:
                if (y<<4) < (x<<4) - (x+x) - x:
                    L = (x<<9) - (x<<7) + (x<<4) + (x<<2) + (y<<8) + (y<<6) - (y<<2) - y
                else:
                    L = (x<<9) - (x<<7) + (x<<3) - x + (y<<8) + (y<<6) + (y<<3) + (y+y)
            else:
                if (y<<4) < (x<<4) - x:
                    L = (x<<8) + (x<<7) - (x<<2) + (y<<8) + (y<<6) + (y<<4) + (y<<3)
                else:
                    L = (x<<8) + (x<<7) - (x<<4) + (y<<8) + (y<<7) - (y<<5) + (y<<2)

    return L # >> 9
#====================================================================
```

Keep in mind that only one L assignment gets executed per call. Yes, this is sort of like a lookup table embedded in code.

And this is the code generator that wrote it.

```
import numpy as np
from scipy import stats

#====================================================================
def Main():

    HandleDepth( 2, 6 )
    HandleDepth( 2, 7 )
    HandleDepth( 3, 7 )
    HandleDepth( 3, 8 )
    HandleDepth( 3, 9 )
    HandleDepth( 4, 9 )

    print "#===================================================================="

#====================================================================
def HandleDepth( ArgDepth, ArgHeadroom ):

#---- Build the Tree

    theTree = []

    theLevelIndex = np.zeros( ArgDepth + 1, "i" )

    AddTreeNode( theTree, "RT", 0, ArgDepth, theLevelIndex )

#---- Print Header

    print "#===================================================================="
    print "def DanBeast_%d_%d( x, y ):" % ( ArgDepth, ArgHeadroom )
    print ""

#---- Generate Code

    for theBranch in theTree:

        theType    = theBranch[0]
        theLevel   = theBranch[1]
        theOrdinal = theBranch[2]

        theHeight = 1 << theLevel

        theRecipHeight = 1.0 / float( theHeight )

        if theType == "IF":
            theX = BuildAsInteger( "x", theOrdinal )
            theY = BuildAsInteger( "y", theHeight )

            theClause = "if " + theY + " < " + theX + ":"
            print ( 4 + 3 * theLevel ) * " ", theClause
        elif theType == "EL":
            print ( 4 + 3 * theLevel ) * " ", "else:"

        if theLevel == ArgDepth:
            theLowSlope  = ( theOrdinal - 1.0 ) * theRecipHeight
            theHighSlope = float( theOrdinal ) * theRecipHeight

            ia, ib = SolveRange( theLowSlope, theHighSlope, ArgHeadroom )

            theX = BuildAsInteger( "x", ia )
            theY = BuildAsInteger( "y", ib )

            if theY[0:3] == " - ":
                theCombined = theX + theY
            else:
                theCombined = theX + " + " + theY

            print ( 7 + 3 * theLevel ) * " ", "L = " + theCombined

#---- Print Footer

    print ""
    print "    return L # >> %d" % ArgHeadroom
    print ""

    return

#====================================================================
def AddTreeNode( ArgTree, ArgType, ArgLevel, ArgDepth, ArgLevelIndex ):

#---- Print Results

    ArgLevelIndex[ArgLevel] += 1

#    print ArgLevel * " ", ArgType, ( 1 << ArgLevel ), ArgLevelIndex[ArgLevel]

#---- Add to Tree

    ArgTree.append( [ ArgType, ArgLevel, ArgLevelIndex[ArgLevel] ] )

#---- Check for Terminal Case

    if ArgLevel == ArgDepth:
        return

#---- Add more branches

    AddTreeNode( ArgTree, "IF", ArgLevel + 1, ArgDepth, ArgLevelIndex )
    AddTreeNode( ArgTree, "EL", ArgLevel + 1, ArgDepth, ArgLevelIndex )

#  0  1  1  -1
#  1  2  1   0    IF0  2 1
#  2  4  1   1    IF1  4 1
#  3  8  1   2    IF2  8 1    0   --> 1/8
#  4  8  2   2    EL2  8 2    1/8 --> 1/4
#  5  4  2   1    EL1  4 2
#  6  8  3   5    IF2  8 3    1/4 --> 3/8
#  7  8  4   5    EL2  8 4    3/8 --> 1/2
#  8  2  2   0    EL0  2 2
#  9  4  3   8    IF1  4 3
# 10  8  5   9    IF2  8 5    1/2 --> 5/8
# 11  8  6   9    EL2  8 6    5/8 --> 3/4
# 12  4  4   8    EL1  4 4
# 13  8  7  12    IF2  8 7    3/4 --> 7/8
# 14  8  8  12    EL2  8 8    7/8 --> 1

#====================================================================
def BuildAsInteger( ArgRef, ArgValue ):

#---- Prepare for Build

    theClause = ""

    b = 16
    v = 1 << b

    r = ArgValue

    c = []

#---- Build Shifty String

    while v > 0:
        ar = abs( r )
        nv = v >> 1

        dt = v - ar    # Top Distance
        db = ar - nv   # Bottom Distance

        if db >= 0:
            if dt < db:
                if r > 0:
                    c.append( [b, v] )
                    r -= v
                    theClause += " + " + ShiftyNumberFormat( ArgRef, b )
                else:
                    theClause += " - " + ShiftyNumberFormat( ArgRef, b )
                    c.append( [b, -v] )
                    r += v

        v = nv
        b -= 1

#---- Exit

    if theClause[0:3] == " + ":
        return theClause[3:]

    return theClause

#====================================================================
def ShiftyNumberFormat( ArgRef, ArgShift ):

    if ArgShift == 0:
        return ArgRef

    if ArgShift == 1:
        return "(" + ArgRef + "+" + ArgRef + ")"

    return "(" + ArgRef + "<<" + str( ArgShift ) + ")"

#====================================================================
def SolveRange( ArgLowSlope, ArgHighSlope, ArgHeadroom ):

#---- Get the Low End Point

    theLowAngle = np.arctan( ArgLowSlope )
    theLowX = np.cos( theLowAngle )
    theLowY = np.sin( theLowAngle )

#---- Get the High End Point

    theHighAngle = np.arctan( ArgHighSlope )
    theHighX = np.cos( theHighAngle )
    theHighY = np.sin( theHighAngle )

#---- Generate a Set of Points on the Circumference

    x = np.zeros( 101 )
    y = np.zeros( 101 )

    theSlice = ( theHighAngle - theLowAngle ) * 0.01
    theAngle = theLowAngle

    for s in range( 101 ):
        x[s] = np.cos( theAngle )
        y[s] = np.sin( theAngle )
        theAngle += theSlice

#---- Find the Best Fit Line
#
#  x = v0 y + v1
#  (1/v1) x - (v0/v1) y = 1

    v = stats.linregress( y, x )

    a =  1.0 / v[1]
    b = -v[0] * a

#---- Get the Integer Coefficients

    p = 1 << ArgHeadroom

    ia = int( p * a + 0.5 )
    ib = int( p * b + 0.5 )

#---- Return Results

    return ia, ib

#====================================================================
Main()
```

If you aren't familiar with code generators, learn this one, then put on a "Software Engineer" hat, and do a little dance. Enjoy.

This code is as it is.

This should keep everyone interested busy for a while. I have to turn my attention to another project.

Here is what the results look like using the same hack linear regression best fit with floating point. Still not too shabby.
```
 0 0.00 (10000.00,    0.00) (10000,    0) 10000.00  9996.79 0.99968
 1 0.02 ( 9998.48,  174.52) ( 9998,  175)  9999.53 10000.25 1.00007
 2 0.03 ( 9993.91,  348.99) ( 9994,  349) 10000.09 10001.68 1.00016
 3 0.05 ( 9986.30,  523.36) ( 9986,  523)  9999.69  9999.11 0.99994
 4 0.07 ( 9975.64,  697.56) ( 9976,  698) 10000.39  9999.25 0.99989
 5 0.09 ( 9961.95,  871.56) ( 9962,  872) 10000.09 10001.54 1.00014
 6 0.10 ( 9945.22, 1045.28) ( 9945, 1045)  9999.75 10000.74 1.00010
 7 0.12 ( 9925.46, 1218.69) ( 9925, 1219)  9999.58  9997.05 0.99975
 8 0.14 ( 9902.68, 1391.73) ( 9903, 1392) 10000.35 10000.78 1.00004
 9 0.16 ( 9876.88, 1564.34) ( 9877, 1564) 10000.06 10001.62 1.00016
10 0.17 ( 9848.08, 1736.48) ( 9848, 1736)  9999.84  9999.49 0.99997
```

The extra precision in the float means the precision limitation in the integer case is the allowed headroom of 9. A "Dan_Beast_4_10", or eleven, might be better, but it may also cost an extra shift and add, or two.

Here is the generated code. It is a rare occasion when C code is more readable than Python. Of course, the same integer approach could be used in C as well, but having a floating point version could be really useful. And it's nice to see the actual numbers.

A typical statement in C for the distance would be:

    d = sqrt( x*x + y*y );

There are your two multiplies and a sum already used up. Now look at the code. Each evaluation takes just two multiplies and a sum. Prior to that, four "if" statements are evaluated, each of which may have some multiplies (but by powers of 2!).
```
//============================================================================
double DanBeast_4( double x, double y )
{
        double L;

        if( 2 * y < x )
          {
            if( 4 * y < x )
              {
                if( 8 * y < x )
                  {
                    if( 16 * y < x )      L = 0.999678613703 * x + 0.0312074396995 * y;
                    else                  L = 0.995805522911 * x + 0.0932603458768 * y;
                  }
                else
                  {
                    if( 16 * y < 3 * x )  L = 0.988192203544 * x + 0.154247985106 * y;
                    else                  L = 0.977092013909 * x + 0.213526336117 * y;
                  }
              }
            else
              {
                if( 8 * y < 3 * x )
                  {
                    if( 16 * y < 5 * x )  L = 0.962856265021 * x + 0.270541822731 * y;
                    else                  L = 0.945905260028 * x + 0.324851897977 * y;
                  }
                else
                  {
                    if( 16 * y < 7 * x )  L = 0.9266972621   * x + 0.376133998508 * y;
                    else                  L = 0.905699333381 * x + 0.424183797255 * y;
                  }
              }
          }
        else
          {
            if( 4 * y < 3 * x )
              {
                if( 8 * y < 5 * x )
                  {
                    if( 16 * y < 9 * x )  L = 0.883362895379 * x + 0.468905065322 * y;
                    else                  L = 0.860105506764 * x + 0.510294074311 * y;
                  }
                else
                  {
                    if( 16 * y < 11 * x ) L = 0.836299114665 * x + 0.548421408954 * y;
                    else                  L = 0.812264134793 * x + 0.583413547218 * y;
                  }
              }
            else
              {
                if( 8 * y < 7 * x )
                  {
                    if( 16 * y < 13 * x ) L = 0.788268215169 * x + 0.615435858151 * y;
                    else                  L = 0.764528383207 * x + 0.644677969247 * y;
                  }
                else
                  {
                    if( 16 * y < 15 * x ) L = 0.741215358784 * x + 0.671341883117 * y;
                    else                  L = 0.718459026658 * x + 0.695632819967 * y;
                  }
              }
          }

        return L;
}
//============================================================================
```

Yes, I need an efficient distance calculation in my next project.....
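Stepping back to the integer flavor for a moment: the h = 8 coefficients tabulated earlier for the third arc (L = 218 x + 135 y, for y/x between 0.50 and 0.75) can be spot-checked in a few lines of Python. This sketch is my own addition; the function name and the 10000-unit sweep are made up for the check:

```python
import math

def danbeast_arc3(x, y):
    # The h = 8 integer comparison level function for y/x in [0.50, 0.75]:
    # L = 218 x + 135 y, carried at a scale of 2^8.
    L = (x << 8) - (x << 5) - (x << 2) - x - x \
      + (y << 7) + (y << 3) - y
    return L >> 8

# Compare the estimate against the true magnitude along the arc.
worst = 0.0
for deg in range(27, 37):    # atan(0.50) ~ 26.6 deg, atan(0.75) ~ 36.9 deg
    a = math.radians(deg)
    x, y = int(10000 * math.cos(a)), int(10000 * math.sin(a))
    ratio = danbeast_arc3(x, y) / math.hypot(x, y)
    worst = max(worst, abs(ratio - 1.0))

print(worst)   # stays within a fraction of a percent
```

The worst ratio stays within a fraction of a percent, consistent with the 0.998 to 1.0024 spread in the estimate table above.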
answered Jan 3, 2020 at 16:54 by Cedron Dawg (edited Jan 7, 2020 at 0:57)

Comments:

The two point solutions are the family of "alpha max beta min" magnitude estimators (as Matt had named for us in the comments) en.wikipedia.org/wiki/Alpha_max_plus_beta_min_algorithm and see this interesting post for additional details on how that person extended it to three (and higher) dimensions in case your approach is better: math.stackexchange.com/questions/1282435/… – Dan Boschen, Jan 3, 2020 at 17:56

@DanBoschen Thanks. I'll have to look at this later. Won't be back till late tonight. I don't think they employ the "no multiplications allowed" rule though. The same technique could definitely be extended to any N-ball sections. I am thinking that DanBeastSixteen will be more than adequate for your purposes. I'm not going to do the coefficients manually though, so patience please. The extra depth takes negligible extra processing time! I am also going to incorporate this into my "guaranteed correct" solution and improve its performance as well. What did you think of my new rc plan? – Cedron Dawg, Jan 3, 2020 at 18:09

That would ultimately require multipliers or a look-up table, so doesn't really fit the bill, correct? This is what I meant in my opening comment about being aware of these approaches, but they require mults and have finite error. Adding more coefficients eliminates the finite error concern, as I was good with any error e as long as I could dictate e --- you just add more points to get below e, all good there, but then you need multipliers and look-up tables to implement. So it may not be a good answer for here but may be a very good answer, and perhaps a better answer, for the math site link I sent.
– Dan Boschen, Jan 3, 2020 at 18:13

If you have a generic solution that anyone can extend to any number of coefficients, then I think it would be a fantastic answer to that question at the math site. – Dan Boschen, Jan 3, 2020 at 18:16

Did I interpret this correctly: you have a 4 point option to the 2 point alpha max beta min, but it would require either multipliers or look-up tables, or is this indeed consistent with the goals here? I don't think you can get below any e with just shifts and adds with this approach (finite error), or am I not seeing it yet? – Dan Boschen, Jan 4, 2020 at 3:02

Foreword: "There are three kinds of computations: the one which requires exact arithmetic, and the other which does not." I here recycle an old pun: There are three kinds of mathematicians: those who can count, and those who cannot.

This is a really edgy question. This contribution is modest, in that it tends to gather bits of options rather than a full-fledged answer. I feel this can be appropriate for this question, which mixes:

analog operations (adds, square roots, and powers),
analog approximates vs discrete number formats toward $n$-ary or ($n=2$) binary,
discrete operation costs.

Indeed, for the abstract algorithmic operation counting down to the (hardware-bound) metal, efficiency and optimization can show different facets depending on language, compilation, resource, hardware etc. Whether input/output is signed/integer/complex/float matters.

(1) Analog operations:

Calculus tricks can limit the classical computational burden. Indeed, most engineering solutions are approximations of a non-directly solvable problem.
Logarithms and logarithmic (slide) rules or log tables were used (even invented) to save time on computing products. The Fourier transform converted a tedious convolution into a simpler product. If there is a hierarchy on basic operations, addition is often thought simpler than products. So $a^2-b^2$ (requiring two multiplies and one add) can be less efficient than $(a+b)(a-b)$ (requiring two adds and one multiply).

Similarly, the multiplication of two complex numbers, $a_1 + i a_2$ and $b_1 + i b_2$, yields the complex product:

$$ (a_1 + i a_2)(b_1 + i b_2) = (a_1 b_1 - a_2 b_2) + i(a_1 b_2 + a_2 b_1)$$

requiring four multiplications and two additions. But with $k_1 = a_1(b_1 + b_2)$, $k_2 = b_2(a_1 + a_2)$ and $k_3 = b_1(a_2 - a_1)$ you get $\mathrm{Re}(a_1 + i a_2)(b_1 + i b_2) = k_1-k_2$ and $\mathrm{Im}(a_1 + i a_2)(b_1 + i b_2) = k_1+k_3$. Therefore, three multiplies and five adds/subtracts.

[OH ITS GETTING LATE HERE, BBL8R]

Discrete costs

Approximates

answered Dec 29, 2019 at 23:34 by Laurent Duval

Comment:

See my update of proposed scoring --- will leave it up for debate for a few days in case you had thoughts on that. – Dan Boschen, Jan 2, 2020 at 3:34

This answer (4th!) is a summary repeat of the first answer, with the unnecessary code and explanations removed. It is a revision, so the horse's name is "Cedron Revised" in the race.

Best Approach to Rank Complex Magnitude Comparision Problem

For me, this is the winner, and the one I will be using. It may not be absolute fastest by testing, but it is in the same neighborhood as the fastest with a much smaller footprint and no internal function calls.

The determination can be reduced to comparing geometric means.
$$ \begin{aligned} D &= (x_1^2 + y_1^2) - (x_2^2 + y_2^2) \\ &= (x_1^2 - x_2^2) + ( y_1^2 - y_2^2) \\ &= (x_1 - x_2)(x_1 + x_2) + (y_1 - y_2)(y_1 + y_2) \\ &= (x_1 - x_2)(x_1 + x_2) - (y_2 - y_1)(y_1 + y_2) \\ &= D_x S_x - D_y S_y \\ &= \left( \sqrt{D_x S_x} - \sqrt{D_y S_y} \right) \left( \sqrt{D_x S_x} + \sqrt{D_y S_y} \right) \end{aligned} $$

Where $ D_x, S_x, D_y, S_y \ge 0 $.

The second factor will always be positive. So the sign of the difference in geometric means will also be the sign of the determiner and give the correct answer when not zero.

The slick trick employed can be stated as "If two positive numbers are within a factor of two of each other, their geometric mean will be bounded above by their arithmetic mean and below by 16/17 of the arithmetic mean."

The upper bound is trivial to prove:

$$ \begin{aligned} \sqrt{AB} &\le \frac{A+B}{2} \\ 2\sqrt{AB} &\le A+B \\ 4 AB &\le A^2 + 2AB + B^2 \\ 0 &\le A^2 - 2AB + B^2 \\ 0 &\le ( A - B )^2 \end{aligned} $$

Which is true for any A and B.

The lower bound, almost as easy:

$$ \begin{aligned} B \ge A &\ge \frac{B}{2} \\ AB &\ge \frac{B^2}{2} \\ \sqrt{AB} &\ge \frac{B}{\sqrt{2}} \\ &= \frac{\frac{B}{\sqrt{2}}}{(A+B)/2} \cdot \frac{A+B}{2} \\ &= \frac{\sqrt{2}}{1+A/B} \cdot \frac{A+B}{2} \\ &\ge \frac{\sqrt{2}}{1+1/2} \cdot \frac{A+B}{2} \\ &= \frac{2}{3} \sqrt{2} \cdot \frac{A+B}{2} \\ &\approx 0.9428 \cdot \frac{A+B}{2} \\ &> \frac{16}{17} \cdot \frac{A+B}{2} \\ &\approx 0.9412 \cdot \frac{A+B}{2} \end{aligned} $$

"Squaring" the factors means bringing them into a factor of two of each other. This is done by repeatedly multiplying the smaller one by two until it exceeds or equals the other one. Both number sets have to be multiplied in tandem to stay relative.

The second while loop will only be effective for a very, very small set of input values. Generally, it counts as one "if" statement.
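The 16/17 bound is also easy to check numerically. Here is a quick sweep of my own (the choice B = 1000 and the 500-step grid are arbitrary) over the factor-of-two range:

```python
import math

# Numeric check of the claim: if B/2 <= A <= B, then
#   (16/17) * (A+B)/2  <  sqrt(A*B)  <=  (A+B)/2
B = 1000.0
ok = True
for i in range(501):
    A = B / 2 + i * (B / 2) / 500            # sweep A over [B/2, B]
    gm = math.sqrt(A * B)                    # geometric mean
    am = (A + B) / 2                         # arithmetic mean
    ok = ok and (16.0 / 17.0) * am < gm <= am

print(ok)   # True
```

The tightest point is at A = B/2, where the ratio is 2√2/3 ≈ 0.9428, just above 16/17 ≈ 0.9412, matching the derivation.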
The process goes as follows:

1. Move points to first octant
2. Do the easy comparisons
3. Take the sums and differences
4. "Square" the factors
5. Do proxy Geometric Mean comparison
6. Do multiplication comparison

Here is the code in Python. Readily coded in any language because of its simplicity.

```
#====================================================================
def CedronRevised( I1, Q1, I2, Q2 ):

#---- Ensure the Points are in the First Quadrant WLOG

    x1 = abs( I1 )
    y1 = abs( Q1 )

    x2 = abs( I2 )
    y2 = abs( Q2 )

#---- Ensure they are in the Lower Half (First Octant) WLOG

    if y1 > x1:
        x1, y1 = y1, x1

    if y2 > x2:
        x2, y2 = y2, x2

#---- Primary Determination with X Absolute Difference

    if x1 > x2:

        if x1 + y1 >= x2 + y2:
            return 2, 0

        thePresumedResult = 2
        dx = x1 - x2

    elif x1 < x2:

        if x1 + y1 <= x2 + y2:
            return -2, 0

        thePresumedResult = -2
        dx = x2 - x1

    else:

        if y1 > y2:
            return 2, 1
        elif y1 < y2:
            return -2, 1
        else:
            return 0, 1

#---- Sums and Y Absolute Difference

    sx = x1 + x2
    sy = y1 + y2

    dy = abs( y1 - y2 )

#---- Bring Factors into 1/2 to 1 Ratio Range

    while dx < sx:
        dx += dx

        if dy <= sy:
            dy += dy
        else:
            sy += sy

    while dy < sy:
        dy += dy

        if dx <= sx:
            dx += dx
        else:
            sx += sx

#---- Use Twice of Arithmetic Mean as Proxy for Geometric Mean

    cx = sx + dx   # >= 2 sqrt( sx * dx ) > (16/17) cx
    cy = sy + dy

    cx16 = cx << 4
    cy16 = cy << 4

    if cx16 > cy16 + cy:
        return thePresumedResult, 2

    if cy16 > cx16 + cx:
        return -thePresumedResult, 2

#---- X Multiplication

    px = 0

    while sx > 0:
        if sx & 1:
            px += dx

        dx += dx
        sx >>= 1

#---- Y Multiplication

    py = 0

    while sy > 0:
        if sy & 1:
            py += dy

        dy += dy
        sy >>= 1

#---- Return Results

    if px > py:
        return thePresumedResult, 3

    if px < py:
        return -thePresumedResult, 3

    return 0, 3
#====================================================================
```

This is my entry for the "doesn't necessarily have to be 100% correct" category. If requirements are tighter, a deeper and more precise DanBeast could be used.
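The X and Y multiplication loops in CedronRevised are plain shift-and-add (binary long multiplication): accumulate the multiplicand whenever the low bit of the multiplier is set. Isolated, with a hypothetical name of my own, the idiom looks like this:

```python
# Shift-and-add multiplication, the same loop shape used in CedronRevised:
# p accumulates d for every set bit of s; only shifts, adds, and an "if".
def shift_add_mul(s, d):
    p = 0
    while s > 0:
        if s & 1:
            p += d
        d += d        # same as d <<= 1
        s >>= 1
    return p

print(shift_add_mul(23, 41) == 23 * 41)   # True
```

This is why the final fallback comparison in CedronRevised stays within the "no multiplications" rule: both products are built from shifts and adds alone.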
```
#====================================================================
def DanBeast_3_9( I1, Q1, I2, Q2 ):

#---- Ensure the Points are in the First Quadrant WLOG

    x1 = abs( I1 )
    y1 = abs( Q1 )

    x2 = abs( I2 )
    y2 = abs( Q2 )

#---- Ensure they are in the Lower Half (First Octant) WLOG

    if y1 > x1:
        x1, y1 = y1, x1

    if y2 > x2:
        x2, y2 = y2, x2

#---- Primary Determination with Quick Exit

    if x1 > x2:
        if x1 + y1 >= x2 + y2:
            return 2, 0
    elif x1 < x2:
        if x1 + y1 <= x2 + y2:
            return -2, 0
    else:
        if y1 > y2:
            return 2, 0
        elif y1 < y2:
            return -2, 0
        else:
            return 0, 0

#---- Estimate First Multiplied Magnitude

    if (y1+y1) < x1:
        if (y1<<2) < x1:
            if (y1<<3) < x1:
                L1 = (x1<<9) - x1 + (y1<<5)
            else:
                L1 = (x1<<9) - (x1<<3) + (y1<<6) + (y1<<5) - (y1+y1)
        else:
            if (y1<<3) < (x1+x1) + x1:
                L1 = (x1<<9) - (x1<<4) - (x1<<3) + x1 + (y1<<7) + (y1<<4) + (y1<<3)
            else:
                L1 = (x1<<9) - (x1<<5) - (x1<<3) - (x1+x1) + (y1<<8) - (y1<<6) + (y1<<4) - (y1+y1) - y1
    else:
        if (y1<<2) < (x1+x1) + x1:
            if (y1<<3) < (x1<<2) + x1:
                L1 = (x1<<9) - (x1<<6) - x1 + (y1<<8) - (y1<<2) - y1
            else:
                L1 = (x1<<9) - (x1<<6) - (x1<<5) + (x1<<2) + (x1+x1) + (y1<<8) + (y1<<5) + (y1+y1)
        else:
            if (y1<<3) < (x1<<3) - x1:
                L1 = (x1<<9) - (x1<<7) + (x1<<4) - (x1+x1) + (y1<<8) + (y1<<6) + (y1+y1)
            else:
                L1 = (x1<<8) + (x1<<7) - (x1<<3) - (x1+x1) + (y1<<8) + (y1<<6) + (y1<<5) - (y1+y1)

#---- Estimate Second Multiplied Magnitude

    if (y2+y2) < x2:
        if (y2<<2) < x2:
            if (y2<<3) < x2:
                L2 = (x2<<9) - x2 + (y2<<5)
            else:
                L2 = (x2<<9) - (x2<<3) + (y2<<6) + (y2<<5) - (y2+y2)
        else:
            if (y2<<3) < (x2+x2) + x2:
                L2 = (x2<<9) - (x2<<4) - (x2<<3) + x2 + (y2<<7) + (y2<<4) + (y2<<3)
            else:
                L2 = (x2<<9) - (x2<<5) - (x2<<3) - (x2+x2) + (y2<<8) - (y2<<6) + (y2<<4) - (y2+y2) - y2
    else:
        if (y2<<2) < (x2+x2) + x2:
            if (y2<<3) < (x2<<2) + x2:
                L2 = (x2<<9) - (x2<<6) - x2 + (y2<<8) - (y2<<2) - y2
            else:
                L2 = (x2<<9) - (x2<<6) - (x2<<5) + (x2<<2) + (x2+x2) + (y2<<8) + (y2<<5) + (y2+y2)
        else:
            if (y2<<3) < (x2<<3) - x2:
                L2 = (x2<<9) - (x2<<7) + (x2<<4) - (x2+x2) + (y2<<8) + (y2<<6) + (y2+y2)
            else:
                L2 = (x2<<8) + (x2<<7) - (x2<<3) - (x2+x2) + (y2<<8) + (y2<<6) + (y2<<5) - (y2+y2)

#---- Return Results

    if L1 < L2:
        return -1, 2

    return 1, 2
#====================================================================
```

It's a beast, but it runs fast. This one gets about 1/3 as many wrong as the original DanBeast4. Both do better than Olli's Cordic approach.

Don't trust these timings too closely. The scoring is accurate.

```
Algorithm         Correct    Time     Score   Penalties  Eggs
---------------   -------   ------  -------   ---------  ----
Empty Economy       49.86   2.6425   472849     2378650     0
Empty Deluxe         0.05   2.7039     1944   474168000   243
Starter Economy     89.75   2.8109   851367      486060     0
Starter Deluxe      90.68   2.8986  1663118      441920   151
Walk On One         93.58   2.8282   887619      304800     0
Walk On Two         93.58   2.7931   887619      304800     0
Dan Beast Four      99.85   2.9718  1750076        7130   151
Dan Beast 3_9       99.95   2.9996  1750996        2530   151
Cedron Unrolled    100.00   3.0909  1898616           0   243
Cedron Revised     100.00   3.1709  1898616           0   243
Cedron Deluxe      100.00   3.1734  1898616           0   243
Olli Revised        99.50   3.1822  1728065       23880     0
Olli Original       99.50   3.2420  1728065       23880     0
Cedron Multiply    100.00   3.2148  1898616           0   243
Matt Multiply      100.00   3.3242  1898616           0   243
```

We had a couple of walk ons:

```
#====================================================================
def WalkOnOne( I1, Q1, I2, Q2 ):

    x1 = abs( I1 )
    y1 = abs( Q1 )

    x2 = abs( I2 )
    y2 = abs( Q2 )

    s1 = x1 + y1
    s2 = x2 + y2

    D = s1 - s2

    if D < 0:
        return -1, 0

    return 1, 0

#====================================================================
def WalkOnTwo( I1, Q1, I2, Q2 ):

    s1 = abs( I1 ) + abs( Q1 )
    s2 = abs( I2 ) + abs( Q2 )

    if s1 < s2:
        return -1, 0

    return 1, 0
#====================================================================
```

This little section pertains more to the DanBeast solution, but since that answer has reached capacity, I am adding it here.

Here are the results for floating point DanBeasts done in C on a sweep of angles from 0 to 45 degrees in increments of 0.1.
Using float values is as if the H parameter is 60-something. In other words, any error in these charts is due to the best fit of the line to the curve, not the best fit of integer coefficients for the line.

```
D                    Depth, first specification parameter
Min,Max,Ave,Std Dev  Estimate/Actual results
MinB, MaxB           Log_2(1-Min), Log_2(Max-1)
H                    Headroom, second specification parameter

D     Min         Max         Ave        Std Dev    MinB   MaxB    H
-  ----------  ----------  ----------  ----------   -----  -----  --
0  0.94683054  1.02671481  1.00040437  0.02346769   -4.2   -5.2    5
1  0.98225695  1.00919519  1.00011525  0.00668514   -5.8   -6.8    6
2  0.99505899  1.00255518  1.00002925  0.00170539   -7.7   -8.6    8
3  0.99872488  1.00065730  1.00000719  0.00042868   -9.6  -10.6   10
4  0.99967861  1.00016558  1.00000181  0.00010727  -11.6  -12.6   12
5  0.99991949  1.00004147  1.00000044  0.00002685  -13.6  -14.6   14
```

Every step up in D means a doubling of the if-tree size. The skew in the Ave value is a reflection of not using the best best fit metric. These numbers are for a linear regression fit of x as a function of y.

The H column gives the recommended headroom parameter for each D level. It increases by about 2 bits per level. Using values less than this means the integer coefficient error dominates the best fit of the line error.

Here is another run of the race, with new horses added.
```
Algorithm         Correct    Time     Score   Penalties  Eggs
---------------   -------   ------  -------   ---------  ----
Empty Economy       49.86   3.0841   472849     2378650     0
Empty Deluxe         0.05   3.0433     1944   474168000   243
Starter Economy     89.75   3.1843   851367      486060     0
Starter Deluxe      93.88   3.1376  1693416      290430   151
Walk On Cheat      100.00   2.9710  1898616           0   243
Walk On One         93.58   3.1043   887619      304800     0
Walk On Two         93.58   3.0829   887619      304800     0
Walk On Three       97.90   3.2090   928619       99800     0
Walk On Four        97.96   3.4982   929267       96560     0
Olli Revised        99.50   3.3451  1728065       23880     0
Olli Original       99.50   3.4025  1728065       23880     0
Dan Beast Four      99.85   3.2680  1750076        7130   151
Dan Beast 3_9       99.95   3.3335  1750996        2530   151
Dan Beast 3_10      99.97   3.3476  1751206        1480   151
Dan Beast 3_11      99.97   3.2893  1751220        1410   151
Cedron Unrolled    100.00   3.2740  1898616           0   243
Cedron Revised     100.00   3.2747  1898616           0   243
Cedron Deluxe      100.00   3.3309  1898616           0   243
Cedron Multiply    100.00   3.3494  1898616           0   243
Matt Multiply      100.00   3.4944  1898616           0   243
```

The time values are rough and noisy and should not be considered conclusive.

The "Walk On Cheat" is the two multiplication solution using Python multiplication. It is no surprise that it is significantly faster.

"Walk On Three" and "Walk On Four" are the max_min and approximately the Old Engineer's equations, respectively. Relevant snippets:

```
#====================================================================

    s1 = x1 + x1 + y1
    s2 = x2 + x2 + y2

    if s1 < s2:
        return -1, 0

    return 1, 0

#====================================================================

    s1 = (x1 << 7) - (x1 << 2) - x1 \
       + (y1 << 6) - (y1 << 4) + y1 + y1 + y1

    s2 = (x2 << 7) - (x2 << 2) - x2 \
       + (y2 << 6) - (y2 << 4) + y2 + y2 + y2

    if s1 < s2:
        return -1, 0

    return 1, 0

# 123 / 128 ~=~ 0.961    51 / 128 ~=~ 0.398
#====================================================================
```

The "Starter Deluxe" has been tweaked slightly to return the opposite of the Presumed Result after a Primary Determination.
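For reference, the magnitude estimator underlying the "Walk On Four" snippet is the alpha-max-beta-min form with alpha = 123/128 and beta = 51/128. Here is a quick sweep of its raw accuracy on the unit circle (my own check, not part of the race harness):

```python
import math

# "Walk On Four" estimator: |z| ~ (123*max + 51*min) / 128,
# with max = larger of |x|, |y| and min = the smaller; here x >= y by construction.
worst = 0.0
for deg in range(0, 46):
    a = math.radians(deg)
    x, y = math.cos(a), math.sin(a)          # first-octant point on the unit circle
    est = (123.0 * x + 51.0 * y) / 128.0
    worst = max(worst, abs(est - 1.0))

print(worst)   # worst-case error is about 4%
```

That roughly 4% magnitude error is consistent with the estimator scoring 97.96% correct in the race: only pairs whose true magnitudes are closer than the estimator error get misranked.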
The DanBeast population has been increased for comparison purposes. I think the CORDIC solution can't compete as it is, and I don't see much hope in salvaging it. Olli could surprise me, though.

The testing code is now too large to post. Anybody wanting a copy, or the two code generators for DanBeast logic (Python and C), can email me at cedron ta exede tod net. I believe it would make excellent instructional material for a programming course.

answered Jan 7, 2020 at 15:39 by Cedron Dawg (edited Jan 8, 2020 at 18:36)

Comments:

Nice! If this is the best answer can you delete your other three answers (or put any important items from those at the bottom of this one)? That will really clean things up on this post – Dan Boschen, Jan 7, 2020 at 16:36

Thanks, but sorry, I can't. They would exceed space limitations. The first is still valid since it is the origination. The second demonstrates the properties of the Primary Determination logic. The third is the DanBeast solution, which is quite different in nature, but still viable. This one is my submitted one for your consideration. I think deleting any of them would be a loss. If this one wins (and I think it will on just about any criteria), it will be listed first in the future. – Cedron Dawg, Jan 7, 2020 at 16:42

Got it. Well, good there is a clear one to try, looking forward to running these! – Dan Boschen, Jan 7, 2020 at 16:49

@DanBoschen I included my recommended DanBeast version. It beats Olli's in time and accuracy. The walk-ons are an implementation of a concerned citizen's simple metric |a| + |b|.
$\endgroup$ Cedron Dawg – Cedron Dawg 2020-01-07 17:21:59 +00:00 Commented Jan 7, 2020 at 17:21 $\begingroup$ So just wondering, should I upvote this if I like your approaches? And - sorry I'm a bit overwhelmed - how does it actually compare (execution time) to doing it "the stupid way" and calculating the magnitude and then comparing it? I'm trying to find two peaks in a FFT on a microcontroller and don't really need the real-valued magnitude of all the other points. $\endgroup$ – Arsenal 2020-03-18 15:18:45 +00:00 Commented Mar 18, 2020 at 15:18 | Show 1 more comment Start asking to get answers Find the answer to your question by asking. Ask question Explore related questions algorithms complex numerical-algorithms See similar questions with these tags. The Overflow Blog The history and future of software development (part 1) Getting Backstage in front of a shifting dev experience Featured on Meta Spevacus has joined us as a Community Manager Introducing a new proactive anti-spam measure Linked Equal power crossfade 8 Best Approach to Rank Complex Magnitude Comparision Problem 1 Quadrature Mirror Filter symmetry? How to understand this notation Related Algorithms for deformable image registration How to perform an FFT on a signal with a sampling rate of 44100? 1 DFT algorithm for FPGA without phase 0 How to model state space for complex valued system correctly in SIMULINK (MATLAB)? 1 Fractional powers of complex numbers (DSPrelated computation) 8 Best Approach to Rank Complex Magnitude Comparision Problem 5 Algorithm to Count Zeros Outside Unit Circle for FIR Filter Finding SNR for a portion complex signal 0 FM generation with complex numbers 5 Efficient Log2 and dB from Floating Point and Fixed Point Representation Hot Network Questions Is this commentary on the Greek of Mark 1:19-20 accurate? How to home-make rubber feet stoppers for table legs? Why, really, do some reject infinite regresses? 
12286
https://math.stackexchange.com/questions/2268435/alternative-definition-of-concave-functions
real analysis - alternative definition of concave functions - Mathematics Stack Exchange
alternative definition of concave functions

Asked May 6, 2017 at 12:29 (edited May 6, 2017 at 16:10) by Amin Roshani. Viewed 249 times.

Let $g : \mathbb{R} \to \mathbb{R}^+$ be a measurable function. Suppose $\{x : g(x) > 0\} = (a, b)$. If
$$g(x) - g(x - \alpha) \ge g(y) - g(y - \alpha), \qquad x < y, \ \alpha \ge 0,$$
can I say $g(\cdot)$ is concave? How can I show this?

Tags: real-analysis

Comments:

Vim (May 6, 2017 at 12:40): Also, since you mentioned $g$ is a function into $\mathbb{R}^+$, which is conventionally interpreted as $(0, \infty)$, isn't $g^{-1}(0, \infty)$ just the entire real line, and $a = -\infty$, $b = \infty$?

Amin Roshani (May 6, 2017 at 12:41): Sorry, $\alpha$ is a positive constant.

Anthony Carapetis (May 6, 2017 at 13:10): You can pretty easily get $g((1-t)x + ty) \le (1-t)g(x) + t\,g(y)$ for rational $t$. Extending this to all real $t \in [0,1]$ seems difficult without some additional regularity hypothesis.

2 Answers

Answer 1 (score 5):

Choosing $\alpha = y - x$ shows that your condition implies Jensen's inequality
$$g\left(\frac{x+y}{2}\right) \le \frac{g(x) + g(y)}{2}$$
for all $x, y$; this is known as Jensen convexity. Iterating this we get $t$-convexity
$$g((1-t)x + ty) \le (1-t)g(x) + t\,g(y)$$
for all rational $t \in [0,1]$. (Maybe just convince yourself that this is true for dyadic rationals, which is a little easier.)
If $g$ is additionally continuous, then the set
$$T = \{\, t \in [0,1] \mid (\forall x, y)\; g((1-t)x + ty) \le (1-t)g(x) + t\,g(y) \,\}$$
can be written as an intersection of preimages of closed sets by continuous functions, so it is closed and thus must be all of $[0,1]$.

It turns out measurability and Jensen-convexity imply continuity and thus full convexity: see Theorem II of Blumberg, "On Convex Functions".

answered May 6, 2017 at 13:55 by Anthony Carapetis

Answer 2 (score 0):

So long as one is not dealing with closed intervals (and it appears one is not), I think there can be issues for rational convexity (rather than dyadic rational convexity). If the condition implies both convexity and concavity, things are completely different; or at least one gets continuity at the endpoints even for a measurable concave function, and it applies to the open interval. I am not sure what your condition is; it might be a little stronger, such as having increasing increments or some such. In particular, if $F : [0,1] \to [0,1]$ is strictly monotonic with $F(0) = 0$ and $F(1) = 1$, then for concavity one only has to deal with continuity from above at $F(0) = 0$; monotonicity and $F(1) = 1$ take care of the right-hand endpoint. The opposite way around holds for mid-convex functions. I think that if it is Wright-convex/Schur-convex, symmetric, and midpoint convex (it has an increasing-increments condition), then these issues may not be a worry under strict monotonicity with $F(0) = 0$ and $F(1) = 1$. Presumably there is a condition for Wright concavity as well. See Kuczma:

Kuczma, Marek, An introduction to the theory of functional equations and inequalities. Cauchy's equation and Jensen's inequality. Edited by Attila Gilányi, Basel: Birkhäuser (ISBN 978-3-7643-8748-8/pbk). xiv, 595 p. (2009). ZBL1221.39041.

answered May 19, 2017 at 5:30 by user231063
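As an illustrative aside (editor's sketch, not from the thread): the dyadic $t$-convexity claimed in the accepted answer can be checked numerically for a sample Jensen-convex function; `g` below is just an example, and the helper names are mine.

```python
# Editor's sketch: numerically verify that a Jensen-convex function
# satisfies the t-convexity inequality at dyadic rationals t = k / 2**n.

def g(x):
    # sample convex function; any convex function is in particular Jensen-convex
    return x * x

def dyadic_ts(n):
    """All dyadic rationals k / 2**n in [0, 1]."""
    return [k / 2**n for k in range(2**n + 1)]

x, y = -1.5, 2.0
for t in dyadic_ts(4):
    lhs = g((1 - t) * x + t * y)
    rhs = (1 - t) * g(x) + t * g(y)
    assert lhs <= rhs + 1e-12, (t, lhs, rhs)

print("t-convexity holds at all dyadic t with denominator 16")
```

This only illustrates the statement; the answer's actual argument is the bisection iteration of the midpoint inequality, followed by the closedness argument to pass from dyadic to all real $t$ when $g$ is continuous.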
12287
https://pmc.ncbi.nlm.nih.gov/articles/PMC390848/
Expression at the cell surface of biologically active fusion and hemagglutinin/neuraminidase proteins of the paramyxovirus simian virus 5 from cloned cDNA - PMC

Proc Natl Acad Sci U S A. 1985 Nov;82(22):7520–7524. doi: 10.1073/pnas.82.22.7520

R G Paterson, S W Hiebert, R A Lamb

PMCID: PMC390848 PMID: 3865176

Abstract

cDNAs encoding the mRNAs for the fusion protein (F) and the hemagglutinin/neuraminidase protein (HN) of the paramyxovirus simian virus 5 have been inserted into a eukaryotic expression vector under the control of the simian virus 40 late promoter.
The F and HN proteins synthesized in recombinant infected cells are indistinguishable in terms of electrophoretic mobility and glycosylation from the proteins synthesized in simian virus 5-infected cells. In addition, the expressed F and HN proteins have been shown to be anchored in the plasma membrane in a biologically active form by indirect live cell immunofluorescence, the F-mediated formation of syncytia, and the ability of HN to cause the hemadsorption of erythrocytes to the infected cell surface.
12288
https://www.ck12.org/flexi/cbse-math/divisibility-rules/list-numbers-that-are-divisible-by-4./
Flexi answers - List numbers that are divisible by 4. | CK-12 Foundation

Question: List numbers that are divisible by 4.

Flexi Says: A natural number is divisible by 4 if the number formed by its last two digits in the same order (ten's digit and unit's digit) is divisible by 4.

Example of the divisibility rule for 4: 34932, 84916, and 500 are all divisible by 4, since the number formed by the last two digits in each case (32, 16, and 00) is divisible by 4.
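The rule works because 100 is a multiple of 4, so a number and its last two digits leave the same remainder modulo 4. A quick sketch (mine, not CK-12's) checking the rule against direct division:

```python
# Editor's sketch: verify that the last-two-digits rule for divisibility
# by 4 agrees with direct division (the function name is mine).

def divisible_by_4_rule(n):
    """True if the number formed by the last two digits of n is divisible by 4."""
    return (abs(n) % 100) % 4 == 0

# Since 100 is a multiple of 4, n mod 4 == (n mod 100) mod 4,
# so the rule matches n % 4 == 0 on every value tested.
for n in [34932, 84916, 500, 7, 18, 102]:
    assert divisible_by_4_rule(n) == (n % 4 == 0)

print(divisible_by_4_rule(34932))  # True
print(divisible_by_4_rule(102))    # False
```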
12289
https://flexbooks.ck12.org/cbook/ck-12-precalculus-concepts-2.0/section/4.6/related/lesson/law-of-sines-geom-hnrs/
4.6 Law of Sines. Written by: CK-12 | Kaitlyn Spong. Fact-checked by: The CK-12 Editorial Team. Last Modified: Sep 01, 2025. You know how to use trigonometric ratios to find missing sides in right triangles, but what about non-right triangles? For the triangle below, can you find @$\begin{align}AB\end{align}@$? Law of Sines Look at the triangle below. Based on the angles, can you tell which side is the shortest? Which side is the longest? The smallest angle is @$\begin{align}\angle A\end{align}@$. It opens up to create the shortest side, @$\begin{align}\overline{BC}\end{align}@$. The largest angle is @$\begin{align}\angle C\end{align}@$. It opens up to create the longest side, @$\begin{align}\overline{AB}\end{align}@$. Clearly, angles and opposite sides within triangles are connected. In fact, the Law of Sines states that the ratio between the sine of an angle and the side opposite the angle is constant for each of the three angle/side pairs within a triangle. Law of Sines: @$\begin{align}\frac{\sin A}{a}=\frac{\sin B}{b}=\frac{\sin C}{c}\end{align}@$ In the past, you used the sine ratio to find missing sides and angles in right triangles. With the Law of Sines, you can find missing sides and angles in any type of triangle. In the following problems you will learn how to prove the Law of Sines. Proving the Law of Sines Consider @$\begin{align}\Delta ABC\end{align}@$ below. Draw an altitude from vertex @$\begin{align}B\end{align}@$ that intersects @$\begin{align}\overline{AC}\end{align}@$ to divide @$\begin{align}\Delta ABC\end{align}@$ into two right triangles. Find two equations for the length of the altitude. Below, the altitude has been drawn and labeled @$\begin{align}h\end{align}@$. Consider the right triangle with hypotenuse of length @$\begin{align}c\end{align}@$. @$\begin{align}h\end{align}@$ is opposite @$\begin{align}\angle A\end{align}@$ in this triangle.
With an opposite side and hypotenuse you can use the sine ratio. @$$\begin{align}\sin A &= \frac{h}{c}\\ h &= c \sin A\end{align}@$$ Now consider the right triangle with hypotenuse of length @$\begin{align}a\end{align}@$. @$\begin{align}h\end{align}@$ is opposite @$\begin{align}\angle C\end{align}@$ in this triangle. With an opposite side and hypotenuse you can use the sine ratio. @$$\begin{align}\sin C &= \frac{h}{a}\\ h &= a \sin C\end{align}@$$ You have now found two equations for @$\begin{align}h\end{align}@$. From the first problem, @$\begin{align}h=c \sin A\end{align}@$ and @$\begin{align}h=a \sin C\end{align}@$. This means that @$\begin{align}c \sin A=a \sin C\end{align}@$. Divide both sides by @$\begin{align}ac\end{align}@$ and you will see the Law of Sines. @$$\begin{align}c \sin A &= a \sin C\\ \frac{c \sin A}{ac} &= \frac{a \sin C}{ac}\\ \frac{\sin A}{a} &= \frac{\sin C}{c}\end{align}@$$ @$\begin{align}A\end{align}@$ and @$\begin{align}C\end{align}@$ were two random angles in the original triangle. This proves that in general, within any triangle the ratio of the sine of an angle to its opposite side is constant. Now, let's do a problem using the Law of Sines. Use the Law of Sines to find @$\begin{align}BC\end{align}@$ and @$\begin{align}AC\end{align}@$. Look for an angle and side pair whose measurements are both given. @$\begin{align}m \angle C=64^\circ\end{align}@$, and its opposite side is @$\begin{align}AB=12\end{align}@$. Next, set up an equation using the two known measurements as one of the ratios in the Law of Sines. Make sure to match angles with opposite sides.
Solve for @$\begin{align}BC\end{align}@$: @$$\begin{align}\frac{\sin 64^\circ}{12} &= \frac{\sin 38^\circ}{BC}\\ BC &= \frac{12 \sin 38^\circ}{\sin 64^\circ}\\ BC & \approx 8.22\end{align}@$$ Solve for @$\begin{align}AC\end{align}@$: @$$\begin{align}\frac{\sin 64^\circ}{12} &= \frac{\sin 78^\circ}{AC}\\ AC &= \frac{12 \sin 78^\circ}{\sin 64^\circ}\\ AC & \approx 13.06\end{align}@$$ Examples Example 1 Earlier, you were asked to find the side @$\begin{align}AB\end{align}@$. To find @$\begin{align}AB\end{align}@$, you can use the Law of Sines. Match angles with opposite sides. @$$\begin{align}\frac{\sin 27^\circ}{9} &= \frac{\sin 72^\circ}{AB}\\ AB &= \frac{9 \sin 72^\circ}{\sin 27^\circ}\\ AB & \approx 18.85\end{align}@$$ Example 2 The triangle below is drawn to scale. Use the Law of Sines to solve for the measure of angle @$\begin{align}B\end{align}@$. Does your answer seem correct? From looking at the triangle, @$\begin{align}\angle B\end{align}@$ looks to be an obtuse angle. Match angles with opposite sides. @$$\begin{align}\frac{\sin 36^\circ}{14} &= \frac{\sin B}{22}\\ \sin B &= \frac{22 \sin 36^\circ}{14}\\ m \angle B &= \sin^{-1} \left(\frac{22 \sin 36^\circ}{14}\right)\\ m \angle B & \approx 67.5^\circ\end{align}@$$ This answer doesn't seem correct because the angle appears to be obtuse (greater than @$\begin{align}90^\circ\end{align}@$). Example 3 Recall that SSA was not a criterion for triangle congruence. This was because two non-congruent triangles could have the same “side-side-angle” pattern. What does this have to do with the seemingly wrong answer to #2? There are two triangles that fit the criteria given in #2 (angle measures have been slightly rounded). The two possible measures of @$\begin{align}\angle B\end{align}@$ are supplementary @$\begin{align}(112^\circ+68^\circ=180^\circ)\end{align}@$. Also note that @$\begin{align}\sin 112^\circ=0.927=\sin 68^\circ\end{align}@$.
The inverse sine function on your calculator will only produce angles between @$\begin{align}0^\circ\end{align}@$ and @$\begin{align}90^\circ\end{align}@$. In a sense, the calculator was imagining the triangle on the right while you were imagining the triangle on the left. This is a problem when using the Law of Sines to solve for missing angles. If the information you are given fits SSA, it is possible that there are two answers. If your picture is drawn to scale, you can determine whether your answer should be the acute angle or the obtuse angle by looking at the picture. Example 4 The triangle below is drawn to scale. Use the Law of Sines to solve for the measure of angle C. Angle C appears to be obtuse. Match angles with opposite sides. @$$\begin{align}\frac{\sin 28^\circ}{17} &= \frac{\sin C}{33}\\ \sin C &= \frac{33 \sin 28^\circ}{17}\\ m \angle C &= \sin^{-1} \left(\frac{33 \sin 28^\circ}{17}\right)\\ m \angle C & \approx 65.7^\circ\end{align}@$$ This is the acute version of the answer. You know the angle should be obtuse, so find the angle supplementary to @$\begin{align}65.7^\circ\end{align}@$: @$\begin{align}180^\circ-65.7^\circ=114.3^\circ\end{align}@$ Note that @$\begin{align}\sin 65.7^\circ=\sin 114.3^\circ\end{align}@$ The final answer is: @$\begin{align}m \angle C=114.3^\circ\end{align}@$. Review For each triangle, find the measure of all missing sides and angles. Round your answer to the nearest tenth. 1. 2. 3. 4. 5. 6. 7. 8. What does SSA have to do with the Law of Sines? What type of problems do you have to think extra carefully about to make sure you have the correct answer? The triangle below is drawn to scale. Determine the measure of each of the missing angles. The triangle below is drawn to scale. Determine the measure of each of the missing angles. The triangle below is drawn to scale. Determine the measure of each of the missing angles. Use the picture below to derive the Law of Sines. What type of problems can you solve with the Law of Sines?
Explain why you cannot use the Law of Sines to solve for @$\begin{align}x\end{align}@$ in the triangle below. Review (Answers) Click HERE to see the answer key or go to the Table of Contents and click on the Answer Key under the 'Other Versions' option.
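The worked examples in this lesson can be checked numerically. Below is a short Python sketch (the helper names are my own) covering both the AAS computation from the first worked example and the ambiguous SSA case from Examples 2 to 4:

```python
import math

def law_of_sines_side(known_angle_deg, known_side, target_angle_deg):
    """Given one angle/opposite-side pair, find the side opposite another angle."""
    return known_side * math.sin(math.radians(target_angle_deg)) \
        / math.sin(math.radians(known_angle_deg))

def law_of_sines_angles(known_angle_deg, known_side, other_side):
    """SSA case: return both candidate angle measures (acute and its supplement),
    since sin(theta) = sin(180 - theta)."""
    acute = math.degrees(math.asin(
        other_side * math.sin(math.radians(known_angle_deg)) / known_side))
    return acute, 180 - acute

# Worked example: m∠C = 64°, AB = 12, m∠B = 38°, m∠A = 78°
print(round(law_of_sines_side(64, 12, 38), 2))  # BC ≈ 8.22
print(round(law_of_sines_side(64, 12, 78), 2))  # AC ≈ 13.06

# Examples 2/3: A = 36°, a = 14, b = 22 — two candidates for ∠B
print([round(x, 1) for x in law_of_sines_angles(36, 14, 22)])  # ≈ [67.5, 112.5]
```

Note how `law_of_sines_angles` returns both candidates: as the lesson explains, the calculator's inverse sine only gives the acute answer, so an SSA problem needs the supplement considered as well.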
12290
https://www.frontiersin.org/journals/genetics/articles/10.3389/fgene.2024.1500167/full
Frontiers | An investigation of a hemophilia A female with heterozygous intron 22 inversion and skewed X chromosome inactivation (Frontiers in Genetics)
ORIGINAL RESEARCH article. Front. Genet., 06 January 2025. Sec. Genetics of Common and Rare Diseases. Volume 15 - 2024 | An investigation of a hemophilia A female with heterozygous intron 22 inversion and skewed X chromosome inactivation. Xiaoyan Tan 1, Yi Yang 2, Xia Wu 1, Jing Zhu 1, Teng Wang 1, Huihui Jiang 1, Shu Chen 1†, Shifeng Lou 1†. 1 Department of Hematology, The Second Affiliated Hospital, Chongqing Medical University, Chongqing, China. 2 Department of Hematology, Three Gorges Hospital, Chongqing University, Chongqing, China. Objectives: Hemophilia A (HA) is an X-linked recessive inherited bleeding disorder that typically affects men. Women are usually asymptomatic carriers, and rarely present with a severe or moderately severe phenotype.
This study aims to describe a case of a 17-year-old girl with moderate HA, investigating the mechanisms of her condition and the genetic basis within her family. Methods: We conducted coagulation tests and bleeding assessments to evaluate her bleeding phenotype. Molecular genetic examinations, karyotype analysis, X-chromosome inactivation testing, and targeted bioinformatic analysis were used to identify potential genetic etiologies. Results: The proband exhibited a severe bleeding phenotype and was found to be a heterozygous carrier of an intron 22 inversion (Inv22) with a normal chromosomal karyotype. No other hemostatic defects were identified through whole exome sequencing. The proband’s mother and monozygotic twin sister are also Inv22 carriers, yet remain asymptomatic with normal FVIII activity. X-chromosome inactivation experiments revealed unbalanced inactivation in the proband, leading to the silencing of the healthy X copy. Notably, several novel X-linked gene mutations (SHROOM2, RPGR, VCX3B, GAGE, GCNA, ZNF280C, CT45A, and XK) were identified in the proband compared to her monozygotic twin sister, though their impact on X-chromosome inactivation remains unclear. Conclusion: Our findings suggest that the proband’s bleeding phenotype results from unbalanced X-chromosome inactivation. This research marks the first analysis of X chromosome-related gene mutations among monozygotic twins who are carriers of hemophilia A, laying the groundwork for further investigations into the disorder’s pathogenesis in women and highlighting the complexities in genetic counseling. 1 Introduction Hemophilia A (HA) is an X-linked recessive inherited bleeding disorder caused by an abnormal quality or quantity of factor VIII (FVIII). Clinical manifestations include spontaneous bleeding or bleeding after minor trauma in joints, muscles, internal organs, and deep tissues. 
Repeated joint bleeding can gradually impair joint mobility and lead to disability (Mannucci, 2008; Quintana Paris, 2023). HA primarily affects males, while females are typically asymptomatic carriers, although they may also experience symptoms and rarely exhibit severe or moderately severe clinical phenotypes (Mingot Castellano, 2020; Cygan and Kouides, 2021; Dardik et al., 2023; Hermans et al., 2023). Recently, increasing evidence suggests an increased bleeding tendency in female hemophilia carriers (HCs) (Young et al., 2017; Chaudhury et al., 2020; Li et al., 2020), which warrants further attention and research. To improve diagnosis and management and establish uniform terminologies for clinical research, the International Society on Thrombosis and Haemostasis (ISTH) has developed a new nomenclature that takes into account personal bleeding history and baseline plasma FVIII levels in female patients. This nomenclature distinguishes five clinically relevant HC categories: women/girls with mild, moderate, or severe hemophilia (consistent with the diagnostic criteria for HA in males), symptomatic and asymptomatic HC (van Galen et al., 2021). Currently, there is limited information available in female patients with severe and moderate hemophilia. The molecular pathogenesis of male HA is well established as a typical FVIII monogenic disease. In approximately 40% of male HA patients, the disorder is attributed to inversions occurring in the F8 gene. The remaining cases are caused by various mutations in F8, including small and large deletions, insertions, as well as non-sense and missense mutations (Borràs et al., 2022). However, the genetic defects underlying female HA exhibit wide variability and few larger reports on its genetic etiology are available. Currently, skewed X-chromosome inactivation (XCI) combined with F8 variants on the active allele is believed to be the most common cause of female HA. 
Other molecular mechanisms include the involvement of a second mutation in F8 gene (homozygous or compound heterozygous), the presence of a second hemostatic defect due to mutations in genes other than F8 gene (e.g., von Willebrand factor, VWF), abnormalities of the X-chromosome in structure and number (e.g., Turner syndrome), and androgen insensitivity syndrome (Janczar et al., 2020; Miller and Bean, 2020; Shen et al., 2022). Current understandings of hemophilia genetics in female patients are mainly based on published case reports. Therefore, it is necessary to evaluate the clinical characteristics and clarify the pathogenesis for each female HA patient. Here, we describe a case of a girl with moderate HA, exhibiting a severe bleeding phenotype. We analyzed the laboratory characteristics of the proband and her family members. Additionally, we conducted research on the pathogenesis of the proband through genetic testing for F8, chromosomal karyotype analysis, whole exome sequencing (WES), XCI assay, and X-chromosome targeted bioinformatic analysis. 2 Materials and methods 2.1 Subjects Patients and their families were studied after obtaining their written informed consent. The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of the Second Affiliated Hospital of Chongqing Medical University. 2.2 Haemostatic tests Basic hemostatic analyses were conducted using the Stago fully automated coagulation analyzer and its corresponding reagents, including prothrombin time (PT), activated partial thromboplastin time (APTT), factor VIII activity (FVIII:C), and plasma von Willebrand factor antigen (vWF:Ag). 2.3 Karyotype analysis Peripheral blood samples were collected from the proband using heparin anticoagulant. The samples were cultured according to standard cytogenetic protocols (Bates, 2011). 
Giemsa-banding staining was used to analyze the cultured cells, and then the karyotype was analyzed using a chromosome karyotyping software system (Zeiss). 2.4 Genetic analysis The genomic DNA was extracted from peripheral blood using QIAGEN’s QIAamp DNA Blood Mini Kit following the provided instructions. Long-distance PCR (LD-PCR), gene sequencing, and Multiplex Ligation-dependent Probe Amplification (MLPA) technologies were applied to screen for F8 mutations. Among these, LD-PCR is used to detect the intron 1 and intron 22 inversion mutations of F8; gene sequencing involves direct sequencing of the coding regions of the F8 gene’s exons and comparing them with reference sequences (NM_000132.3 and NG_011403.1) to identify potential micro-mutations; MLPA is used to examine each exon of the F8 gene in the sample, using normal human DNA as a reference to determine whether there are deletions or duplications in the gene. The intron 22 inversion (Inv22) was assessed by LD-PCR as per the protocol described in the kit’s instructions (AmpliTaq Gold® 360 Master Mix). The polymorphism of 17 autosomal short tandem repeats (STRs) (THO1, D21S11, D2S1338, Penta E, D5S818, D13S317, D7S820, D16S539, CSF1PO, vWA, D8S1179, TPOX, FGA, D6S1043, D12S391, D10S1248, Penta D) and a sex marker (amelogenin) was analyzed in the DNA samples of the proband and her twin sister. The WES process was carried out by Shanghai Diagnostics Biotechnology Co., Ltd. The sequencing library was constructed using the VAHTS Universal DNA Library Prep Kit for Illumina V3 (NuGEN) and captured using the Twist Comprehensive Exome kit. High-throughput sequencing was performed on the library using the NovaSeq 6000 (Illumina) sequencer. X chromosome-focused bioinformatics analysis was done based on WES. 2.5 X chromosome inactivation Currently, methylation-based methods are the most widely used for quantitatively defining human XCI status.
In this study, we initially utilized the methylation-sensitive HUMARA (human androgen receptor) assay, following previously described modifications (Juchniewicz et al., 2018), to determine XCI patterns across all DNA samples. However, due to the uninformative nature of the androgen receptor locus in this case, we subsequently assessed XCI status at the ZNF261 locus using primers designed by Beever et al. (2003). This approach employed the methylation-sensitive restriction endonuclease HhaI (Thermo Scientific™, ER1851) and a pair of ZNF261-specific fluorescently labeled primers (forward primer: 5′-ATG​CTA​AGG​ACC​ATC​CAG​GA-3’; reverse primer: 5′-GGA​GTT​TTC​CTC​CCT​CAC​CA-3′). DNA was extracted from peripheral blood samples of the proband and family members, and DNA concentration and purity were measured. Each sample (500 ng DNA) was digested with 20 U of HhaI enzyme in a 20 μL reaction buffer at 37°C for 16 h, followed by heat inactivation at 65°C for 20 min. Complete digestion was confirmed by PCR and agarose gel electrophoresis. A 20 μL PCR system was then used to amplify both digested and undigested DNA, and the resulting amplification products were verified using 2% agarose gel electrophoresis. Capillary electrophoresis was performed on a genetic analyzer (Superyears, Classic116, Nanjing, China) to obtain fluorescence values. To ensure the reliability and accuracy of the results, we strictly adhered to standard operating procedures (SOP) throughout the experiment. To minimize the impact of stutter peaks—a common issue with dinucleotide repeats—we optimized PCR conditions, employed high-resolution capillary electrophoresis, used GeneMapper software for precise data analysis, and conducted repeated experiments. 
The XCI ratio was calculated based on fluorescence intensities before and after digestion: (a) the paternal X chromosome inactivation ratio was determined as the signal intensity of the paternal X chromosome after digestion divided by its intensity before digestion; (b) the maternal X chromosome inactivation ratio was calculated similarly. XCI patterns were classified into random (50:50 to 70:30), moderately skewed (70:30 to 90:10), or highly skewed (90:10 or above) categories, consistent with criteria established in previous studies (Xiao et al., 2023; Bagislar et al., 2006; Yoon et al., 2008). These rigorous methodologies allowed for accurate classification of XCI patterns in our study. 3 Results 3.1 Case presentation The proband is a 17-year-old female with a history of hemorrhage including epistaxis, gingival bleeding, skin bruising, frequent bruising after minor trauma, and menorrhagia. At the age of 13, she experienced a ruptured left ovarian corpus luteum with bleeding and underwent surgical treatment, which was followed by postoperative bleeding. Coagulation assays revealed significantly prolonged APTT, normal PT, factor VIII activity of 1.4%, and normal vWF:Ag levels. Due to having a younger brother with severe HA, she was diagnosed with moderate HA and received replacement therapy with plasma and FVIII concentrates. At the age of 14, she had one episode of gastrointestinal bleeding, requiring FVIII concentrates replacement therapy. Her bleeding score, calculated with the Bleeding Assessment Tool (BAT) (Rodeghiero et al., 2010), was 12, as compared to 0–5 in normal females. The proband’s maternal grandfather, grandmother, father, mother, and twin sister reported no hemorrhagic symptoms. The proband’s 10-year-old brother is a severe HA patient and requires long-term replacement therapy with FVIII concentrates. The proband’s mother has a younger brother who had recurrent bleeding, joint pain, and joint deformities, but was not definitively diagnosed. 
He passed away at the age of 6. After obtaining informed consent, we conducted this study on the proband, her parents, twin sister, and brother, while other relatives declined to participate in this study. 3.2 Subject characteristics The coagulation test results of the proband and her family members are presented in Table 1. The proband and her brother had significantly decreased FVIII activity, at 1.1% and 0.4% respectively, while their father, mother, and twin sister had normal FVIII activity. APTT test results were consistent with the FVIII activity test results, with the brother showing significantly prolonged APTT, followed by the proband, while the father, mother, and sister had normal results. The level of vWF:Ag of the proband and her family members was within the normal range. Both the proband and her brother tested negative for FVIII inhibitor. Additionally, targeted joint ultrasound examinations were performed on the proband and her brother, including the elbows, knees, and ankles. The results showed that the proband’s joint ultrasound examination was normal, but her brother had varying degrees of hemophilic arthropathy changes in both knees and ankles (data not shown). Table 1. Laboratory findings of the proband and her family. 3.3 Gene and chromosome analysis Genetic screening for HA revealed a heterozygous Inv22 of the F8 in the proband. To determine if there are other chromosomal or genetic abnormalities exacerbating the deficiency of FVIII, we performed chromosomal karyotype analysis and WES on the proband. The patient’s chromosomal karyotype analysis was normal, and WES revealed no other mutations associated with bleeding disorders (Table 2). Table 2. Exome sequencing findings in proband and her monozygotic twin sister. To confirm the inheritance mode of the Inv22 in the family, we further validated it in the proband’s parents, sister, and brother.
Specific primers were used to amplify the gene fragment flanking the Inv22, and the analysis was performed using agarose gel electrophoresis. The results showed that two bands were obtained from the samples of the proband, proband’s mother, and sister (one 12 kb long identifying the normal allele, the other 11 kb long identifying the Inv22 mutated allele), which is characteristic of an Inv22 carrier. The father’s sample showed one band at the 12 kb position, indicating a normal state. The brother showed an amplification band only at the 11 kb position, indicating an HA patient (Figure 1). The results of the STRs polymorphism analysis indicated that the proband and her twin sister are monozygotic twins (Supplementary Table S1). The pedigree diagram is shown in Figure 2. Figure 1. Electrophoresis diagram of F8 gene Inv22 detection in the proband and her family members. The bands 11 and 12 kb long identify the Inv22 and wild-type alleles, respectively. The lanes a to e represent samples from the proband, father, mother, brother, and sister, respectively. The results indicate that the proband, mother, and the sister are HA carriers, while the brother is an HA patient, and the father is normal. HA, hemophilia A; CA, carrier; N, normal subject; Inv22, intron 22 inversion; M, DNA molecular weight marker. Figure 2. Pedigree chart. I1: normal male; I2: female Inv22 carrier; II1: female Inv22 carrier; II2: the proband, female Inv22 carrier, but presenting an HA phenotype; III3: male with Inv22, an HA patient. Inv22, intron 22 inversion; HA, hemophilia A. 3.4 X chromosome inactivation analysis The proband is an Inv22 carrier, and her FVIII levels decreased enough to classify her as a case of moderate HA. Therefore, she was tested for non-random XCI, a condition that may lead to this phenotypic expression of HA in female carriers.
Initially, we analyzed the polymorphic CAG locus in the HUMARA gene in the genomic DNA of the proband and her sister before and after digestion with the methylation-sensitive restriction endonuclease HhaI. The results show that the amplifications of the HUMARA alleles from both the father and mother of the proband have the same fragment length of 272 bp, making it impossible to determine the XCI status. The same result was found for her sister (data not shown). Therefore, we changed our approach and analyzed the polymorphic locus of the ZNF261 gene. The ZNF261 PCR fragment, 258 bp long, was inherited from the mother and identified the X chromosome carrying the Inv22 mutation. The 258 bp allele in the proband was mostly digested, while her second ZNF261 allele (254 bp in length, inherited from the father, identifying the X chromosome carrying the normal F8 allele) showed almost no change after HhaI digestion (Figure 3). HhaI restriction enzymes can selectively cut active (non-methylated) DNA, so the above results suggest that the proband’s Inv22 X chromosome was transcriptionally very active, while her normal X copy was silenced (Inv22: wild-type ratio 71:29, moderately skewed). A more balanced X inactivation ratio was observed in the proband’s sister, with an Inv22: wild-type ratio of 63:37 (Figure 3). Figure 3. Capillary electrophoresis of the X chromosome inactivation analysis. Digestion with the HhaI restriction enzyme selectively cuts active (non-methylated) DNA. (A) Analysis of the ZNF261 polymorphism in the proband, displaying results before digestion (HhaI-) and after digestion (HhaI+). (B) Analysis of the ZNF261 polymorphism in the monozygotic twin sister, displaying results before digestion (HhaI-) and after digestion (HhaI+). (C) The 254 bp allele identifies the normal X chromosome inherited from father to daughter. (D) The 258 bp ZNF261 allele identifies the Inv22 X chromosome inherited from mother to daughter.
The numbers below each allele represent peak areas (A) and base pair sizes (S). The relative percentage of active and inactivated X chromosomes was calculated from the ratio of the peak areas of the different ZNF261 alleles before and after digestion.
3.5 X-chromosome targeted bioinformatic analysis
To investigate whether other X chromosome-related gene mutations led to XCI skewing in the proband, we conducted a targeted bioinformatic analysis of the WES results focusing on the X chromosome. The results were then compared between the proband and her monozygotic twin sister. Several novel X chromosome-related mutations were identified in the proband, including SHROOM2, RPGR, VCX3B, GAGE, GCNA, ZNF280C, CT45A, and XK. Comprehensive information about these mutated genes is available in Table 3. These genes are associated with the development of tissues such as the nervous system, retina, sperm, and blood, as well as tumor occurrence (according to NCBI and ClinVar). To date, none of these mutations have been reported to result in skewed XCI.
Table 3. Novel X chromosome-related mutations identified in the proband compared to her monozygotic twin sister.
4 Discussion
Our proband is an Inv22 carrier with skewed XCI whose FVIII levels are low enough to classify her as a case of moderate HA, making her one of the extremely rare female cases of the disease that have been described. The proband exhibited significant bleeding symptoms and had a family history of HA, yet the diagnosis of HA was not made until a severe bleeding event at the age of 13. This delayed diagnosis indicates the need for increased attention to individuals in similar situations. Referring to the study by Daidone et al. (2018), we utilized the BAT questionnaire, a tool currently employed in von Willebrand disease, to assess the proband’s bleeding tendency. Her bleeding score was found to be significantly higher than normal.
Since HA carriers are typically asymptomatic or have mild symptoms, evaluating the bleeding score could aid in identifying these patients in specific populations, particularly in females with a family history. This approach may also help in better defining their bleeding risk and enable more appropriate management. Therefore, it would be valuable to incorporate the BAT questionnaire into clinical practice for this purpose. The proband has a younger brother with severe HA and a healthy twin sister, making it essential to conduct a family pedigree study. Unfortunately, apart from the proband’s parents, sister, and brother, all other relatives refused to participate in the investigation. The family investigation revealed that the proband’s mother transmitted the Inv22 mutation to her three children. More interestingly, it was confirmed that the proband and her sister are monozygotic twins. However, even though they carry the same mutation, their phenotypes are different. Thus, we speculated that the proband may have acquired chromosomal abnormalities or skewed XCI during development. These two conditions have been previously observed in females with HA and may explain the manifestation in heterozygous mutation carriers (Knobe et al., 2008; Berendt et al., 2020; Janczar et al., 2020). Karyotype analysis revealed that the proband has a normal 46,XX karyotype, eliminating the diagnosis of Turner syndrome or other large chromosomal abnormalities. Subsequently, XCI analysis explained the different phenotypes between the monozygotic twins. Currently, there are only a few case reports on the occurrence of HA in female monozygotic twins (Valleix et al., 2002; Bennett et al., 2008). One report describes a pair of monozygotic twins in which only one shows the phenotype of HA, while the other is an asymptomatic carrier. Another describes a pair of monozygotic twins who both present with HA.
The reported mechanisms involve unbalanced XCI leading to the overexpression of mutant alleles. However, the potential mechanisms behind the differences in XCI between monozygotic twins remain unclear. In the normal female population, about 10% of females exhibit non-random XCI in peripheral blood cells. Non-random XCI typically has no impact on females. However, in certain cases, skewed XCI can result in X-linked disease in females due to prevalent silencing of the healthy X copy (Khan and Theunissen, 2023; Schwämmle and Schulz, 2023). Currently, the relationship between FVIII activity and the severity of XCI skewing in female HA carriers has not been well elucidated. Additionally, criteria for defining XCI deviation vary across the research field. In previously published studies (Knobe et al., 2008; Daidone et al., 2018), female HA patients showed strongly unbalanced XCI. In our case, however, the inactivation ratio of the maternal Inv22 allele was 29%, indicating only a moderate inactivation shift, yet FVIII activity was very low, at only 1.1%. We also need to pay attention to the limitations of the XCI marker. The dinucleotide repeat sequence of ZNF261 is highly polymorphic, making it a useful XCI analysis marker; however, it may produce stutter peaks, which can affect the XCI results. In this study, to ensure the reliability of the results, we optimized the experimental conditions, employed high-resolution capillary electrophoresis to better distinguish stutter peaks, and conducted repeated experiments. Therefore, we still believe that ZNF261 provides sufficient accuracy for determining the XCI pattern in our research. Given the limitations of HUMARA in our study, selecting other suitable XCI markers for more comprehensive validation would further ensure the reliability and accuracy of the data.
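The XCI ratios quoted in the Results (Inv22:wild-type 71:29 in the proband, 63:37 in her sister) are derived from ZNF261 peak areas before and after HhaI digestion. The paper does not spell out its exact normalization; one standard approach can be sketched as follows (the function name and the peak areas below are illustrative, not the measured values):

```python
def xci_skew(pre1, post1, pre2, post2):
    """Fraction of cells in which allele 1 sits on the inactive X.

    HhaI cuts only active (non-methylated) DNA, so the peak area that
    survives digestion reflects the inactive (methylated) fraction of
    each allele. Dividing by the pre-digestion areas corrects for
    unequal PCR amplification of the two alleles.
    """
    r1 = post1 / pre1  # surviving fraction of allele 1
    r2 = post2 / pre2  # surviving fraction of allele 2
    return r1 / (r1 + r2)

# Made-up peak areas that reproduce roughly the 71:29 skew reported
# for the proband (allele 1 = Inv22, mostly digested = mostly active):
skew = xci_skew(pre1=1000, post1=290, pre2=1000, post2=710)
print(round(skew * 100))  # 29, i.e. ~29% of cells have the Inv22 allele inactive
```

Under this convention, a value near 50% means balanced XCI, while values far from 50% indicate skewing toward one allele.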
FVIII is mainly produced by liver cells; however, studies have shown that non-random XCI does not differ significantly in different tissues and is mainly influenced by age (Bittel et al., 2008). Thus, we cannot completely rule out the possibility that other factors may be contributing to the decreased FVIII activity in the proband, such as deep intronic mutations. Additional tests like RNA sequencing and deep intronic sequencing will aid in clarifying these matters. In addition, there are many reasons for non-random XCI, which can be classified as primary or secondary. Primary non-random XCI occurs at the initial stage of inactivation. Secondary non-random X chromosome inactivation is often related to cell selection, where the presence of certain specific gene mutations on the X chromosome can lead to cells gaining a proliferative advantage or disadvantage (Dardik et al., 2021; Dardik et al., 2023). In our study, all three females were Inv22 carriers, but only the proband experienced unbalanced XCI. Analysis of the WES results of the proband and her monozygotic twin sister revealed the presence of some novel gene mutations, with the SHROOM2 mutation frequency exceeding 50%. However, currently, there are no reports linking any of these gene mutations to XCI skewing. The limitation of this study is the lack of functional validation for these mutations. Therefore, the impact of these different gene mutations between monozygotic twins on XCI and FVIII activity remains unclear. However, we believe that our findings provide important clues and potential directions for further functional validation studies. We believe that future work will further clarify the relationship between these mutations and the pathogenesis of HA, providing deeper insights into the role of XCI in female patients with HA. 5 Conclusion We report a rare case of a girl with moderate HA carrying a heterozygous Inv22 mutation. 
Pedigree analysis revealed that the proband’s mother and her monozygotic twin sister are both Inv22 carriers, but only the proband exhibited severe bleeding symptoms. Chromosome inactivation analysis indicated that the proband’s clinical phenotype is attributable to an unbalanced XCI pattern, which leaves the allele carrying the Inv22 transcriptionally more active than the normal allele. Furthermore, we identified several novel X-linked gene mutations, including SHROOM2, RPGR, VCX3B, GAGE, GCNA, ZNF280C, CT45A, and XK, in the proband compared to her monozygotic twin sister. However, it remains uncertain whether these mutations affect XCI in the proband. We conducted the first analysis of the differences in X chromosome-related gene mutations among monozygotic twins who are carriers of HA, laying the foundation for further research into the pathogenesis of HA in women. Our study also emphasizes that the complexity of the pathogenesis of HA in females poses challenges for subsequent genetic counseling.
Data availability statement
The data presented in the study are deposited in the Genome Sequence Archive (Genomics, Proteomics and Bioinformatics 2021) in the National Genomics Data Center (Nucleic Acids Res 2024), China National Center for Bioinformation/Beijing Institute of Genomics, Chinese Academy of Sciences, accession number GSA-Human: HRA009718, and are publicly accessible at
Ethics statement
The studies involving humans were approved by the Ethics Committee of the Second Affiliated Hospital of Chongqing Medical University. The studies were conducted in accordance with the local legislation and institutional requirements. Written informed consent for participation in this study was provided by the participants’ legal guardians/next of kin. Written informed consent was obtained from the individual(s), and minor(s)’ legal guardian/next of kin, for the publication of any potentially identifiable images or data included in this article.
Author contributions
XT: Formal Analysis, Investigation, Methodology, Writing–original draft. YY: Data curation, Validation, Writing–review and editing. XW: Data curation, Project administration, Writing–review and editing. JZ: Data curation, Software, Writing–review and editing. TW: Investigation, Methodology, Writing–review and editing. HJ: Investigation, Methodology, Writing–review and editing. SC: Conceptualization, Supervision, Validation, Writing–review and editing. SL: Conceptualization, Writing–review and editing.
Funding
The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.
Acknowledgments
The authors thank all the patients and family members for their participation.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Supplementary material
The Supplementary Material for this article can be found online at:
References
Bagislar, S., Ustuner, I., Cengiz, B., Soylemez, F., Akyerli, C. B., Ceylaner, S., et al. (2006). Extremely skewed X-chromosome inactivation patterns in women with recurrent spontaneous abortion. Aust. N. Z. J. Obstet. Gynaecol. 46 (5), 384–387. doi:10.1111/j.1479-828X.2006.00622.x
Bates, S. E. (2011). Classical cytogenetics: karyotyping techniques. Methods Mol. Biol. 767, 177–190. doi:10.1007/978-1-61779-201-4_13
Beever, C., Lai, B. P. Y., Baldry, S. E.
L., Peñaherrera, M. S., Jiang, R., Robinson, W. P., et al. (2003). Methylation of ZNF261 as an assay for determining X chromosome inactivation patterns. Am. J. Med. Genet. Part A 120A (3), 439–441. doi:10.1002/ajmg.a.20045
Bennett, C. M., Boye, E., and Neufeld, E. J. (2008). Female monozygotic twins discordant for hemophilia A due to nonrandom X-chromosome inactivation. Am. J. Hematol. 83 (10), 778–780. doi:10.1002/ajh.21219
Berendt, A., Wójtowicz-Marzec, M., Wysokińska, B., and Kwaśniewska, A. (2020). Severe haemophilia A in a preterm girl with Turner syndrome - a case report from the prenatal period to early infancy (part I). Italian J. Pediatr. 46 (1), 125. doi:10.1186/s13052-020-00892-7
Bittel, D. C., Theodoro, M. F., Kibiryeva, N., Fischer, W., Talebizadeh, Z., and Butler, M. G. (2008). Comparison of X-chromosome inactivation patterns in multiple tissues from human females. J. Med. Genet. 45 (5), 309–313. doi:10.1136/jmg.2007.055244
Borràs, N., Castillo-González, D., Comes, N., Martin-Fernandez, L., Rivero-Jiménez, R. A., Chang-Monteagudo, A., et al. (2022). Molecular study of a large cohort of 109 haemophilia patients from Cuba using a gene panel with next generation sequencing-based technology. Haemophilia 28 (1), 125–137. doi:10.1111/hae.14438
Chaudhury, A., Sidonio, R., Jain, N., Tsao, E., Tymoszczuk, J., Oviedo Ovando, M., et al. (2020). Women and girls with haemophilia and bleeding tendencies: outcomes related to menstruation, pregnancy, surgery and other bleeding episodes from a retrospective chart review. Haemophilia 27 (2), 293–304. doi:10.1111/hae.14232
Cygan, P. H., and Kouides, P. A. (2021).
Regulation and importance of factor VIII levels in hemophilia A carriers. Curr. Opin. Hematol. 28 (5), 315–322. doi:10.1097/moh.0000000000000667
Daidone, V., Galletta, E., Bertomoro, A., and Casonato, A. (2018). The Bleeding Assessment Tool and laboratory data in the characterisation of a female with inherited haemophilia A. Blood Transfus. 16 (1), 114–117. doi:10.2450/2016.0132-16
Dardik, R., Avishai, E., Lalezari, S., Barg, A. A., Levy-Mendelovich, S., Budnik, I., et al. (2021). Molecular mechanisms of skewed X-chromosome inactivation in female hemophilia patients-lessons from wide genome analyses. Int. J. Mol. Sci. 22 (16), 9074. doi:10.3390/ijms22169074
Dardik, R., Janczar, S., Lalezari, S., Avishai, E., Levy-Mendelovich, S., Barg, A. A., et al. (2023). Four decades of carrier detection and prenatal diagnosis in hemophilia A: historical overview, state of the art and future directions. Int. J. Mol. Sci. 24 (14), 11846. doi:10.3390/ijms241411846
Hermans, C., Ventriglia, G., Obaji, S., Beckermann, B. M., Lehle, M., Catalani, O., et al. (2023). Emicizumab use in females with moderate or mild hemophilia A without factor VIII inhibitors who warrant prophylaxis. Res. Pract. Thromb. Haemost. 7 (8), 102239. doi:10.1016/j.rpth.2023.102239
Janczar, S., Babol-Pokora, K., Jatczak-Pawlik, I., Taha, J., Klukowska, A., Laguna, P., et al. (2020). Six molecular patterns leading to hemophilia A phenotype in 18 females from Poland. Thrombosis Res. 193, 9–14. doi:10.1016/j.thromres.2020.05.041
Juchniewicz, P., Kloska, A., Tylki-Szymańska, A., Jakóbkiewicz-Banecka, J., Węgrzyn, G., Moskot, M., et al. (2018).
Female Fabry disease patients and X-chromosome inactivation. Gene 641, 259–264. doi:10.1016/j.gene.2017.10.064
Khan, S. A., and Theunissen, T. W. (2023). Modeling X-chromosome inactivation and reactivation during human development. Curr. Opin. Genet. Dev. 82, 102096. doi:10.1016/j.gde.2023.102096
Knobe, K. E., Sjörin, E., Soller, M. J., Liljebjörn, H., and Ljung, R. C. R. (2008). Female haemophilia A caused by skewed X inactivation. Haemophilia 14 (4), 846–848. doi:10.1111/j.1365-2516.2008.01754.x
Li, S., Fang, Y., Li, L., Lee, A., Poon, M. C., Zhao, Y., et al. (2020). Bleeding assessment in haemophilia carriers-High rates of bleeding after surgical abortion and intrauterine device placement: a multicentre study in China. Haemophilia 26 (1), 122–128. doi:10.1111/hae.13889
Mannucci, P. M. (2008). Back to the future: a recent history of haemophilia treatment. Haemophilia 14 (Suppl. 3), 10–18. doi:10.1111/j.1365-2516.2008.01708.x
Miller, C. H., and Bean, C. J. (2020). Genetic causes of haemophilia in women and girls. Haemophilia 27 (2), e164–e179. doi:10.1111/hae.14186
Mingot Castellano, M. E. (2020). General concepts on hemophilia A and on women carrying the disease. Blood Coagul. Fibrinolysis 31 (1s), S1–S3. doi:10.1097/mbc.0000000000000984
Quintana Paris, L. (2023). Foundations of hemophilia and epidemiology. Blood Coagul. Fibrinolysis 34 (S1), S35–S36. doi:10.1097/mbc.0000000000001222
Rodeghiero, F., Tosetto, A., Abshire, T., Arnold, D. M., Coller, B., James, P., et al. (2010).
ISTH/SSC bleeding assessment tool: a standardized questionnaire and a proposal for a new bleeding score for inherited bleeding disorders. J. Thromb. Haemost. 8 (9), 2063–2065. doi:10.1111/j.1538-7836.2010.03975.x
Schwämmle, T., and Schulz, E. G. (2023). Regulatory principles and mechanisms governing the onset of random X-chromosome inactivation. Curr. Opin. Genet. Dev. 81, 102063. doi:10.1016/j.gde.2023.102063
Shen, M. C., Chang, S. P., Lee, D. J., Lin, W. H., Chen, M., and Ma, G. C. (2022). Skewed X-chromosome inactivation and parental gonadal mosaicism are implicated in X-linked recessive female hemophilia patients. Diagn. (Basel) 12 (10), 2267. doi:10.3390/diagnostics12102267
Valleix, S., Vinciguerra, C., Lavergne, J.-M., Leuer, M., Delpech, M., and Negrier, C. (2002). Skewed X-chromosome inactivation in monochorionic diamniotic twin sisters results in severe and mild hemophilia A. Blood 100 (8), 3034–3036. doi:10.1182/blood-2002-01-0277
van Galen, K. P. M., d'Oiron, R., James, P., Abdul-Kadir, R., Kouides, P. A., Kulkarni, R., et al. (2021). A new hemophilia carrier nomenclature to define hemophilia in women and girls: communication from the SSC of the ISTH. J. Thromb. Haemost. 19 (8), 1883–1887. doi:10.1111/jth.15397
Xiao, X., Yang, J., Li, Y., Yang, H., Zhu, Y., Li, L., et al. (2023). Identification of a novel frameshift variant of ARR3 related to X-linked female-limited early-onset high myopia and study on the effect of X chromosome inactivation on the myopia severity. J. Clin. Med. 12 (3), 835. doi:10.3390/jcm12030835
Yoon, S. H., Choi, Y. M., Hong, M. A., Kang, B. M., Kim, J. J., Min, E. G., et al. (2008).
X chromosome inactivation patterns in patients with idiopathic premature ovarian failure. Hum. Reprod. 23 (3), 688–692. doi:10.1093/humrep/dem415
Young, J. E., Grabell, J., Tuttle, A., Bowman, M., Hopman, W. M., Good, D., et al. (2017). Evaluation of the self-administered bleeding assessment tool (Self-BAT) in haemophilia carriers and correlations with quality of life. Haemophilia 23 (6), e536–e538. doi:10.1111/hae.13354
Keywords: hemophilia A, females, carriers, X-chromosome inactivation, gene mutations
Citation: Tan X, Yang Y, Wu X, Zhu J, Wang T, Jiang H, Chen S and Lou S (2025) An investigation of a hemophilia A female with heterozygous intron 22 inversion and skewed X chromosome inactivation. Front. Genet. 15:1500167. doi: 10.3389/fgene.2024.1500167
Received: 26 September 2024; Accepted: 04 December 2024; Published: 06 January 2025.
Edited by: Mara Marongiu, National Research Council (CNR), Italy
Reviewed by: Enrique Medina-Acosta, State University of Northern Rio de Janeiro, Brazil; Steven Pei, Yale University, United States
Copyright © 2025 Tan, Yang, Wu, Zhu, Wang, Jiang, Chen and Lou. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
Correspondence: Shifeng Lou, 300326@hospital.cqmu.edu.cn; Shu Chen, 300318@hospital.cqmu.edu.cn
†These authors have contributed equally to this work and share last authorship
Supplementary Material: Table 1.xlsx
12291
https://www.feynmanlectures.caltech.edu/I_15.html
15 The Special Theory of Relativity
15–1 The principle of relativity
For over 200 years the equations of motion enunciated by Newton were believed to describe nature correctly, and the first time that an error in these laws was discovered, the way to correct it was also discovered.
Both the error and its correction were discovered by Einstein in 1905. Newton’s Second Law, which we have expressed by the equation F = d(mv)/dt, was stated with the tacit assumption that m is a constant, but we now know that this is not true, and that the mass of a body increases with velocity. In Einstein’s corrected formula m has the value
m = m₀/√(1 − v²/c²),   (15.1)
where the “rest mass” m₀ represents the mass of a body that is not moving and c is the speed of light, which is about 3×10⁵ km·sec⁻¹ or about 186,000 mi·sec⁻¹. For those who want to learn just enough about it so they can solve problems, that is all there is to the theory of relativity—it just changes Newton’s laws by introducing a correction factor to the mass. From the formula itself it is easy to see that this mass increase is very small in ordinary circumstances. If the velocity is even as great as that of a satellite, which goes around the earth at 5 mi/sec, then v/c = 5/186,000: putting this value into the formula shows that the correction to the mass is only one part in two to three billion, which is nearly impossible to observe. Actually, the correctness of the formula has been amply confirmed by the observation of many kinds of particles, moving at speeds ranging up to practically the speed of light. However, because the effect is ordinarily so small, it seems remarkable that it was discovered theoretically before it was discovered experimentally. Empirically, at a sufficiently high velocity, the effect is very large, but it was not discovered that way. Therefore it is interesting to see how a law that involved so delicate a modification (at the time when it was first discovered) was brought to light by a combination of experiments and physical reasoning. Contributions to the discovery were made by a number of people, the final result of whose work was Einstein’s discovery.
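The size of the correction in Eq. (15.1) is easy to check numerically. A minimal Python sketch (the function name `mass_ratio` is ours, and speeds are in mi/sec to match the text):

```python
import math

def mass_ratio(v, c=186000.0):
    """m/m0 = 1/sqrt(1 - v^2/c^2), Eq. (15.1); v and c in the same units."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# A satellite at 5 mi/sec: the fractional mass increase m/m0 - 1
# is about one part in two to three billion, as the text states.
correction = mass_ratio(5.0) - 1.0
print(correction)  # ≈ 3.6e-10
```

At speeds approaching c the factor grows without bound, which is why the effect is conspicuous for fast particles yet invisible for satellites.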
There are really two Einstein theories of relativity. This chapter is concerned with the Special Theory of Relativity, which dates from 1905. In 1915 Einstein published an additional theory, called the General Theory of Relativity. This latter theory deals with the extension of the Special Theory to the case of the law of gravitation; we shall not discuss the General Theory here. The principle of relativity was first stated by Newton, in one of his corollaries to the laws of motion: "The motions of bodies included in a given space are the same among themselves, whether that space is at rest or moves uniformly forward in a straight line." This means, for example, that if a space ship is drifting along at a uniform speed, all experiments performed in the space ship and all the phenomena in the space ship will appear the same as if the ship were not moving, provided, of course, that one does not look outside. That is the meaning of the principle of relativity. This is a simple enough idea, and the only question is whether it is true that in all experiments performed inside a moving system the laws of physics will appear the same as they would if the system were standing still. Let us first investigate whether Newton's laws appear the same in the moving system.

Fig. 15–1. Two coordinate systems in uniform relative motion along their $x$-axes.

Suppose that Moe is moving in the $x$-direction with a uniform velocity $u$, and he measures the position of a certain point, shown in Fig. 15–1. He designates the "$x$-distance" of the point in his coordinate system as $x'$. Joe is at rest, and measures the position of the same point, designating its $x$-coordinate in his system as $x$. The relationship of the coordinates in the two systems is clear from the diagram. After time $t$ Moe's origin has moved a distance $ut$, and if the two systems originally coincided,
$$x' = x - ut,\qquad y' = y,\qquad z' = z,\qquad t' = t. \tag{15.2}$$
If we substitute this transformation of coordinates into Newton's laws we find that these laws transform to the same laws in the primed system; that is, the laws of Newton are of the same form in a moving system as in a stationary system, and therefore it is impossible to tell, by making mechanical experiments, whether the system is moving or not. The principle of relativity has been used in mechanics for a long time. It was employed by various people, in particular Huygens, to obtain the rules for the collision of billiard balls, in much the same way as we used it in Chapter 10 to discuss the conservation of momentum. In the 19th century interest in it was heightened as the result of investigations into the phenomena of electricity, magnetism, and light. A long series of careful studies of these phenomena by many people culminated in Maxwell's equations of the electromagnetic field, which describe electricity, magnetism, and light in one uniform system. However, the Maxwell equations did not seem to obey the principle of relativity. That is, if we transform Maxwell's equations by the substitution of equations (15.2), their form does not remain the same; therefore, in a moving space ship the electrical and optical phenomena should be different from those in a stationary ship. Thus one could use these optical phenomena to determine the speed of the ship; in particular, one could determine the absolute speed of the ship by making suitable optical or electrical measurements. One of the consequences of Maxwell's equations is that if there is a disturbance in the field such that light is generated, these electromagnetic waves go out in all directions equally and at the same speed $c$, or $186{,}000$ mi/sec. Another consequence of the equations is that if the source of the disturbance is moving, the light emitted goes through space at the same speed $c$.
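The claim that Newton's laws keep their form under (15.2) can be illustrated numerically: shifting to a frame moving at speed $u$ changes measured positions and velocities, but leaves the acceleration, and hence $F = ma$, untouched. A minimal sketch with an invented quadratic trajectory:

```python
# A particle with constant acceleration a = 4 in Joe's (rest) frame --
# an invented trajectory, purely for illustration:
def x_of_t(t):
    return 2.0 + 5.0 * t + 0.5 * 4.0 * t**2

U = 3.0  # speed of Moe's frame relative to Joe's

def x_prime_of_t(t):
    """The same trajectory under the Galilean transformation (15.2)."""
    return x_of_t(t) - U * t

def accel(f, t, h=1e-4):
    """Numerical second derivative (central difference)."""
    return (f(t + h) - 2.0 * f(t) + f(t - h)) / h**2

a_joe = accel(x_of_t, 1.0)
a_moe = accel(x_prime_of_t, 1.0)
print(a_joe, a_moe)   # the two observers agree on the acceleration
```

The $-Ut$ term shifts velocities by $U$ but contributes nothing to the second derivative, which is why mechanical experiments cannot reveal the motion.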
This is analogous to the case of sound, the speed of sound waves being likewise independent of the motion of the source. This independence of the motion of the source, in the case of light, brings up an interesting problem: Suppose we are riding in a car that is going at a speed $u$, and light from the rear is going past the car with speed $c$. Differentiating the first equation in (15.2) gives
$$dx'/dt = dx/dt - u,$$
which means that according to the Galilean transformation the apparent speed of the passing light, as we measure it in the car, should not be $c$ but should be $c - u$. For instance, if the car is going $100{,}000$ mi/sec, and the light is going $186{,}000$ mi/sec, then apparently the light going past the car should go $86{,}000$ mi/sec. In any case, by measuring the speed of the light going past the car (if the Galilean transformation is correct for light), one could determine the speed of the car. A number of experiments based on this general idea were performed to determine the velocity of the earth, but they all failed—they gave no velocity at all. We shall discuss one of these experiments in detail, to show exactly what was done and what was the matter; something was the matter, of course, something was wrong with the equations of physics. What could it be?

15–2 The Lorentz transformation

When the failure of the equations of physics in the above case came to light, the first thought that occurred was that the trouble must lie in the new Maxwell equations of electrodynamics, which were only 20 years old at the time. It seemed almost obvious that these equations must be wrong, so the thing to do was to change them in such a way that under the Galilean transformation the principle of relativity would be satisfied. When this was tried, the new terms that had to be put into the equations led to predictions of new electrical phenomena that did not exist at all when tested experimentally, so this attempt had to be abandoned.
Then it gradually became apparent that Maxwell's laws of electrodynamics were correct, and the trouble must be sought elsewhere. In the meantime, H. A. Lorentz noticed a remarkable and curious thing when he made the following substitutions in the Maxwell equations:
$$x' = \frac{x - ut}{\sqrt{1 - u^2/c^2}},\qquad y' = y,\qquad z' = z,\qquad t' = \frac{t - ux/c^2}{\sqrt{1 - u^2/c^2}}, \tag{15.3}$$
namely, Maxwell's equations remain in the same form when this transformation is applied to them! Equations (15.3) are known as a Lorentz transformation. Einstein, following a suggestion originally made by Poincaré, then proposed that all the physical laws should be of such a kind that they remain unchanged under a Lorentz transformation. In other words, we should change, not the laws of electrodynamics, but the laws of mechanics. How shall we change Newton's laws so that they will remain unchanged by the Lorentz transformation? If this goal is set, we then have to rewrite Newton's equations in such a way that the conditions we have imposed are satisfied. As it turned out, the only requirement is that the mass $m$ in Newton's equations must be replaced by the form shown in Eq. (15.1). When this change is made, Newton's laws and the laws of electrodynamics will harmonize. Then if we use the Lorentz transformation in comparing Moe's measurements with Joe's, we shall never be able to detect whether either is moving, because the form of all the equations will be the same in both coordinate systems! It is interesting to discuss what it means that we replace the old transformation between the coordinates and time with a new one, because the old one (Galilean) seems to be self-evident, and the new one (Lorentz) looks peculiar. We wish to know whether it is logically and experimentally possible that the new, and not the old, transformation can be correct.
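One way to see why the peculiar-looking transformation (15.3) could have gone unnoticed is that it differs from the self-evident Galilean one only at speeds comparable with $c$. A small sketch, in units where $c = 1$ (our choice):

```python
import math

def lorentz(x, t, u, c=1.0):
    """Lorentz transformation, Eq. (15.3), for motion along x (y, z unchanged)."""
    g = 1.0 / math.sqrt(1.0 - (u / c)**2)
    return g * (x - u * t), g * (t - u * x / c**2)

x, t = 7.0, 2.0

# At a speed tiny compared with c, (15.3) is indistinguishable from Galilean:
xp_slow, tp_slow = lorentz(x, t, u=1e-6)
print(xp_slow - (x - 1e-6 * t), tp_slow - t)   # both essentially zero

# At u = 0.6c the two transformations differ substantially:
xp, tp = lorentz(x, t, u=0.6)
print(xp, tp)
```

At $u = 0.6c$ the factor $1/\sqrt{1-u^2/c^2}$ is $1.25$, and even the time coordinate is shifted by the $ux/c^2$ term.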
To find that out, it is not enough to study the laws of mechanics but, as Einstein did, we too must analyze our ideas of space and time in order to understand this transformation. We shall have to discuss these ideas and their implications for mechanics at some length, so we say in advance that the effort will be justified, since the results agree with experiment.

15–3 The Michelson-Morley experiment

As mentioned above, attempts were made to determine the absolute velocity of the earth through the hypothetical "ether" that was supposed to pervade all space. The most famous of these experiments is one performed by Michelson and Morley in 1887. It was 18 years later before the negative results of the experiment were finally explained, by Einstein.

Fig. 15–2. Schematic diagram of the Michelson-Morley experiment.

The Michelson-Morley experiment was performed with an apparatus like that shown schematically in Fig. 15–2. It consists essentially of a light source $A$, a partially silvered glass plate $B$, and two mirrors $C$ and $E$, all mounted on a rigid base. The mirrors are placed at equal distances $L$ from $B$. The plate $B$ splits an oncoming beam of light, and the two resulting beams continue in mutually perpendicular directions to the mirrors, where they are reflected back to $B$. On arriving back at $B$, the two beams are recombined as two superposed beams, $D$ and $F$. If the time taken for the light to go from $B$ to $E$ and back is the same as the time from $B$ to $C$ and back, the emerging beams $D$ and $F$ will be in phase and will reinforce each other, but if the two times differ slightly, the beams will be slightly out of phase and interference will result. If the apparatus is "at rest" in the ether, the times should be precisely equal, but if it is moving toward the right with a velocity $u$, there should be a difference in the times. Let us see why. First, let us calculate the time required for the light to go from $B$ to $E$ and back.
Let us say that the time for light to go from plate $B$ to mirror $E$ is $t_1$, and the time for the return is $t_2$. Now, while the light is on its way from $B$ to the mirror, the apparatus moves a distance $ut_1$, so the light must traverse a distance $L + ut_1$, at the speed $c$. We can also express this distance as $ct_1$, so we have
$$ct_1 = L + ut_1,\qquad\text{or}\qquad t_1 = L/(c - u).$$
(This result is also obvious from the point of view that the velocity of light relative to the apparatus is $c - u$, so the time is the length $L$ divided by $c - u$.) In a like manner, the time $t_2$ can be calculated. During this time the plate $B$ advances a distance $ut_2$, so the return distance of the light is $L - ut_2$. Then we have
$$ct_2 = L - ut_2,\qquad\text{or}\qquad t_2 = L/(c + u).$$
Then the total time is
$$t_1 + t_2 = 2Lc/(c^2 - u^2).$$
For convenience in later comparison of times we write this as
$$t_1 + t_2 = \frac{2L/c}{1 - u^2/c^2}. \tag{15.4}$$
Our second calculation will be of the time $t_3$ for the light to go from $B$ to the mirror $C$. As before, during time $t_3$ the mirror $C$ moves to the right a distance $ut_3$ to the position $C'$; in the same time, the light travels a distance $ct_3$ along the hypotenuse of a triangle, which is $BC'$. For this right triangle we have
$$(ct_3)^2 = L^2 + (ut_3)^2$$
or
$$L^2 = c^2t_3^2 - u^2t_3^2 = (c^2 - u^2)t_3^2,$$
from which we get
$$t_3 = L/\sqrt{c^2 - u^2}.$$
For the return trip from $C'$ the distance is the same, as can be seen from the symmetry of the figure; therefore the return time is also the same, and the total time is $2t_3$. With a little rearrangement of the form we can write
$$2t_3 = \frac{2L}{\sqrt{c^2 - u^2}} = \frac{2L/c}{\sqrt{1 - u^2/c^2}}. \tag{15.5}$$
We are now able to compare the times taken by the two beams of light. In expressions (15.4) and (15.5) the numerators are identical, and represent the time that would be taken if the apparatus were at rest.
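The comparison of (15.4) with (15.5) can be made concrete with numbers. A minimal sketch, using the earth's orbital speed of about 18 mi/sec (the arm length is an arbitrary unit, since only the ratio of the two times matters):

```python
import math

def longitudinal_time(L, u, c):
    """Round trip along the direction of motion, Eq. (15.4): t1 + t2."""
    return (2 * L / c) / (1 - (u / c)**2)

def transverse_time(L, u, c):
    """Round trip across the direction of motion, Eq. (15.5): 2*t3."""
    return (2 * L / c) / math.sqrt(1 - (u / c)**2)

c = 186_000.0   # mi/sec
u = 18.0        # the earth's orbital speed, mi/sec
L = 1.0         # arm length, arbitrary units

t_parallel = longitudinal_time(L, u, c)
t_perpendicular = transverse_time(L, u, c)
print(t_parallel > t_perpendicular)          # the longitudinal trip takes longer
print(t_parallel / t_perpendicular - 1.0)    # a difference of a few parts in 10^9
```

The predicted fractional difference, roughly $u^2/2c^2 \approx 5\times 10^{-9}$, is exactly the delicate effect the interferometer was built to detect.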
In the denominators, the term $u^2/c^2$ will be small, unless $u$ is comparable in size to $c$. The denominators represent the modifications in the times caused by the motion of the apparatus. And behold, these modifications are not the same—the time to go to $C$ and back is a little less than the time to $E$ and back, even though the mirrors are equidistant from $B$, and all we have to do is to measure that difference with precision. Here a minor technical point arises—suppose the two lengths $L$ are not exactly equal? In fact, we surely cannot make them exactly equal. In that case we simply turn the apparatus $90$ degrees, so that $BC$ is in the line of motion and $BE$ is perpendicular to the motion. Any small difference in length then becomes unimportant, and what we look for is a shift in the interference fringes when we rotate the apparatus. In carrying out the experiment, Michelson and Morley oriented the apparatus so that the line $BE$ was nearly parallel to the earth's motion in its orbit (at certain times of the day and night). This orbital speed is about $18$ miles per second, and any "ether drift" should be at least that much at some time of the day or night and at some time during the year. The apparatus was amply sensitive to observe such an effect, but no time difference was found—the velocity of the earth through the ether could not be detected. The result of the experiment was null. The result of the Michelson-Morley experiment was very puzzling and most disturbing. The first fruitful idea for finding a way out of the impasse came from Lorentz. He suggested that material bodies contract when they are moving, and that this foreshortening is only in the direction of the motion, and also, that if the length is $L_0$ when a body is at rest, then when it moves with speed $u$ parallel to its length, the new length, which we call $L_\parallel$ ($L$-parallel), is given by
$$L_\parallel = L_0\sqrt{1 - u^2/c^2}. \tag{15.6}$$
When this modification is applied to the Michelson-Morley interferometer apparatus the distance from $B$ to $C$ does not change, but the distance from $B$ to $E$ is shortened to $L\sqrt{1 - u^2/c^2}$. Therefore Eq. (15.5) is not changed, but the $L$ of Eq. (15.4) must be changed in accordance with Eq. (15.6). When this is done we obtain
$$t_1 + t_2 = \frac{(2L/c)\sqrt{1 - u^2/c^2}}{1 - u^2/c^2} = \frac{2L/c}{\sqrt{1 - u^2/c^2}}. \tag{15.7}$$
Comparing this result with Eq. (15.5), we see that $t_1 + t_2 = 2t_3$. So if the apparatus shrinks in the manner just described, we have a way of understanding why the Michelson-Morley experiment gives no effect at all. Although the contraction hypothesis successfully accounted for the negative result of the experiment, it was open to the objection that it was invented for the express purpose of explaining away the difficulty, and was too artificial. However, in many other experiments to discover an ether wind, similar difficulties arose, until it appeared that nature was in a "conspiracy" to thwart man by introducing some new phenomenon to undo every phenomenon that he thought would permit a measurement of $u$. It was ultimately recognized, as Poincaré pointed out, that a complete conspiracy is itself a law of nature! Poincaré then proposed that there is such a law of nature, that it is not possible to discover an ether wind by any experiment; that is, there is no way to determine an absolute velocity.

15–4 Transformation of time

In checking out whether the contraction idea is in harmony with the facts in other experiments, it turns out that everything is correct provided that the times are also modified, in the manner expressed in the fourth equation of the set (15.3).
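Before seeing how the times must change, we can confirm numerically that the contraction hypothesis (15.6) by itself already makes the two interferometer round-trip times equal. A small sketch in units with $c = 1$ (our choice; the exaggerated $u = 0.3c$ just makes the effect visible):

```python
import math

def arm_times(L, u, c, with_contraction):
    """Round-trip times for the two interferometer arms.

    with_contraction applies Eq. (15.6) to the arm lying along the motion."""
    beta2 = (u / c)**2
    L_par = L * math.sqrt(1 - beta2) if with_contraction else L
    t_longitudinal = (2 * L_par / c) / (1 - beta2)        # Eq. (15.4) / (15.7)
    t_transverse = (2 * L / c) / math.sqrt(1 - beta2)     # Eq. (15.5)
    return t_longitudinal, t_transverse

t_par, t_perp = arm_times(L=1.0, u=0.3, c=1.0, with_contraction=False)
print(t_par > t_perp)                  # True: a fringe shift is predicted

t_par, t_perp = arm_times(L=1.0, u=0.3, c=1.0, with_contraction=True)
print(math.isclose(t_par, t_perp))     # True: the null result is explained
```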
That is because the time $2t_3$, calculated for the trip from $B$ to $C$ and back, is not the same when calculated by a man performing the experiment in a moving space ship as when calculated by a stationary observer who is watching the space ship. To the man in the ship the time is simply $2L/c$, but to the other observer it is $(2L/c)/\sqrt{1 - u^2/c^2}$ (Eq. 15.5). In other words, when the outsider sees the man in the space ship lighting a cigar, all the actions appear to be slower than normal, while to the man inside, everything moves at a normal rate. So not only must the lengths shorten, but also the time-measuring instruments ("clocks") must apparently slow down. That is, when the clock in the space ship records $1$ second elapsed, as seen by the man in the ship, it shows $1/\sqrt{1 - u^2/c^2}$ second to the man outside. This slowing of the clocks in a moving system is a very peculiar phenomenon, and is worth an explanation. In order to understand this, we have to watch the machinery of the clock and see what happens when it is moving. Since that is rather difficult, we shall take a very simple kind of clock. The one we choose is rather a silly kind of clock, but it will work in principle: it is a rod (meter stick) with a mirror at each end, and when we start a light signal between the mirrors, the light keeps going up and down, making a click every time it comes down, like a standard ticking clock. We build two such clocks, with exactly the same lengths, and synchronize them by starting them together; then they agree always thereafter, because they are the same in length, and light always travels with speed $c$. We give one of these clocks to the man to take along in his space ship, and he mounts the rod perpendicular to the direction of motion of the ship; then the length of the rod will not change. How do we know that perpendicular lengths do not change? The men can agree to make marks on each other's $y$-meter stick as they pass each other.
By symmetry, the two marks must come at the same $y$- and $y'$-coordinates, since otherwise, when they get together to compare results, one mark will be above or below the other, and so we could tell who was really moving.

Fig. 15–3. (a) A "light clock" at rest in the $S'$ system. (b) The same clock, moving through the $S$ system. (c) Illustration of the diagonal path taken by the light beam in a moving "light clock."

Now let us see what happens to the moving clock. Before the man took it aboard, he agreed that it was a nice, standard clock, and when he goes along in the space ship he will not see anything peculiar. If he did, he would know he was moving—if anything at all changed because of the motion, he could tell he was moving. But the principle of relativity says this is impossible in a uniformly moving system, so nothing has changed. On the other hand, when the external observer looks at the clock going by, he sees that the light, in going from mirror to mirror, is "really" taking a zigzag path, since the rod is moving sidewise all the while. We have already analyzed such a zigzag motion in connection with the Michelson-Morley experiment. If in a given time the rod moves forward a distance proportional to $u$ in Fig. 15–3, the distance the light travels in the same time is proportional to $c$, and the vertical distance is therefore proportional to $\sqrt{c^2 - u^2}$. That is, it takes a longer time for light to go from end to end in the moving clock than in the stationary clock. Therefore the apparent time between clicks is longer for the moving clock, in the same proportion as shown in the hypotenuse of the triangle (that is the source of the square root expressions in our equations). From the figure it is also apparent that the greater $u$ is, the more slowly the moving clock appears to run.
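The zigzag geometry converts directly into the slowdown factor: each half-tick of duration $t$ satisfies $(ct)^2 = L^2 + (ut)^2$, exactly as in the transverse arm of the interferometer. A minimal sketch:

```python
import math

def tick_time_at_rest(L, c):
    """Time for one click of the light clock, as its owner sees it."""
    return 2 * L / c

def tick_time_moving(L, u, c):
    """As the outside observer sees it: (c*t)**2 = L**2 + (u*t)**2 per half-tick."""
    return 2 * L / math.sqrt(c**2 - u**2)

L, u, c = 1.0, 0.6, 1.0
slowdown = tick_time_moving(L, u, c) / tick_time_at_rest(L, c)
print(slowdown)   # equals 1/sqrt(1 - u**2/c**2); for u = 0.6c this is 1.25
```

Note that the rod length $L$ cancels out of the ratio: the slowdown depends only on $u/c$, as it must if it is to apply to every clock.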
Not only does this particular kind of clock run more slowly, but if the theory of relativity is correct, any other clock, operating on any principle whatsoever, would also appear to run slower, and in the same proportion—we can say this without further analysis. Why is this so? To answer the above question, suppose we had two other clocks made exactly alike with wheels and gears, or perhaps based on radioactive decay, or something else. Then we adjust these clocks so they both run in precise synchronism with our first clocks. When light goes up and back in the first clocks and announces its arrival with a click, the new models also complete some sort of cycle, which they simultaneously announce by some doubly coincident flash, or bong, or other signal. One of these clocks is taken into the space ship, along with the first kind. Perhaps this clock will not run slower, but will continue to keep the same time as its stationary counterpart, and thus disagree with the other moving clock. Ah no, if that should happen, the man in the ship could use this mismatch between his two clocks to determine the speed of his ship, which we have been supposing is impossible. We need not know anything about the machinery of the new clock that might cause the effect—we simply know that whatever the reason, it will appear to run slow, just like the first one. Now if all moving clocks run slower, if no way of measuring time gives anything but a slower rate, we shall just have to say, in a certain sense, that time itself appears to be slower in a space ship. All the phenomena there—the man’s pulse rate, his thought processes, the time he takes to light a cigar, how long it takes to grow up and get old—all these things must be slowed down in the same proportion, because he cannot tell he is moving. 
The biologists and medical men sometimes say it is not quite certain that the time it takes for a cancer to develop will be longer in a space ship, but from the viewpoint of a modern physicist it is nearly certain; otherwise one could use the rate of cancer development to determine the speed of the ship! A very interesting example of the slowing of time with motion is furnished by muons, which are particles that disintegrate spontaneously after an average lifetime of $2.2\times 10^{-6}$ sec. They come to the earth in cosmic rays, and can also be produced artificially in the laboratory. Some of them disintegrate in midair, but the remainder disintegrate only after they encounter a piece of material and stop. It is clear that in its short lifetime a muon cannot travel, even at the speed of light, much more than $600$ meters. But although the muons are created at the top of the atmosphere, some $10$ kilometers up, yet they are actually found in a laboratory down here, in cosmic rays. How can that be? The answer is that different muons move at various speeds, some of which are very close to the speed of light. While from their own point of view they live only about $2$ μsec, from our point of view they live considerably longer—enough longer that they may reach the earth. The factor by which the time is increased has already been given as $1/\sqrt{1 - u^2/c^2}$. The average life has been measured quite accurately for muons of different velocities, and the values agree closely with the formula. We do not know why the muon disintegrates or what its machinery is, but we do know its behavior satisfies the principle of relativity. That is the utility of the principle of relativity—it permits us to make predictions, even about things that otherwise we do not know much about.
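The arithmetic is worth doing once. At nine-tenths of the speed of light, say, the dilation factor stretches the 2.2-μsec lifetime enough to carry a muon well past the 600-meter limit (we take $c \approx 3\times 10^8$ m/sec):

```python
import math

TAU_REST = 2.2e-6     # muon mean lifetime at rest, sec (from the text)
C = 3.0e8             # speed of light, m/sec

u = 0.9 * C
gamma = 1.0 / math.sqrt(1.0 - (u / C)**2)   # the factor 1/sqrt(1 - u^2/c^2)

tau_lab = gamma * TAU_REST
distance = u * tau_lab
print(f"lifetime seen from the ground: {tau_lab:.2e} sec")
print(f"distance covered in that time: {distance:.0f} m")   # far beyond 600 m
```

Muons still closer to $c$ get a much larger factor, which is how some of them survive the full 10-kilometer descent.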
For example, before we have any idea at all about what makes the muon disintegrate, we can still predict that when it is moving at nine-tenths of the speed of light, the apparent length of time that it lasts is $(2.2\times 10^{-6})/\sqrt{1 - 9^2/10^2}$ sec; and our prediction works—that is the good thing about it.

15–5 The Lorentz contraction

Now let us return to the Lorentz transformation (15.3) and try to get a better understanding of the relationship between the $(x,y,z,t)$ and the $(x',y',z',t')$ coordinate systems, which we shall call the $S$ and $S'$ systems, or Joe and Moe systems, respectively. We have already noted that the first equation is based on the Lorentz suggestion of contraction along the $x$-direction; how can we prove that a contraction takes place? In the Michelson-Morley experiment, we now appreciate that the transverse arm $BC$ cannot change length, by the principle of relativity; yet the null result of the experiment demands that the times must be equal. So, in order for the experiment to give a null result, the longitudinal arm $BE$ must appear shorter, by the square root $\sqrt{1 - u^2/c^2}$. What does this contraction mean, in terms of measurements made by Joe and Moe? Suppose that Moe, moving with the $S'$ system in the $x$-direction, is measuring the $x'$-coordinate of some point with a meter stick. He lays the stick down $x'$ times, so he thinks the distance is $x'$ meters. From the viewpoint of Joe in the $S$ system, however, Moe is using a foreshortened ruler, so the "real" distance measured is $x'\sqrt{1 - u^2/c^2}$ meters. Then if the $S'$ system has travelled a distance $ut$ away from the $S$ system, the $S$ observer would say that the same point, measured in his coordinates, is at a distance $x = x'\sqrt{1 - u^2/c^2} + ut$, or
$$x' = \frac{x - ut}{\sqrt{1 - u^2/c^2}},$$
which is the first equation of the Lorentz transformation.
15–6 Simultaneity

In an analogous way, because of the difference in time scales, the denominator expression is introduced into the fourth equation of the Lorentz transformation. The most interesting term in that equation is the $ux/c^2$ in the numerator, because that is quite new and unexpected. Now what does that mean? If we look at the situation carefully we see that events that occur at two separated places at the same time, as seen by Moe in $S'$, do not happen at the same time as viewed by Joe in $S$. If one event occurs at point $x_1$ at time $t_0$ and the other event at $x_2$ and $t_0$ (the same time), we find that the two corresponding times $t_1'$ and $t_2'$ differ by an amount
$$t_2' - t_1' = \frac{u(x_1 - x_2)/c^2}{\sqrt{1 - u^2/c^2}}.$$
This circumstance is called "failure of simultaneity at a distance," and to make the idea a little clearer let us consider the following experiment. Suppose that a man moving in a space ship (system $S'$) has placed a clock at each end of the ship and is interested in making sure that the two clocks are in synchronism. How can the clocks be synchronized? There are many ways. One way, involving very little calculation, would be first to locate exactly the midpoint between the clocks. Then from this station we send out a light signal which will go both ways at the same speed and will arrive at both clocks, clearly, at the same time. This simultaneous arrival of the signals can be used to synchronize the clocks. Let us then suppose that the man in $S'$ synchronizes his clocks by this particular method. Let us see whether an observer in system $S$ would agree that the two clocks are synchronous. The man in $S'$ has a right to believe they are, because he does not know that he is moving.
But the man in $S$ reasons that since the ship is moving forward, the clock in the front end was running away from the light signal, hence the light had to go more than halfway in order to catch up; the rear clock, however, was advancing to meet the light signal, so this distance was shorter. Therefore the signal reached the rear clock first, although the man in $S'$ thought that the signals arrived simultaneously. We thus see that when a man in a space ship thinks the times at two locations are simultaneous, equal values of $t'$ in his coordinate system must correspond to different values of $t$ in the other coordinate system!

15–7 Four-vectors

Let us see what else we can discover in the Lorentz transformation. It is interesting to note that the transformation between the $x$'s and $t$'s is analogous in form to the transformation of the $x$'s and $y$'s that we studied in Chapter 11 for a rotation of coordinates. We then had
$$x' = x\cos\theta + y\sin\theta,\qquad y' = y\cos\theta - x\sin\theta, \tag{15.8}$$
in which the new $x'$ mixes the old $x$ and $y$, and the new $y'$ also mixes the old $x$ and $y$; similarly, in the Lorentz transformation we find a new $x'$ which is a mixture of $x$ and $t$, and a new $t'$ which is a mixture of $t$ and $x$. So the Lorentz transformation is analogous to a rotation, only it is a "rotation" in space and time, which appears to be a strange concept. A check of the analogy to rotation can be made by calculating the quantity
$$x'^2 + y'^2 + z'^2 - c^2t'^2 = x^2 + y^2 + z^2 - c^2t^2. \tag{15.9}$$
In this equation the first three terms on each side represent, in three-dimensional geometry, the square of the distance between a point and the origin (surface of a sphere) which remains unchanged (invariant) regardless of rotation of the coordinate axes. Similarly, Eq. (15.9) shows that there is a certain combination which includes time, that is invariant to a Lorentz transformation.
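The invariance asserted in Eq. (15.9) is easy to check numerically for any particular event and any particular speed. A small sketch in units with $c = 1$ (our choice):

```python
import math

def lorentz4(x, y, z, t, u, c=1.0):
    """Lorentz transformation (15.3) for motion along x."""
    g = 1.0 / math.sqrt(1.0 - (u / c)**2)
    return g * (x - u * t), y, z, g * (t - u * x / c**2)

def interval(x, y, z, t, c=1.0):
    """The invariant combination of Eq. (15.9)."""
    return x**2 + y**2 + z**2 - c**2 * t**2

x, y, z, t, u = 1.0, 2.0, 3.0, 4.0, 0.5
xp, yp, zp, tp = lorentz4(x, y, z, t, u)
print(interval(x, y, z, t), interval(xp, yp, zp, tp))   # the two values agree
```

Just as $x^2 + y^2$ survives the rotation (15.8), this combination survives the "rotation" in space and time.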
Thus, the analogy to a rotation is complete, and is of such a kind that vectors, i.e., quantities involving "components" which transform the same way as the coordinates and time, are also useful in connection with relativity. Thus we contemplate an extension of the idea of vectors, which we have so far considered to have only space components, to include a time component. That is, we expect that there will be vectors with four components, three of which are like the components of an ordinary vector, and with these will be associated a fourth component, which is the analog of the time part. This concept will be analyzed further in the next chapters, where we shall find that if the ideas of the preceding paragraph are applied to momentum, the transformation gives three space parts that are like ordinary momentum components, and a fourth component, the time part, which is the energy.

15–8 Relativistic dynamics

We are now ready to investigate, more generally, what form the laws of mechanics take under the Lorentz transformation. [We have thus far explained how length and time change, but not how we get the modified formula for $m$ (Eq. 15.1). We shall do this in the next chapter.] To see the consequences of Einstein's modification of $m$ for Newtonian mechanics, we start with the Newtonian law that force is the rate of change of momentum, or $F = d(mv)/dt$. Momentum is still given by $mv$, but when we use the new $m$ this becomes
$$p = mv = \frac{m_0 v}{\sqrt{1 - v^2/c^2}}. \tag{15.10}$$
This is Einstein's modification of Newton's laws. Under this modification, if action and reaction are still equal (which they may not be in detail, but are in the long run), there will be conservation of momentum in the same way as before, but the quantity that is being conserved is not the old $mv$ with its constant mass, but instead is the quantity shown in (15.10), which has the modified mass. When this change is made in the formula for momentum, conservation of momentum still works. Now let us see how momentum varies with speed.
In Newtonian mechanics it is proportional to the speed and, according to (15.10), over a considerable range of speed, but small compared with $c$, it is nearly the same in relativistic mechanics, because the square-root expression differs only slightly from $1$. But when $v$ is almost equal to $c$, the square-root expression approaches zero, and the momentum therefore goes toward infinity. What happens if a constant force acts on a body for a long time? In Newtonian mechanics the body keeps picking up speed until it goes faster than light. But this is impossible in relativistic mechanics. In relativity, the body keeps picking up, not speed, but momentum, which can continually increase because the mass is increasing. After a while there is practically no acceleration in the sense of a change of velocity, but the momentum continues to increase. Of course, whenever a force produces very little change in the velocity of a body, we say that the body has a great deal of inertia, and that is exactly what our formula for relativistic mass says (see Eq. 15.10)—it says that the inertia is very great when $v$ is nearly as great as $c$. As an example of this effect, to deflect the high-speed electrons in the synchrotron that is used here at Caltech, we need a magnetic field that is $2000$ times stronger than would be expected on the basis of Newton's laws. In other words, the mass of the electrons in the synchrotron is $2000$ times as great as their normal mass, and is as great as that of a proton! That $m$ should be $2000$ times $m_0$ means that $1 - v^2/c^2$ must be $1/4{,}000{,}000$, and that means that $v$ differs from $c$ by one part in $8{,}000{,}000$, so the electrons are getting pretty close to the speed of light. If the electrons and light were both to start from the synchrotron (estimated as $700$ feet away) and rush out to Bridge Lab, which would arrive first? The light, of course, because light always travels faster.¹ How much earlier?
That is too hard to tell—instead, we tell by what distance the light is ahead: it is about $1/1000$ of an inch, or $1/4$ the thickness of a piece of paper! When the electrons are going that fast their masses are enormous, but their speed cannot exceed the speed of light. Now let us look at some further consequences of relativistic change of mass. Consider the motion of the molecules in a small tank of gas. When the gas is heated, the speed of the molecules is increased, and therefore the mass is also increased and the gas is heavier. An approximate formula to express the increase of mass, for the case when the velocity is small, can be found by expanding $m_0/\sqrt{1 - v^2/c^2} = m_0(1 - v^2/c^2)^{-1/2}$ in a power series, using the binomial theorem. We get
$$m_0(1 - v^2/c^2)^{-1/2} = m_0\left(1 + \tfrac{1}{2}v^2/c^2 + \tfrac{3}{8}v^4/c^4 + \cdots\right).$$
We see clearly from the formula that the series converges rapidly when $v$ is small, and the terms after the first two or three are negligible. So we can write
$$m \cong m_0 + \tfrac{1}{2}m_0v^2\left(\frac{1}{c^2}\right), \tag{15.11}$$
in which the second term on the right expresses the increase of mass due to molecular velocity. When the temperature increases the $v^2$ increases proportionately, so we can say that the increase in mass is proportional to the increase in temperature. But since $\tfrac{1}{2}m_0v^2$ is the kinetic energy in the old-fashioned Newtonian sense, we can also say that the increase in mass of all this body of gas is equal to the increase in kinetic energy divided by $c^2$, or $\Delta m = \Delta(\text{K.E.})/c^2$.

15–9 Equivalence of mass and energy

The above observation led Einstein to the suggestion that the mass of a body can be expressed more simply than by the formula (15.1), if we say that the mass is equal to the total energy content divided by $c^2$. If Eq. (15.11) is multiplied by $c^2$ the result is
$$mc^2 = m_0c^2 + \tfrac{1}{2}m_0v^2 + \cdots \tag{15.12}$$
Here, the term on the left expresses the total energy of a body, and we recognize the last term as the ordinary kinetic energy.
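How good is the two-term approximation (15.11) at molecular speeds? A quick numerical check, with an illustrative molecular speed of our choosing:

```python
import math

def m_exact(m0, v, c):
    return m0 / math.sqrt(1.0 - (v / c)**2)     # Eq. (15.1)

def m_approx(m0, v, c):
    return m0 + 0.5 * m0 * v**2 / c**2          # Eq. (15.11)

m0 = 1.0        # rest mass, arbitrary units
c = 3.0e8       # m/sec
v = 500.0       # a fast gas molecule, m/sec -- an illustrative value

print(m_exact(m0, v, c) - m0)    # the true mass increase
print(m_approx(m0, v, c) - m0)   # (1/2) m0 v^2 / c^2 -- essentially identical
```

The increase is only about one part in $10^{12}$, and the neglected $\tfrac{3}{8}v^4/c^4$ term is smaller still by another factor of $v^2/c^2$, which is why the series converges so rapidly.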
Einstein interpreted the large constant term, m_0c^2, to be part of the total energy of the body, an intrinsic energy known as the “rest energy.” Let us follow out the consequences of assuming, with Einstein, that the energy of a body always equals mc^2. As an interesting result, we shall find the formula (15.1) for the variation of mass with speed, which we have merely assumed up to now. We start with the body at rest, when its energy is m_0c^2. Then we apply a force to the body, which starts it moving and gives it kinetic energy; therefore, since the energy has increased, the mass has increased—this is implicit in the original assumption. So long as the force continues, the energy and the mass both continue to increase. We have already seen (Chapter 13) that the rate of change of energy with time equals the force times the velocity, or

dE/dt = F·v.

We also have (Chapter 9, Eq. 9.1) that F = d(mv)/dt. When these relations are put together with the definition of E, Eq. (15.13) becomes

d(mc^2)/dt = v·d(mv)/dt.

We wish to solve this equation for m. To do this we first use the mathematical trick of multiplying both sides by 2m, which changes the equation to

c^2(2m)(dm/dt) = 2mv·d(mv)/dt.

We need to get rid of the derivatives, which can be accomplished by integrating both sides. The quantity (2m)dm/dt can be recognized as the time derivative of m^2, and (2mv)·d(mv)/dt is the time derivative of (mv)^2. So, Eq. (15.15) is the same as

c^2 d(m^2)/dt = d(m^2v^2)/dt.

If the derivatives of two quantities are equal, the quantities themselves differ at most by a constant, say C. This permits us to write

m^2c^2 = m^2v^2 + C.

We need to define the constant C more explicitly. Since Eq. (15.17) must be true for all velocities, we can choose a special case where v = 0, and say that in this case the mass is m_0. Substituting these values into Eq. (15.17) gives

m_0^2c^2 = 0 + C.

We can now use this value of C in Eq.
(15.17), which becomes

m^2c^2 = m^2v^2 + m_0^2c^2.

Dividing by c^2 and rearranging terms gives

m^2(1 − v^2/c^2) = m_0^2,

from which we get

m = m_0/√(1 − v^2/c^2).

This is the formula (15.1), and is exactly what is necessary for the agreement between mass and energy in Eq. (15.12). Ordinarily these energy changes represent extremely slight changes in mass, because most of the time we cannot generate much energy from a given amount of material; but in an atomic bomb of explosive energy equivalent to 20 kilotons of TNT, for example, it can be shown that the dirt after the explosion is lighter by 1 gram than the initial mass of the reacting material, because of the energy that was released, i.e., the released energy had a mass of 1 gram, according to the relationship ΔE = Δ(mc^2). This theory of equivalence of mass and energy has been beautifully verified by experiments in which matter is annihilated—converted totally to energy: An electron and a positron come together at rest, each with a rest mass m_0. When they come together they disintegrate and two gamma rays emerge, each with the measured energy of m_0c^2. This experiment furnishes a direct determination of the energy associated with the existence of the rest mass of a particle.

1. The electrons would actually win the race versus visible light because of the index of refraction of air. A gamma ray would make out better.
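As a numeric cross-check of the annihilation result above, the rest energy m_0c^2 of an electron can be computed directly; the constants below are standard reference values (not taken from this chapter), and the result lands on the familiar 0.511 MeV measured for each gamma ray:

```python
# Rest energy E = m0 * c^2 for an electron (standard reference constants)
m0 = 9.109e-31   # electron rest mass, kg
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # joules per electron volt

E_joules = m0 * c**2
E_MeV = E_joules / eV / 1e6
print(f"{E_MeV:.3f} MeV")
```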
12292
https://www.isbe.net/CTEDocuments/BMCE-L780027.pdf
Branching Unit: Coding Problem Area: Programming Basics Lesson: Branching

Student Learning Objectives. Instruction in this lesson should result in students achieving the following objectives:
1. Determine the nature of a condition.
2. Explain Boolean logic.
3. Use branching statements.

Resources. The following resources may be useful in teaching this lesson:
E-unit(s) corresponding to this lesson plan. CAERT, Inc.
Introduction to Creating Flowcharts, YouTube. Accessed October 29, 2019.
Microsoft product screenshot(s) reprinted with permission from Microsoft Systems Incorporated.

Lesson: Branching Page 1 www.MyCAERT.com Copyright © by CAERT, Inc. | Reproduction by subscription only. | L780027

Equipment, Tools, Supplies, and Facilities
Overhead or PowerPoint projector
Visual(s) from accompanying master(s)
Copies of sample test, lab sheet(s), and/or other items designed for duplication
Materials listed on duplicated items
Computers with printers and Internet access
Classroom resource and reference materials
PC with Internet Browser Software
Internet access

Key Terms. The following terms are introduced in this lesson (shown in bold italics): Boolean expression, Comparison operators, Condition, Decision structure, Dual alternative decision structure, Flowcharts, Logical operators, Multi-alternative decision structure, Nested decision structure, Pseudocode, Single alternative decision structure

Interest Approach. Use an interest approach that will prepare the students for the lesson. Teachers often develop approaches for their unique class and student situations. A possible approach is included here. Computer programs contain many statements that need to be executed some of the time. For example, a program may contain code to display “Good morning” when the time is between 8 A.M. and 11:59 A.M. and “Good evening” when the time is between 12:00 P.M. and 11:59 P.M.
This logic is accomplished in a computer program using branching (or decision) statements, where each branch executes a different set of code. This lesson examines the process of coding statements that evaluate decision statements and implement branching logic in computer programs. CONTENT SUMMARY AND TEACHING STRATEGIES Objective 1: Determine the nature of a condition. Anticipated Problem: What is a condition? I. In a computer program, not all statements are executed each time the program runs. Certain statements only execute when a specific condition is met. This section discusses constructing conditions and their effect on program logic. A. USING FLOWCHARTS. Developing code is a complex venture. It requires knowledge of the programming language used to create the program, and an understanding of the logic to be implemented in code. Therefore, early in the days of programming, a system of depicting logic using symbols, called flowcharting, gained prevalence. Flowcharts depict logic using symbols and shapes and are language independent. Since they are graphical, it is easier to understand program logic from a flowchart than it is from code. Flowcharts, in a sense, are a language all by themselves, with different shapes used to represent different operations. Flowcharts are developed before writing code. Once the logic for a problem has been finalized, the flowchart is translated into a programming language. See VM–A for some of the commonly used flowcharting symbols. For example, a rectangle is used to depict variable declarations and computations, and a parallelogram is used to represent input and output operations. In some schools of flowcharting, declaration statements are shown with lines inside. When a program needs to display a message to the user, the text inside the parallelogram can say “Print” or “Display”.
The action is better represented by “Display”, but “Print” is used even though it sounds like an action to print on a printer. This is because languages, such as Python, use a command called “print” to display messages to the user. This lesson uses “Display” as the action instead of “Print”. B. CONDITIONS AND DECISION STRUCTURES. A Condition is an expression in a program that evaluates to a True or a False. Different sets of instructions execute depending on whether a condition returns a value of True or False. See VM–B for examples of conditions that a program may need to handle. Conditional logic is implemented in a programming language using a Decision Structure. Decision structures are also referred to as Selection structures or Branching structures. Decision structures are depicted in a flowchart using a diamond shape. A decision structure starts with a condition. The condition/question is placed inside the diamond shape. A condition returns a True or a False, and the flowchart shows instructions that execute when the condition returns a True or a False. See VM–C for an example of a decision structure that executes code when the condition returns a True. C. TYPES OF DECISION STRUCTURES. There are three types of Decision structures, and these are discussed here. Single alternative and dual alternative decision structures make use of conditions that return true and false. A multi-alternative decision structure contains multiple paths depending upon the value in a variable. 1. Single alternative decision structure: Consider a decision structure that needs to perform additional steps when a condition returns a true. For example, when an employee sells more than $100,000 worth of goods, a bonus is awarded. This is depicted in VM–D.
Since there are statements that need to execute only when the condition is true, this type of decision structure is called a Single alternative decision structure and has only a single branch. A Single alternative decision structure cannot be written so that code executes when the condition returns False. 2. Dual alternative decision structure: A Dual alternative decision structure contains statements that need to be executed when a condition returns a True and when a condition returns a False. It contains 2 branches - one that executes on a True and one that executes on a False. Suppose that code is used to set the price for a product. Its regular price is $4.99 and when it is on sale, its price is $2.99. This logic can be implemented using a Dual alternative decision structure as shown in VM–E. Different sets of actions are performed when the condition returns True and when it returns False. The problem shown in VM–E can also be coded as shown in VM–F. The question asked in VM–F is the opposite of that asked in VM–E. Both sets of code are correct. Note that in both examples, the assumption is that the user enters a “Y” or “N”. If neither of these is entered, the program will not work as expected. Code should be written to validate user input. 3. Multi-alternative decision structure: A Multi-alternative decision structure is one that contains multiple logical paths based upon the value of a variable. Consider the example of code that displays messages based upon the letter grade achieved by students. If simple Decision structures are used, the code will need to contain 5 decision structures to display the message associated with each of the letter grades. Some languages offer a Multi-alternative decision structure where a question yields more than a True/False answer. See VM–G. The decision shape, the diamond, asks a question about the value of the letter_grade variable. Right after the decision, there is a 5-way branch, with each branch corresponding to a letter grade.
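The three structures just described can be sketched in Python (the language this lesson mentions for the print command). The thresholds come from the examples above; the variable values and displayed messages are illustrative only, not taken from the VMs:

```python
# Single alternative: the extra statement runs only when the condition is True
sales = 120000
if sales > 100000:
    print("Bonus awarded")  # illustrative message for the bonus example

# Dual alternative: one branch for True, one for False (the sale-price example)
on_sale = "Y"  # the lesson assumes the user enters "Y" or "N"
if on_sale == "Y":
    price = 2.99
else:
    price = 4.99

# Multi-alternative: a 5-way branch on letter_grade, as in the VM-G example
letter_grade = "B"
if letter_grade == "A":
    message = "Excellent"
elif letter_grade == "B":
    message = "Good"
elif letter_grade == "C":
    message = "Average"
elif letter_grade == "D":
    message = "Below average"
else:
    message = "Failing"

print(price, message)
```

Note that Python has no built-in single-branch-only syntax; a single alternative structure is simply an if with no else.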
Multi-alternative decision structures allow for logic to be coded in a very succinct fashion. However, this type of decision structure cannot execute directly at the machine level, where only single alternative and dual alternative decisions are implementable. A Multi-alternative decision structure is eventually converted into multiple simple Decision structures before execution. 4. Nested Decision Structures: A Nested Decision structure is a structure where there is a decision structure underneath the “Yes” portion or the “No” portion of another decision structure. See VM–H for an example. The problem states that the price per unit depends upon the number of units purchased. When the number of units is <= 100, the price is $3.00. When the number of units is more than 100, another decision structure is created to check if it is <= 300. Since there is a decision structure inside the “Else” portion of the original decision, the larger decision structure is called a Nested Decision structure. Note that the “inner” Decision structure is completely nested inside the “outer” Decision structure. Teaching Strategy: Many techniques can be used to help students master this objective. Use VM–A through VM–H to understand conditions and Decision structures. Objective 2: Explain Boolean logic. Anticipated Problem: What is Boolean logic? II. Boolean logic is the study of Boolean expressions. A simple Boolean expression consists of 2 operands and a relational operator. Simple Boolean expressions may be combined to create complex Boolean expressions, and their result is explained by Boolean algebra. A. SIMPLE BOOLEAN EXPRESSIONS. When 2 values are compared, the result of the comparison operation is either a yes or a no. For example, when a person’s age is checked to see if it is greater than or equal to 21, the answer is either a yes or a no.
In Boolean terms, the answer is either a True or a False. A simple Boolean expression compares 2 operands of the same datatype, and the comparison is performed using a Relational operator. The 2 operands can be literals or variables. See VM–I for the list of relational operators used in computer programming languages. Some of the relational operators, such as the greater-than-or-equal-to operator, are composed of 2 symbols from the keyboard. This is because in the early days of programming, the keyboard used to key in programs was the same one used by typewriters, and it did not have the ≥, ≤, or ≠ symbols on it. Therefore, a combination of symbols found on the keyboard was used to represent certain operations, such as the >= operator. No space is left between the 2 symbols, and the order of the 2 symbols is significant. For example, the >= operator cannot be replaced by the =< operator. Some languages do not use the = symbol to check equality of 2 operands, because = is used as the assignment operator and assigns a value to a variable. Instead, the == operator is used to check the equality of 2 operands. B. COMPLEX BOOLEAN EXPRESSIONS. Many real-world computer-based operations require that a value be compared with more than one other value. For example, there may be a requirement to ensure that a test score falls within a range. In such a case, the test score must be compared with the lower boundary value and the upper boundary value. Computers, however, can only compare two values at a time. When a value, such as a test score, must be checked to see if it is within a range, multiple Boolean expressions must be coded and tested. Logical operators are used to combine simple Boolean expressions to create a complex Boolean expression. There are 3 logical operators used in programming, and these are AND, OR, and NOT. 1.
The AND Logical Operator: When Boolean expressions are combined using AND, each individual Boolean expression must evaluate to True for the combined expression to return a True. The AND logical operator is frequently used to ensure that a value lies within a range. See VM–J. The user is requested to enter a test score in the range 0-100. A condition is built to check for data validity. 2 Boolean expressions are coded. These are test_score >= 0 and test_score <= 100. Both Boolean expressions must be true, and they are connected using the AND logical operator. Note that the Boolean expression cannot be coded as test_score >= 0 AND <= 100. When the AND logical operator is used, it must be placed between 2 complete Boolean expressions, even if it means repeating the name of the variable in each of the 2 Boolean expressions. 2. The OR Logical Operator: When Boolean expressions are combined using OR, at least one of the 2 individual Boolean expressions must evaluate to True for the combined expression to return a True. The OR logical operator is shown in VM–K. It uses the OR logical operator to check if user input is invalid. Once again, the user is requested to enter a test score in the range 0-100. A condition is built to check for invalid data. The 2 Boolean expressions, test_score < 0 and test_score > 100, are connected using the logical operator OR. Only 1 of these 2 expressions needs to return True for the condition to be True and indicate that the user-entered value is invalid. 3. The NOT Logical Operator: The NOT logical operator is used to invert a condition. Consider a program that asks the user to enter a letter grade (“A”, “B”, “C”, “D”, or “F”), and displays a message when it is invalid. While it can be written in several different ways, VM–L demonstrates this logic using the NOT operator. The condition for good data is built and then placed in parentheses, with the NOT placed before the ( symbol.
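The three logical operators just discussed translate directly into Python; the values of test_score and letter_grade below are illustrative. Note that each side of and/or is a complete Boolean expression, exactly as the lesson warns:

```python
test_score = 85  # illustrative value

# AND: the score is valid only when BOTH expressions are True (range check)
is_valid = test_score >= 0 and test_score <= 100

# OR: the score is invalid when EITHER expression is True
is_invalid = test_score < 0 or test_score > 100

# NOT: invert the "good" condition to obtain the "bad" condition
letter_grade = "B"
bad_grade = not (letter_grade in ("A", "B", "C", "D", "F"))

print(is_valid, is_invalid, bad_grade)  # True False False
```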
The “good” condition is inverted using the NOT operator to obtain the condition for “bad” or invalid data. Teaching Strategy: Many techniques can be used to help students master this objective. Use VM–I through VM–L to understand Boolean algebra and logical operators. Objective 3: Use branching statements. Anticipated Problem: What are branching statements and how are they used in code? III. Decision structures allow programs to pose questions and perform different sets of code when the answer is a True and when the answer is a False. This logic was demonstrated with flowcharts in previous sections, and this section converts the logic into pseudocode. A. PSEUDOCODE: Pseudocode is an English-like language that is used to depict logic. It is much closer to computer programming language code than a flowchart. Pseudocode was used in previous lessons to declare variables and perform actions as shown in VM–M. Pseudocode cannot directly execute on any computer. It appears to be in a “computer language-like” format but is not computer code. Pseudocode is written in statements that, in many cases, can be converted, line by line, into program code. While drawing flowcharts is a more free-flowing exercise in creating logic structures, pseudocode creation is more structured and serves as a medium to start the process of converting logic structures from graphical representations to stringent program code. B. DECISIONS IN PSEUDOCODE. Decisions are coded in pseudocode using the “If” and the “Then” keywords. They may include the “Else” keyword as well. 1. Single-alternative decision structure: In the previous section, single-alternative decision structures were discussed. See VM–N for pseudocode for the single-alternative decision structure that was discussed in the previous section. A Decision structure is created to award a bonus when sales are more than $100,000.
In the pseudocode, the word “If” is followed by the question asked in the condition. When it returns a True, the statements that need to execute are indented beneath it. The “End If” marks the end of the decision structure. The “End If” must be placed at the same level of indent as the “If” statement. 2. Dual-alternative decision structure: See VM–O. It shows the logic to price an item. Its regular price is $4.99 and when it is on sale, its price is $2.99. This logic is implemented using a Dual alternative decision structure. In the pseudocode, the decision structure, once again, starts with the word “If”. After the statements that need to execute when the condition returns True, the “Else” keyword is coded. Statements that execute when the condition returns False are placed between “Else” and “End If”. Note that no condition or question is placed after the “Else” keyword. Control transfers to the “Else” portion of the code when the condition in the “If” statement returns a False. The “If”, “Else”, and “End If” are placed at the same level of indent. The problem shown in VM–O can also be coded as shown in VM–P. The question asked in VM–P is the opposite of that asked in VM–O. Both sets of code are correct. 3. Multi-alternative decision structure: A Multi-alternative decision structure is created to provide alternative paths based upon the value in a single variable. See VM–Q for an example of a multi-alternative decision structure. The value of the variable for each branch is placed after the word “Case”. The last branch uses “Other”, which is analogous to the “Else” statement. Control passes here when none of the other branches’ “Case” values applies to the variable. In most languages, the “Case” statements look for exact matches, and cannot be used to check for ranges. 4. Nested Decision Structures: A Nested Decision structure is a structure where there is a decision structure underneath one of the 2 branches of another decision structure.
Each Decision structure is coded with “If”, “Else”, and “End If”. Nesting of Decision structures is complete; the inner Decision structure is completely nested inside the outer Decision structure. See VM–R for an example of a nested decision structure. In this example, the price of a product is based upon the number of units purchased. The first decision determines if the number of units purchased is less than or equal to 100. If this condition is true, the price is set to 3.99. If this condition is false, it means that the number of units purchased is larger than 100. The next decision checks to see if the number of units purchased is less than or equal to 300, and if the condition is true, the price is set to 2.99. If the condition is false, it means that the number of units is larger than 300 and the price is set to 1.99. There are multiple ways to accomplish this pricing logic. It can also be performed using the logic shown in VM–S. Here, 3 separate stand-alone decisions are developed using logical operators. Both sets of logic are valid, and the one chosen is based upon programmer preference. Teaching Strategy: Many techniques can be used to help students master this objective. Use VM–M through VM–S to understand how branching is implemented in pseudocode. Review/Summary. Use the student learning objectives to summarize the lesson. Have students explain the content associated with each objective. Student responses can be used in determining which objectives need to be reviewed or taught from a different angle. Questions at the ends of chapters in the textbook may also be used in the Review/Summary. Application. Use the included visual master(s) and lab sheet(s) to apply the information presented in the lesson. Evaluation. Evaluation should focus on student achievement of the objectives for the lesson.
Various techniques can be used, such as student performance on the application activities. A sample written test is provided.

Answers to Sample Test:
Part One: Completion
1. Flowcharts
2. Rectangle
3. Condition
4. Diamond
5. Single alternative decision
6. If
Part Two: True/False
1. T
2. F
3. F
4. F
5. T
6. T
Part Three: Short Answer
1. Flowcharts depict logic using symbols and shapes. They are easier to understand than program code.
2. Pseudocode is written in simple text and can be easily converted into program code.
3. A nested decision structure is a structure where there is a decision structure underneath one of the 2 branches of another decision structure.

Sample Test
Name ______
Branching
Part One: Completion
Instructions: Provide the word or words to complete the following statements.
1. ____ represent program logic using graphics and symbols.
2. A ____ shape represents a variable declaration.
3. In a computer program, a ____ returns a True or False value.
4. Branching is depicted with a ____ shape.
5. A Decision structure that only contains a single branch is called a ____ structure.
6. In pseudocode, decisions are coded using the word ____.
Part Two: True/False
Instructions: Write T for True or F for False.
____ 1. Flowcharts use symbols to depict logic.
____ 2. Pseudocode uses symbols to depict logic.
____ 3. A Boolean Expression returns 3 values — True, False, Neutral.
____ 4. Two Boolean expressions may be combined using a Relational operator.
____ 5. Decision structures begin with the “If” word in Pseudocode.
____ 6. In Pseudocode, dual-alternative decision structures make use of the “Else” word.
Part Three: Short Answer
Instructions: Answer the following.
1. Why are Flowcharts used?
2. Why is Pseudocode used?
3. What is a nested decision structure?

VM–A FLOWCHART SYMBOLS
VM–B CONDITIONS IN A COMPUTER PROGRAM
VM–C DECISIONS IN FLOWCHARTS
VM–D SINGLE ALTERNATIVE DECISION STRUCTURE
VM–E DUAL ALTERNATIVE DECISION STRUCTURE — I
VM–F DUAL ALTERNATIVE DECISION STRUCTURE — II
VM–G MULTI-ALTERNATIVE DECISION STRUCTURE
VM–H NESTED DECISION STRUCTURE FLOWCHART
VM–I RELATIONAL OPERATORS

Relational Operator | Description | Operator Used in Math
> | Greater than | >
< | Less than | <
>= | Greater than or equal to | ≥
<= | Less than or equal to | ≤
== | Equals | =
!= | Not equal to | ≠

VM–J USING THE AND LOGICAL OPERATOR
VM–K USING THE OR LOGICAL OPERATOR
VM–L USING THE NOT LOGICAL OPERATOR (These are valid values.)
VM–M FLOWCHART SYMBOLS & PSEUDOCODE
VM–N SINGLE ALTERNATIVE DECISION STRUCTURE PSEUDOCODE
VM–O DUAL ALTERNATIVE DECISION STRUCTURE PSEUDOCODE — I (No condition or question is placed after “Else”)
VM–P DUAL ALTERNATIVE DECISION STRUCTURE PSEUDOCODE — II
VM–Q MULTI-ALTERNATIVE DECISION STRUCTURE PSEUDOCODE
VM–R NESTED DECISION STRUCTURE PSEUDOCODE
VM–S NESTED DECISION STRUCTURE PSEUDOCODE

LS–A
Name ______
Create Flowcharts to Depict Decisions
Purpose: Understand decision structures and flowcharts
Objective: Create decision structures in a flowchart.
Materials: Device with Internet; Lab sheet; Pen or pencil
Procedure
1. Read the following instructions and create a flowchart by hand or by using Visio, or going to
2. Consider the table that shows letter grades for various points earned in a test.

Percent | Letter Grade
97-100 | A+
93-96 | A
90-92 | A-
87-89 | B+
83-86 | B
80-82 | B-
70-79 | C
60-69 | D
Below 60 | F

3. Draw a flowchart that asks the user for a percent. The logic should display the letter grade depending upon the percent entered by the user.

LS–A: Teacher Information Sheet
Create Flowcharts to Depict Decisions
There are multiple possible solutions. Two alternatives are shown below:
Alternative 1
Alternative 2

LS–B
Name ______
Code Decision Structure in Pseudocode
Purpose: Understand the process of creating pseudocode
Objective: Create pseudocode for a decision.
Materials: Device with Internet; Lab sheet; Pen or pencil
Procedure
1. Consider the following pricing table that bases the unit price of a product upon the number of units purchased.

Number of Units Purchased | Price per Unit
1-99 | 5.99
100-199 | 4.99
200-299 | 3.99
More than 300 | 2.99

2. Consider the following flowchart that depicts this logic to determine unit price based upon number of units purchased.
3. Write the pseudocode that corresponds to the logic shown in the flowchart.

LS–B: Teacher Information Sheet
Code Decision Structure in Pseudocode
Pseudocode for the problem is as follows:

Declare num_units_purchased as Integer
Declare unit_price as Float
Display “How many units were purchased?”
Input num_units_purchased
If num_units_purchased >= 1 and num_units_purchased <= 99 Then
    Set unit_price = 5.99
End If
If num_units_purchased >= 100 and num_units_purchased <= 199 Then
    Set unit_price = 4.99
End If
If num_units_purchased >= 200 and num_units_purchased <= 299 Then
    Set unit_price = 3.99
End If
If num_units_purchased >= 300 Then
    Set unit_price = 2.99
End If
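The pseudocode above maps almost line for line onto Python; a sketch wrapping the four decisions in a function (the function name is illustrative; the thresholds and prices come from the pricing table):

```python
def unit_price_for(num_units_purchased):
    # Direct translation of the teacher sheet's four stand-alone decisions
    unit_price = 0.0
    if num_units_purchased >= 1 and num_units_purchased <= 99:
        unit_price = 5.99
    if num_units_purchased >= 100 and num_units_purchased <= 199:
        unit_price = 4.99
    if num_units_purchased >= 200 and num_units_purchased <= 299:
        unit_price = 3.99
    if num_units_purchased >= 300:
        unit_price = 2.99
    return unit_price

print(unit_price_for(150))  # 4.99
```

Because the ranges do not overlap, four separate If blocks behave the same as an if/elif ladder here; an elif chain would be the more idiomatic Python form.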
12293
https://physics.nmu.edu/~ddonovan/classes/ph201/Homework/Chap01/CH01P60.html
60. The volume of liquid flowing per second is called the volume flow rate Q and has the dimensions of [L]^3/[T]. The flow rate of a liquid through a hypodermic needle during an injection can be estimated with the following equation:

Q = π R^n (P_2 − P_1) / (8 η L)

The length and radius of the needle are L and R, respectively, both of which have the dimension [L]. The pressures at opposite ends of the needle are P_2 and P_1, both of which have the dimensions of [M]/{[L][T]^2}. The symbol η represents the viscosity of the liquid and has the dimensions of [M]/{[L][T]}. The symbol π stands for pi and, like the number 8 and the exponent n, has no dimensions. Using dimensional analysis, determine the value of n in the expression for Q.

We now put the units in for each of the terms:

[L]^3/[T] = [L]^n ( [M]/{[L][T]^2} − [M]/{[L][T]^2} ) / ( ([M]/{[L][T]}) [L] )

Since the pressures are subtracting and they have the same units, we can simplify this set of units to the following:

[L]^3/[T] = [L]^n ( [M]/{[L][T]^2} ) / ( ([M]/{[L][T]}) [L] )

Now cancel units where possible and we get:

[L]^3/[T] = [L]^n / ( [L][T] )

So the units come down to:

[L]^3/[T] = [L]^(n−1)/[T]

The units work if n − 1 = 3, so n = 4.

PH 201 Homework Chapter 1 Solutions
Please send any comments or questions about this page to ddonovan@nmu.edu
This page last updated on January 11, 2020
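The exponent bookkeeping in this solution can be mechanized. A sketch in plain Python (assuming the form Q = πRⁿ(P₂ − P₁)/(8ηL) given in the problem) that tracks the [M], [L], [T] exponents of each factor and solves for n:

```python
# Represent dimensions as (M, L, T) exponent tuples; multiplying quantities
# adds exponents, dividing negates them.
def mul(a, b):
    return tuple(x + y for x, y in zip(a, b))

def inv(a):
    return tuple(-x for x in a)

pressure  = (1, -1, -2)   # [M]/([L][T]^2)
viscosity = (1, -1, -1)   # [M]/([L][T])
length    = (0, 1, 0)     # [L]
Q_target  = (0, 3, -1)    # [L]^3/[T]

# Dimensions of (P2 - P1) / (eta * L), i.e., everything except the R^n factor:
rest = mul(pressure, inv(mul(viscosity, length)))

# Q_target = R^n * rest, so matching the [L] exponents gives n directly:
n = Q_target[1] - rest[1]
print(n)  # 4
```

The [M] and [T] exponents of `rest` already match the target (0 and −1), which confirms the assumed form is dimensionally consistent before solving for n.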
12294
https://web.math.princeton.edu/WebCV/Nathanson-Publications.pdf
Melvyn B. Nathanson: Books

Monographs

1. Additive Number Theory: The Classical Bases, Graduate Texts in Mathematics, Vol. 164, Springer-Verlag, New York, 1996.
2. Additive Number Theory: Inverse Problems and the Geometry of Sumsets, Graduate Texts in Mathematics, Vol. 165, Springer-Verlag, New York, 1996.
3. Elementary Methods in Number Theory, Graduate Texts in Mathematics, Vol. 195, Springer-Verlag, New York, 2000.
4. Additive Number Theory: Extremal Problems and the Combinatorics of Sumsets, Graduate Texts in Mathematics, Springer, New York, 2009, to appear.
5. Additive Number Theory: Density Problems and the Growth of Sumsets, Graduate Texts in Mathematics, Springer, New York, 2010, to appear.

Proceedings

6. Number Theory Day: Proceedings of the Conference held at Rockefeller University, New York, March 4, 1976, Edited by M. B. Nathanson, Lecture Notes in Mathematics, Vol. 626, Springer-Verlag, Berlin, 1977.
7. Number Theory, Carbondale 1979: Proceedings of the Southern Illinois Number Theory Conference, held at Southern Illinois University, Carbondale, Ill., March 30–31, 1979, Edited by M. B. Nathanson, Lecture Notes in Mathematics, Vol. 751, Springer, Berlin, 1979.
8. Number Theory: Proceedings of the seminar held at the City University of New York, New York, 1982, Edited by D. V. Chudnovsky, G. V. Chudnovsky, H. Cohn, and M. B. Nathanson, Lecture Notes in Mathematics, Vol. 1052, Springer-Verlag, Berlin, 1984.
9. Number Theory: Proceedings of the seminar held at the City University of New York, New York, 1983–1984, Edited by D. V. Chudnovsky, G. V. Chudnovsky, H. Cohn, and M. B. Nathanson, Lecture Notes in Mathematics, Vol. 1135, Springer-Verlag, Berlin, 1985.
10. Number Theory: Proceedings of the seminar held at the City University of New York, New York, 1984–1985, Edited by D. V. Chudnovsky, G. V. Chudnovsky, H. Cohn, and M. B. Nathanson, Lecture Notes in Mathematics, Vol. 1240, Springer-Verlag, Berlin, 1987.
11. Number Theory: Proceedings of the seminar held at the City University of New York, New York, 1985–1988, Edited by D. V. Chudnovsky, G. V. Chudnovsky, H. Cohn, and M. B. Nathanson, Lecture Notes in Mathematics, Vol. 1383, Springer-Verlag, Berlin, 1989.
12. Number Theory: Papers from the seminar held at the City University of New York, New York, 1989–1990, Edited by D. V. Chudnovsky, G. V. Chudnovsky, H. Cohn, and M. B. Nathanson, Springer-Verlag, New York, 1991.
13. Number Theory: Papers from the seminars held at the City University of New York, New York, 1991–1995, Edited by D. V. Chudnovsky, G. V. Chudnovsky, and M. B. Nathanson, Springer-Verlag, New York, 1996.
14. Number Theory: Papers from the seminar (NYNTS) held at the City University of New York, New York, 2003, Edited by D. V. Chudnovsky, G. V. Chudnovsky, and M. B. Nathanson, Springer-Verlag, New York, 2004.
15. Unusual Applications of Number Theory: Proceedings of the DIMACS Workshop held at Rutgers University, January 10–14, 2000, Edited by M. B. Nathanson, DIMACS Series in Discrete Mathematics and Theoretical Computer Science, Vol. 64, American Mathematical Society, Providence, RI, 2004.
16. Combinatorial Number Theory, Edited by B. M. Landman, M. B. Nathanson, J. Nešetřil, and C. Pomerance, de Gruyter, Berlin, 2007.
17. Additive Combinatorics, Edited by A. Granville, M. B. Nathanson, and J. Solymosi, Amer. Math. Soc., Providence, 2007.

Translations

18. Anatolij A. Karatsuba, Basic Analytic Number Theory, Translated from the second (1983) Russian Edition and with a preface by Melvyn B. Nathanson, Springer-Verlag, Berlin, 1993.
19. Grigori Freiman, It Seems I am a Jew: A Samizdat Essay on Soviet Mathematics, Translated from the Russian and with an introduction by Melvyn B. Nathanson and appendices by Melvyn B. Nathanson and Andrei Sakharov, Southern Illinois University Press, Carbondale, 1979.

Other

20. Komar/Melamid: Two Soviet Dissident Artists, Southern Illinois University Press, Carbondale, 1979.
21. Nuclear Nonproliferation: The Spent Fuel Problem (Pergamon policy studies on energy and environment), Harvard University Nuclear Nonproliferation Study Group, Pergamon Press, 1979.

Melvyn B. Nathanson: Papers

1971

1. Derivatives of binary sequences, SIAM J. Appl. Math. 21 (1971), 407–412.

1972

2. An exponential congruence of Mahler, Amer. Math. Monthly 79 (1972), 55–57.
3. On the greatest order of an element in the symmetric group, Amer. Math. Monthly 79 (1972), 500–501.
4. Complementing sets of n-tuples of integers, Proc. Amer. Math. Soc. 34 (1972), 71–72.
5. Shift dynamical systems over finite fields, Proc. Amer. Math. Soc. 34 (1972), 591–594.
6. Sums of finite sets of integers, Amer. Math. Monthly 79 (1972), 1010–1012.
7. Integrals of binary sequences, SIAM J. Appl. Math. 23 (1972), 84–86.

1973

8. On the fundamental domain of a discrete group, Proc. Amer. Math. Soc. 41 (1973), 629–630.

1974

9. Catalan's equation in K(t), Amer. Math. Monthly 81 (1974), 371–373.
10. Minimal bases and maximal nonbases in additive number theory, J. Number Theory 6 (1974), 324–333.
11. Approximation by continued fractions, Proc. Amer. Math. Soc. 45 (1974), 323–324.

1975

12. Maximal asymptotic nonbases (with P. Erdős), Proc. Amer. Math. Soc. 48 (1975), 57–60.
13. Products of sums of powers, Math. Mag. 48 (1975), 112–113.
14. Linear recurrences and uniform distribution, Proc. Amer. Math. Soc. 48 (1975), 289–291.
15. An algorithm for partitions, Proc. Amer. Math. Soc. 52 (1975), 121–124.
16. Oscillations of bases for the natural numbers (with P. Erdős), Proc. Amer. Math. Soc. 53 (1975), 253–258.
17. Round metric spaces, Amer. Math. Monthly 82 (1975), 738–741.
18. Essential components in discrete groups, Amer. Math. Monthly 82 (1975), 834.

1976

19. Polynomial Pell's equations, Proc. Amer. Math. Soc. 56 (1976), 89–92.
20. Partial products in finite groups, Discrete Math. 15 (1976), 201–203.
21. Partitions of the natural numbers into infinitely oscillating bases and nonbases (with P. Erdős), Comment. Math. Helv. 51 (1976), 171–182.
22. Piecewise linear functions with almost all points eventually periodic, Proc. Amer. Math. Soc. 60 (1976), 75–81.
23. Difference operators and periodic sequences over finite modules, Acta Math. Acad. Sci. Hungar. 28 (1976), 219–224.
24. Mellin's formula and some combinatorial identities (with S. Chowla), Monatsh. Math. 81 (1976), 261–265.
25. Prime polynomial sequences (with S. D. Cohen and P. Erdős), J. London Math. Soc. (2) 14 (1976), 559–562.

1977

26. Permutations, periodicity, and chaos, J. Combinatorial Theory Ser. A 22 (1977), 61–68.
27. s-maximal nonbases of density zero, J. London Math. Soc. (2) 15 (1977), 29–34.
28. Nonbases of density zero not contained in maximal nonbases (with P. Erdős), J. London Math. Soc. (2) 15 (1977), 403–405.
29. Asymptotic distribution and asymptotic independence of sequences of integers, Acta Math. Acad. Sci. Hungar. 29 (1977), 207–218.
30. Oscillations of bases in number theory and combinatorics, in: Number theory day (Proc. Conf., Rockefeller Univ., New York, 1976), Lecture Notes in Math., Vol. 626, Springer, Berlin, 1977, pages 217–231.

1978

31. Multiplication rules for polynomials, Proc. Amer. Math. Soc. 69 (1978), 210–212.
32. Sets of natural numbers with no minimal asymptotic bases (with P. Erdős), Proc. Amer. Math. Soc. 70 (1978), 100–102.
33. Monomial congruences, Monatsh. Math. 85 (1978), 199–200.
34. Representation functions of sequences in additive number theory, Proc. Amer. Math. Soc. 72 (1978), 16–20.

1979

35. Bases and nonbases of squarefree integers (with P. Erdős), J. Number Theory 11 (1979), 197–208.
36. Additive h-bases for lattice points, in: Second International Conference on Combinatorial Mathematics (New York, 1978), Ann. New York Acad. Sci. 319 (1979), 413–414.
37. Systems of distinct representatives and minimal bases in additive number theory (with P. Erdős), in: Number theory, Carbondale 1979 (Proc. Southern Illinois Conf., Southern Illinois Univ., Carbondale, Ill., 1979), Lecture Notes in Math., Vol. 751, Springer, Berlin, 1979, pages 89–107.
38. Classification problems in K-categories, Fund. Math. 105 (1979/80), 187–197.

1980

39. Sumsets of measurable sets, Proc. Amer. Math. Soc. 78 (1980), 59–63.
40. Connected components of arithmetic graphs, Monatsh. Math. 89 (1980), 219–222.
41. Minimal asymptotic bases for the natural numbers (with P. Erdős), J. Number Theory 12 (1980), 154–159.
42. Sumsets contained in infinite sets of integers, J. Combin. Theory Ser. A 28 (1980), 150–155.
43. Lagrange's theorem with N^(1/3) squares (with S. L. G. Choi and P. Erdős), Proc. Amer. Math. Soc. 79 (1980), 203–205.
44. Arithmetic progressions contained in sequences with bounded gaps, Canad. Math. Bull. 23 (1980), 491–493.

1981

45. Waring's problem for sets of density zero, in: Analytic number theory (Philadelphia, Pa., 1980), Lecture Notes in Math., Vol. 899, Springer, Berlin, 1981, pages 301–310.
46. Lagrange's theorem and thin subsequences of squares (with P. Erdős), in: Contributions to Probability, Academic Press, New York, 1981, pages 3–9.

1983

47. Largest and smallest maximal sets of pairwise disjoint partitions, J. Number Theory 17 (1983), 103–112.

1984

48. The exact order of subsets of additive bases, in: Number Theory (New York, 1982), Lecture Notes in Math., Vol. 1052, Springer, Berlin, 1984, pages 273–277.

1985

49. Cofinite subsets of asymptotic bases for the positive integers (with J. C. M. Nash), J. Number Theory 20 (1985), 363–372.

1986

50. Divisibility properties of additive bases, Proc. Amer. Math. Soc. 96 (1986), 11–14.
51. Waring's problem for finite intervals, Proc. Amer. Math. Soc. 96 (1986), 15–17.
52. Independence of solution sets in additive number theory (with P. Erdős), in: Probability, statistical mechanics, and number theory, Adv. Math. Suppl. Stud., Vol. 9, Academic Press, Orlando, FL, 1986, pages 97–105.

1987

53. A short proof of Cauchy's polygonal number theorem, Proc. Amer. Math. Soc. 99 (1987), 22–24.
54. An extremal problem for least common multiples, Discrete Math. 64 (1987), 221–228.
55. Multiplicative representations of integers, Israel J. Math. 57 (1987), 129–136.
56. Thin bases in additive number theory, in: Journées Arithmétiques de Besançon (Besançon, 1985), Astérisque 147–148 (1987), 315–317, 345.
57. Problems and results on minimal bases in additive number theory (with P. Erdős), in: Number Theory (New York, 1984–1985), Lecture Notes in Math., Vol. 1240, Springer, Berlin, 1987, pages 87–96.
58. A generalization of the Goldbach-Shnirel'man theorem, Amer. Math. Monthly 94 (1987), 768–771.
59. Sums of polygonal numbers, in: Analytic number theory and Diophantine problems (Stillwater, OK, 1984), Progr. Math., Vol. 70, Birkhäuser Boston, Boston, 1987, pages 305–316.

1988

60. Sumsets containing infinite arithmetic progressions (with P. Erdős and A. Sárközy), J. Number Theory 28 (1988), 159–166.
61. Partitions of bases into disjoint unions of bases (with P. Erdős), J. Number Theory 29 (1988), 1–9.
62. Minimal asymptotic bases with prescribed densities (with P. Erdős), Illinois J. Math. 32 (1988), 562–574.
63. Simultaneous systems of representatives for families of finite sets, Proc. Amer. Math. Soc. 103 (1988), 1322–1326.
64. Minimal bases and powers of 2, Acta Arith. 49 (1988), 525–532.

1989

65. On the maximum density of minimal asymptotic bases (with A. Sárközy), Proc. Amer. Math. Soc. 105 (1989), 31–33.
66. A simple construction of minimal asymptotic bases (with X.-D. Jia), Acta Arith. 52 (1989), 95–101.
67. Sumsets containing k-free integers, in: Number Theory (Ulm, 1987), Lecture Notes in Math., Vol. 1380, Springer, New York, 1989, pages 179–184.
68. Combinatorial pairs, and sumsets contained in sequences, in: Combinatorial Mathematics: Proceedings of the Third International Conference (New York, 1985), Ann. New York Acad. Sci. 555 (1989), 316–319.
69. Additive problems in combinatorial number theory, in: Number Theory (New York, 1985/1988), Lecture Notes in Math., Vol. 1383, Springer, Berlin, 1989, pages 123–139.
70. Sumsets containing long arithmetic progressions and powers of 2 (with A. Sárközy), Acta Arith. 54 (1989), 147–154.
71. Long arithmetic progressions and powers of 2, in: Théorie des nombres (Quebec, PQ, 1987), de Gruyter, Berlin, 1989, pages 735–739.
72. Additive bases with many representations (with P. Erdős), Acta Arith. 52 (1989), 399–406.
73. Two applications of combinatorics to number theory, in: Graph theory and its applications: East and West (Jinan, 1986), Ann. New York Acad. Sci. 576 (1989), 408–410.

1990

74. Simultaneous systems of representatives and combinatorial number theory, Discrete Math. 79 (1990), 197–205.
75. Extremal properties for bases in additive number theory, in: Number Theory, Vol. I (Budapest, 1987), Colloq. Math. Soc. János Bolyai, Vol. 51, North-Holland, Amsterdam, 1990, pages 437–446.
76. Best possible results on the density of sumsets, in: Analytic number theory (Allerton Park, IL, 1989), Progr. Math., Vol. 85, Birkhäuser Boston, Boston, 1990, pages 395–403.

1992

77. On a problem of Rohrbach for finite groups, J. Number Theory 41 (1992), 69–76.

1993

78. The simplest inverse problems in additive number theory, in: Number theory with an emphasis on the Markoff spectrum (Provo, UT, 1991), Lecture Notes in Pure and Appl. Math., Vol. 147, Dekker, New York, 1993, pages 191–206.

1994

79. An inverse theorem for sums of sets of lattice points, J. Number Theory 46 (1994), 29–59.
80. Addition theorems for σ-finite groups (with X.-D. Jia), in: The Rademacher legacy to mathematics (University Park, PA, 1992), Contemp. Math., Vol. 166, Amer. Math. Soc., Providence, RI, 1994, pages 275–284.

1995

81. Inverse theorems for subset sums, Trans. Amer. Math. Soc. 347 (1995), 1409–1418.
82. Independence of solution sets and minimal asymptotic bases (with P. Erdős and P. Tetali), Acta Arith. 69 (1995), 243–258.
83. Adding distinct congruence classes modulo a prime (with N. Alon and I. Z. Ruzsa), Amer. Math. Monthly 102 (1995), 250–255.

1996

84. The polynomial method and restricted sums of congruence classes (with N. Alon and I. Z. Ruzsa), J. Number Theory 56 (1996), 404–417.
85. On the sum of the reciprocals of the differences between consecutive primes (with P. Erdős), in: Number theory (New York, 1991–1995), Springer, New York, 1996, pages 97–101.
86. Finite graphs and the number of sums and products (with X.-D. Jia), in: Number theory (New York, 1991–1995), Springer, New York, 1996, pages 211–219.

1997

87. On sums and products of integers, Proc. Amer. Math. Soc. 125 (1997), 9–16.
88. Ballot numbers, alternating products, and the Erdős-Heilbronn conjecture, in: The mathematics of Paul Erdős, I, Springer, Berlin, 1997, pages 199–217.

1998

89. Linear forms in finite sets of integers (with S.-P. Han and C. Kirfel), Ramanujan J. 2 (1998), 271–281.

1999

90. Inverse theorems and the number of sums and products (with G. Tenenbaum), in: Structure theory of set addition, Astérisque 258 (1999), 195–204.
91. Number theory and semigroups of intermediate growth, Amer. Math. Monthly 106 (1999), 666–669.

2000

92. Partitions with parts in a finite set, Proc. Amer. Math. Soc. 128 (2000), 1269–1273.
93. N-graphs, modular Sidon and sum-free sets, and partition identities, Ramanujan J. 4 (2000), 59–67.
94. Convexity and sumsets (with G. Elekes and I. Z. Ruzsa), J. Number Theory 83 (2000), 194–201.
95. Growth of sumsets in abelian semigroups, Semigroup Forum 61 (2000), 149–153.

2002

96. Polynomial growth of sumsets in abelian semigroups (with I. Z. Ruzsa), J. Théor. Nombres Bordeaux 14 (2002), 553–560.

2003

97. Unique representation bases for the integers, Acta Arith. 108 (2003), 1–8.
98. A functional equation arising from multiplication of quantum integers, J. Number Theory 103 (2003), 214–233.

2004

99. The inverse problem for representation functions of additive bases, in: Number theory (New York, 2003), Springer, New York, 2004, pages 253–262.
100. On the ubiquity of Sidon sets, in: Number theory (New York, 2003), Springer, New York, 2004, pages 263–272.
101. Generalized additive bases, König's lemma, and the Erdős-Turán conjecture, J. Number Theory 106 (2004), 70–78.
102. Formal power series arising from multiplication of quantum integers, in: Unusual applications of number theory, DIMACS Ser. Discrete Math. Theoret. Comput. Sci., Vol. 64, Amer. Math. Soc., Providence, RI, 2004, pages 145–167.
103. Representation functions of additive bases for abelian semigroups, Int. J. Math. Math. Sci. (2004), 29–32.
104. Quantum integers and cyclotomy (with A. Borisov and Y. Wang), J. Number Theory 109 (2004), 120–135.

2005

105. Every function is the representation function of an additive basis for the integers, Port. Math. (N.S.) 62 (2005), 55–72.

2006

106. Quadratic addition rules for quantum integers (with A. V. Kontorovich), J. Number Theory 117 (2006), 1–13.
107. A new upper bound for finite additive bases (with S. Gunturk), Acta Arith. 124 (2006), 235–255.
108. Additive number theory and the ring of quantum integers, in: General Theory of Information Transfer and Combinatorics, Lecture Notes in Computer Science, Vol. 4123, Springer, Berlin, 2006, pages 505–511.

2007

109. Affine invariants, relatively prime sets, and a phi function for subsets of {1, 2, ..., n}, Integers 7 (2007), A1: 1–7.
110. Sets with more sums than differences, Integers 7 (2007), A5: 1–24.
111. Density of sets of natural numbers and the Lévy group (with R. Parikh), J. Number Theory 124 (2007), 151–158.
112. Linear quantum addition rules, in: Combinatorial Number Theory, de Gruyter, Berlin, 2007, pages 371–380.
113. Problems in additive number theory, I, in: Additive Combinatorics, Amer. Math. Soc., Providence, 2007, 263–270.
114. Binary linear forms over finite sets of integers (with K. O'Bryant, B. Orosz, I. Z. Ruzsa, and M. Silva), Acta Arith. 129 (2007), 341–361.
115. Representation functions of bases for binary linear forms, Funct. Approx. Comment. Math. 37 (2007), 341–350.
116. Asymptotic estimates for relatively prime subsets of {m + 1, ..., n} (with B. Orosz), Integers 7 (2007).

2008

117. Heights in finite projective space, and a problem on directed graphs (with B. Sullivan), Integers 8 (2008), A13: 1–9.
118. Desperately seeking mathematical truth, Notices Amer. Math. Soc. 55:7 (2008), 773.
119. Inverse problems for linear forms over finite sets of integers, J. Ramanujan Math. Soc., to appear.
120. Inverse problems for representation functions in additive number theory, in: Surveys in Number Theory (K. Alladi, ed.), Springer, New York, 2008, 89–117.
121. Tennenbaum at Penn and Rochester, Integers 8:2 (2008), 2–5.
122. Perfect difference sets constructed from Sidon sets (with J. Cilleruelo), Combinatorica, to appear.
123. The Caccetta-Häggkvist conjecture and additive number theory, in: Analytic Number Theory: Essays in Honour of Klaus Roth (W. W. L. Chen, W. T. Gowers, H. Halberstam, W. M. Schmidt, R. C. Vaughan, eds.), Cambridge University Press, to appear.
124. Maximal Sidon sets and matroids (with J. Dias da Silva), Discrete Math., to appear.
125. Heights on the finite projective line, Intern. J. Number Theory, to appear.
126. Sums of products of congruence classes and of arithmetic progressions (with S. V. Konyagin), Intern. J. Number Theory, to appear.
127. Problems in additive number theory, II: Linear forms and complementing sets of integers, Journal de Théorie des Nombres de Bordeaux, to appear.
128. Supersequences, rearrangements of sequences, and the spectrum of bases in additive number theory, J. Number Theory, to appear.
129. Problems in Additive Number Theory, III: Thematic Seminars at the Centre de Recerca Matemàtica, to appear.
130. Adjoining identities and zeros to semigroups, preprint.
131. Dense sets of integers with prescribed representation functions (with J. Cilleruelo), preprint.
132. Families of linear semigroups of intermediate growth, preprint.
133. Semidirect products and functional equations for quantum multiplication, preprint.
134. The spectrum of bases of finite order in additive number theory, preprint.
135. Desperately seeking mathematical proof, preprint.
12295
https://jmedicalcasereports.biomedcentral.com/articles/10.1186/s13256-023-04136-0
Journal of Medical Case Reports | Case report | Open access

Daily dose of metformin caused acute kidney injury with lactic acidosis: a case report

Maho Ariga, Junichiro Hagita, Midori Soda, Yasuhisa Oida, Hitomi Teramachi & Kiyoyuki Kitaichi

Journal of Medical Case Reports volume 17, Article number: 393 (2023)

Abstract

Background

Metformin-induced lactic acidosis with acute kidney injury is rare but well known. Here we report a case of a Japanese patient taking metformin who experienced severe acute renal failure accompanied by significantly elevated metformin plasma concentrations and signs of lactic acidosis.

Case presentation

A 60-year-old Japanese man with type II diabetes, who was taking metformin (500 mg three times a day) along with several other medications, visited the emergency department with dizziness, malaise, and oliguria. The initial laboratory test results showed elevated levels of serum creatinine and blood urea nitrogen, although his renal function had been normal approximately 2 weeks earlier. His lactate level was raised (4.27 mmol/L), and he was diagnosed with lactic acidosis. Considering the low creatinine clearance and elevated urinary albumin/serum creatinine ratio, urinary N-acetyl-β-d-glucosaminidase level, and β2-microglobulin level, the patient was further diagnosed with AKI (in other words, acute tubular necrosis). A renal biopsy performed on day 3 after admission revealed renal tubular epithelium necrosis, supporting this diagnosis. The patient underwent intermittent hemodialysis until he was discharged on day 13. The metformin concentrations on days 3, 5, and 7 were 8.95, 2.58, and 0.16 μg/mL, respectively, values significantly higher than the maximal steady-state concentration of metformin at the recommended dosage (approximately 1 μg/mL).
The calculated pharmacokinetic parameters of metformin suggested poor renal excretion and a low distribution volume at higher metformin levels. Other possible acute kidney injury-causing factors included dehydration, alcohol consumption, and the use of an angiotensin receptor blocker or SGLT2 inhibitor.

Conclusions

This is the first reported case of acute kidney injury possibly caused by high levels of metformin with lactic acidosis in a patient treated with the recommended metformin dose. Thus, the development of metformin-induced acute kidney injury should be considered for patients with several acute kidney injury risk factors who are taking metformin.

Background

Metformin (Met) is a biguanide used to treat type II diabetes mellitus (T2DM). It inhibits gluconeogenesis in the liver, decreases glucose absorption from the intestine, and promotes the utilization of glucose in the peripheral tissues by activating adenosine monophosphate (AMP)-activated protein kinase. Met is frequently used to treat T2DM as it is relatively safer than other drugs. However, one of the major adverse effects of Met is lactic acidosis, with an occurrence rate lower than 10 in 100,000 patients per year. Here, we report a case of a Japanese patient taking Met who experienced severe acute renal failure and acute kidney injury (AKI) accompanied by significantly elevated Met plasma concentrations and lactic acidosis.

Case presentation

A 60-year-old Japanese man with a 9-year history of T2DM presented at the emergency department of Kariya Toyota General Hospital with anuria and severe general fatigue. On admission, the patient was transported by ambulance due to his difficulty in moving. On evaluation, the patient reported fatigue, malaise, weight loss, and intermittent nausea and vomiting. He reported no headaches, joint, or muscle pain. He denied having a fever or contact with another sick individual.
Apart from T2DM, his past medical history included hypertension and lumbar disc herniation. He had no history of psychiatric disease and no known drug allergies. He had no family history of diabetes mellitus, lived alone, and his educational status was not obtained. The patient had smoked 1 pack of cigarettes per day for 45 years and consumed approximately 540 mL of Japanese sake daily. His medications included Met (500 mg three times daily), sitagliptin (50 mg once daily), pioglitazone (15 mg once daily), dapagliflozin (5 mg once daily), amlodipine (5 mg once daily), azilsartan/amlodipine (20 mg/5 mg once daily), rosuvastatin (2.5 mg once daily), and a hydrogel patch containing loxoprofen sodium. Renal function was normal before admission and the Met dosage used to treat T2DM was suitable for this patient. No abnormality was found in the patient's blood tests (data not shown) when he had visited his doctor 13 days before admission. Notably, 12 days before admission, his work had changed from office to physical work (approximately 30,000–40,000 steps per day). He had noted a gradual decrease in urinary output for several days before admission, but had no other symptoms. Two days before admission, he developed anuria, dizziness, and nausea. The initial laboratory findings are shown in Table 1. The time-course changes in serum creatinine and blood urea nitrogen (BUN) levels after admission are shown in Fig. 1. The lactate concentration of the patient was 4.27 mmol/L and the arterial pH was 7.31. Considering the previously established definition of lactic acidosis (lactate > 5 mmol/L and pH < 7.35), the patient did not meet these criteria. However, a recently updated definition of lactic acidosis revealed that this patient taking Met fulfilled the criteria for the diagnosis of lactic acidosis because the lactate concentration was > 4 mmol/L.
The physical examination at admission indicated the following: height of 172.3 cm, weight of 69.3 kg, blood pressure of 159/94 mmHg, regular pulse of 96 beats per minute, body temperature of 36.3 °C, and oxygen saturation of 99% breathing ambient air. There was no sign of hypoglycemia since blood glucose was 129 mg/dL (Table 1). This case is considered to be different from either diabetic ketoacidosis or euglycemic diabetic ketoacidosis. Despite the presence of normal glucose levels, he was not in the insulin-dependent state characteristic of euglycemic diabetic ketoacidosis and did not present with the physical symptoms of ketosis (tachypnea, nausea, vomiting, abdominal pain, and impaired consciousness). The patient was alert and appeared well. Cardiac auscultation revealed a regular rhythm without murmurs or gallops and an audible S1 and S2. Respiratory sounds were clear to auscultation with no crackles, wheezes, or bronchial breath sounds. The abdomen was non-distended, soft, and non-tender. His legs had no edema or muscle weakness, and the remainder of the examination was unremarkable. A radiograph of the chest was normal. Computed tomography (CT) showed slight cortical atrophy in the bilateral kidneys; however, no kidney stone or hydronephrosis was observed. There was irregular thickening of the small bowel wall without adjacent fat stranding and no sign of inferior vena cava collapse. Bilateral hydronephrosis and nephrolithiasis were excluded based on the CT scan data. Glomerulonephritis was also excluded based on the negative autoantibody findings. The ratio of urea nitrogen to creatinine was < 20, suggesting a possible cause for the intrinsic renal injury. There was no hematuria because the presence of red cells in the urine (Table 1) was due to the trauma induced by urinary bladder catheterization on admission.
Considering the low estimated glomerular filtration rate (eGFR) and high urinary protein/creatinine ratio (11.04 g/g Cre), urinary N-acetyl-β-d-glucosaminidase (420.3 U/g Cre), and β2-microglobulin/creatinine ratio (149 µg/g Cre) (Table 1), the patient was presumptively diagnosed with acute tubular necrosis with metabolic acidosis (anion gap 23.5 mmol/L). Renal pathology findings on day 3 after admission revealed that 3 of 23 glomeruli were sclerotic and no significant pathological alterations were observed with light microscopy. However, the proximal tubular cells were diffusely enlarged with vacuolar degeneration. The same pathology may be seen with Fabry's disease and use of osmotic diuretics, but the patient's history was negative for both. All immunofluorescence results were negative and, apart from the vacuolization and swelling of the proximal tubules, no obvious abnormal findings were observed by electron microscopy. Based on the pathological findings and clinical course, the diagnosis of acute tubular necrosis was confirmed. After admission, intravenous hydration therapy was started. On the day of presentation at the emergency unit (day 0), he was hospitalized and hemodialysis (HD) was introduced to remove Met and improve the acid–base disruption and anuria. In detail, HD was performed with a polysulfone dialyzer [Pinnafine® PN-100 (filter size 1.0 m², Fresenius Medical Care AG and Co., Hessen, Germany) (on days 0, 1, and 3, with low-molecular-weight heparin) or Pinnafine® PN-140 (filter size 1.4 m², Fresenius Medical Care AG and Co., Hessen, Germany) (on days 5 and 7, with nafamostat mesilate)] with blood and dialysate (Kidaly 4E solution, Fuso Pharmaceutical Industries, Ltd., Osaka, Japan) flow rates of 150 and 500 mL/min, respectively, for 3 hours per session. During the HD sessions, the patient's condition was stable. The eGFR values calculated from serum creatinine levels on days 0, 3, 5, 7, 10, and 19 were 5.1, 4.5, 10.8, 31.9, 61.2, and 59.9 mL/min/1.73 m², respectively (Fig.
1). The patient resumed urination on day 2. On day 5, anuria was recovered after the third session of HD. Renal function gradually improved, and on day 11, the serum creatinine level reached 1.78 mg/dL. The patient was discharged 13 days after hospitalization, and no major prognostic problems were observed. The plasma concentrations of Met were measured using high-performance liquid chromatography (HPLC)–ultraviolet (UV) as per a previous report, with slight modification and in accordance with the validation criteria guidelines of the US Food and Drug Administration. The Met concentrations on days 3, 5, and 7 were 8.95, 2.58, and 0.16 µg/mL, respectively. The semilogarithmic concentration–time plots for Met showed good linearity (R² = 0.954, Fig. 2D). The one-compartment model pharmacokinetic (PK) analysis yielded an observed ke (day 3–7) of 0.04 hours⁻¹ and a calculated t1/2 (day 3–7) of 16.5 hours. Moreover, the t1/2 (day 3–5) was 26.7 hours, and the t1/2 (day 5–7) was 11.9 hours.

Discussion and conclusions

This is a case where a patient being treated with the recommended Met dose presented at the emergency unit and was diagnosed with AKI. PK analysis revealed that the AKI might have been caused by high plasma Met levels, consistent with previous reports demonstrating that Met overdose causes AKI [7,8,9]. A possible mechanism by which Met causes AKI is the depletion of mitochondrial adenosine triphosphate (ATP), since AKI can arise from mitochondrial ATP depletion. Although it is well known that a daily dose of Met exerts renal protecting effects [10, 12,13,14], these results suggest that a daily dose of Met could cause AKI, particularly in patients with several AKI risk factors. The patient had severe renal failure with oliguria on the day of hospitalization (day 0), although laboratory data obtained 13 days earlier indicated normal renal function.
Based on three data points of Met plasma concentration up to 7 days after hospitalization, the t1/2 of Met was 16.5 hours. This t1/2 is comparable to that of patients with severe renal failure (creatinine clearance rate: 10–30 mL/minute). However, in this patient, the t1/2 (day 3–5) and t1/2 (day 5–7) were 26.7 and 11.9 hours, respectively. Considering that the patient underwent 5 hours of HD between days 0 and 3, 3 hours between days 3 and 5, and 4 hours between days 5 and 7, the actual t1/2 without HD would have been even longer. However, slight renal function recovery occurred after day 5, which is consistent with the eGFR recovery. Regarding the value of t1/2 (day 3–5), the one-compartment analysis yielded an estimated plasma Met concentration on the day of hospitalization (C0) of 58 µg/mL. This value is notably high, given that healthy volunteers receiving the recommended daily Met dose (500 mg three times a day) had a steady-state Met Cmax (Css,max) of approximately 1.0 µg/mL. A comparably high Met concentration was reported in a patient who attempted suicide by taking 26,250 mg of Met (42.9 µg/mL at admission); that patient displayed acute tubular necrosis. In another reported case, a person without renal dysfunction who overdosed on Met presented with high Met concentrations and consequently developed AKI. These results suggest that high Met concentrations can cause AKI. Up to day 19, the patient showed moderate renal failure (eGFR: 59.9–61.2 mL/minute). A previous report indicated that patients with severe and moderate renal failure had similar Met PK parameters. In that report, patients with severe renal failure (t1/2 = 17.2 hours) who received a single 850 mg Met dose had an estimated Cmax of 3.93 µg/mL. From this Cmax, we extrapolated the Cmax of our patient, who had taken 500 mg of Met three times a day before admission, when the symptoms started to appear. Using the equation, we obtained a Css,max of 8.4 µg/mL. This value is lower than our estimated C0 of 58 µg/mL.
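The extrapolation equation itself is not reproduced in the text above. As a hedged check, a standard dose-proportional scaling combined with the usual steady-state accumulation factor (our assumption about the method, not the authors' stated equation) does recover the reported 8.4 µg/mL:

```python
import math

# Assumed calculation: scale the cited single-dose Cmax from 850 mg to 500 mg,
# then apply the standard accumulation factor 1 / (1 - e^(-ke * tau)).
cmax_single = 3.93            # µg/mL after a single 850 mg dose (severe renal failure)
t_half = 17.2                 # hours, from the same cited report
tau = 8.0                     # dosing interval for 500 mg three times a day
ke = math.log(2) / t_half

css_max = cmax_single * (500 / 850) / (1 - math.exp(-ke * tau))
print(f"Css,max ≈ {css_max:.1f} µg/mL")   # ≈ 8.4, matching the reported value
```

That this reproduces the paper's 8.4 µg/mL suggests the assumed formula is at least consistent with the authors' calculation.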
These data suggest that other PK parameters elevated the C0. Considering the relevant PK formulas, one PK parameter possibly altering the C0 is the volume of distribution (Vd). That is, a decreased Vd can increase plasma Met concentration levels. Considering the patient’s background, dehydration, probably caused by working outdoors in high temperatures, likely decreased the Vd. Moreover, dehydration is a risk factor for AKI. Therefore, dehydration might be an important factor in the development of AKI with increased plasma Met concentration levels. Furthermore, in this case, the other possible causes of AKI were alcohol consumption and the use of other prescribed medication, such as angiotensin receptor blockers (ARBs) [10, 14]. Alcohol consumption frequently causes dehydration, especially when accompanied by water diuresis. A reported patient who took 5 g of Met and approximately 600 mL of Japanese sake developed lactic acidosis; the alcohol consumption rapidly increased the hepatic NADH/NAD+ ratio, which inhibited gluconeogenesis from lactate. Thus, advising patients to limit alcohol consumption while taking Met would be recommended, since alcohol can reduce the liver’s capacity to metabolize lactate and induces dehydration. Furthermore, ARBs and SGLT2 inhibitors may cause AKI by decreasing the glomerular filtration pressure, especially in patients who are dehydrated [14, 20]. Nonsteroidal anti-inflammatory drug (NSAID)-induced AKI could be excluded because loxoprofen was applied topically using a hydrogel patch. The limitations of this study included the scarce Met concentration data and the absence of quantification of other drugs possibly related to AKI, such as ARBs (for example, azilsartan). Further analyses of the Met concentration in the dialysate samples could increase our understanding of Met distribution.
Moreover, it is difficult to rule out the alternative possibility that the acute tubular necrosis itself increased the plasma Met concentration by causing Met to accumulate in the tubular epithelium. For example, multidrug and toxin extrusion proteins (MATEs) are transporters located on the apical side of the tubular epithelium [21, 22] that are involved in the renal secretion of Met. Thus, acute tubular necrosis could cause MATE dysfunction, subsequently impairing the tubular secretion of Met. Further histochemical analysis of Met-related transporters such as MATEs in the tubular epithelium would be needed to understand this phenomenon. To the best of our knowledge, this is the first report of AKI caused by high plasma Met concentration levels with mild lactic acidosis in a patient treated with the recommended Met dose. PK analysis highlighted the roles of dehydration and alcohol consumption in producing high plasma Met levels by decreasing the Vd. Other factors, including prescription medication use, might also play a part in the development of AKI. Taken together, patients with several AKI risk factors should be aware that even the daily recommended Met dose could induce AKI.
Availability of data and materials
The datasets obtained and analyzed during the current study are available from the corresponding author on reasonable request.
Abbreviations
AKI: Acute kidney injury
ARBs: Angiotensin receptor blockers
BUN: Blood urea nitrogen
CT: Computed tomography
eGFR: Estimated glomerular filtration rate
HD: Hemodialysis
HPLC: High-performance liquid chromatography
MATE: Multidrug and toxin extrusion protein
Met: Metformin
PK: Pharmacokinetic
T2DM: Type II diabetes mellitus
UV: Ultraviolet
References
1. Zhou G, Myers R, Li Y, Chen Y, Shen X, Fenyk-Melody J, et al. Role of AMP-activated protein kinase in mechanism of metformin action. J Clin Invest. 2001;108:1167–74.
2. Suzuki K, Okada H, Yoshida S, Okamoto H, Suzuki A, Suzuki K, et al.
Effect of high-flow high-volume-intermittent hemodiafiltration on metformin-associated lactic acidosis with circulatory failure: a case report. J Med Case Rep. 2018;12:280.
3. Luft D, Deichsel G, Schmülling RM, Stein W, Eggstein M. Definition of clinically relevant lactic acidosis in patients with internal diseases. Am J Clin Pathol. 1983;80:484–9.
4. Calello DP, Liu KD, Wiegand TJ, Roberts DM, Lavergne V, Gosselin S, et al. Extracorporeal treatment for metformin poisoning: systematic review and recommendations from the Extracorporeal Treatments in Poisoning Workgroup. Crit Care Med. 2015;43:1716–30.
5. Abdessadek M, Tadmori AE, Attari AE, Diarra M, Magoul R, Ajdi F, et al. Simple HPLC-UV method for determination of metformin in human plasma and erythrocytes application to therapeutic drug monitoring. Int J Pharm Pharm Sci. 2015;7:35–9.
6. U.S. Department of Health and Human Services. Bioanalytical method validation guidance for industry (May 2018). Accessed 13 Dec 2021.
7. Iwai H, Ohno Y, Itoh H, Endo T, Komaki K, Ishii S, et al. Type 2 diabetes mellitus with lactic acidosis and acute renal failure induced by metformin overdose in suicide. J Japan Diab Soc. 2004;47:439–45.
8. Chiew AL, Wright DFB, Dobos NM, McArdle K, Mostafa AA, Newth A, et al. “Massive” metformin overdose. Br J Clin Pharmacol. 2018;84:2923–7.
9. Chowdhury W, Lodhi MU, Syed IA, Ahmed U, Miller M, Rahim M. Metformin-induced lactic acidosis: a case study. Cureus. 2018;10:e2152.
10. Rocha A, Almeida M, Santos J, Carvalho A. Metformin in patients with chronic kidney disease: strengths and weaknesses. J Nephrol. 2013;26:55–60.
11. Brooks C, Wei Q, Cho SG, Dong Z.
Regulation of mitochondrial dynamics in acute kidney injury in cell culture and rodent models. J Clin Invest. 2009;119:1275–85.
12. Pan Q, Lu X, Zhao C, Liao S, Chen X, Guo F, et al. Metformin: the updated protective property in kidney disease. Aging (Albany NY). 2020;12:8742–59.
13. Amini FG, Rafieian-Kopaei M, Nematbakhsh M, Baradaran A, Nasri H. Ameliorative effects of metformin on renal histologic and biochemical alterations of gentamicin-induced renal toxicity in Wistar rats. J Res Med Sci. 2012;17:621–5.
14. McDaniel BL, Bentley ML. The role of medications and their management in acute kidney injury. Integr Pharm Res Pract. 2015;4:21–9.
15. Sambol NC, Chiang J, Lin ET, Goodman AM, Liu CY, Benet LZ, et al. Kidney function and age are both predictors of pharmacokinetics of metformin. J Clin Pharmacol. 1995;35:1094–102.
16. Kuan IHS, Wilson LC, Leishman JC, Cosgrove S, Walker RJ, Putt TL, et al. Metformin doses to ensure efficacy and safety in patients with reduced kidney function. PLoS ONE. 2021;16:e0246247.
17. Choi YH, Lee I, Lee MG. Effects of water deprivation on the pharmacokinetics of metformin in rats. Biopharm Drug Dispos. 2007;28:373–83.
18. Tsuboi N, Yoshida H, Shibamura K, Hikita M, Tomonari H, Kuriyama S, et al. Acute renal failure after binge drinking of alcohol and nonsteroidal antiinflammatory drug ingestion. Intern Med. 1997;36:102–6.
19. Hulisz DT, Bonfiglio MF, Murray RD. Metformin-associated lactic acidosis. J Am Board Fam Pract. 1998;11:233–6.
20. Gudmundsdottir H, Aksnes H, Heldal K, Krogh A, Froyshov S, Rudberg N, et al.
Metformin and antihypertensive therapy with drugs blocking the renin angiotensin system, a cause of concern? Clin Nephrol. 2006;66:380–5.
21. Otsuka M, Matsumoto T, Morimoto R, Arioka S, Omote H, Moriyama Y. A human transporter protein that mediates the final excretion step for toxic organic cations. Proc Natl Acad Sci U S A. 2005;102:17923–8.
22. Masuda S, Terada T, Yonezawa A, Tanihara Y, Kishimoto K, Katsura T, Ogawa O, Inui K. Identification and functional characterization of a new human kidney-specific H+/organic cation antiporter, kidney-specific multidrug and toxin extrusion 2. J Am Soc Nephrol. 2006;17:2127–35.
23. Stocker SL, Morrissey KM, Yee SW, Castro RA, Xu L, Dahlin A, et al. The effect of novel promoter variants in MATE1 and MATE2 on the pharmacokinetics and pharmacodynamics of metformin. Clin Pharmacol Ther. 2013;93:186–94.
Acknowledgements
None.
Funding
Not applicable.
Author information
Author notes
Maho Ariga and Junichiro Hagita are co-first authors and equally contributed to this study.
Authors and Affiliations
1. Laboratory of Pharmaceutics, Department of Biomedical Pharmaceutics, Gifu Pharmaceutical University, 1-25-4 Daigakunishi, Gifu, 501-1196, Japan: Maho Ariga, Midori Soda, Yasuhisa Oida & Kiyoyuki Kitaichi
2. Kariya Toyota General Hospital, Aichi, Japan: Junichiro Hagita
3. Laboratory of Clinical Pharmacy, Department of Pharmacy Practice and Science, Gifu Pharmaceutical University, Gifu, Japan: Hitomi Teramachi
Authors
1. Maho Ariga
2. Junichiro Hagita
3. Midori Soda
4. Yasuhisa Oida
5. 
Hitomi Teramachi
6. Kiyoyuki Kitaichi
Contributions
JH treated the patient. JH, HT, MS, YO, and KK designed the study. MA and MS measured Met concentrations in plasma. MA wrote the manuscript. JH, MS, and KK drafted the manuscript.
Corresponding author
Correspondence to Kiyoyuki Kitaichi.
Ethics declarations
Ethics approval and consent to participate
In Japan, a case report does not require ethics approval. The study adhered to the Ethical Guidelines for Medical and Health Research Involving Human Subjects established by the government of Japan.
Consent for publication
Written informed consent was obtained from the patient for publication of this case report and any accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal.
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
To view a copy of this licence, visit the Creative Commons website. The Creative Commons Public Domain Dedication waiver applies to the data made available in this article, unless otherwise stated in a credit line to the data.
About this article
Cite this article
Ariga, M., Hagita, J., Soda, M. et al. Daily dose of metformin caused acute kidney injury with lactic acidosis: a case report. J Med Case Reports 17, 393 (2023).
Keywords: Metformin; Acute kidney injury; Diabetes; Case report; Lactic acidosis
12296
https://www.youtube.com/watch?v=SFwCZslZZYI
Art of Problem Solving: Solving Linear Equations Part 2 Art of Problem Solving 103000 subscribers 60 likes Description 22644 views Posted: 24 Dec 2011 Art of Problem Solving's Richard Rusczyk continues his exploration of one-variable linear equations. This video is part of our AoPS Prealgebra and Algebra curriculums. Transcript: [introductory music] Let's solve some more equations. And maybe you know right off the top of your head what number we multiply 5 by to get 215. But I want to talk about a little more methodical approach so if we get to some wackier numbers, we still know what to do. Now, the name of the game in solving equations, of course, is isolate the variable, get the variable alone on one side of the equation, whatever's on the other side, that's your solution. So we want to get the x alone here; it's being bothered by this 5. Now we can't just subtract 5, that would give us 5x minus 5, and what are we going to do with that? The x is multiplied by 5. We undo multiplication with division. We divide both sides by 5. You start with two equal numbers, and you divide by the same thing (that's not zero - no dividing by zero!). If you divide by the same thing on both sides, then the quotients have to be equal. And the 5's here cancel out, and you remember why, right? We can write this as 5 over 5 times x. 5 over 5 is just 1, so this side's just x. And we're left with 215 divided by 5. And you might have been able to jump straight to this. If 5 times x is 215, then x has to be 215 divided by 5, right? I mean, say, 3 times 4 is 12, that means 3 is 12 divided by 4. Kind of the same idea here. x is 215 divided by 5, you just divide that out. 
200 divided by 5, well that's 40, so 205 divided by 5 is 41. 210 divided by 5 is 42. 215 divided by 5 is 43. And we're on to the next problem. Now here, the variable is divided by something instead of multiplied. We undo division with multiplication. We're going to multiply both sides by 9. If we have two equal numbers and we multiply them by the same thing, we get two equal products. We're doing the same thing to both sides. We're multiplying them both by 9. These 9's will cancel out, leave our variable all by itself, that was the plan all along, and b is 9 times 31. 9 times 30 is 270. 9 times the 1 is another 9. And that gives us 279. And we're on to the final equation. Yikes. We have all sorts of complications here. Um, well, I don't see a magical thing to do this just in one step like we did in the other two, so I'm going to try to handle the complications one at a time. The first complication out here is this negative sign; c doesn't want to be near that negative sign so we're going to get rid of the negative sign by multiplying both sides by negative 1. We have negative 1 times 40 equals negative 1 times the negative 2c over 5. Now that just gives us negative 40 over here; negative times a negative, that's a positive, so that's 2c over 5 here. At least we got rid of the negative. Now, we can get rid of the 5 by multiplying both sides by 5. Of course, the 5's cancel over here, this gives me negative 200 over here on the left. Negative 200 equals 2c, and you can see where this is going now. We can divide off the 2. Divide off the 2 and we get negative 100 equals c. We got the c all by itself, we've solved the equation, c is negative 100. What's that? Oh, you don't like having to do all these steps? Do it in one step? You know how to do it in one step? Oh, we can combine everything we did here. We multiply by negative 1, we multiply by 5, we divide it by 2. We can do that all in one step by multiplying by negative 5 over 2. That's very clever. 
Let's check that out. So we start with this equation. Another way to think about this equation is 40 equals negative 2/5 times c. And we want to cancel out this negative 2/5. Now, if you take a number and you multiply it by its reciprocal, you get 1. So what I want to do here is, I want to multiply by the reciprocal of negative 2/5. That's very clever. The reciprocal of negative 2/5 is negative 5/2. So what I'll do here is, I'll go . . .I'm going to multiply both sides by the same thing, so multiply this by negative 5/2 and I'm going to multiply the other side by negative 5/2 as well. These are reciprocals, they cancel out, that'll be 1. And I'll be left with c over here. Negative 5/2 of 40, negative 5 times 40 is negative 200, divided by 2 is negative 100. All in one step. We have a constant times a variable, we just multiply it by the reciprocal of the constant. And away it goes. Now the great thing about equations, again, is we can check our answers. We can put negative 100 back into this equation and make sure we get the right answer, make sure that 40 comes out when we put negative 100 in there for c. So let's check that out. Negative 2 times negative 100 over 5, let's multiply this out, we have negative . . .this is negative 200 over 5. Negative 200 over 5, that gives us negative 40, and then we negate it again, that gives us 40 back. Matches up. And we're done.
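The one-step reciprocal trick from the transcript can be checked mechanically. This small sketch is not part of the lesson; it uses Python's exact `fractions` arithmetic so nothing is lost to rounding:

```python
from fractions import Fraction

def solve_linear(a, b):
    """Solve a*x = b by multiplying both sides by the reciprocal of a (a != 0)."""
    return b * (1 / a)   # 1/a is the exact reciprocal when a is a Fraction

print(solve_linear(Fraction(5), 215))      # 5x = 215        -> 43
print(solve_linear(Fraction(1, 9), 31))    # b/9 = 31        -> 279
print(solve_linear(Fraction(-2, 5), 40))   # (-2/5)c = 40    -> -100
```

The last call mirrors the video's finale: the reciprocal of −2/5 is −5/2, and 40 × (−5/2) = −100.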
12297
https://www.andrews.edu/~rwright/a2_book/text/a2_04-04.html
Algebra 2 by Richard Wright
4-04 Find Rational Zeros of Polynomial Functions (4.5)
- Evaluate a polynomial using the remainder theorem.
- List the possible rational zeros of a polynomial.
- Find the rational zeros of a polynomial.
SDA NAD Content Standards (2018): AII.4.3, AII.5.1, AII.5.3, AII.6.3
A landscape company is going to put some decorative rectangular prism-shaped stepping stones to make a path across a creek. Each stone will use 648 cubic inches of cement because that is convenient based on their cement supply. They decided that having the width be six inches greater than the length is a pleasing proportion and that the height should be one fourth the width for strength. What should be the dimensions of the stepping stone? This problem can be solved by writing a cubic function and solving a cubic equation for the volume of the stepping stone. This lesson highlights a variety of tools for writing polynomial functions and solving polynomial equations.
Evaluating a Polynomial Using the Remainder Theorem
The Remainder Theorem provides a convenient way to evaluate polynomials based on division. A polynomial f(x) may be evaluated at x = k by dividing it by x − k. In other words, f(k) is the remainder obtained by dividing f(x) by x − k. Synthetic division makes the process quick.
The Remainder Theorem: If a polynomial f(x) is divided by x − k, then the remainder is the value f(k).
To evaluate polynomial f(x) at x = k using the Remainder Theorem,
1. Use synthetic division to divide the polynomial by x − k.
2. The remainder is the value f(k).
Example 1: Using the Remainder Theorem to Evaluate a Polynomial
Use the Remainder Theorem to evaluate f(x) = x⁴ − 3x³ − x² + 2x − 13 at x = 2.
Solution
To use the Remainder Theorem, use synthetic division to divide the polynomial by x − 2.
2 | 1   −3   −1    2   −13
  |       2   −2   −6    −8
  -------------------------
    1   −1   −3   −4   −21
The remainder is −21. 
Therefore, f(2) = −21.
Analysis
It is possible to check the answer by evaluating f(2).
f(x) = x⁴ − 3x³ − x² + 2x − 13
f(2) = (2)⁴ − 3(2)³ − (2)² + 2(2) − 13 = −21
Try It 1
Use the Remainder Theorem to evaluate f(x) = 3x⁵ − x⁴ − 2x³ + x² + 3 at x = 1.
Solution
f(1) = 4
Using the Factor Theorem to Solve a Polynomial Equation
The Factor Theorem says that if (x − k) is a factor of a function, then x = k is a zero of the function. A zero is a value of x that makes f(x) = 0. It turns out that a polynomial of degree n in the complex number system will have n zeros. The Factor Theorem can be used to completely factor a polynomial into the product of n factors. Once the polynomial has been completely factored, its zeros can easily be found.
The Factor Theorem: k is a zero of f(x) if and only if (x − k) is a factor of f(x).
To solve a polynomial equation given one factor using the Factor Theorem,
1. Use synthetic division to divide the polynomial by the given factor, (x − k).
2. Confirm that the remainder is 0.
3. If the quotient is NOT a quadratic, repeat steps 1 and 2 with another factor using the quotient as the polynomial.
4. If the quotient IS a quadratic, factor the quadratic quotient if possible.
5. Set each factor equal to zero and solve for x.
Example 2: Use the Factor Theorem to Solve a Polynomial Equation
Show that (x − 1) is a factor of x³ − 2x² − 5x + 6. Find the remaining factors. Use the factors to determine the zeros of the polynomial.
Solution
Use synthetic division to show that (x − 1) is a factor of the polynomial.
1 | 1   −2   −5    6
  |       1   −1   −6
  -------------------
    1   −1   −6    0
The remainder is zero, so (x − 1) is a factor of the polynomial. The quotient is x² − x − 6, which is a quadratic. Factor that quadratic.
x² − x − 6 = (x + 2)(x − 3)
Set each factor, including the given one, equal to zero and solve for x.
x − 1 = 0 → x = 1
x + 2 = 0 → x = −2
x − 3 = 0 → x = 3
The zeros of x³ − 2x² − 5x + 6 are −2, 1, and 3. 
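Both worked examples can be verified with a short synthetic-division routine. The following sketch is an illustration added here, not part of the lesson:

```python
def synthetic_division(coeffs, k):
    """Divide a polynomial (coefficients in descending order) by x - k.

    Returns (quotient_coeffs, remainder); by the Remainder Theorem the
    remainder equals f(k), and a remainder of 0 means (x - k) is a factor."""
    row = [coeffs[0]]                    # bring down the leading coefficient
    for c in coeffs[1:]:
        row.append(c + k * row[-1])      # multiply by k, add the next coefficient
    return row[:-1], row[-1]

# Example 1: f(x) = x^4 - 3x^3 - x^2 + 2x - 13 at x = 2 -> remainder -21
print(synthetic_division([1, -3, -1, 2, -13], 2))

# Example 2: (x - 1) divides x^3 - 2x^2 - 5x + 6 -> quotient x^2 - x - 6, remainder 0
print(synthetic_division([1, -2, -5, 6], 1))
```

The first call reproduces the bottom row of the Example 1 tableau (1, −1, −3, −4 with remainder −21); the second confirms the factorization in Example 2.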
Try It 2
Use the Factor Theorem to find the zeros of f(x) = x³ − 5x² − 2x + 24 given that (x − 4) is a factor of the polynomial.
Answer
The zeros are −2, 3, and 4.
Practice Problems
Use the remainder theorem to evaluate f(x) at the given x value.
1. f(x) = x² + 5x − 15; x = 3
2. f(x) = 2x³ − 2x² + x + 1; x = −2
3. f(x) = 3x³ − 4x² − 15; x = 3
4. f(x) = 2x³ − x² − 3x − 10; x = 5
5. f(x) = 4x³ − 2x + 9; x = −1
Show that the given binomial is a factor of f(x), then find the zeros of f(x).
6. f(x) = x³ + 6x² + 5x − 12; (x + 4)
7. f(x) = x³ − 19x − 30; (x + 2)
8. f(x) = 2x³ − 3x² − 32x − 15; (x + 3)
9. f(x) = 2x⁴ − x³ − 9x² + 4x + 4; (x − 2), (x − 1)
10. f(x) = 3x⁴ + 2x³ − 13x² − 8x + 4; (x + 1), (x + 2)
Mixed Review
11. (4-03) Simplify (2x³ + 3x² − 4x + 5) ÷ (x² + x − 4)
12. (4-03) Simplify (3x⁵ − 5x³ − x + 4) ÷ (x − 2)
Solve by factoring.
13. (4-02) 2x² + 9x + 4 = 0
14. (4-02) 2x³ + 2x² = 24x
15. (4-02) x³ + 2x² − 16x − 32 = 0
Answers
1. 9
2. −25
3. 30
4. 200
5. 7
6. −4, −3, 1
7. −3, −2, 5
8. −3, −1/2, 5
9. −2, −1/2, 1, 2
10. −2, −1, 1/3, 2
11. 2x + 1 + (3x + 9)/(x² + x − 4)
12. 3x⁴ + 6x³ + 7x² + 14x + 27 + 58/(x − 2)
13. −4, −1/2
14. −4, 0, 3
15. −4, −2, 4
12298
https://www.berkshirehathaway.com/letters/1979.html
BERKSHIRE HATHAWAY INC.
To the Shareholders of Berkshire Hathaway Inc.: Again, we must lead off with a few words about accounting. Since our last annual report, the accounting profession has decided that equity securities owned by insurance companies must be carried on the balance sheet at market value. We previously have carried such equity securities at the lower of aggregate cost or aggregate market value. Because we have large unrealized gains in our insurance equity holdings, the result of this new policy is to increase substantially both the 1978 and 1979 yearend net worth, even after the appropriate liability is established for taxes on capital gains that would be payable should equities be sold at such market valuations. As you know, Blue Chip Stamps, our 60% owned subsidiary, is fully consolidated in Berkshire Hathaway’s financial statements. However, Blue Chip still is required to carry its equity investments at the lower of aggregate cost or aggregate market value, just as Berkshire Hathaway’s insurance subsidiaries did prior to this year. Should the same equities be purchased at an identical price by an insurance subsidiary of Berkshire Hathaway and by Blue Chip Stamps, present accounting principles often would require that they end up carried on our consolidated balance sheet at two different values. (That should keep you on your toes.) Market values of Blue Chip Stamps’ equity holdings are given in footnote 3 on page 18. 1979 Operating Results We continue to feel that the ratio of operating earnings (before securities gains or losses) to shareholders’ equity with all securities valued at cost is the most appropriate way to measure any single year’s operating performance. Measuring such results against shareholders’ equity with securities valued at market could significantly distort the operating performance percentage because of wide year-to-year market value changes in the net worth figure that serves as the denominator. 
For example, a large decline in securities values could result in a very low “market value” net worth that, in turn, could cause mediocre operating earnings to look unrealistically good. Alternatively, the more successful that equity investments have been, the larger the net worth base becomes and the poorer the operating performance figure appears. Therefore, we will continue to report operating performance measured against beginning net worth, with securities valued at cost. On this basis, we had a reasonably good operating performance in 1979 - but not quite as good as that of 1978 - with operating earnings amounting to 18.6% of beginning net worth. Earnings per share, of course, increased somewhat (about 20%) but we regard this as an improper figure upon which to focus. We had substantially more capital to work with in 1979 than in 1978, and our performance in utilizing that capital fell short of the earlier year, even though per-share earnings rose. “Earnings per share” will rise constantly on a dormant savings account or on a U.S. Savings Bond bearing a fixed rate of return simply because “earnings” (the stated interest rate) are continuously plowed back and added to the capital base. Thus, even a “stopped clock” can look like a growth stock if the dividend payout ratio is low. The primary test of managerial economic performance is the achievement of a high earnings rate on equity capital employed (without undue leverage, accounting gimmickry, etc.) and not the achievement of consistent gains in earnings per share. In our view, many businesses would be better understood by their shareholder owners, as well as the general public, if managements and financial analysts modified the primary emphasis they place upon earnings per share, and upon yearly changes in that figure. 
Long Term Results In measuring long term economic performance - in contrast to yearly performance - we believe it is appropriate to recognize fully any realized capital gains or losses as well as extraordinary items, and also to utilize financial statements presenting equity securities at market value. Such capital gains or losses, either realized or unrealized, are fully as important to shareholders over a period of years as earnings realized in a more routine manner through operations; it is just that their impact is often extremely capricious in the short run, a characteristic that makes them inappropriate as an indicator of single year managerial performance. The book value per share of Berkshire Hathaway on September 30, 1964 (the fiscal yearend prior to the time that your present management assumed responsibility) was $19.46 per share. At yearend 1979, book value with equity holdings carried at market value was $335.85 per share. The gain in book value comes to 20.5% compounded annually. This figure, of course, is far higher than any average of our yearly operating earnings calculations, and reflects the importance of capital appreciation of insurance equity investments in determining the overall results for our shareholders. It probably also is fair to say that the quoted book value in 1964 somewhat overstated the intrinsic value of the enterprise, since the assets owned at that time on either a going concern basis or a liquidating value basis were not worth 100 cents on the dollar. (The liabilities were solid, however.) We have achieved this result while utilizing a low amount of leverage (both financial leverage measured by debt to equity, and operating leverage measured by premium volume to capital funds of our insurance business), and also without significant issuance or repurchase of shares. Basically, we have worked with the capital with which we started. 
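The 20.5% figure follows directly from the two book values and the roughly 15.25-year span (September 30, 1964 to yearend 1979). A quick check of the arithmetic, ours rather than the letter's:

```python
# Book value per share: $19.46 on Sept 30, 1964 -> $335.85 at yearend 1979.
start, end = 19.46, 335.85
years = 15.25   # Sept 30, 1964 to Dec 31, 1979, approximately

cagr = (end / start) ** (1 / years) - 1
print(f"Compound annual growth ≈ {cagr:.1%}")   # ≈ 20.5%, as stated
```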
From our textile base we, or our Blue Chip and Wesco subsidiaries, have acquired total ownership of thirteen businesses through negotiated purchases from private owners for cash, and have started six others. (It’s worth a mention that those who have sold to us have, almost without exception, treated us with exceptional honor and fairness, both at the time of sale and subsequently.) But before we drown in a sea of self-congratulation, a further - and crucial - observation must be made. A few years ago, a business whose per-share net worth compounded at 20% annually would have guaranteed its owners a highly successful real investment return. Now such an outcome seems less certain. For the inflation rate, coupled with individual tax rates, will be the ultimate determinant as to whether our internal operating performance produces successful investment results - i.e., a reasonable gain in purchasing power from funds committed - for you as shareholders. Just as the original 3% savings bond, a 5% passbook savings account or an 8% U.S. Treasury Note have, in turn, been transformed by inflation into financial instruments that chew up, rather than enhance, purchasing power over their investment lives, a business earning 20% on capital can produce a negative real return for its owners under inflationary conditions not much more severe than presently prevail. If we should continue to achieve a 20% compounded gain - not an easy or certain result by any means - and this gain is translated into a corresponding increase in the market value of Berkshire Hathaway stock as it has been over the last fifteen years, your after-tax purchasing power gain is likely to be very close to zero at a 14% inflation rate. Most of the remaining six percentage points will go for income tax any time you wish to convert your twenty percentage points of nominal annual gain into cash. 
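The "very close to zero" claim is simple arithmetic on the letter's own round numbers: a 20-point nominal gain, less the roughly six points lost to tax on conversion to cash, less 14 points of inflation. An illustrative sketch, using only those stated figures:

```python
# Illustrative only: the letter's round numbers, in percentage points.
nominal_gain = 20.0   # hoped-for compounded annual gain
tax_cost = 6.0        # the letter's ~6 points lost to income/capital gains tax
inflation = 14.0      # the assumed inflation rate

real_after_tax_points = nominal_gain - tax_cost - inflation
print(real_after_tax_points)   # 0.0 -> purchasing-power gain "very close to zero"
```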
That combination - the inflation rate plus the percentage of capital that must be paid by the owner to transfer into his own pocket the annual earnings achieved by the business (i.e., ordinary income tax on dividends and capital gains tax on retained earnings) - can be thought of as an “investor’s misery index”. When this index exceeds the rate of return earned on equity by the business, the investor’s purchasing power (real capital) shrinks even though he consumes nothing at all. We have no corporate solution to this problem; high inflation rates will not help us earn higher rates of return on equity.

One friendly but sharp-eyed commentator on Berkshire has pointed out that our book value at the end of 1964 would have bought about one-half ounce of gold and, fifteen years later, after we have plowed back all earnings along with much blood, sweat and tears, the book value produced will buy about the same half ounce. A similar comparison could be drawn with Middle Eastern oil. The rub has been that government has been exceptionally able in printing money and creating promises, but is unable to print gold or create oil.

We intend to continue to do as well as we can in managing the internal affairs of the business. But you should understand that external conditions affecting the stability of currency may very well be the most important factor in determining whether there are any real rewards from your investment in Berkshire Hathaway.

Sources of Earnings

We again present a table showing the sources of Berkshire’s earnings. As explained last year, Berkshire owns about 60% of Blue Chip Stamps which, in turn, owns 80% of Wesco Financial Corporation. The table shows both aggregate earnings of the various business entities, as well as Berkshire’s share. All of the significant capital gains or losses attributable to any of the business entities are aggregated in the realized securities gain figure at the bottom of the table, and are not included in operating earnings.
                                         Earnings Before Income Taxes         Net Earnings
                                      ------------------------------------      After Tax
                                            Total          Berkshire Share   Berkshire Share
                                      -----------------   -----------------  -----------------
                                                    (in thousands of dollars)
                                        1979      1978      1979      1978     1979      1978
                                      --------  --------  --------  --------  --------  --------
Total - all entities ................  $68,632   $66,180   $56,427   $54,350   $42,817   $39,242
                                      ========  ========  ========  ========  ========  ========
Earnings from Operations:
  Insurance Group:
    Underwriting ....................  $ 3,742   $ 3,001   $ 3,741   $ 3,000   $ 2,214   $ 1,560
    Net Investment Income ...........   24,224    19,705    24,216    19,691    20,106    16,400
  Berkshire-Waumbec textiles ........    1,723     2,916     1,723     2,916       848     1,342
  Associated Retail Stores, Inc. ....    2,775     2,757     2,775     2,757     1,280     1,176
  See’s Candies .....................   12,785    12,482     7,598     7,013     3,448     3,049
  Buffalo Evening News ..............   (4,617)   (2,913)   (2,744)   (1,637)   (1,333)     (738)
  Blue Chip Stamps - Parent .........    2,397     2,133     1,425     1,198     1,624     1,382
  Illinois National Bank and
    Trust Company ...................    5,747     4,822     5,614     4,710     5,027     4,262
  Wesco Financial
    Corporation - Parent ............    2,413     1,771     1,098       777       937       665
  Mutual Savings and
    Loan Association ................   10,447    10,556     4,751     4,638     3,261     3,042
  Precision Steel ...................    3,254        --     1,480        --       723        --
  Interest on Debt ..................   (8,248)   (5,566)   (5,860)   (4,546)   (2,900)   (2,349)
  Other .............................    1,342       720       996       438       753       261
                                      --------  --------  --------  --------  --------  --------
    Total Earnings from Operations ..  $57,984   $52,384   $46,813   $40,955   $35,988   $30,052
    Realized Securities Gain ........   10,648    13,796     9,614    13,395     6,829     9,190
                                      --------  --------  --------  --------  --------  --------
    Total Earnings ..................  $68,632   $66,180   $56,427   $54,350   $42,817   $39,242
                                      ========  ========  ========  ========  ========  ========

Blue Chip and Wesco are public companies with reporting requirements of their own.
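As a quick arithmetic check (ours, not the report's), the table's totals tie out: in each column, total earnings equal earnings from operations plus the realized securities gain. Sketched for the 1979 columns:

```python
# 1979 column totals from the sources-of-earnings table (thousands of dollars).
totals_1979 = {
    "total":      {"pretax": 68_632, "bh_pretax": 56_427, "bh_after_tax": 42_817},
    "operations": {"pretax": 57_984, "bh_pretax": 46_813, "bh_after_tax": 35_988},
    "sec_gain":   {"pretax": 10_648, "bh_pretax":  9_614, "bh_after_tax":  6_829},
}

# Total earnings should equal operations plus realized securities gain.
for col in ("pretax", "bh_pretax", "bh_after_tax"):
    assert totals_1979["operations"][col] + totals_1979["sec_gain"][col] == totals_1979["total"][col]
print("table ties out")
```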
On pages 37-43 of this report, we have reproduced the narrative reports of the principal executives of both companies, in which they describe 1979 operations. Some of the numbers they mention in their reports are not precisely identical to those in the above table because of accounting and tax complexities. (The Yanomamo Indians employ only three numbers: one, two, and more than two. Maybe their time will come.) However, the commentary in those reports should be helpful to you in understanding the underlying economic characteristics and future prospects of the important businesses that they manage.

A copy of the full annual report of either company will be mailed to any shareholder of Berkshire upon request to Mr. Robert H. Bird for Blue Chip Stamps, 5801 South Eastern Avenue, Los Angeles, California 90040, or to Mrs. Bette Deckard for Wesco Financial Corporation, 315 East Colorado Boulevard, Pasadena, California 91109.

Textiles and Retailing

The relative significance of these two areas has diminished somewhat over the years as our insurance business has grown dramatically in size and earnings. Ben Rosner, at Associated Retail Stores, continues to pull rabbits out of the hat - big rabbits from a small hat. Year after year, he produces very large earnings relative to capital employed - realized in cash and not in increased receivables and inventories as in many other retail businesses - in a segment of the market with little growth and unexciting demographics. Ben is now 76 and, like our other “up-and-comers”, Gene Abegg, 82, at Illinois National and Louis Vincenti, 74, at Wesco, regularly achieves more each year.

Our textile business also continues to produce some cash, but at a low rate compared to capital employed. This is not a reflection on the managers, but rather on the industry in which they operate.
In some businesses - a network TV station, for example - it is virtually impossible to avoid earning extraordinary returns on tangible capital employed in the business. And assets in such businesses sell at equally extraordinary prices, one thousand cents or more on the dollar, a valuation reflecting the splendid, almost unavoidable, economic results obtainable. Despite a fancy price tag, the “easy” business may be the better route to go.

We can speak from experience, having tried the other route. Your Chairman made the decision a few years ago to purchase Waumbec Mills in Manchester, New Hampshire, thereby expanding our textile commitment. By any statistical test, the purchase price was an extraordinary bargain; we bought well below the working capital of the business and, in effect, got very substantial amounts of machinery and real estate for less than nothing. But the purchase was a mistake. While we labored mightily, new problems arose as fast as old problems were tamed.

Both our operating and investment experience cause us to conclude that “turnarounds” seldom turn, and that the same energies and talent are much better employed in a good business purchased at a fair price than in a poor business purchased at a bargain price.

Although a mistake, the Waumbec acquisition has not been a disaster. Certain portions of the operation are proving to be valuable additions to our decorator line (our strongest franchise) at New Bedford, and it’s possible that we may be able to run profitably on a considerably reduced scale at Manchester. However, our original rationale did not prove out.

Insurance Underwriting

We predicted last year that the combined underwriting ratio (see definition on page 36) for the insurance industry would “move up at least a few points, perhaps enough to throw the industry as a whole into an underwriting loss position”. That is just about the way it worked out.
The industry underwriting ratio rose in 1979 over three points, from roughly 97.4% to 100.7%. We also said that we thought our underwriting performance relative to the industry would improve somewhat in 1979 and, again, things worked out as expected. Our own underwriting ratio actually decreased from 98.2% to 97.1%. Our forecast for 1980 is similar in one respect; again we feel that the industry’s performance will worsen by at least another few points. However, this year we have no reason to think that our performance relative to the industry will further improve. (Don’t worry - we won’t hold back to try to validate that forecast.)

Really extraordinary results were turned in by the portion of National Indemnity Company’s insurance operation run by Phil Liesche. Aided by Roland Miller in Underwriting and Bill Lyons in Claims, this section of the business produced an underwriting profit of $8.4 million on about $82 million of earned premiums. Only a very few companies in the entire industry produced a result comparable to this.

You will notice that earned premiums in this segment were down somewhat from those of 1978. We hear a great many insurance managers talk about being willing to reduce volume in order to underwrite profitably, but we find that very few actually do so. Phil Liesche is an exception: if business makes sense, he writes it; if it doesn’t, he rejects it.

It is our policy not to lay off people because of the large fluctuations in work load produced by such voluntary volume changes. We would rather have some slack in the organization from time to time than keep everyone terribly busy writing business on which we are going to lose money. Jack Ringwalt, the founder of National Indemnity Company, instilled this underwriting discipline at the inception of the company, and Phil Liesche never has wavered in maintaining it.
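A back-of-envelope reading of those figures (our inference, not a calculation from the letter; the precise underwriting-ratio definition is on page 36 of the report): an $8.4 million underwriting profit on about $82 million of earned premiums implies a segment ratio of roughly 90%, versus the industry's 100.7%:

```python
# Hypothetical simplification: treat the underwriting ratio as
# 1 minus (underwriting profit / earned premiums).
earned_premiums = 82.0      # $ millions, Liesche segment
underwriting_profit = 8.4   # $ millions

segment_ratio = 1 - underwriting_profit / earned_premiums
industry_ratio = 1.007      # industry underwriting ratio, 1979
print(f"implied segment ratio: {segment_ratio:.1%} vs industry {industry_ratio:.1%}")
```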
We believe such strong-mindedness is as rare as it is sound - and absolutely essential to the running of a first-class casualty insurance operation.

John Seward continues to make solid progress at Home and Automobile Insurance Company, in large part by significantly expanding the marketing scope of that company in general liability lines. These lines can be dynamite, but the record to date is excellent and, in John McGowan and Paul Springman, we have two cautious liability managers extending our capabilities.

Our reinsurance division, led by George Young, continues to give us reasonably satisfactory overall results after allowing for investment income, but underwriting performance remains unsatisfactory. We think the reinsurance business is a very tough business that is likely to get much tougher. In fact, the influx of capital into the business and the resulting softer price levels for continually increasing exposures may well produce disastrous results for many entrants (of which they may be blissfully unaware until they are in over their heads; much reinsurance business involves an exceptionally “long tail”, a characteristic that allows catastrophic current loss experience to fester undetected for many years). It will be hard for us to be a whole lot smarter than the crowd and thus our reinsurance activity may decline substantially during the projected prolonged period of extraordinary competition.

The Homestate operation was disappointing in 1979. Excellent results again were turned in by George Billings at Texas United Insurance Company, winner of the annual award for the low loss ratio among Homestate companies, and Floyd Taylor at Kansas Fire and Casualty Company. But several of the other operations, particularly Cornhusker Casualty Company, our first and largest Homestate operation and historically a winner, had poor underwriting results which were accentuated by data processing, administrative and personnel problems.
We have made some major mistakes in reorganizing our data processing activities, and those mistakes will not be cured immediately or without cost. However, John Ringwalt has thrown himself into the task of getting things straightened out and we have confidence that he, aided by several strong people who recently have been brought aboard, will succeed.

Our performance in Worker’s Compensation was far, far better than we had any right to expect at the beginning of 1979. We had a very favorable climate in California for the achievement of good results but, beyond this, Milt Thornton at Cypress Insurance Company and Frank DeNardo at National Indemnity’s California Worker’s Compensation operation both performed in a simply outstanding manner. We have admitted - and with good reason - some mistakes on the acquisition front, but the Cypress purchase has turned out to be an absolute gem. Milt Thornton, like Phil Liesche, follows the policy of sticking with business that he understands and wants, without giving consideration to the impact on volume. As a result, he has an outstanding book of business and an exceptionally well functioning group of employees. Frank DeNardo has straightened out the mess he inherited in Los Angeles in a manner far beyond our expectations, producing savings measured in seven figures. He now can begin to build on a sound base.

At yearend we entered the specialized area of surety reinsurance under the management of Chet Noble. At least initially, this operation will be relatively small since our policy will be to seek client companies who appreciate the need for a long term “partnership” relationship with their reinsurers. We are pleased by the quality of the insurers we have attracted, and hope to add several more of the best primary writers as our financial strength and stability become better known in the surety field.
The conventional wisdom is that insurance underwriting overall will be poor in 1980, but that rates will start to firm in a year or so, leading to a turn in the cycle some time in 1981. We disagree with this view. Present interest rates encourage the obtaining of business at underwriting loss levels formerly regarded as totally unacceptable. Managers decry the folly of underwriting at a loss to obtain investment income, but we believe that many will. Thus we expect that competition will create a new threshold of tolerance for underwriting losses, and that combined ratios will average higher in the future than in the past.

To some extent, the day of reckoning has been postponed because of marked reduction in the frequency of auto accidents - probably brought on in major part by changes in driving habits induced by higher gas prices. In our opinion, if the habits hadn’t changed, auto insurance rates would have been very little higher and underwriting results would have been much worse. This dosage of serendipity won’t last indefinitely.

Our forecast is for an average combined ratio for the industry in the 105 area over the next five years. While we have a high degree of confidence that certain of our operations will do considerably better than average, it will be a challenge to us to operate below the industry figure. You can get a lot of surprises in insurance.

Nevertheless, we believe that insurance can be a very good business. It tends to magnify, to an unusual degree, human managerial talent - or the lack of it. We have a number of managers whose talent is both proven and growing. (And, in addition, we have a very large indirect interest in two truly outstanding management groups through our investments in SAFECO and GEICO.) Thus we expect to do well in insurance over a period of years. However, the business has the potential for really terrible results in a single specific year.
If accident frequency should turn around quickly in the auto field, we, along with others, are likely to experience such a year.

Insurance Investments

In recent years we have written at length in this section about our insurance equity investments. In 1979 they continued to perform well, largely because the underlying companies in which we have invested, in practically all cases, turned in outstanding performances. Retained earnings applicable to our insurance equity investments, not reported in our financial statements, continue to mount annually and, in aggregate, now come to a very substantial number. We have faith that the managements of these companies will utilize those retained earnings effectively and will translate a dollar retained by them into a dollar or more of subsequent market value for us. In part, our unrealized gains reflect this process. Below we show the equity investments which had a yearend market value of over $5 million:

  No. of Sh.  Company                                      Cost        Market
  ----------  -------                                   ----------   ----------
                                                            (000s omitted)
     289,700  Affiliated Publications, Inc. ..........   $  2,821     $  8,800
     112,545  Amerada Hess ...........................      2,861        5,487
     246,450  American Broadcasting Companies, Inc. ..      6,082        9,673
   5,730,114  GEICO Corp. (Common Stock) .............     28,288       68,045
     328,700  General Foods, Inc. ....................     11,437       11,053
   1,007,500  Handy & Harman .........................     21,825       38,537
     711,180  Interpublic Group of Companies, Inc. ...      4,531       23,736
   1,211,834  Kaiser Aluminum & Chemical Corp. .......     20,629       23,328
     282,500  Media General, Inc. ....................      4,545        7,345
     391,400  Ogilvy & Mather International ..........      3,709        7,828
     953,750  SAFECO Corporation .....................     23,867       35,527
   1,868,000  The Washington Post Company ............     10,628       39,241
     771,900  F. W. Woolworth Company ................     15,515       19,394
                                                        ----------   ----------
              Total ..................................   $156,738     $297,994
              All Other Holdings .....................     28,675       38,686
                                                        ----------   ----------
              Total Equities .........................   $185,413     $336,680
                                                        ==========   ==========

We currently believe that equity markets in 1980 are likely to evolve in a manner that will result in an underperformance by our portfolio for the first time in recent years. We very much like the companies in which we have major investments, and plan no changes to try to attune ourselves to the markets of a specific year.

Since we have covered our philosophy regarding equities extensively in recent annual reports, a more extended discussion of bond investments may be appropriate for this one, particularly in light of what has happened since yearend. An extraordinary amount of money has been lost by the insurance industry in the bond area - notwithstanding the accounting convention that allows insurance companies to carry their bond investments at amortized cost, regardless of impaired market value. Actually, that very accounting convention may have contributed in a major way to the losses; had management been forced to recognize market values, its attention might have been focused much earlier on the dangers of a very long-term bond contract.

Ironically, many insurance companies have decided that a one-year auto policy is inappropriate during a time of inflation, and six-month policies have been brought in as replacements. “How,” say many of the insurance managers, “can we be expected to look forward twelve months and estimate such imponderables as hospital costs, auto parts prices, etc.?” But, having decided that one year is too long a period for which to set a fixed price for insurance in an inflationary world, they then have turned around, taken the proceeds from the sale of that six-month policy, and sold the money at a fixed price for thirty or forty years.
The very long-term bond contract has been the last major fixed price contract of extended duration still regularly initiated in an inflation-ridden world. The buyer of money to be used between 1980 and 2020 has been able to obtain a firm price now for each year of its use while the buyer of auto insurance, medical services, newsprint, office space - or just about any other product or service - would be greeted with laughter if he were to request a firm price now to apply through 1985. For in virtually all other areas of commerce, parties to long-term contracts now either index prices in some manner, or insist on the right to review the situation every year or so.

A cultural lag has prevailed in the bond area. The buyers (borrowers) and middlemen (underwriters) of money hardly could be expected to raise the question of whether it all made sense, and the sellers (lenders) slept through an economic and contractual revolution.

For the last few years our insurance companies have not been a net purchaser of any straight long-term bonds (those without conversion rights or other attributes offering profit possibilities). There have been some purchases in the straight bond area, of course, but they have been offset by sales or maturities. Even prior to this period, we never would buy thirty or forty-year bonds; instead we tried to concentrate in the straight bond area on shorter issues with sinking funds and on issues that seemed relatively undervalued because of bond market inefficiencies.

However, the mild degree of caution that we exercised was an improper response to the world unfolding about us. You do not adequately protect yourself by being half awake while others are sleeping. It was a mistake to buy fifteen-year bonds, and yet we did; we made an even more serious mistake in not selling them (at losses, if necessary) when our present views began to crystallize.
(Naturally, those views are much clearer and definite in retrospect; it would be fair for you to ask why we weren’t writing about this subject last year.) Of course, we must hold significant amounts of bonds or other fixed dollar obligations in conjunction with our insurance operations. In the last several years our net fixed dollar commitments have been limited to the purchase of convertible bonds. We believe that the conversion options obtained, in effect, give that portion of the bond portfolio a far shorter average life than implied by the maturity terms of the issues (i.e., at an appropriate time of our choosing, we can terminate the bond contract by conversion into stock).

This bond policy has given us significantly lower unrealized losses than those experienced by the great majority of property and casualty insurance companies. We also have been helped by our strong preference for equities in recent years that has kept our overall bond segment relatively low. Nevertheless, we are taking our lumps in bonds and feel that, in a sense, our mistakes should be viewed less charitably than the mistakes of those who went about their business unmindful of the developing problems. Harking back to our textile experience, we should have realized the futility of trying to be very clever (via sinking funds and other special type issues) in an area where the tide was running heavily against us.

We have severe doubts as to whether a very long-term fixed-interest bond, denominated in dollars, remains an appropriate business contract in a world where the value of dollars seems almost certain to shrink by the day. Those dollars, as well as paper creations of other governments, simply may have too many structural weaknesses to appropriately serve as a unit of long term commercial reference. If so, really long bonds may turn out to be obsolete instruments and insurers who have bought those maturities of 2010 or 2020 could have major and continuing problems on their hands.
We, likewise, will be unhappy with our fifteen-year bonds and will annually pay a price in terms of earning power that reflects that mistake. Some of our convertible bonds appear exceptionally attractive to us, and have the same sort of earnings retention factor (applicable to the stock into which they may be converted) that prevails in our conventional equity portfolio. We expect to make money in these bonds (we already have, in a few cases) and have hopes that our profits in this area may offset losses in straight bonds.

And, of course, there is the possibility that our present analysis is much too negative. The chances for very low rates of inflation are not nil. Inflation is man-made; perhaps it can be man-mastered. The threat which alarms us may also alarm legislators and other powerful groups, prompting some appropriate response. Furthermore, present interest rates incorporate much higher inflation projections than those of a year or two ago. Such rates may prove adequate or more than adequate to protect bond buyers. We even may miss large profits from a major rebound in bond prices. However, our unwillingness to fix a price now for a pound of See’s candy or a yard of Berkshire cloth to be delivered in 2010 or 2020 makes us equally unwilling to buy bonds which set a price on money now for use in those years. Overall, we opt for Polonius (slightly restated): “Neither a short-term borrower nor a long-term lender be.”

Banking

This will be the last year that we can report on the Illinois National Bank and Trust Company as a subsidiary of Berkshire Hathaway. Therefore, it is particularly pleasant to report that, under Gene Abegg’s and Pete Jeffrey’s management, the bank broke all previous records and earned approximately 2.3% on average assets last year, a level again over three times that achieved by the average major bank, and more than double that of banks regarded as outstanding.
The record is simply extraordinary, and the shareholders of Berkshire Hathaway owe a standing ovation to Gene Abegg for the performance this year and every year since our purchase in 1969.

As you know, the Bank Holding Company Act of 1969 requires that we divest the bank by December 31, 1980. For some years we have expected to comply by effecting a spin-off during 1980. However, the Federal Reserve Board has taken the firm position that if the bank is spun off, no officer or director of Berkshire Hathaway can be an officer or director of the spun-off bank or bank holding company, even in a case such as ours in which one individual would own over 40% of both companies.

Under these conditions, we are investigating the possible sale of between 80% and 100% of the stock of the bank. We will be most choosy about any purchaser, and our selection will not be based solely on price. The bank and its management have treated us exceptionally well and, if we have to sell, we want to be sure that they are treated equally as well. A spin-off still is a possibility if a fair price along with a proper purchaser cannot be obtained by early fall.

However, you should be aware that we do not expect to be able to fully, or even in very large part, replace the earning power represented by the bank from the proceeds of the sale of the bank. You simply can’t buy high quality businesses at the sort of price/earnings multiple likely to prevail on our bank sale.

Financial Reporting

During 1979, NASDAQ trading was initiated in the stock of Berkshire Hathaway. This means that the stock now is quoted on the Over-the-Counter page of the Wall Street Journal under “Additional OTC Quotes”. Prior to such listing, the Wall Street Journal and the Dow-Jones news ticker would not report our earnings, even though such earnings were one hundred or more times the level of some companies whose reports they regularly picked up.
Now, however, the Dow-Jones news ticker reports our quarterly earnings promptly after we release them and, in addition, both the ticker and the Wall Street Journal report our annual earnings. This solves a dissemination problem that had bothered us.

In some ways, our shareholder group is a rather unusual one, and this affects our manner of reporting to you. For example, at the end of each year about 98% of the shares outstanding are held by people who also were shareholders at the beginning of the year. Therefore, in our annual report we build upon what we have told you in previous years instead of restating a lot of material. You get more useful information this way, and we don’t get bored.

Furthermore, perhaps 90% of our shares are owned by investors for whom Berkshire is their largest security holding, very often far and away the largest. Many of these owners are willing to spend a significant amount of time with the annual report, and we attempt to provide them with the same information we would find useful if the roles were reversed.

In contrast, we include no narrative with our quarterly reports. Our owners and managers both have very long time-horizons in regard to this business, and it is difficult to say anything new or meaningful each quarter about events of long-term significance.

But when you do receive a communication from us, it will come from the fellow you are paying to run the business. Your Chairman has a firm belief that owners are entitled to hear directly from the CEO as to what is going on and how he evaluates the business, currently and prospectively. You would demand that in a private company; you should expect no less in a public company. A once-a-year report of stewardship should not be turned over to a staff specialist or public relations consultant who is unlikely to be in a position to talk frankly on a manager-to-owner basis.
We feel that you, as owners, are entitled to the same sort of reporting by your manager as we feel is owed to us at Berkshire Hathaway by managers of our business units. Obviously, the degree of detail must be different, particularly where information would be useful to a business competitor or the like. But the general scope, balance, and level of candor should be similar. We don’t expect a public relations document when our operating managers tell us what is going on, and we don’t feel you should receive such a document.

In large part, companies obtain the shareholder constituency that they seek and deserve. If they focus their thinking and communications on short-term results or short-term stock market consequences they will, in large part, attract shareholders who focus on the same factors. And if they are cynical in their treatment of investors, eventually that cynicism is highly likely to be returned by the investment community.

Phil Fisher, a respected investor and author, once likened the policies of the corporation in attracting shareholders to those of a restaurant attracting potential customers. A restaurant could seek a given clientele - patrons of fast foods, elegant dining, Oriental food, etc. - and eventually obtain an appropriate group of devotees. If the job were expertly done, that clientele, pleased with the service, menu, and price level offered, would return consistently. But the restaurant could not change its character constantly and end up with a happy and stable clientele. If the business vacillated between French cuisine and take-out chicken, the result would be a revolving door of confused and dissatisfied customers.

So it is with corporations and the shareholder constituency they seek. You can’t be all things to all men, simultaneously seeking different owners whose primary interests run from high current yield to long-term capital growth to stock market pyrotechnics, etc.
The reasoning of managements that seek large trading activity in their shares puzzles us. In effect, such managements are saying that they want a good many of the existing clientele continually to desert them in favor of new ones - because you can’t add lots of new owners (with new expectations) without losing lots of former owners.

We much prefer owners who like our service and menu and who return year after year. It would be hard to find a better group to sit in the Berkshire Hathaway shareholder “seats” than those already occupying them. So we hope to continue to have a very low turnover among our owners, reflecting a constituency that understands our operation, approves of our policies, and shares our expectations. And we hope to deliver on those expectations.

Prospects

Last year we said that we expected operating earnings in dollars to improve but return on equity to decrease. This turned out to be correct. Our forecast for 1980 is the same. If we are wrong, it will be on the downside. In other words, we are virtually certain that our operating earnings expressed as a percentage of the new equity base of approximately $236 million, valuing securities at cost, will decline from the 18.6% attained in 1979. There is also a fair chance that operating earnings in aggregate dollars will fall short of 1979; the outcome depends partly upon the date of disposition of the bank, partly upon the degree of slippage in insurance underwriting profitability, and partly upon the severity of earnings problems in the savings and loan industry.

We continue to feel very good about our insurance equity investments. Over a period of years, we expect to develop very large and growing amounts of underlying earning power attributable to our fractional ownership of these companies. In most cases they are splendid businesses, splendidly managed, purchased at highly attractive prices.
Your company is run on the principle of centralization of financial decisions at the top (the very top, it might be added), and rather extreme delegation of operating authority to a number of key managers at the individual company or business unit level. We could just about field a basketball team with our corporate headquarters group (which utilizes only about 1500 square feet of space).

This approach produces an occasional major mistake that might have been eliminated or minimized through closer operating controls. But it also eliminates large layers of costs and dramatically speeds decision-making. Because everyone has a great deal to do, a very great deal gets done. Most important of all, it enables us to attract and retain some extraordinarily talented individuals - people who simply can’t be hired in the normal course of events - who find working for Berkshire to be almost identical to running their own show. We have placed much trust in them - and their achievements have far exceeded that trust.

Warren E. Buffett, Chairman
March 3, 1980
https://www.teacherspayteachers.com/Product/Comparing-and-Ordering-Fractions-Digital-Lesson-Activities-6975843
Comparing and Ordering Fractions - Digital Lesson & Activities
Rated 4.7 out of 5, based on 10 reviews. Price: $2.00.

What others say: "This is a clear, well-organized resource that was easy to use and engaging for students. I appreciate the thoughtful design and alignment with learning goals. Thanks for creating something so helpful!" - Kylie K.

Description
Comparing and Ordering Fractions (Both Like and Unlike Denominators). Includes detailed notes and visuals for fractions: how to compare fractions with the same denominators, how to compare fractions with the same numerators, comparing fractions to one whole, comparing fractions to one half, how to order fractions using the LCM to create equivalent fractions, and how to order fractions using models.

Comparing Fractions Activities:
- Students will compare 2 fractions with like denominators using <, >, or =.
- Students will mark the fractions on the fraction bars, then compare using <, >, or =.
- Students will write the fraction or mixed number for the given models, then compare using <, >, or =.
- Students will compare two fractions by finding the LCM and creating equivalent fractions.
- Students will mark the fractions on the fraction bars, then order the fractions from least to greatest.
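The LCM method described in the lesson can be sketched in a few lines of code: rewrite both fractions over their least common denominator, then compare the scaled numerators. This is an illustrative sketch, not part of the product; the function name `compare_fractions` is our own, and `math.lcm` requires Python 3.9+.

```python
from math import lcm

def compare_fractions(n1, d1, n2, d2):
    """Compare n1/d1 with n2/d2 by rewriting both as equivalent
    fractions over their least common denominator."""
    common = lcm(d1, d2)        # least common denominator
    a = n1 * (common // d1)     # numerator of n1/d1 rewritten over `common`
    b = n2 * (common // d2)     # numerator of n2/d2 rewritten over `common`
    if a < b:
        return "<"
    if a > b:
        return ">"
    return "="

# 2/3 vs 3/4: LCM(3, 4) = 12, so compare 8/12 with 9/12
print(compare_fractions(2, 3, 3, 4))  # prints "<"
```

Once both fractions share a denominator, ordering a whole list from least to greatest reduces to sorting by the scaled numerators, which mirrors the "order using equivalent fractions" activity above.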
Seller: Whitt's End (963 followers). Grade: 3rd - 5th. Subject: Fractions, Math, Math Test Prep. Tags: Activities, Centers.

Reviews (rated 4.7 out of 5, based on 10 reviews):
- Kylie K., Aug 6, 2025 (5 stars): "This is a clear, well-organized resource that was easy to use and engaging for students. I appreciate the thoughtful design and alignment with learning goals. Thanks for creating something so helpful!" Used with PreK, K, 1st, and 2nd grades.
- Tori S., Apr 14, 2025 (5 stars): "My students LOVED this resource! Perfect for any 3rd Graders!" Used with 3rd grade, primarily students with learning difficulties; students were engaged.
- Haley D., Feb 27, 2025 (5 stars): "This was a great resource to incorporate for my 5th graders." Used with 5th grade; students were engaged.
- Jamie G., Aug 15, 2024 (5 stars): "Fantastic resource!" Used with 4th grade; students were engaged.
- (TPT Seller), Aug 6, 2024 (5 stars): "This is such a great resource. Thank you so much!!"