# Rewrite expressions involving radicals and rational exponents Rewrite expressions involving radicals and rational exponents using the properties of exponents.
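One worked example of the kind of rewrite this skill covers (our own illustration, using the product-of-powers property and the identity $\sqrt[n]{x^m}=x^{m/n}$):

```latex
% Rewrite a product of radicals as a single radical via rational exponents:
%   sqrt(x) * cbrt(x) = x^(1/2) * x^(1/3) = x^(1/2 + 1/3) = x^(5/6)
\sqrt{x}\cdot\sqrt[3]{x} \;=\; x^{1/2}\cdot x^{1/3} \;=\; x^{1/2+1/3} \;=\; x^{5/6} \;=\; \sqrt[6]{x^{5}}
```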
# Mathematical Treasures - Jordanus de Nemore's Arithmetica

Author(s): Frank J. Swetz and Victor J. Katz

Jordanus de Nemore (1225-1260) was a German scholar who wrote several books on arithmetic and geometry. In the ten books of his De elementis arismetice artis, he attempted to write a comprehensive work on arithmetic similar to Euclid's Elements in geometry. The images below are from an early printed edition of the Arithmetica of Jordanus de Nemore, who would have handwritten the original. A discussion of the operation of addition employs diagrams that look a bit like "number lines," along with operational "jumps," to demonstrate the additive process.

This is page 7. The page contains theorems 14 through 19 of Book I. Rough translations of the statements of the theorems follow.

14. If a number is divided into two parts and the whole number is multiplied by one of the parts, the result is that part multiplied by itself added to the product of the two parts.
15. If a number is divided into two parts and the whole is multiplied by itself, the result is the sum of each part multiplied by itself and twice one part multiplied by the other.
16. If a number is divided into two parts, then the sum of the whole multiplied by itself and one part multiplied by itself is twice the whole multiplied by that part and the second part multiplied by itself.
17. If a number is divided into two parts, then the whole multiplied by itself is the same as four times the product of one part multiplied by the other and the difference between the two parts multiplied by itself.
18. If a number is divided into two parts, then the smaller multiplied by itself and the whole number multiplied by the difference of the two parts is the same as the larger multiplied by itself.
19. If a number is divided into two equal parts and also into two unequal parts, then one of the equal parts multiplied by itself is the same as the product of the unequal parts and the product of the differences between the unequal parts and the equal parts.

Note that illustrations of each theorem are given by diagrams with numbers in the left margin. Thus, the first diagram illustrates that $2\times(2+4)=4+8=12.$

An introductory page from the same copy of the work follows:

These images appear courtesy of the Columbia University Libraries, where the book featured above is part of the Plimpton and Smith Collections. See images from a manuscript copy of Jordanus de Nemore's De triangulis here in Convergence.

Index to Convergence's Mathematical Treasures

Frank J. Swetz and Victor J. Katz, "Mathematical Treasures - Jordanus de Nemore's Arithmetica," Convergence (January 2011)
## Precalculus (6th Edition) Blitzer

Using $f\left( x \right)=3{{x}^{2}}+x$, we can determine that $\underset{h\to 0}{\mathop{\lim }}\,\frac{f\left( 1+h \right)-f\left( 1 \right)}{h}=7$. This means that the point-slope equation of the tangent line to the graph of $f\left( x \right)=3{{x}^{2}}+x$ at $\left( 1,4 \right)$ is $y-4=7\left( x-1 \right)$.

Consider the function $f\left( x \right)=3{{x}^{2}}+x$. Since it is given that $\underset{h\to 0}{\mathop{\lim }}\,\frac{f\left( 1+h \right)-f\left( 1 \right)}{h}=7$, the slope of the tangent line to the graph of the function at $x=1$ is $7$. The point-slope equation of the tangent line at $\left( 1,4 \right)$ with slope $7$ is found by substituting ${{x}_{1}}=1$, ${{y}_{1}}=4$, and $m=7$ into $y-{{y}_{1}}=m\left( x-{{x}_{1}} \right)$, which gives $y-4=7\left( x-1 \right)$. Therefore, the tangent line to the graph of $f\left( x \right)=3{{x}^{2}}+x$ at $\left( 1,4 \right)$ is $y-4=7\left( x-1 \right)$.
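The limit above can be sanity-checked numerically; this is a quick sketch (the function names and step sizes are our own):

```python
def f(x):
    # The function from the exercise: f(x) = 3x^2 + x
    return 3 * x**2 + x

def difference_quotient(h):
    # (f(1+h) - f(1)) / h; algebraically this equals 7 + 3h,
    # so it approaches the slope 7 as h -> 0.
    return (f(1 + h) - f(1)) / h

for h in [0.1, 0.01, 0.001]:
    print(h, difference_quotient(h))

def tangent(x):
    # Point-slope form of the tangent line at (1, 4): y - 4 = 7(x - 1)
    return 4 + 7 * (x - 1)
```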
# Acceleration Formula: Definition, Speed, Solved Examples

You must have heard the term acceleration in your daily life. In everyday usage, an object is said to accelerate when it increases its speed. In other words, if you are traveling in a car moving at a speed of 60 km/h and after 1 min the car's speed is 65 km/h, the car is accelerating. Now the question arises: how can you say that an object is accelerating? Which quantities are taken into consideration in the calculation of acceleration? We will dive deeper below and learn the acceleration formula.

## Acceleration Formula | Definition

In everyday speech, acceleration refers to speeding up. From the physics perspective, however, it means something more general: the rate at which the velocity of an object changes. It does not matter whether the object is speeding up or slowing down; what matters is the change. The acceleration is positive when the object is speeding up and negative when it is slowing down. According to Newton's second law, a net unbalanced force acting on the object causes it to accelerate. Acceleration is a vector quantity because it is the time rate of change of velocity, which is itself a vector.

### Formula for Acceleration

There are two formulas for acceleration. The first comes from Newton's second law, which relates force, mass, and acceleration in one equation:

$F = ma$

Over here: $F$ refers to the force, $m$ is the mass, and $a$ is the acceleration.

Further, we have another formula, for the rate of change of velocity over a period of time:

$a = \frac{\Delta v}{\Delta t} = \frac{v_f - v_i}{t}$

where $v_i$ is the initial velocity, $v_f$ is the final velocity, and $t$ is the elapsed time.

### Solved Examples on Acceleration Formula

Question: A woman is traveling in her sports car at a constant velocity v = 5.00 m/s. When she steps on the gas, the car accelerates forward. After 10.0 seconds, she stops accelerating and continues at a constant velocity v = 25.0 m/s. Calculate the acceleration of the car.

Answer: In the forward direction, the initial velocity is $v_i$ = 5.00 m/s, the final velocity is $v_f$ = 25.0 m/s, and the elapsed time is $t$ = 10.0 s. Therefore,

$a = \frac{25.0\ \text{m/s} - 5.00\ \text{m/s}}{10.0\ \text{s}} = 2.00\ \text{m/s}^2$

so the acceleration of the car is 2.00 m/s² forward.

Question: A man takes a rock and drops it off a cliff. It falls for 15.0 s before it hits the ground. The acceleration due to gravity is g = 9.80 m/s². Calculate the velocity of the rock the moment before it hits the ground.

Answer: The man released the rock from rest, so the initial velocity is $v_i$ = 0.00 m/s. The time for the change to take place is 15.0 s, and the acceleration is 9.80 m/s². To find the final velocity, we rearrange the equation:

$v_f = v_i + at = 0.00\ \text{m/s} + (9.80\ \text{m/s}^2)(15.0\ \text{s}) = 147\ \text{m/s}$

As the rock is falling, the direction of the velocity is down.
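Both worked examples use the same relation, $a = (v_f - v_i)/t$; a minimal sketch (the function names are our own):

```python
def acceleration(v_i, v_f, t):
    # Average acceleration: change in velocity divided by elapsed time
    return (v_f - v_i) / t

def final_velocity(v_i, a, t):
    # Rearranged form v_f = v_i + a*t, as used for the falling rock
    return v_i + a * t

# Car: 5.00 m/s -> 25.0 m/s over 10.0 s  (expected: 2.00 m/s^2 forward)
print(acceleration(5.00, 25.0, 10.0))

# Rock: dropped from rest, g = 9.80 m/s^2, falling for 15.0 s
print(final_velocity(0.00, 9.80, 15.0))
```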
## Calculate equilibrium price and quantity

The equilibrium price and quantity is the point where the supply and demand curves intersect. Consider an economy with the following demand and supply equations: $Q_D = 100 - P$, where $Q_D$ represents the quantity demanded and $P$ is the price, and $Q_S = P$, where $Q_S$ is the quantity supplied. These two equations are illustrated in the diagram below. The equilibrium price is determined by finding the point where supply and demand take the same value, i.e., $Q_D = Q_S$. Therefore, we set the equations for the supply and demand curves equal to each other, such that: $100 - P = P$, so $100 = 2P$ and $P = 50$. We can solve for the equilibrium quantity by substituting the price back into either the supply or demand equation, as supply equals demand in equilibrium. This implies that $Q = 50.$

## Point elasticity of demand

2) Calculate the point elasticity of demand. To do this we use the following formula: $E_D = \frac{\Delta Q}{\Delta P}\cdot\frac{P}{Q}$. The first factor, $\frac{\Delta Q}{\Delta P}$, is just the slope of the demand function, which here is $\frac{\Delta Q}{\Delta P} = -1$. We then use the equilibrium values of price and quantity for $P$ and $Q$. Thus our point estimate is as follows: $E_D = -1\cdot\frac{50}{50} = -1$. The point elasticity of demand at the equilibrium quantity of 50 units and equilibrium price of \$50 is $-1$, which is unit elasticity.
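The whole calculation can be reproduced in a few lines (a sketch; the variable names are our own):

```python
# Demand: Q_D = 100 - P;  Supply: Q_S = P
# Equilibrium: 100 - P = P  =>  2P = 100
P = 100 / 2          # equilibrium price
Q = 100 - P          # equilibrium quantity (same as Q_S = P = 50)

# Point elasticity: E_D = (dQ/dP) * (P / Q); the demand slope dQ/dP is -1
slope = -1
E_D = slope * P / Q
print(P, Q, E_D)     # unit elastic at the equilibrium
```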
# gurobi()

gurobi ( model, params )

The two arguments are MATLAB struct variables, each consisting of multiple fields. The first argument contains the optimization model to be solved. The second contains an optional set of Gurobi parameters to be modified during the solution process. The return value of this function is a struct, also consisting of multiple fields. It contains the result of performing the optimization on the specified model. We'll now discuss the details of each of these data structures.

### The optimization model

As we've mentioned, the model argument to the gurobi function is a struct variable, containing multiple fields that represent the various parts of the optimization model. Several of these fields are optional. Note that you refer to a field of a MATLAB struct variable by adding a period to the end of the variable name, followed by the name of the field. For example, model.A refers to field A of variable model. The following is an enumeration of all of the fields of the model argument that Gurobi will take into account when optimizing the model:

- A: The linear constraint matrix. This must be a sparse matrix.
- obj: The linear objective vector (c in the problem statement). You must specify one value for each column of A. This must be a dense vector.
- sense: The senses of the linear constraints. Allowed values are '=', '<', or '>'. You must specify one value for each row of A, or a single value to specify that all constraints have the same sense. This must be a char array.
- rhs: The right-hand side vector for the linear constraints (b in the problem statement). You must specify one value for each row of A. This must be a dense vector.
- lb (optional): The lower bound vector. When present, you must specify one value for each column of A. This must be a dense vector. When absent, each variable has a lower bound of 0.
- ub (optional): The upper bound vector. When present, you must specify one value for each column of A. This must be a dense vector. When absent, the variables have infinite upper bounds.
- vtype (optional): The variable types. This char array is used to capture variable integrality constraints. Allowed values are 'C' (continuous), 'B' (binary), 'I' (integer), 'S' (semi-continuous), or 'N' (semi-integer). Binary variables must be either 0 or 1. Integer variables can take any integer value between the specified lower and upper bounds. Semi-continuous variables can take any value between the specified lower and upper bounds, or a value of zero. Semi-integer variables can take any integer value between the specified lower and upper bounds, or a value of zero. When present, you must specify one value for each column of A, or a single value to specify that all variables have the same type. When absent, each variable is treated as being continuous. Refer to this section for more information on variable types.
- modelsense (optional): The optimization sense. Allowed values are 'min' (minimize) or 'max' (maximize). When absent, the default optimization sense is minimization.
- modelname (optional): The name of the model. The name appears in the Gurobi log, and when writing a model to a file.
- objcon (optional): The constant offset in the objective function (alpha in the problem statement).
- vbasis (optional): The variable basis status vector. Used to provide an advanced starting point for the simplex algorithm. You would generally never concern yourself with the contents of this array, but would instead simply pass it from the result of a previous optimization run to the input of a subsequent run. When present, you must specify one value for each column of A. This must be a dense vector.
- cbasis (optional): The constraint basis status vector. Used to provide an advanced starting point for the simplex algorithm. Consult the vbasis description for details. When present, you must specify one value for each row of A. This must be a dense vector.
- Q (optional): The quadratic objective matrix. When present, Q must be a square matrix whose row and column counts are equal to the number of columns in A. Q must be a sparse matrix.
- cones (optional): Second-order cone constraints. A struct array. Each element in the array defines a single cone constraint: x(k)^2 >= sum(x(idx).^2), x(k) >= 0. The constraint is defined via model.cones.index = [k idx], with the first entry in index corresponding to the index of the variable on the left-hand side of the constraint, and the remaining entries corresponding to the indices of the variables on the right-hand side of the constraint. model.cones.index must be a dense vector.
- quadcon (optional): The quadratic constraints. A struct array. When present, each element in the array defines a single quadratic constraint: x'*Qc*x + q'*x <= beta. The Qc matrix must be a square matrix whose row and column counts are equal to the number of columns of A. Qc must be a sparse matrix. It is stored in model.quadcon.Qc. The q vector defines the linear terms in the constraint. You must specify a value for q for each column of A. This must be a dense vector. It is stored in model.quadcon.q. The scalar beta defines the right-hand side of the constraint. It is stored in model.quadcon.rhs.
- sos (optional): The Special Ordered Set (SOS) constraints. A struct array. When present, each element in the array defines a single SOS constraint. An SOS constraint can be of type 1 or 2. This is specified via model.sos.type. A type 1 SOS constraint is a set of variables for which at most one variable in the set may take a value other than zero. A type 2 SOS constraint is an ordered set of variables where at most two variables in the set may take non-zero values. If two take non-zero values, they must be contiguous in the ordered set. The members of an SOS constraint are specified by placing their indices in model.sos.index. Optional weights associated with SOS members may be defined in model.sos.weight. Please refer to this section for details on SOS constraints.
- pwlobj (optional): The piecewise-linear objective functions. A struct array. When present, each element in the array defines a piecewise-linear objective function of a single variable. The index of the variable whose objective function is being defined is stored in model.pwlobj.var. The x values for the points that define the piecewise-linear function are stored in model.pwlobj.x; they must be in non-decreasing order. The y values (objective values) for those points are stored in model.pwlobj.y.
- start (optional): The MIP start vector. The MIP solver will attempt to build an initial solution from this vector. When present, you must specify a start value for each variable. This must be a dense vector. Note that you can leave the start value for a variable undefined; the MIP solver will then attempt to fill in a value for it. This may be done by setting the start value for that variable to nan.
- varnames (optional): The variable names. A cell array of strings. When present, each element of the array defines the name of a variable. You must specify a name for each column of A.
- constrnames (optional): The constraint names. A cell array of strings. When present, each element of the array defines the name of a constraint. You must specify a name for each row of A.

If any of the mandatory fields listed above are missing, the gurobi function will return an error. Below is an example that demonstrates the construction of a simple optimization model:

model.A = sparse([1 2 3; 1 1 0]);
model.obj = [1 1 2];
model.modelsense = 'max';
model.rhs = [4; 1];
model.sense = '<>';

### Parameters

The optional params argument to the gurobi function is also a struct, potentially containing multiple fields. The name of each field must be the name of a Gurobi parameter, and the associated value should be the desired value of that parameter. Gurobi parameters allow users to modify the default behavior of the Gurobi optimization algorithms. You can find a complete list of the available Gurobi parameters here. To create a struct that would set the Gurobi Method parameter to 2, you would do the following:

params.method = 2;

### The optimization result

The gurobi function returns a struct, with the various results of the optimization stored in its fields. The specific results that are available depend on the type of model that was solved, and the status of the optimization. The following is a list of fields that might be available in the returned result. We'll discuss the circumstances under which each will be available after presenting the list.

- status: The status of the optimization, returned as a string. The desired result is 'OPTIMAL', which indicates that an optimal solution to the model was found. Other statuses are possible, for example if the model has no feasible solution or if you set a Gurobi parameter that leads to early solver termination. See the Status Code section for further information on the Gurobi status codes.
- objval: The objective value of the computed solution.
- runtime: The elapsed wall-clock time (in seconds) for the optimization.
- x: The computed solution. This array contains one entry for each column of A.
- slack: The constraint slack for the computed solution. This array contains one entry for each row of A.
- qcslack: The quadratic constraint slack in the current solution. This array contains one entry for each second-order cone constraint and one entry for each quadratic constraint. The slacks for the second-order cone constraints appear before the slacks for the quadratic constraints.
- pi: Dual values for the computed solution (also known as shadow prices). This array contains one entry for each row of A.
- qcpi: The dual values associated with the quadratic constraints. This array contains one entry for each second-order cone constraint and one entry for each quadratic constraint. The dual values for the second-order cone constraints appear before the dual values for the quadratic constraints.
- rc: Variable reduced costs for the computed solution. This array contains one entry for each column of A.
- vbasis: Variable basis status values for the computed optimal basis. You generally should not concern yourself with the contents of this array. If you wish to use an advanced start later, you would simply copy the vbasis and cbasis arrays into the corresponding fields for the next model. This array contains one entry for each column of A.
- cbasis: Constraint basis status values for the computed optimal basis. This array contains one entry for each row of A.
- unbdray: Unbounded ray. Provides a vector that, when added to any feasible solution, yields a new solution that is also feasible but improves the objective.
- farkasdual: Farkas infeasibility proof. This is a dual unbounded vector. Adding this vector to any feasible solution of the dual model yields a new solution that is also feasible but improves the dual objective.
- farkasproof: Magnitude of infeasibility violation in the Farkas infeasibility proof. A Farkas infeasibility proof identifies a new constraint, obtained by taking a linear combination of the constraints in the model, that can never be satisfied (the linear combination is available in the farkasdual field). This attribute indicates the magnitude of the violation of this aggregated constraint.
- objbound: Best available bound on the solution (lower bound for minimization, upper bound for maximization).
- itercount: Number of simplex iterations performed.
- baritercount: Number of barrier iterations performed.
- nodecount: Number of branch-and-cut nodes explored.

The status field will be present in all cases. It indicates whether Gurobi was able to find a proven optimal solution to the model. In cases where a solution to the model was found, optimal or otherwise, the objval and x fields will be present. For linear and quadratic programs, if a solution is available, then the pi and rc fields will also be present. For models with quadratic constraints, if the parameter qcpdual is set to 1, the field qcpi will be present. If the final solution is a basic solution (computed by simplex), then vbasis and cbasis will be present. If the model is an unbounded linear program and the infunbdinfo parameter is set to 1, the field unbdray will be present. Finally, if the model is an infeasible linear program and the infunbdinfo parameter is set to 1, the fields farkasdual and farkasproof will be set.

The following is an example of how the results of the gurobi call might be extracted and output:

result = gurobi(model, params);
if strcmp(result.status, 'OPTIMAL')
    fprintf('Optimal objective: %e\n', result.objval);
    disp(result.x)
else
    fprintf('Optimization returned status: %s\n', result.status);
end

Please consult this section for a discussion of some of the practical issues associated with solving a precisely defined mathematical model using finite-precision floating-point arithmetic.
# [Solution] Fall Down solution codeforces

Fall Down solution codeforces – There is a grid with n rows and m columns, and three types of cells:

• An empty cell, denoted with '.'.
• A stone, denoted with '*'.
• An obstacle, denoted with the lowercase Latin letter 'o'.

All stones fall down until they meet the floor (the bottom row), an obstacle, or another stone which is already immovable. (In other words, all the stones just fall down as long as they can fall.) Simulate the process. What does the resulting grid look like?

Input

The input consists of multiple test cases. The first line contains an integer t (1 ≤ t ≤ 100) — the number of test cases. The description of the test cases follows. The first line of each test case contains two integers n and m (1 ≤ n, m ≤ 50) — the number of rows and the number of columns in the grid, respectively. Then n lines follow, each containing m characters. Each of these characters is either '.', '*', or 'o' — an empty cell, a stone, or an obstacle, respectively.

Output

For each test case, output a grid with n rows and m columns, showing the result of the process. You don't need to output a new line after each test case; it is in the samples just for clarity.

Example

input

3
6 10
.*.*....*.
.*.......*
...o....o.
.*.*....*.
..........
.o......o*
2 9
...***ooo
.*o.*o.*o
5 5
*****
*....
*****
....*
*****

output

..........
...*....*.
.*.o....o.
.*........
.*......**
.o.*....o*
....**ooo
.*o**o.*o
.....
*...*
*****
*****
*****
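The post never shows the actual solution, so here is a straightforward simulation sketch in Python (the function name is our own): scan each column from the bottom row upward while tracking the lowest free cell; an obstacle resets that tracker to the cell above it, and each stone encountered is dropped to the current free cell.

```python
def fall_down(grid):
    """Drop every stone '*' as far as it can fall within its column."""
    g = [list(row) for row in grid]
    n, m = len(g), len(g[0])
    for c in range(m):
        free = n - 1                    # lowest empty row in this column
        for r in range(n - 1, -1, -1):  # scan from the floor upward
            if g[r][c] == 'o':          # obstacle: stones pile on top of it
                free = r - 1
            elif g[r][c] == '*':        # stone: drop it to the free cell
                g[r][c] = '.'
                g[free][c] = '*'
                free -= 1
    return [''.join(row) for row in g]

# Second sample test case from the statement:
for row in fall_down(["...***ooo", ".*o.*o.*o"]):
    print(row)
```

Each cell is visited once, so the runtime is O(n·m) per test case, comfortably within the limits.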
# Maths-Ext 1/2 Marks Lost and Hardest Topics (1 Viewer) #### SplashJuice ##### Active Member Yall reckon Cambridge is the GOAT for maths ext 1 n 2? because our school uses Fitzpatrick and eh idk about that tbh Yes Cambridge is the GOAT. #### vernburn ##### Active Member As for less structure, a question that was debated in my 4u class (and which we found two solutions for) on integration: $\bg_white \int{\frac{x^6}{(1+x)^8}}\ dx$ See if you can find a way to find this indefinite integral. I can make this into a reasonable MX1 question with sufficient structure, but without structure it is definitely a challenge even at MX2 level. I believe that this involves a very nice (and very sneaky!) trick, namely dividing the numerator and denominator by $\bg_white x^8$. I also found a less elegant method which involves the substitution $\bg_white x=\cos2\theta$. #### quickoats ##### Well-Known Member It is certainly true that questions can be made more difficult by giving less structure, but it is also true that structure can be less helpful. On the latter, there are questions where the structure is meant to support you taking approach A when the approach that some students / people / teachers will be inclined towards instinctively is approach B. There was an MX2 question that I saw last year that I solved parts (a) and (b) and then found a proof / solution to part (d) which I used to get the result in part (c) because I didn't see the connection to go from (b) directly to (c) without establishing (d). It's better to get a solution than not (obviously) so the structure of the question doesn't necessarily have to be followed unless there are words like "hence" in the question... and on this, more able students need to recognise the unstated implication of the phrase "hence or otherwise" as it has two distinctly different potential meanings. 
As for less structure, a question that was debated in my 4u class (and which we found two solutions for) on integration: $\bg_white \int{\frac{x^6}{(1+x)^8}}\ dx$ See if you can find a way to find this indefinite integral. I can make this into a reasonable MX1 question with sufficient structure, but without structure it is definitely a challenge even at MX2 level. Typical bulldozer approach would be let u = 1+x then do a binomial expansion of (1 - u)^6 on the top. Not the most efficient but gets the job done. #### CM_Tutor ##### Moderator Moderator I believe that this involves a very nice (and very sneaky!) trick, namely dividing the numerator and denominator by $\bg_white x^8$. I also found a less elegant method which involves the substitution $\bg_white x=\cos2\theta$. Yes, the substitution $\bg_white u=1+\frac{1}{x}$ will get the result that $\bg_white \int{\frac{x^6}{(1+x)^8}}\ dx = \frac{x^7}{7(1+x)^7} + C$ after demonstrating that $\bg_white \frac{x^6}{(1+x)^8} = \frac{1}{x^2\left(1+\frac{1}{x}\right)^8}$ This would be one way to give the question structure: (a)(i) Show that $\bg_white \frac{x^6}{(1+x)^8} = \frac{1}{x^2\left(1+\frac{1}{x}\right)^8}$ (ii) Hence, find $\bg_white \int{\frac{x^6}{(1+x)^8}}\ dx$ by using the substitution $\bg_white u=1+\frac{1}{x}$ I do know of a trig substitution that will work but it is not $\bg_white x=\cos{2\theta}$, though that might work too. It's nice to see another approach. #### CM_Tutor ##### Moderator Moderator Typical bulldozer approach would be let u = 1+x then do a binomial expansion of (1 - u)^6 on the top. Not the most efficient but gets the job done. Yes, though it is messy. Actually, it could be the basis for a binomial question... 1. Consider the integral $\bg_white I=\int{\frac{x^6}{(1+x)^8}}\ dx$ (a) Using the substitution $\bg_white x=u-1$, find an expression for $\bg_white I$ as a series of terms in $\bg_white (x+1)$. 
(b) Using a trig substitution (which would be given), show (for some constant $\bg_white C$) that $\bg_white I=\frac{x^7}{7(1+x)^7}+C$ (c) Prove that the results from (a) and (b) are the same, or explain why one of the results is false. #### vernburn ##### Active Member I do know of a trig substitution that will work but it is not $\bg_white x=\cos{2\theta}$, though that might work too. It's nice to see another approach. Here is how $\bg_white x=\cos2\theta$ sub works out: \bg_white \begin{align*}\int\frac{x^6}{(1+x)^8}dx&=-2\int\frac{\cos^62\theta}{(1+\cos2\theta)^8}\sin2\theta d\theta\\ &=-2\int\frac{\cos^62\theta}{(2\cos^2\theta)^8}2\sin\theta\cos\theta d\theta\\ &=-\frac{1}{64}\int\frac{(2\cos^2\theta-1)^6}{\cos^{15}\theta}\sin\theta d\theta\\ &=-\frac{1}{64}\int\frac{64\cos^{12}\theta-192\cos^{10}\theta+240\cos^8\theta-160\cos^6\theta+60\cos^4\theta-12\cos^2\theta+1}{\cos^{15}\theta}\sin\theta d\theta\\ &=-\frac{1}{64}\int\left(\frac{64}{\cos^3\theta}-\frac{192}{\cos^5\theta}+\frac{240}{\cos^7\theta}-\frac{160}{\cos^9\theta}+\frac{60}{\cos^{11}\theta}-\frac{12}{\cos^{13}\theta}+\frac{1}{\cos^{15}\theta}\right)\sin\theta d\theta\end{align*} Now I'm sure you can see how this works out from here - far from elegant but it still works! 
As for another trig sub, maybe $\bg_white x=\tan^2\theta$: \bg_white \begin{align*}\int\frac{x^6}{(1+x)^8}dx&=\int\frac{\tan^{12}\theta}{(1+\tan^2\theta)^8}2\tan\theta\sec^2\theta d\theta\\ &=2\int\frac{\tan^{13}\theta}{\sec^{14}\theta}d\theta\\ &=2\int\sin^{13}\theta\cos\theta d\theta\\ &=\frac17\sin^{14}\theta+C\\ &=\frac{x^7}{7(1+x)^7}+C\end{align*} #### idkkdi ##### Well-Known Member Here is how $\bg_white x=\cos2\theta$ sub works out: \bg_white \begin{align*}\int\frac{x^6}{(1+x)^8}dx&=-2\int\frac{\cos^62\theta}{(1+\cos2\theta)^8}\sin2\theta d\theta\\ &=-2\int\frac{\cos^62\theta}{(2\cos^2\theta)^8}2\sin\theta\cos\theta d\theta\\ &=-\frac{1}{64}\int\frac{(2\cos^2\theta-1)^6}{\cos^{15}\theta}\sin\theta d\theta\\ &=-\frac{1}{64}\int\frac{64\cos^{12}\theta-192\cos^{10}\theta+240\cos^8\theta-160\cos^6\theta+60\cos^4\theta-12\cos^2\theta+1}{\cos^{15}\theta}\sin\theta d\theta\\ &=-\frac{1}{64}\int\left(\frac{64}{\cos^3\theta}-\frac{192}{\cos^5\theta}+\frac{240}{\cos^7\theta}-\frac{160}{\cos^9\theta}+\frac{60}{\cos^{11}\theta}-\frac{12}{\cos^{13}\theta}+\frac{1}{\cos^{15}\theta}\right)\sin\theta d\theta\end{align*} Now I'm sure you can see how this works out from here - far from elegant but it still works! As for another trig sub, maybe $\bg_white x=\tan^2\theta$: \bg_white \begin{align*}\int\frac{x^6}{(1+x)^8}dx&=\int\frac{\tan^{12}\theta}{(1+\tan^2\theta)^8}2\tan\theta\sec^2\theta d\theta\\ &=2\int\frac{\tan^{13}\theta}{\sec^{14}\theta}d\theta\\ &=2\int\sin^{13}\theta\cos\theta d\theta\\ &=\frac17\sin^{14}\theta+C\\ &=\frac{x^7}{7(1+x)^7}+C\end{align*} anyone bothered to finish off the sectan integrating in the cos2theta substitution lol.? btw, u would factor out sec tan , and yeet that out with a sec theta = u substitution. any smarter way than doing that algebra in cos2theta solution? Last edited: #### Jojofelyx ##### Active Member lmao it looks like yall finished the course already damn. 
leads me to a question, do most band 6ers in ext 1 and 2 have the course finished by term 2 of year 12 or wot? #### quickoats ##### Well-Known Member lmao it looks like yall finished the course already damn. leads me to a question, do most band 6ers in ext 1 and 2 have the course finished by term 2 of year 12 or wot? nope lol #### quickoats ##### Well-Known Member no gods in this thread pls Seriously pace yourself and try to understand the concepts fully. This is a much better approach than rushing through content. #### Jojofelyx ##### Active Member Seriously pace yourself and try to understand the concepts fully. This is a much better approach than rushing through content. idkkid seems to have it down pat tho and it seems to be term 2 of his yr 12 journey? #### shashysha ##### Well-Known Member lmao it looks like yall finished the course already damn. leads me to a question, do most band 6ers in ext 1 and 2 have the course finished by term 2 of year 12 or wot? I finished the whole course like middle-end of the last term (few weeks after trials) so pretty late actually. School just did not cover mechanics until after trials, and same for 3U projectile. Although they did throw some 2U questions in our 3U exam and the first CDF question I did ever was in that trial lol Moderator #### Drdusk ##### π Moderator Is the 100 atar orange too annoying for you? I never said it was me.... #### idkkdi ##### Well-Known Member idkkid seems to have it down pat tho and it seems to be term 2 of his yr 12 journey? bruh it's a 300 page book. you could probably finish the entire book in 2 weeks if you're good at maths and skip over the trivial questions lol. I have a feeling that @Qeru was done with 4u before 4u even started lmao #### idkkdi ##### Well-Known Member Seriously pace yourself and try to understand the concepts fully. This is a much better approach than rushing through content. not much to understand tbh. cambridge book is crisp. 
also new syllabus has quite a bit less content than old it seems. After that it's a bloody long ass algebra grind lol. #### idkkdi ##### Well-Known Member lmao it looks like yall finished the course already damn. leads me to a question, do most band 6ers in ext 1 and 2 have the course finished by term 2 of year 12 or wot? no way. band 6 in maths ext 1 and ext 2 doesn't mean anything, it's way too common. like half the students in ext 2. As for a super early finisher like @Qeru, we're looking at maybe 1 in 30 odds. #### Jojofelyx ##### Active Member I finished the whole course like middle-end of the last term (few weeks after trials) so pretty late actually. School just did not cover mechanics until after trials and same for 3U projectile. Although they did throw some 2U questions in our 3U exam and the first CDF question I did ever was in that trial lol Dont wanna seem intrusive, but did you manage a mid band 6? #### Jojofelyx ##### Active Member bruh it's a 300 page book. you could probably finish the entire book in 2 weeks if you're good at maths and skip over the trivial questions lol. I have a feeling that @Qeru was done with 4u before 4u even started lmao my Cambridge ext 1 is like 750 lmao :\\ no way. band 6 in maths ext 1 and ext 2 doesn't mean anything, it's way too common. like half the students in ext 2. As for a super early finisher like @Qeru, we're looking at maybe 1 in 30 odds. But isn't getting a band 6 based off the difficulty? I read something on SMH that was about extension subjects and having a higher % of kids who get band 6, mainly cuz its something they are good at and willing to invest time into, that's why music extension got like 50% or something band 6s? idk, could be wrong, feel free to correct me tho. not much to understand tbh. cambridge book is crisp. also new syllabus has quite a bit less content than old it seems. After that it's a bloody long ass algebra grind lol. and wdym algebra grind? copious amounts of working out or wot?
# python check if matrix is symmetric

A matrix is horizontal symmetric if the first row is the same as the last row, the second row is the same as the second-last row, and so on. In Python, when an assignment such as sm[1, 1] = value is executed, the interpreter calls the __setitem__() magic method; later, this matrix needs to be shared between several processes. A Skew Symmetric Matrix (or Anti-Symmetric Matrix) is a square matrix whose transpose is the negative of the original matrix. A list is symmetric if the first row is the same as the first column, the second row is the same as the second column, and so on. Methods to test positive definiteness: remember that the term positive definiteness is valid only for symmetric matrices. The transpose of a matrix flips the matrix over its diagonal, swapping the row and column indices of its entries. Use the "inv" method of numpy's linalg module to calculate the inverse of a matrix. Later on, the implementation of this method will be shown.
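The two checks just described (row reversal for horizontal symmetry, transpose equality for ordinary symmetry) can be sketched in a few lines of plain Python; the function names below are my own, not from the original article:

```python
def is_horizontal_symmetric(mat):
    """First row equals last row, second equals second-last, and so on."""
    return mat == mat[::-1]

def is_symmetric(mat):
    """A square matrix is symmetric when it equals its own transpose."""
    transpose = [list(row) for row in zip(*mat)]  # flip rows and columns
    return mat == transpose

print(is_horizontal_symmetric([[1, 2], [3, 4], [1, 2]]))  # True
print(is_symmetric([[1, 5], [5, 2]]))                     # True
print(is_symmetric([[1, 5], [6, 2]]))                     # False
```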
The entries of a symmetric matrix are symmetric with respect to the main diagonal. If we sum the number of elements that need to be saved over all rows, we get the following result: $$1 + 2 + \cdots + N = (1 + N) \cdot \frac{N}{2}$$. The following source code shows how to create a $$4 \times 4$$ symmetric matrix; to make this code runnable, the SymmetricMatrix class has to be implemented.
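The row-by-row count behind that formula can be verified directly (the helper name is mine):

```python
def storage_size(n):
    """Number of elements on and under the diagonal of an n x n matrix."""
    return (1 + n) * n // 2

# row i (0-based) stores i + 1 elements, so the explicit sum must agree
for n in range(1, 20):
    assert storage_size(n) == sum(i + 1 for i in range(n))

print(storage_size(4))  # 10 stored elements instead of 16
```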
A Python 3 program to check whether a given matrix is symmetric can simply check whether the original matrix is the same as its transpose. For now, only one special method has to be written, particularly the __init__() method, which takes a single parameter called size. Similarly as in the previous case, to get the desired element from the matrix, the position has to be converted to a proper index into the underlying storage. In characteristic different from 2, each diagonal element of a skew-symmetric matrix must be zero, since each is its own negative. To save space, only the elements under and on the diagonal need to be saved. The cumtime column informs us about the cumulative time spent in this function and all sub-functions during all calls. The first case is simple: "if A equals its transpose". Firstly, memory usage is compared. In linear algebra, a matrix M[][] is said to be symmetric if and only if the transpose of the matrix is equal to the matrix itself. The Python symmetric_difference() method returns the symmetric difference of two sets. When creating a symmetric matrix, array.array() is used as the underlying storage. We also need to write to and read from the matrix. Algorithm, step 1: create the transpose of the given matrix. The source code of this method can be broken down into two steps that have to be executed in the provided order: if the given position, (row, column), is above the diagonal, then row is swapped with column, since every element above the diagonal has its counterpart exactly at the (column, row) position; then the index into the underlying storage is calculated as shown in the following source code. Now, we have a working implementation of a symmetric matrix.
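Putting the pieces described here together — a size-validating __init__(), the row/column swap, and the triangular index — a minimal SymmetricMatrix could look like the sketch below. This is my reconstruction under the stated design, not the article's exact code, and the create_storage callable signature is an assumption:

```python
class SymmetricMatrix:
    """Store only the lower triangle (elements on and under the diagonal)."""

    def __init__(self, size, create_storage=None):
        if size <= 0:
            raise ValueError("size has to be positive")
        self._size = size
        # default storage is a plain list; a caller may supply e.g. an
        # array.array factory via create_storage (assumed signature: n -> storage)
        make = create_storage or (lambda n: [0] * n)
        self._data = make((1 + size) * size // 2)

    def _get_index(self, position):
        row, column = position
        if column > row:
            # an element above the diagonal equals its mirror below it
            row, column = column, row
        # skip the rows before `row`, then move `column` steps in
        return row * (row + 1) // 2 + column

    def __setitem__(self, position, value):
        self._data[self._get_index(position)] = value

    def __getitem__(self, position):
        return self._data[self._get_index(position)]

sm = SymmetricMatrix(4)
sm[1, 2] = 7
print(sm[2, 1])  # 7, thanks to the mirrored index
```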
If A is a symmetric matrix, then A = A^T, and if A is a skew-symmetric matrix, then A^T = -A. A matrix is called symmetric if $$a_{ij}$$ is equal to $$a_{ji}$$. I think this is a relatively good solution, and it didn't take long. Since real matrices are unaffected by complex conjugation, a real matrix that is symmetric is also Hermitian. Write a procedure, symmetric, which takes a list as input, and returns the boolean True if the list is symmetric and False if it is not. Note: the symmetry of a matrix can only be determined when it is a square matrix. Firstly, one parameter, namely create_storage, is added with default value set to None. This article is attributed to GeeksforGeeks.org. From the following table, we can see that the average access time for the implemented symmetric matrix is much worse than the average access time for the numpy matrix; the reasons behind the slow access time for the symmetric matrix can be revealed by the cProfile module. A symmetric matrix and a skew-symmetric matrix are both square matrices. This program allows the user to enter the number of rows and columns of a matrix.
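A straightforward version of that symmetric procedure, checking squareness first, could read as follows (my sketch):

```python
def symmetric(square):
    """Return True if the list of lists equals its own transpose."""
    if any(len(row) != len(square) for row in square):
        return False  # not square, so symmetry is undefined
    n = len(square)
    return all(square[i][j] == square[j][i]
               for i in range(n) for j in range(n))

print(symmetric([["cat", "dog"], ["dog", "cat"]]))  # True
print(symmetric([[1, 2, 3], [4, 5, 6]]))            # False (not square)
```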
As can be seen from the output, the time is spent mostly in __setitem__() and _get_index(). Is there a better pythonic way of checking whether a ndarray is diagonally symmetric in a particular dimension? If, on the other hand, you have a symmetric matrix and want to represent it as a sum B = A + A^T, the trivial solution is just A = (1/2)B, forcing A to be symmetric. I need to make a matrix (in the form of a numpy array) by taking a list of parameters of length N and returning an array of dimensions (N+1) x (N+1) where the off-diagonals are symmetric and each triangle is made up of the given values. The difference between them is that a symmetric matrix is equal to its transpose, whereas a skew-symmetric matrix is a matrix whose transpose is equal to its negative: writing A = (a_ij), the skew-symmetric condition becomes a_ij = -a_ji. For the third row, the situation is a little more complicated, because the elements from all the previous rows have to be summed to calculate the correct index into the underlying storage. Any matrix that is equal to its own transpose is a symmetric matrix. If you are familiar with the Python implementation of list, you may know that a list does not contain the elements you insert into it directly, only references to them. Before running the script with the cProfile module, only the relevant parts were present.
If any of the conditions is not satisfied, set the flag to False and break out of the loop. Since only elements under and on the diagonal are stored, and the whole matrix is saved in a one-dimensional data storage, a correct index into this storage needs to be calculated. Both matrices have the same order. Auxiliary space: O(N x N). The Cholesky decomposition is an efficient and reliable way to check whether a symmetric matrix is positive definite. To create the numpy matrix, numpy.zeros() is called. Some of the scipy.linalg routines do accept flags (like sym_pos=True on linalg.solve) which get passed on to BLAS routines, although more support for this in numpy would be nice, in particular wrappers for routines like DSYRK (symmetric rank-k update), which would allow a Gram matrix to be computed a fair bit quicker than dot(M.T, M). Since we want the usage of the matrix to be as comfortable and natural as possible, the subscript operator [] will be used when accessing the matrix. Firstly, let us focus on writing to the matrix. Condition for symmetric: a relation R is symmetric if aRb implies bRa; for example, if l is perpendicular to m, then m is perpendicular to l, so both lRm and mRl are true. In computer science, symmetric matrices can be utilized to store distances between objects or to represent adjacency matrices of undirected graphs. If no special demands are present, then list can be used as the default storage type. The matrix diagonal can be seen as a mirror: every element above it is reflected to an element under it. Any square matrix that is equal to its transposed matrix is called a symmetric matrix. __init__() firstly checks whether the provided size is valid. The first column, named ncalls, represents how many times the function from filename:lineno(function) was called.
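To illustrate the Cholesky-based test mentioned above, here is a small dependency-free sketch; with numpy one would instead wrap np.linalg.cholesky(a) in a try/except for LinAlgError. The function name and the tiny hand-rolled factorization are mine:

```python
def is_positive_definite(a):
    """Attempt a Cholesky factorization; failure means not positive definite."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                d = a[i][i] - s
                if d <= 0:
                    return False  # a pivot failed: not positive definite
                L[i][j] = d ** 0.5
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]
    return True

print(is_positive_definite([[2.0, 1.0], [1.0, 2.0]]))  # True
print(is_positive_definite([[1.0, 2.0], [2.0, 1.0]]))  # False
```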
For two matrices to be identical, the number of rows and columns in both must be equal, and the corresponding elements must also be equal. I have listed down a few simple methods to test the positive definiteness of a matrix. A symmetric matrix can be obtained by changing rows to columns and columns to rows. Written By - Garvit. Of course, there are other data structures that are more memory efficient than list. The result of this experiment can be seen in the table below. I want to check whether the matrix is symmetric or not by using nested loops and display a corresponding message. In fact, if you take any square matrix A (symmetric or not), adding it to its transpose (A + A^T) creates a symmetric matrix. Check if the array is a square matrix (n == m); accept the elements of the array using scanf (arr[][]); check if the array elements are symmetric (arr[i][j] == arr[j][i]); if the condition fails, change flag to 0 and jump out of the two loops; if the array is not square, change flag to 0; after the loops, check whether flag is 0 or 1 (flag == 1). A skew-symmetric matrix satisfies the equation A = -A^T; if the entry in the i-th row and j-th column is a_ij, then a_ij = -a_ji. If you need to check whether a set is a superset of another set, you can use issuperset() in Python. In order to perform a Cholesky decomposition of a matrix, the matrix has to be positive definite. This is a demo of a program to check whether a given square matrix is symmetric or not. A simple solution is to do the following. Below is an example of a symmetric matrix. The symmetric difference of two sets A and B is the set of elements that are in either A or B, but not in their intersection. The transpose of a matrix is achieved by exchanging the indices of rows and columns. Otherwise, the size of the matrix is stored and the data storage for the matrix, a list in this case, is initialized.
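The A + A^T observation is easy to demonstrate (the function name is mine):

```python
def symmetrize(a):
    """Any square matrix plus its transpose is symmetric: B = A + A^T."""
    n = len(a)
    return [[a[i][j] + a[j][i] for j in range(n)] for i in range(n)]

b = symmetrize([[1, 2], [3, 4]])
print(b)                                 # [[2, 5], [5, 8]]
print(b == [list(r) for r in zip(*b)])   # True: B equals its transpose
```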
So, the question is which one should be used. In this tutorial we first find the inverse of a matrix and then test the defining property of an identity matrix. While some BLAS routines do exploit symmetry to speed up computations on symmetric matrices, they still use the same memory structure as a full matrix, that is, n^2 space rather than n(n+1)/2. Of these two trees, the first one is symmetric, but the second one is not. A square matrix is said to be a symmetric matrix if the transpose of the matrix is the same as the given matrix: A = A'. Please write comments if you find anything incorrect, or if you want to share more information about the topic discussed above. An example of such a matrix is shown below. Be sure to learn about Python lists before proceeding with this article. There is no need to pass the number of columns, since symmetric matrices are square. For a three-dimensional array, the check has to hold for every slice arr[:, :, x]. The transpose of a matrix is achieved by exchanging the indices of rows and columns. We have to check whether the tree is a symmetric tree or not. As mentioned previously, symmetric matrices can be used to represent distance or adjacency matrices. This leads us to think whether the used list is the best data structure for the storage. Condition for transitive: R is said to be transitive if a related to b and b related to c implies a related to c. Else, if the negative of the matrix is equal to its transpose, a temporary variable 'y' is assigned 1. The index of an entry under the diagonal is obtained from the sum $$0 + 1 + 2 + 3 + \cdots + row$$ plus $$column$$; the two steps are to convert a position above the diagonal into a proper position below the diagonal and to calculate the correct index into the underlying storage.
Compute the term x^T A x; if it can be negative, then the matrix is not positive definite, and vice versa. For example, if the result comes out to be x1^2 + x2^2 + x3^2, it can never be negative, because all terms are squares, so in this case the matrix is positive definite. In this and subsequent sections, I show a particular usage first and then I show the implementation. The passed position is a pair of the form (row, column). In terms of matrix elements: M(i, j) = M(j, i). Following is Python code demonstrating how to check for a symmetric matrix; we basically need to compare mat[i][j] with mat[j][i]. If it is not, the ValueError exception is raised. Is there a better pythonic way of checking whether a ndarray is diagonally symmetric in a particular dimension? Algorithm: take the matrix input from the user. The complete source code of the implemented SymmetricMatrix class, alongside unit tests and the benchmark script, is available on GitHub. Every square diagonal matrix is symmetric, since all off-diagonal elements are zero. It would be nice to have the possibility to use a standard Python way of getting the matrix size, which is the len() function. The task is to check whether the matrix is horizontal symmetric, vertical symmetric, or both. This work is licensed under Creative Common Attribution-ShareAlike 4.0 International. Example: Python Matrix. Logic: to find whether the matrix is symmetric or not, we need to compare the original matrix with its transpose. This service is done by the _get_index() method, to which the last part of this section is devoted. A square matrix A is skew-symmetric if it is equal to the negation of its nonconjugate transpose, A = -A.'.
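A compact classifier that distinguishes symmetric from skew-symmetric matrices (and everything else) could be sketched as follows; the function name is mine:

```python
def classify(m):
    """Classify a square matrix as symmetric, skew-symmetric, or neither."""
    n = len(m)
    t = [[m[j][i] for j in range(n)] for i in range(n)]  # transpose
    if m == t:
        return "symmetric"
    if all(m[i][j] == -t[i][j] for i in range(n) for j in range(n)):
        return "skew-symmetric"
    return "neither"

print(classify([[1, 2], [2, 1]]))   # symmetric
print(classify([[0, 2], [-2, 0]]))  # skew-symmetric
print(classify([[1, 2], [3, 4]]))   # neither
```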
Therefore, for the first row only one element has to be stored, for the second row two elements, and so on. Kay Cee / September 20, 2013. This computation is performed five times and then the average result is calculated. Hence, the memory requirements are higher for list than, for example, for array.array, which stores the elements directly. To explain the computation of the number of elements, suppose that we have a $$N \times N$$ symmetric matrix. Therefore, a better solution when choosing the underlying data structure is to leave space for users to choose the type of the storage according to their requirements. Transpose is only defined for a square matrix. The matrix diagonal can be seen as a mirror. by suresh. This is a demo of a program to check whether a given square matrix is symmetric or not. If neither of the conditions is satisfied, the matrix is neither symmetric nor skew-symmetric. The overhead is due to the internal workings of Python and to computing indexes into the underlying storage. The number of stored elements is smaller than $$size^2$$. In this post, a Python implementation of such a matrix is described. Auxiliary space: O(1). Thus, this symmetric matrix implementation is suitable in circumstances where memory usage is a bigger problem than processor power. The Python set issubset() method returns True if all elements of a set are present in another set (passed as an argument). In the following part of this post, a Python implementation of a symmetric matrix is explained step by step along with its usage. Your task is to construct a Python function called symmetric which expects one argument: a two-dimensional table representing a square matrix. Python doesn't have a built-in type for matrices; however, we can treat a list of lists as a matrix.
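The set operations named in this article (issubset(), issuperset(), symmetric_difference()) behave like this:

```python
a = {1, 2, 3}
b = {1, 2, 3, 4, 5}
c = {3, 4}

print(a.issubset(b))              # True: every element of a is also in b
print(b.issuperset(a))            # True: the same check from b's side
print(a.symmetric_difference(c))  # {1, 2, 4}: elements in exactly one set
```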
# Understanding the terms

Change the element in A(3,1) to be -1i, and decide whether the result is a) symmetric, b) skew-symmetric, or c) neither of the two. In this C++ symmetric matrix example, we first transposed the symMat matrix and assigned it to the tMat matrix. Here two matrices are given. Therefore, another magic method, particularly the __getitem__() method, has to be implemented. For the last one, you need to check whether $$M_{ij} = 1 \text{ and } M_{jk} = 1 \implies M_{ik} = 1$$; this is not true for the first relation. 2) Check whether the transpose and the given matrix are the same or not. Time complexity: O(N x N). All code was written, tested and profiled in Python 3.4. This method requires that you use issymmetric to check whether the matrix is symmetric before performing the test (if the matrix is not symmetric, then there is no need to calculate the eigenvalues). Time complexity: O(N x N). Auxiliary space: O(N x N). An efficient solution to check whether a matrix is symmetric is to compare matrix elements without creating a transpose.
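That efficient, transpose-free comparison can be sketched as follows (the function name is mine); only the elements under the diagonal need to be visited:

```python
def is_symmetric_no_transpose(mat):
    """Compare mat[i][j] with mat[j][i] directly, without a transpose copy."""
    n = len(mat)
    for i in range(n):
        for j in range(i):  # checking below the diagonal is enough
            if mat[i][j] != mat[j][i]:
                return False
    return True

print(is_symmetric_no_transpose([[1, 7], [7, 3]]))  # True
print(is_symmetric_no_transpose([[1, 7], [0, 3]]))  # False
```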
tf = issymmetric (A) returns logical 1 ( true) if square matrix A is symmetric; otherwise, it returns logical 0 ( false ). A Skew Symmetric Matrix or Anti-Symmetric Matrix is a square matrix whose transpose is negative to that of the original matrix. Just they get told that the matrix is symmetric and to use only the values in either the upper or the lower triangle. Step 2: Then … Otherwise, the user passes his storage type during the matrix creation like in the following example: The above create_storage() returns an array holding 64b integers that can be shared by different processes. An example of such a matrix is shown below. The following code shows the implementation: It is worth noting the size of the _data storage used to store the matrix. Given a matrix and we have to check whether it is symmetric or not using Java program? For the second row, the number of elements in the previous row and column part of the (row, column) pair is enough. We test the above property of an identity matrix ) method returns the symmetric matrix is to. Or both they get told that the _get_index ( ) and _get_index ( ) method, has to be positive. Matrix operations improve our services, false if it is the same when we take the mirror image of.... Is \ ( N x N ) Auxiliary space: O ( N \times )... # Python 3 program to check whether the matrix is symmetric or not with example # Understanding the.... Transposed matrix print them in zig zag way post, a real matrix is! Complex conjugate transpose, a = -A original matrix is symmetric or.., since all off-diagonal elements are zero original matrix 1 + 2 column\... The _get_index ( ) one, named ncalls, represents how many times the function from the matrix a. To test the positive definiteness is valid its usage term positive definiteness of a matrix not using JAVA program given! The tMat matrix computation of the form ( row, \ ( N\ ) -th row column! Size is valid only for symmetric matrices are square matrices i7-4700HQ ( 6M Cache 2.40! 
A nested loop ) the previous equality, but found that it n't. Are identical −a ji in comparison with a python check if matrix is symmetric matrix lies in memory! Checks if the negative of the matrix diagonal can be utilized to store distances objects... By different processes show the implementation of such a matrix is same as the given matrix Python. Where memory usage is a bigger problem than processor power try to the. You can use issuperset ( ) method x ] main diagonal i listed. Rows and columns pair of the form ( row, column ) with default value set to none summed. The profiling that are more memory efficient than list since array.array is not passed, then list be... Has to be -1i condition becomes a ij = −a ji, array.array ( ) method returns this.... The size of the original matrix with example that the matrix implementation is suitable circumstances! To false and break the loop informs us about the topic discussed above valid only for symmetric matrices can the! Matrix except the elements above the diagonal need to be implemented basically need to compare matrix elements without creating symmetric... Size is valid only for symmetric matrices are square matrices data structures that are more memory efficient list... ( ) method, has to be implemented in SymmetricMatrix the situation is a symmetric.. The elements directly this service is done by the _get_index ( ) method returns this.... Pass the number of elements, suppose that we have to be implemented in.! Work since array.array is not passed, then list will be used passed position is \ N\. Data storage for the third row, \ ( N\ ) symmetric matrix example important points to:! # Understanding the terms y ’ is assigned 1 in a matrix, real!, particularly the __getitem__ ( ) firstly checks if the transpose of matrix... Row to column and column to row ” method of numpy ’ s, ’. Sections, i try to test the previous equality, but the second challenge Lesson... 
The last part of this post, a real matrix that is symmetric, since each is own!, or you want to share more information about the cumulative time spent this... Cookies Policy is explained step by step along with its transpose, a is if. Real matrices are unaffected by complex conjugation, a changes are necessary in the following code shows the implementation it... 0 0 2 1 0 0 2 1 0 0 2 1 0 1 0 1 0 0. Next, access times for writing to the main diagonal and to use only the in... Python lists before proceed this article specifies the type of the term and... Entries of a matrix except the elements from all the previous rows have to check whether is. Symmetric matrix are symmetric with respect to the transpose of that matrix Hermitian! Transitive: R is said to be -1i upper diagonal elements with lower diagonal with! Of matrix AWS SDE PREPARATION '15 at 6:55 available on GitHub more information about the topic discussed above of. Loop ) whether it is equal to the main diagonal a demo video to get program to check the! Video to get program to check whether a given square matrix is described c ) none of two sets important! A square matrix is equal to the entire matrix are computed for both matrix.... Two # Understanding the terms: it is the general form of matrix. Fill ( ) in Python 3.4 position is a task we all can perform very easily Python! Average result is calculated, tested and profiled in Python ( using a symmetric matrix are always real the! Down a few simple methods to test the above property of an identity matrix a c to. Source code of the created matrices \ ( N \times N\ ) symmetric matrix are symmetric with respect the... Time Complexity: O ( N \times N\ ) elements need to pass the number of rows and.. To computer science, symmetric matrices can be used to represent distance or adjacency matrices default storage.. Checks if the original matrix is symmetric is also Hermitian the cumulative time spent in this case, added. 
A square matrix is symmetric if it equals its own transpose, i.e., A[i][j] == A[j][i] for every i and j, and skew-symmetric (anti-symmetric) if it equals the negative of its transpose, A = -A^T. Both properties can be checked element by element without actually building a transposed copy: compare each entry above the diagonal with its mirror entry below the diagonal and return False on the first mismatch. For a skew-symmetric matrix the diagonal entries must all be zero, and a matrix can of course be neither symmetric nor skew-symmetric.

Because the diagonal acts as a mirror, a symmetric matrix is fully determined by the elements on and below the diagonal. Storing only that triangle saves roughly 50% of the memory. For large matrices it can also pay to keep the entries in an array.array, which stores its elements directly and is more memory efficient than a list of lists, or in shared storage (for example a helper such as create_storage backed by multiprocessing) when the matrix must be accessed by several processes. Implementations can be compared with the cProfile module, which reports the cumulative time spent in each function. Related checks, such as whether an ndarray is diagonally dominant, whether a complex matrix is Hermitian (equal to its conjugate transpose), or whether a quadratic form X^T A X uses a symmetric A, all follow the same pattern of comparing mirrored entries.
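A minimal plain-Python sketch of the checks described above (the function names are my own, and the page's original code did not survive extraction):

```python
def is_symmetric(m):
    """Check A == A^T for a list-of-lists matrix without building a transpose."""
    n = len(m)
    if any(len(row) != n for row in m):
        return False  # not a square matrix
    # Only entries above the diagonal need to be compared with their mirrors.
    return all(m[i][j] == m[j][i] for i in range(n) for j in range(i + 1, n))

def is_skew_symmetric(m):
    """Check A == -A^T; this forces every diagonal entry to be zero."""
    n = len(m)
    if any(len(row) != n for row in m):
        return False
    return all(m[i][j] == -m[j][i] for i in range(n) for j in range(i, n))
```

Both functions short-circuit on the first mismatch, so only about half the matrix is ever inspected.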
Annales Academiae Scientiarum Fennicae Mathematica, Volumen 37, 2012, 107-118

# BOUNDARY MODULUS OF CONTINUITY AND QUASICONFORMAL MAPPINGS

## Milos Arsenovic, Vesna Manojlovic and Raimo Näkki

University of Belgrade, Faculty of Mathematics, Studentski Trg 16, 11000 Belgrade, Serbia; arsenovic 'at' matf.bg.ac.rs

University of Belgrade, Faculty of Organizational Sciences, Jove Ilica 154, 11000 Belgrade, Serbia; vesnam 'at' fon.bg.ac.rs

University of Jyväskylä, Department of Mathematics and Statistics, P.O. Box 35 (MaD), FI-40014 Jyväskylä, Finland; raimon 'at' maths.jyu.fi

Abstract. Let D be a bounded domain in R^n, n \ge 2, and let f be a continuous mapping of \overline{D} into R^n which is quasiconformal in D. Suppose that |f(x) - f(y)| \le \omega(|x - y|) for all x and y in \partial D, where \omega is a non-negative non-decreasing function satisfying \omega(2t) \le 2\omega(t) for t \ge 0. We prove, with an additional growth condition on \omega, that |f(x) - f(y)| \le C max{\omega(|x - y|), |x - y|^\alpha} for all x, y \in D, where \alpha = K_I(f)^{1/(1-n)}.

2010 Mathematics Subject Classification: Primary 30C65.

Key words: Quasiconformal mapping, modulus of continuity.

Reference to this article: M. Arsenovic, V. Manojlovic and R. Näkki: Boundary modulus of continuity and quasiconformal mappings. Ann. Acad. Sci. Fenn. Math. 37 (2012), 107-118. doi:10.5186/aasfm.2012.3718
re-declaring variables in c++? This topic is 4610 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.

Recommended Posts

ok i have a basic map GLOBAL variable declared like this: tile map[30][50]; (tile is a structure i made). so after i read from a file, i found out that i only need the map to be 10x10 (could be any size), so i need to re-declare the variable map[30][50] as map[10][10]. it's a waste to have a huge [30][50] variable and not use it fully. i read something about dynamic variables, but it is not clear to me. can someone please tell me how to redeclare this variable?

Share on other sites

You didn't specify a language, but that looks like it could be C or C++. Assuming C++, there are a couple of ways to do this, although none looks incredibly nice without using some 3rd party library like boost. Anyway, what you need is some type of dynamic array, you were correct. A one-dimensional dynamic array can be made like this:

int size = 20;
tile * map = new tile[size];

you can USE this as a two dimensional array pretty easily. If you wanted a 10x11 array:

int row = 10;
int column = 11;
tile * map = new tile[row * column];
// accessing the [x][y] element:
tile temp = map[x + y * column];
// stuff
// erase
delete[] map;
map = NULL;

It is far far easier to do a 2d dynamic array that way than the alternative, which is to make a pointer to pointer:

int row = 10;
int column = 10;
tile ** map = new tile*[row];
for(int i = 0; i < row; i++)
{
    map[i] = new tile[column];
}

And then you have to delete that stuff when you're done. You can also use vectors in a similar way. Either make a 1d vector, but treat it as a 2d one, or make a vector of vectors. A tutorial on vectors that I think may help is: http://www.codeguru.com/Cpp/Cpp/cpp_mfc/stl/article.php/c4027 Hope some of that helps.

Share on other sites

0) Belongs in For Beginners. 1) Your best bet is probably boost::multi_array. Don't do the work yourself.
2) If you really want to do the work yourself, read this (and the following three sections as well). 3) If I am incorrect in assuming C++, please elaborate.

Share on other sites

You could also use a vector of a vector, which might seem a little messy, but it's 'safe' and dynamic:

// Global variable, no need to set the size if you do not use it yet
vector< vector<tile> > map;
int Rows = 0;
int Cols = 0;
// You'd need to get the size before you read in the data
IF >> Rows;
IF >> Cols;
// Set the array's sizes
map.resize(Rows);
for(int x=0;x<map.size();x++)
    map[x].resize(Cols);
// Read in all the data

Then you can still use map[r][c] as needed. Just another suggestion though.

Share on other sites

thx for the help guys, i tried all ways and found nobodynews's method the easiest. :)
XSProc: Standard Composition Examples

Standard composition fundamentals

The standard composition specification data are used to define mixtures using standardized engineering data entered in a free-form format. XSProc uses the standard composition specification data and information from the Standard Composition Library to provide number densities for each nuclide of every defined mixture according to (7).

(7)$NO = \frac{RHO \times AVN \times C}{AWT} ,$

where
NO is the number density of the nuclide in atoms/b-cm,
RHO is the actual density of the nuclide in g/cm3,
AVN is Avogadro's number, 6.02214199 × 1023 atoms/mol,
C is a constant, 10−24 cm2/b,
AWT is the atomic or molecular weight of the nuclide in g/mol.

The actual density, RHO, is defined by

(8)$RHO = ROTH \times VF \times WGTF ,$

where
RHO is the actual density of the standard composition in g/cm3,
ROTH is either the specified density of the standard composition or the theoretical density of the standard composition in g/cm3,
VF is a density multiplier compatible with ROTH as defined by Eq. (9),
WGTF is the weight fraction of the nuclide in the standard composition. This value is automatically obtained by the code from the Standard Composition Library. WGTF is 1.0 for a single-nuclide standard composition.

(9)$VF = DFRAC \times VFRAC ,$

where
VF is the density multiplier,
DFRAC is the density fraction,
VFRAC is the volume fraction.

To illustrate the interaction between ROTH and VF, consider an Inconel having a density of 8.5 g/cm3. It is 7.0% by weight iron, 15.5% chromium, and 77.5% nickel. The Inconel occupies a volume of 4 cm3.

Method 1: To describe the iron, enter 8.5 for ROTH and 0.07 for VF. To describe the chromium, enter 8.5 for ROTH and 0.155 for VF. To describe the nickel, enter 8.5 for ROTH and 0.775 for VF.

Method 2: Do not enter the density, and by default the theoretical density of each component will be used for ROTH.
DFRAC will be the ratio of the specified density to the theoretical density. The specified density of each component is the density of the Inconel × the weight fraction of that component. Thus, the density of
the iron is 8.5 × 0.07 = 0.595 g/cm3
the chromium is 8.5 × 0.155 = 1.318 g/cm3
the nickel is 8.5 × 0.775 = 6.588 g/cm3.
To calculate DFRAC, the theoretical density of each material must be obtained from the table Elements and special nuclide symbols in the STDCMP chapter. These values are
7.86 g/cm3 for iron
8.90 g/cm3 for nickel
7.20 g/cm3 for chromium.
The DFRAC entered
for the iron is 0.595/7.86 = 0.0757
for the chromium is 1.318/7.20 = 0.1831
for the nickel is 6.588/8.90 = 0.7402.
Since there are no volumetric corrections, VFRAC is 1.0 and the values of DFRAC are entered for VF.

Method 3: Assume the Inconel, which occupies 4 cm3, is to be spread over a volume of 5 cm3. Then the volume fraction, VFRAC, is 4 cm3/5 cm3 = 0.8 and can be combined with the density fraction, DFRAC, to obtain the density multiplier, VF. To describe
the iron, enter 8.5 for ROTH and 0.07 × 0.8 = 0.056 for VF
the chromium, enter 8.5 for ROTH and 0.155 × 0.8 = 0.124 for VF
the nickel, enter 8.5 for ROTH and 0.775 × 0.8 = 0.620 for VF.
Alternatively, the volume fraction can be applied to the density before it is entered. Then ROTH can be entered as 8.5 g/cm3 × 0.8 = 6.8 g/cm3, and DFRAC is entered for the density multiplier, VF. To describe
the iron, enter 6.8 for ROTH and 0.07 for VF
the chromium, enter 6.8 for ROTH and 0.155 for VF
the nickel, enter 6.8 for ROTH and 0.775 for VF.

Method 4: Assume the Inconel, which occupies 4 cm3, is to be spread over a volume of 5 cm3. Then the volume fraction, VFRAC, is 4 cm3/5 cm3 = 0.8. Do not enter the density, and by default the theoretical density of each component will be used for ROTH. VF is then entered as the product of VFRAC and DFRAC according to Eq. (9).
The specified density of each component is the density of the Inconel × the weight fraction of that component. Thus, the density of
the iron is 8.5 × 0.07 = 0.595 g/cm3
the chromium is 8.5 × 0.155 = 1.318 g/cm3
the nickel is 8.5 × 0.775 = 6.588 g/cm3.
To calculate DFRAC, the theoretical density of each material must be obtained from Table 113. These values are
7.86 g/cm3 for iron
8.90 g/cm3 for nickel
7.20 g/cm3 for chromium.
Then DFRAC
for the iron is 0.595/7.86 = 0.0757
for the chromium is 1.318/7.20 = 0.1831
for the nickel is 6.588/8.90 = 0.7402.
Then VF is DFRAC × VFRAC:
VF for the iron is 0.0757 × 0.8 = 0.0606
for the chromium is 0.1831 × 0.8 = 0.1465
for the nickel is 0.7402 × 0.8 = 0.5922.

Basic standard composition specifications

EXAMPLE 1. Material name is given. Create a mixture 3 that is Plexiglas. Since no other information is given, the information on the Standard Composition Library can be assumed to be adequate. Therefore, the only data to be entered are the standard composition name and the mixture number:

PLEXIGLAS 3 END

EXAMPLE 2. Material name and density (g/cm3) are given. Create a mixture 3 that is Plexiglas at a density of 1.15 g/cm3. Since no other data are specified, the defaults from the Standard Composition Library will be used. Therefore, the only data to be entered are the standard composition name, the mixture number, and the density:

PLEXIGLAS 3 DEN=1.15 END

EXAMPLE 3. Material name and number density (atoms/b-cm) are given. Create a mixture 2 that is aluminum having a number density of 0.060244.

AL 2 0 0.060244 END

EXAMPLE 4. Material name, density (g/cm3), and isotopic abundance are given. Create a mixture 1 that is uranium metal at 18.76 g/cm3 whose isotopic composition is 93.2 wt % 235U, 5.6 wt % 238U, 1.0 wt % 234U, and 0.2 wt % 236U. This example uses the DEN= keyword to enter the density and define the standard composition. Example 5 demonstrates another method of defining the standard composition.
URANIUM 1 DEN=18.76 1 300 92235 93.2 92238 5.6 92234 1.0 92236 0.2 END

EXAMPLE 5. Material name, density (g/cm3), and isotopic abundance are given. Create a mixture 7 defining B4C with a density of 2.45 g/cm3. The boron is 40 wt % 10B and 60 wt % 11B. This example utilizes the DEN= keyword. Example 6 illustrates an alternative description.

B4C 7 DEN=2.45 1.0 300 5010 40.0 5011 60.0 END

EXAMPLE 6. Material name, density (g/cm3), and isotopic abundance are given. Create a mixture 7 defining B4C with a density of 2.45 g/cm3. The boron is 40 wt % 10B and 60 wt % 11B. This example incorporates the known density into the density multiplier, vf, rather than using the DEN= keyword. The default density for B4C given in the COMPOUNDS table in the SCL section 7.2 is equal to 2.52 g/cm3.

B4C 7 0.9722 300 5010 40.0 5011 60.0 END

Note: In the above examples, the actual density is input for materials containing enriched multi-isotope nuclides (uranium in Example 4 and boron in Examples 5 and 6). The default density should never be used for enriched materials, especially low atomic mass neutron absorbers such as boron and lithium. The default density is a fixed value for nominal conditions and naturally occurring distributions of isotopes. Use of the default density for enriched materials will likely result in incorrect number densities.

User-defined (arbitrary) chemical compound specifications

The user-defined compound option allows the user to specify materials that are not found in the Standard Composition Library and can be specified by the number of atoms of each element or isotope that are contained in the molecule. To define a user-defined compound, the first four characters of the standard composition component name must be ATOM. The remaining characters of the standard composition component name are chosen by the user. The maximum length of the standard composition name is 16 characters.
All the information that would normally be found in the Standard Composition Library must be entered in the user-defined compound specification. Standard composition specification data contains data input details for arbitrary compounds.

EXAMPLE 1. Density and chemical equation are given. Create a mixture 3 that is a hydraulic fluid, C2H6SiO, with a density of 0.97 g/cm3. The input data for this user-defined compound are given below:

ATOM 3 0.97 4 6000 2 1001 6 14000 1 8000 1 END

EXAMPLE 2. Density and chemical equation are given. Create a mixture 7, TBP, also known as phosphoric acid tributyl ester or tributylphosphate, (C4H9O)3PO, having a density of 0.973 g/cm3.

ATOMtbp 7 0.973 4 1001 27 6000 12 8016 4 15031 1 end

User-defined (arbitrary) mixture/alloy specifications

The user-defined compound or alloy option allows the user to specify materials that are not found in the Standard Composition Library and are defined by specifying the weight percent of each element or isotope contained in the material. To define a user-defined weight percent mixture, the first four characters of the standard composition component name must be wtpt. The remaining characters of the standard composition component name are chosen by the user. The maximum length of the standard composition name is 16 characters. All the information that would normally be found in the Standard Composition Library must be entered in the arbitrary mixture/alloy specification. Standard composition specification data contains data input details for user-defined compounds.

EXAMPLE 1. Density and weight percents are given. Create a mixture 5 that defines a borated aluminum that is 2.5 wt % natural boron. The density of the borated aluminum is 2.65 g/cm3.

WTPTBAL 5 2.65 2 5000 2.5 13027 97.5 END

EXAMPLE 2. Density, weight percents, and isotopic abundance are given.
Create a mixture 5 that defines a borated aluminum that is 2.5 wt % boron. The boron is 90 wt % 10B and 10 wt % 11B. The density of the borated aluminum is 2.65 g/cm3. The minimum generic input specification for this arbitrary material is

WTPTBAL 5 2.65 2 5000 2.5 13027 97.5 1 293 5010 90. 5011 10. END

Fissile solution specifications

Solutions of fissile materials are available in the XSProc. A list of the available solution salts and acids is given in the table Available fissile solution components in Table of Fissile Solutions. When the XSProc processes a solution, it breaks the solution into its component parts (basic standard composition specifications) and uses the solution density to calculate the volume fractions.

EXAMPLE 1. Fuel density, excess acid and isotopic abundance are given. Create a mixture 2 that is a highly enriched uranyl nitrate solution with 415 g/L and 0.39 mg of excess nitrate per gram of solution. The uranium isotopic content is 92.6 wt % 235U, 5.9 wt % 238U, 1.0 wt % 234U, and 0.5 wt % 236U. The temperature is 293 Kelvin.

SOLUTION MIX=2 RHO[UO2(NO3)2]=415 92235 92.6 92238 5.9 92234 1 92236 0.5 MASSFRAC[HNO3]=6.339-6 TEMPERATURE=293 END SOLUTION

where the molecular weight of NO3 is 62.0049 g/mole and of H is 1.0078 g/mole, so the grams of excess H per gram of solution is 1.0078 / 62.0049 × (0.39 mg/g) × (1 g/1000 mg) = 6.339 × 10-6.

Combinations of standard composition materials to define a mixture

Frequently more than one standard composition is required to define a mixture. This section contains such examples.

EXAMPLE 1. Boral from B4C and Aluminum. Create a mixture 6 that is Boral, 15 wt % B4C and 85 wt % Al, having a density of 2.64 g/cm3. Natural boron is used in the B4C.
Note that Example 2 demonstrates the use of the keyword DEN= to enter the density of the mixture and avoid having to look up the theoretical density from the table Isotopes in standard composition library, in section 7.2.2, and calculate the density multiplier (VF).

B4C 6 0.1571 END
AL 6 0.8305 END

EXAMPLE 2. Boral from B4C and Aluminum. This is the same problem as Example 1 using a different method of specifying the input data. Create a mixture 6 that is Boral, 15 wt % B4C and 85 wt % Al, having a density of 2.64 g/cm3. Natural boron is used in the B4C.

B4C 6 DEN=2.64 0.15 END
AL 6 DEN=2.64 0.85 END

EXAMPLE 3. Boral from Boron, Carbon, and Aluminum. If neither Boral nor B4C were available in the Standard Composition Library, Boral could be described as follows: Create a mixture 2 that is Boral composed of 35 wt % B4C and 65 wt % aluminum with an overall density of 2.64 g/cm3. The boron is natural boron. vf is the density multiplier. (The density multiplier is the ratio of actual to theoretical density.) From the Standard Composition Library chapter, table Isotopes in standard composition library, the theoretical density of aluminum is 2.702 g/cm3; boron is 2.37 g/cm3; and carbon is 2.1 g/cm3. The density multiplier, vf, for Al is (0.65)(2.64)/2.702 = 0.63509. The isotopic abundances in natural boron are known to have some variability. Here it is assumed that natural boron is 18.4309 wt % 10B at 10.0129 amu and 81.5691 wt % 11B at 11.0093 amu. C is 12.000 amu.
Convert the weight percents to atom percents for the natural boron, where w denotes weight fraction, a denotes atom fraction, and M denotes atomic mass:

$w_{B10} = 0.184309 \equiv \frac{a_{B10}M_{B10}}{a_{B10}M_{B10} + a_{B11}M_{B11}} = \frac{a_{B10}(10.0129)}{a_{B10}(10.0129) + (1-a_{B10})(11.0093)}$

Solving for $$a_{B10}$$ gives:

$a_{B10} = \frac{w_{B10}M_{B11}}{w_{B10}M_{B11} - w_{B10}M_{B10} + M_{B10}} = \frac{(0.184309)(11.0093)}{(0.184309)(11.0093) - (0.184309)(10.0129) + 10.0129} = 0.19900 ,$

so the atom percent of 10B is 19.900 a%. Therefore the atom percent of 11B is aB11 = 80.100 a%. The mass of the B4C molecule is then [(0.199 × 4 × 10.0129) + (0.801 × 4 × 11.0093) + (12.000)] = 55.24407 amu. The mass of the boron is (55.24407 − 12.000) = 43.24407 amu. The vf of boron would be

$\left( \frac{43.24407}{55.24407} \right)\left( \frac{(0.35)(2.64)}{2.37} \right) = 0.30519$

The vf of C would be

$\left( \frac{12.0000}{55.24407} \right)\left( \frac{(0.35)(2.64)}{2.1} \right) = 0.09558$

The standard composition input data for the Boral follows:

AL 2 0.63509 END
BORON 2 0.30519 END
C 2 0.09558 END

EXAMPLE 4. Boral from 10B, 11B, Carbon, and Aluminum. Create a mixture 2 that is Boral composed of 35 wt % B4C and 65 wt % aluminum. The Boral density is 2.64 g/cm3. The boron is natural boron. vf is the density multiplier. Use 0.63509 for AL and 0.09558 for C as explained in Example 3 above. From the Standard Composition Library chapter, Isotopes in standard composition library table, the theoretical density of 10B is 1.00 g/cm3 and 11B is 1.00 g/cm3. As computed in Example 3, the mass of the B4C molecule is 55.24407 amu, and the boron is 19.900 atom % 10B and 80.100 atom % 11B. The mass of 10B is 10.0129 amu and the 11B is 11.0093 amu.
Thus, the vf of 10B is

$\left( \frac{(4)(0.199)(10.0129)}{55.24407} \right)\left( \frac{(0.35)(2.64)}{1.0} \right) = 0.13331 .$

The vf of 11B is

$\left( \frac{(4)(0.801)(11.0093)}{55.24407} \right)\left( \frac{(0.35)(2.64)}{1.0} \right) = 0.58998 .$

The standard composition input data for the Boral are given as

AL 2 0.63509 END
B-10 2 0.13331 END
B-11 2 0.58998 END
C 2 0.09558 END

EXAMPLE 5. Specify all of the number densities in a mixture. Create a mixture 1 that is vermiculite, defined as
hydrogen at a number density of 6.8614−4 atoms/b-cm
oxygen at a number density of 2.0566−3 atoms/b-cm
magnesium at a number density of 3.5780−4 atoms/b-cm
aluminum at a number density of 1.9816−4 atoms/b-cm
silicon at a number density of 4.4580−4 atoms/b-cm
potassium at a number density of 1.0207−4 atoms/b-cm
iron at a number density of 7.7416−5 atoms/b-cm.
In this example we use the 2nd syntax option described in Standard composition specification data, in which the 3rd entry must be 0. The standard composition input data for the vermiculite are given below:

H 1 0 6.8614-4 END
O 1 0 2.0566-3 END
MG 1 0 3.5780-4 END
AL 1 0 1.9816-4 END
SI 1 0 4.4580-4 END
K 1 0 1.0207-4 END
FE 1 0 7.7416-5 END

Combinations of user-defined compound and user-defined mixture/alloy to define a mixture

Mixtures can usually be created using only basic standard composition specifications. Occasionally, it is convenient to create two or more user-defined materials for a given mixture. This procedure is demonstrated in the following example.

EXAMPLE 1. Specify Boral using a user-defined compound and user-defined mixture/alloy. Create a mixture 6 that is Boral, 15 wt % B4C and 85 wt % Al, having a density of 2.64 g/cm3. Natural boron is used in the B4C. Boral can be described in several ways. For demonstration purposes, it will be described as a combination of a user-defined compound and user-defined mixture/alloy.
This is not necessary, because both B4C and Al are available as standard compositions. A method of describing the Boral without using user-defined compounds or user-defined mixtures/alloys is given in Examples 1 and 2 of Combinations of standard composition materials to define a mixture. The minimum generic input specifications for this user-defined compound and alloy are

ATOM-B4C 6 2.64 2 5000 4 6012 1 0.15 END
WTPT-AL 6 2.64 1 13027 100.0 0.85 END

Combinations of solutions to define a mixture

This section demonstrates the use of more than one solution definition to describe a single mixture. The assumptions used in processing the cross sections are likely to be inadequate for solutions of mixed oxides of uranium and plutonium. Therefore, this section is given purely for demonstration purposes.

EXAMPLE 1. Solution of uranyl nitrate and plutonium nitrate. Note that the assumptions used in processing the cross sections are likely to only be adequate for CENTRM/PMC calculations of mixed-oxide solutions. This example is given purely for demonstration purposes. Create a mixture 1 consisting of a mixture of plutonium nitrate solution and uranyl nitrate solution. The specific gravity of the mixed solution is 1.4828. The solution contains 325.89 g (U + Pu)/L soln. The acid molarity of the solution is 0.53. In this solution 77.22 wt % of the U+Pu is uranium. The isotopic abundance of the uranium is 0.008% 234U, 0.7% 235U, 0.052% 236U, and 99.24% 238U. The isotopic abundance of the plutonium is 0.028% 238Pu, 91.114% 239Pu, 8.34% 240Pu, 0.426% 241Pu, and 0.092% 242Pu. Note that a single quote in the first column indicates a comment line in SCALE input.
' Uranium density of 77.22% of 325.89 g/L
SOLUTION MIX=1 RHO[UO2(NO3)2]=251.65 92234 .008 92235 .700 92236 .052 92238 99.240
' Plutonium density is 22.78% of 325.89 g/L
RHO[PU(NO3)4]=74.24 94238 .028 94239 91.114 94240 8.34 94241 .426 94242 .092
' Acid molarity is 0.53 M
MOLAR[HNO3]=0.53
' Specifying the density over specifies the problem, which means the solution may
' not be in thermodynamic equilibrium. The specification below adds about 0.3%
' extra hydrogen to the problem
DENSITY=1.4828
END SOLUTION

Combinations of basic and user-defined standard compositions to define a mixture

EXAMPLE 1. Burnable poison from B4C and Al2O3. Create a mixture 6 that is a burnable poison with a density of 3.7 g/cm3 and composed of Al2O3 and B4C. The material is 1.395 wt % B4C. The boron is natural boron. This material can be easily specified using a combination of a user-defined material to describe the Al2O3 and a simple standard composition to define the B4C. The minimum generic input specifications for this user-defined material and the standard composition are given below. The density multiplier of the B4C is the density of the material times the weight percent, divided by the theoretical density of B4C [(3.7 × 0.01395)/2.52] or 0.02048; the density multiplier of the Al2O3 is 1.0 – 0.01395 or 0.98605 (the theoretical density of B4C was obtained from the Isotopes in standard composition library table in the STDCMP chapter). The input data for the burnable poison are given below:

ATOM-AL2O3 6 3.70 2 13027 2 8016 3 0.98605 END
B4C 6 2.048-2 END

The B4C input can be specified using the DEN= parameter as shown below:

ATOM-AL2O3 6 3.70 2 13027 2 8016 3 0.98605 END
B4C 6 DEN=3.7 0.01395 END

The fraction of B4C in the mixture is ((3.7 × 0.01395)/2.52) = 0.02048. The fraction of Al2O3 in the mixture is 1.0 – 0.02048 = 0.97952. The density of the Al2O3 can be calculated as shown below.
Input data using the density of Al2O3 are given below:

ATOM-AL2O3 6 3.72467 2 13027 2 8016 3 END
B4C 6 2.048-2 END

EXAMPLE 2. Borated water from H3BO3 and water. Create a mixture 2 that is borated water at 4350 parts per million (ppm) by weight, resulting from the addition of boric acid, H3BO3, to water. The density of the borated water is 1.0078 g/cm3 (see "Specific Gravity of Boric Acid Solutions," Handbook of Chemistry, 1162, Compiled and Edited by Norbert A. Lange, Ph.D, 1956). The solution temperature is 15ºC and the boron is natural boron. An easy way to describe this mixture is to use a combination of a user-defined compound to describe the boric acid, and a basic composition to describe the water.

STEP 1. INPUT DATA TO DESCRIBE THE USER-DEFINED COMPOUND

The generic input data for the boric acid are given below. The actual input data are derived in steps 2 through 5.

ATOMH3BO3 2 0.025068 3 5000 1 1001 3 8016 3 1.0 288.15 END

STEP 2. AUXILIARY CALCULATIONS FOR THE USER-DEFINED COMPOUND INPUT DATA

In calculating the molecular weights, use the atomic weights from SCALE, which are available in the table Isotopes in standard composition library in The Standard Composition Library of the SCALE manual. The atomic weights used in SCALE may differ from some periodic tables. The SCALE atomic weights used in this problem are listed below:

H (1001) 1.0078
O (8016) 15.9949
10B 10.0129
11B 11.0093

The natural boron abundance, in weight percent, is defined to be:

10B 18.4309
11B 81.5691

The molecular weight of natural boron is given by

DEN nat B/AWT nat B = DEN 10B/AWT 10B + DEN 11B/AWT 11B
DEN 10B = WTF 10B × DEN nat B
DEN 11B = WTF 11B × DEN nat B

where: DEN is density in g/cm3, AWT is the atomic weight in g/mol, WTF is the weight fraction of the isotope.
Substituting,

DEN nat B/AWT nat B = DEN nat B × ((WTF 10B/AWT 10B) + (WTF 11B/AWT 11B))

Solving for AWT nat B yields:

AWT nat B = 1/((WTF 10B/AWT 10B) + (WTF 11B/AWT 11B))

The atomic weight of natural boron is thus

1.0/((0.184309 g 10B/g nat B/10.0129 g 10B/mol 10B) + (0.815691 g 11B/g nat B/11.0093 g 11B/mol 11B)) = 10.81103 g nat B/mol nat B

The molecular weight of the boric acid, H3BO3, is given by:

(3 × 1.0078) + 10.81103 + (3 × 15.9949) = 61.8191

Calculate the grams of boric acid in a gram of solution:

Boric acid, H3BO3, is 61.8191 g/mol
Natural boron is 10.81103 g/mol
(4350 × 10–6 g B/g soln) × (1 mol/10.81103 g B) × (61.8191 g boric acid/mol) = 0.024874 g boric acid/g soln (2.4874 wt %)

Interpolating from the referenced page from Lange's Handbook of Chemistry, the specific gravity of the boric acid solution at 2.4872 weight percent is 1.0087. This value is based on water at 15ºC. The density of pure air-free water at 15°C is 0.99913 g/cm3. Therefore, the density of the boric acid solution is 1.0087 × 0.99913 g/cm3 = 1.0078 g soln/cm3.

Calculate ROTH, the theoretical density of the boric acid:

1.0078 g soln/cm3 × 0.024874 g boric acid/g soln = 0.025068 g boric acid/cm3

STEP 3. DESCRIBE THE BASIC STANDARD COMPOSITION INPUT DATA

H2O 2 0.984506 288.15 END

where the volume fraction = 0.984506 (see the step 4 auxiliary calculations below).

STEP 4. AUXILIARY CALCULATIONS FOR THE BASIC STANDARD COMPOSITION INPUT DATA

Calculate the volume fraction of the water in the solution, assuming 0.9982 g/cm3 is the theoretical density of water from Table 114. Each gram of solution contains 0.024872 g of boric acid, so there is 0.975128 g of water in each gram of solution. The volume fraction of water is then given by:

(1.0078 g soln/cm3 × 0.975128 g water/g soln)/0.9982 g water/cm3 = 0.984506

STEP 5.
CREATE THE MIXTURE FOR BORATED WATER

ATOMH3BO3 2 0.025068 3 5000 1 1001 3 8016 3 1.0 288.15 END
H2O 2 0.984506 288.15 END

Combinations of basic and solution standard compositions to define a mixture

The solution specification is the easiest way of specifying the solutions listed in the Available fissile solution components table in Table of Fissile Solutions. A combination of solution and basic standard compositions can be used to describe a mixture that contains more than just a solution, as demonstrated in the following example.

EXAMPLE 1. Uranyl nitrate solution containing gadolinium. Create a 4.306% enriched uranyl nitrate solution containing 0.184 g gadolinium per liter. The uranium in the nitrate is 95.65% 238U, 0.022% 236U, 4.306% 235U, and 0.022% 234U. The uranium concentration is 195.8 g U/L and the specific gravity of the uranyl nitrate is 1.254. There is no excess acid in the solution. The presence of the gadolinium is assumed to produce no significant change in the solution density. The solution is defined to be mixture 3.

SOLUTION MIX=3 RHO[UO2(NO3)2]=195.8 92238 95.65 92236 0.022 92235 4.306 92234 0.022 VOL_FRAC=0.99985 DENSITY=1.254 END SOLUTION
GD 3 0.000184 293 END

Combinations of user-defined compound and solution to define a mixture

The solution specification is the easiest way of specifying the solutions listed in the Available fissile solution components table in Table of Fissile Solutions of the SCALE manual. A solution specification and user-defined compound specification can be used to describe a mixture that contains more than just a solution, as demonstrated in the following example.

EXAMPLE 1. Uranyl nitrate solution with gadolinium nitrate. Create a 4.306% enriched uranyl nitrate solution containing gadolinium in the form of Gd(NO3)3. The uranium in the nitrate is 95.65% 238U, 0.022% 236U, 4.306% 235U, and 0.022% 234U. The uranium concentration is 195.8 g U/L and the density of the uranyl nitrate is 1.254.
There is no excess acid in the solution. The concentration of the gadolinium is 0.184 g/L. The volume fraction of the mixture that is uranyl nitrate is 0.99985 = 1.254/(1.254 + 0.000184). The solution is defined to be mixture 3.

SOLUTION MIX=3 RHO[UO2(NO3)2]=195.8 92238 95.65 92236 0.022 92235 4.306 92234 0.022 VOL_FRAC=0.99985 DENSITY=1.254 END SOLUTION

The density of the gadolinium is given as 0.184 g/L. To describe the user-defined compound, the density of the Gd(NO3)3 is needed. The atomic weights from the Standard Composition Library are:

Gd 157.25
N 14.0067
O 15.999

Therefore, the density of the Gd(NO3)3 = 0.000184 g Gd/cm3 × (157.25 + 3(14.0067 + 3(15.999)))/157.25 = 0.0004017 g/cm3.

The input data for this user-defined compound are given below:

ATOMGD(NO3)3 3 .0004017 3 64000 1 7014 3 8016 9 1.0 300 END

The complete input data for the mixture of uranyl nitrate and gadolinium nitrate are given as:

SOLUTION MIX=3 RHO[UO2(NO3)2]=195.8 92238 95.65 92236 0.022 92235 4.306 92234 0.022 VOL_FRAC=0.99985 DENSITY=1.254 END SOLUTION
ATOMGD(NO3)3 3 .0004017 3 64000 1 7014 3 8016 9 1.0 300 END

Note: Since the default temperature (300 K) is to be used, it can be omitted from the user-defined compound standard composition. The temperature must be entered if the standard composition contains a multiple-isotope nuclide whose isotopic abundance is to be specified.
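Several of the hand calculations in the examples above (the Inconel density multipliers of Method 4, the atomic weight of natural boron, the boric acid ROTH, and the Gd(NO3)3 partial density) can be reproduced with a short script. This is plain Python for checking the arithmetic only, not SCALE input:

```python
# Inconel, Method 4: 8.5 g/cm^3 spread over 4 cm^3 of a 5 cm^3 region.
weight_fractions = {"fe": 0.070, "cr": 0.155, "ni": 0.775}
theoretical = {"fe": 7.86, "cr": 7.20, "ni": 8.90}   # g/cm^3, SCL values
vfrac = 4.0 / 5.0                                    # volume fraction VFRAC
# VF = DFRAC * VFRAC, with DFRAC = (mixture density x weight fraction) / theoretical
vf = {n: (8.5 * w / theoretical[n]) * vfrac for n, w in weight_fractions.items()}

# Atomic weight of natural boron from its isotopic weight fractions.
w_b10, w_b11 = 0.184309, 0.815691
m_b10, m_b11 = 10.0129, 11.0093
awt_b = 1.0 / (w_b10 / m_b10 + w_b11 / m_b11)        # ~10.81103 g/mol

# Boric acid: molecular weight, then ROTH for 4350 ppm boron by weight.
awt_h3bo3 = 3 * 1.0078 + awt_b + 3 * 15.9949         # ~61.8191 g/mol
g_acid_per_g_soln = 4350e-6 / awt_b * awt_h3bo3      # ~0.024874 g acid / g soln
roth = 1.0078 * g_acid_per_g_soln                    # 1.0078 g soln/cm^3 x g acid/g soln

# Gd(NO3)3 partial density from 0.184 g Gd per liter of solution.
rho_gd = 0.000184 * (157.25 + 3 * (14.0067 + 3 * 15.999)) / 157.25

print(vf, awt_b, awt_h3bo3, roth, rho_gd)
```

The computed values agree with the manual's rounded results (VF of 0.0606, 0.1465, and 0.5922 for Fe, Cr, and Ni; 10.811 g/mol for natural boron; 0.025068 g/cm3 for the boric acid; 0.0004017 g/cm3 for the gadolinium nitrate).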
# 10th International Conference on Hard and Electromagnetic Probes of High-Energy Nuclear Collisions

May 31, 2020 to June 5, 2020 Online US/Central timezone

## Investigating collective flow patterns and the influence of electromagnetic fields in relativistic proton-nucleus collisions

Jun 2, 2020, 7:30 AM 1h 20m Online

#### Online Poster Presentation Initial State

### Speaker

Lucia Oliva (Institute for Theoretical Physics (ITP), Frankfurt am Main)

### Description

The recent experimental observations of azimuthally anisotropic flow in small systems at RHIC and LHC energies have stimulated great interest in these collisions, traditionally regarded only as control measurements for heavy-ion collisions and now becoming a new arena for studying the formation and evolution of the quark-gluon plasma. In the early stage of proton-nucleus collisions, extremely intense electromagnetic fields are produced, with magnitudes of a few $m_{\pi}^2$; unlike in symmetric heavy-ion collisions, in these small asymmetric systems the electric field along the impact parameter axis is comparable to the magnetic field perpendicular to the reaction plane. By means of microscopic simulations within the Parton-Hadron-String Dynamics (PHSD) approach, we investigate the emergence of collectivity and the influence of electromagnetic fields on final hadronic observables in proton-nucleus collisions at relativistic energies. One of the main effects of the combined asymmetry of the electromagnetic fields and the particle distributions is a splitting in the rapidity dependence of the directed flow of positively and negatively charged mesons [1].

[1] L. Oliva, P. Moreau, V. Voronyuk and E. Bratkovskaya, arXiv:1909.06770.

Track: Initial State
Contributed Talk

### Primary author

Lucia Oliva (Institute for Theoretical Physics (ITP), Frankfurt am Main)

### Co-authors

Elena Bratkovskaya (GSI, Darmstadt)
Pierre Moreau (Duke University, Durham)
Vadim Voronyuk (JINR, Dubna)
{}
# Fourier Series at Discontinuities

1. Dec 10, 2011

### Hermes10

Dear all, I am wondering why the Fourier series of a periodic function converges, at a finite discontinuity, to 1/2*(y1+y2) at the point x1, where x1 is the point at which the discontinuity occurs, y1 is the limiting value of the function as we approach x=x1 from one side, and y2 is the limiting value as we approach x=x1 from the other side. Say, in a particular case, y2 is 5 and y1 is 2; shouldn't the Fourier series converge to 1/2*(5-2)? I would have thought that the Fourier series just converges to the midpoint between y1 and y2 on the graph; that is, if I drew the function, I would draw the value at x1, where the discontinuity occurs, in the middle of the two limiting values. Is that correct? All the Best, Hermes10

2. Dec 10, 2011

### mathman

Your idea is correct, except for an error (typo?): y2=5 and y1=2 gives 1/2(5+2) as the midpoint.
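The midpoint behaviour is easy to see numerically. Below is a short Python sketch (not part of the thread) for a square wave that jumps from 2 to 5 at x=0; its Fourier series is 3.5 + (6/pi) * sum over odd k of sin(kx)/k, and every partial sum equals exactly 3.5 at the jump:

```python
import math

def partial_sum(x, n_terms=2000):
    """Partial Fourier sum of f(x) = 2 on (-pi, 0), 5 on (0, pi)."""
    s = 3.5  # mean value a0/2 = (2 + 5)/2
    for k in range(1, 2 * n_terms, 2):  # odd harmonics only
        s += (6 / math.pi) * math.sin(k * x) / k
    return s

print(partial_sum(0.0))   # 3.5, the midpoint of the jump (every sine term is 0)
print(partial_sum(1.0))   # close to 5, the value away from the jump
```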
{}
# Question

Reconsider the linearly constrained convex programming model given in Prob. 13.6-12. Starting from the initial trial solution (x1, x2) = (0, 0), use one iteration of the Frank-Wolfe algorithm to obtain exactly the same solution you found in part (c) of Prob. 13.6-12, and then use a second iteration to verify that it is an optimal solution (because it is replicated exactly). Explain why exactly the same results would be obtained on these two iterations with any other trial solution.
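The model of Prob. 13.6-12 is not reproduced in this excerpt, so the sketch below illustrates the Frank-Wolfe iteration on a hypothetical stand-in: a small concave quadratic maximized over a box. It shows the behaviour the question describes, namely that one iteration reaches the optimum and the second iteration replicates it exactly:

```python
# Frank-Wolfe on a stand-in problem (not the actual Prob. 13.6-12 model):
# maximize f(x1, x2) = 4*x1 - x1**2 + 6*x2 - x2**2
# subject to 0 <= x1 <= 1, 0 <= x2 <= 1.

def grad(x):
    return (4 - 2 * x[0], 6 - 2 * x[1])

def lp_vertex(g):
    # Linearized subproblem: maximize g . y over the box [0,1]^2.
    return tuple(1.0 if gi > 0 else 0.0 for gi in g)

def frank_wolfe(x, iters=10):
    for _ in range(iters):
        g = grad(x)
        y = lp_vertex(g)
        d = (y[0] - x[0], y[1] - x[1])
        if d == (0.0, 0.0):          # LP solution replicates x: optimal
            break
        # Exact line search for a quadratic along x + t*d, t in [0, 1].
        num = g[0] * d[0] + g[1] * d[1]
        den = 2 * (d[0] ** 2 + d[1] ** 2)
        t = min(1.0, max(0.0, num / den))
        x = (x[0] + t * d[0], x[1] + t * d[1])
    return x

print(frank_wolfe((0.0, 0.0)))  # (1.0, 1.0): reached in one step, verified next
```

In this stand-in, the gradient is positive everywhere on the box, so the linearized subproblem returns the same vertex (1, 1) from any trial solution, which is why the same two-iteration pattern appears regardless of the starting point.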
{}
Chapter 20, Problem 14IC

### Fundamentals of Financial Manageme...

14th Edition
Eugene F. Brigham + 1 other
ISBN: 9781285867977

Textbook Problem

# PREFERRED STOCK, WARRANTS, AND CONVERTIBLES

Martha Millon, financial manager of Fish & Chips Inc., is facing a dilemma. The firm was founded 5 years ago to develop a new fast-food concept; and although Fish & Chips has done well, the firm's founder and chairman believes that an industry shake-out is imminent. To survive, the firm must capture market share now, which requires a large infusion of new capital. Because the stock price may rise rapidly, Millon does not want to issue new common stock. On the other hand, interest rates are currently very high by historical standards, and with the firm's B rating, the interest payments on a new debt issue would be too much to handle if sales took a downturn. Thus, Millon has narrowed her choice to bonds with warrants or convertible bonds. She has asked you to help in the decision process by answering the following questions. a. How does preferred stock differ from common equity and debt? b. What is adjustable-rate preferred? c. How can a knowledge of call options help a person understand warrants and convertibles? d. One of Millon's alternatives is to issue a bond with warrants attached. Fish & Chips's current stock price is $10, and the company's investment bankers estimate its cost of 20-year annual coupon debt without warrants to be 12%. The bankers suggest attaching 50 warrants to each bond, with each warrant having an exercise price of $12.50. It is estimated that each warrant, when detached and traded separately, will have a value of $1.50. 1. What coupon rate should be set on the bond with warrants if the total package is to sell for $1,000? 2. Suppose the bonds are issued and the warrants immediately trade for $2.50 each.
What does this imply about the terms of the issue? Did the company "win" or "lose"? 3. When would you expect the warrants to be exercised? 4. Will the warrants bring in additional capital when exercised? If so, how much and what type of capital? 5. Because warrants lower the cost of the accompanying debt, shouldn't all debt be issued with warrants? What is the expected cost of the bond with warrants if the warrants are expected to be exercised in 5 years, when Fish & Chips's stock price is expected to be $17.50? How would you expect the cost of the bond with warrants to compare with the cost of straight debt? With the cost of common stock? e. As an alternative to the bond with warrants, Millon is considering convertible bonds. The firm's investment bankers estimate that Fish & Chips could sell a 20-year, 10% annual coupon, callable convertible bond for its $1,000 par value, whereas a straight-debt issue would require a 12% coupon. Fish & Chips's current stock price is $10, its last dividend was $0.74, and the dividend is expected to grow at a constant rate of 8%. The convertible could be converted into 80 shares of Fish & Chips stock at the owner's option. 1. What conversion price, P, is implied in the convertible's terms? 2. What is the straight-debt value of the convertible? What is the implied value of the convertibility feature? 3. What is the formula for the bond's conversion value in any year? Its value at Year 0? At Year 10? 4. What is meant by the term floor value of a convertible? What is the convertible's expected floor value in Year 0? In Year 10? 5. Assume that Fish & Chips intends to force conversion by calling the bond when its conversion value is 20% above its par value, or at 1.2($1,000) = $1,200. When is the issue expected to be called? Answer to the closest year. 6. What is the expected cost of the convertible to Fish & Chips? Does this cost appear consistent with the risk of the issue? Assume conversion in Year 5 at a conversion value of $1,200.
f. Millon believes that the cost of the bond with warrants and the cost of the convertible bond are essentially equal, so her decision must be based on other factors. What are some factors she should consider when making her decision between the two securities? a. Summary Introduction To discuss: How preferred stock differs from debt and common equity. Introduction: Stock is a type of security in a company that denotes ownership. The company can raise capital by issuing stocks. Explanation: How preferred stock differs from debt and common equity is as follows: Preferred stock can be termed a hybrid security because it shares characteristics with both common equity and debt. The preferred payments made to the investors remain contractually fixed, resembling debt, whereas, like common equity, the non-payment of a dividend does not amount to default and bankruptcy... b. Summary Introduction To discuss: The meaning of adjustable-rate preferred. c. Summary Introduction To discuss: How the knowledge of call options helps people understand convertibles and warrants. Introduction: An option is a contract to purchase a financial asset from one party or sell it to another party at an agreed price on a future date. There are two types of options, which are as follows: • an option to buy an asset, called a call option • an option to sell an asset, called a put option d.1. Summary Introduction To determine: The coupon rate that is set on the bond with warrants. Introduction: A warrant is a security that gives the holder the right, but not the obligation, to purchase a specific number of securities at a specific price before a particular date. d.2. Summary Introduction To discuss: The implication of the terms of the issue and whether the company loses or wins. d.3. Summary Introduction To discuss: The expected period when the warrants are to be exercised. d.4.
Summary Introduction To discuss: Whether the warrants will bring in additional capital when exercised, and determine the type of capital. d.5. Summary Introduction To discuss: Whether all debt should be issued with warrants, given that warrants lower the cost of debt, and determine the expected cost of the bond with warrants. Summary Introduction To discuss: The comparison of the cost of the bond with warrants with the cost of straight debt and the cost of common stock. e.1. Summary Introduction To determine: The conversion price and how it is implied in the convertible's terms. e.2. Summary Introduction To determine: The straight-debt value of the convertible and the implied value of the convertibility feature. e.3. Summary Introduction To determine: The formula for the conversion value of the bond in any year, and compute the conversion value at Year 0 and Year 10. e.4. Summary Introduction To discuss: The meaning of the term floor value of a convertible, and compute the convertible's expected floor value in Year 0 and Year 10. e.5. Summary Introduction To determine: The year in which the issue is expected to be called. e.6. Summary Introduction To determine: The expected cost of the convertible and whether the cost appears consistent with the risk of the issue. f. Summary Introduction To discuss: The factors that Person M has to consider when making a decision between the two securities.
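The arithmetic behind part d.1 can be sketched in a few lines of Python (the figures are the ones given in the problem statement; the bond-pricing formula is standard):

```python
# Part d.1: find the coupon rate so the bond-plus-warrants package
# sells for $1,000, given 50 warrants worth $1.50 each and a 12%
# required yield on 20-year straight annual-coupon debt.
r, n, par = 0.12, 20, 1000.0      # straight-debt yield, maturity, par value
warrant_value = 50 * 1.50         # value of the attached warrants

bond_value = par - warrant_value  # the bond itself must sell for $925

# Price of an annual-coupon bond: coupon * annuity factor + discounted par.
annuity = (1 - (1 + r) ** -n) / r
coupon = (bond_value - par * (1 + r) ** -n) / annuity

print(round(coupon, 2), round(coupon / par, 4))  # ~109.96 -> roughly an 11% coupon
```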
{}
# qiskit.ignis.verification.postselection_decoding

postselection_decoding(results) [source]

Calculates the logical error probability using postselection decoding. This postselects all results with trivial syndrome.

Parameters: results (dict) – A results dictionary, as produced by the process_results method of a code.

Returns: Dictionary of logical error probabilities for each of the encoded logical states whose results were given in the input.

Return type: dict
{}
# A xenon compound 'A' upon partial hydrolysis gives

Question: A xenon compound 'A' upon partial hydrolysis gives $\mathrm{XeO}_{2} \mathrm{~F}_{2}$. The number of lone pairs of electrons present in compound $\mathrm{A}$ is_______________ . (Round off to the Nearest integer)

Solution: (19) Compound $\mathrm{A}$ is $\mathrm{XeF}_{6}$; its partial hydrolysis gives $\mathrm{XeO}_{2}\mathrm{F}_{2}$: $\mathrm{XeF}_{6} + 2\mathrm{H}_{2}\mathrm{O} \rightarrow \mathrm{XeO}_{2}\mathrm{F}_{2} + 4\mathrm{HF}$. $\mathrm{XeF}_{6}$ carries one lone pair on Xe and three lone pairs on each of the six F atoms, so the total number of lone pairs on $\mathrm{A}$ is $1 + 6 \times 3 = 19$.
{}
# All the Way Down (Beta) | All the Way Down, by Margin of Error by Margin of Error ### Preview Type: JPEG image Margin of Error Saturday, 21 May 2016, 2:44 PM Saturday, 21 May 2016, 2:45 PM 38.6K (39562 bytes)
{}
H. It's showtime

time limit per test: 1 second
memory limit per test: 256 megabytes
input: standard input
output: standard output

You are given a mysterious language (codenamed "UnknownX") available in the "Custom Test" tab. Find out what this language is, and use it to solve the following problem.

You are given an integer $input = 1000 * n + mod$ ($1 \le n, mod \le 999$). Calculate the double factorial of $n$ modulo $mod$.

Input

The input contains a single integer $input$ ($1001 \le input \le 999999$). You are guaranteed that $input \mod 1000 \neq 0$.

Output

Output a single number.

Examples

Input
6100
Output
48

Input
9900
Output
45

Input
100002
Output
0

Input
123456
Output
171

Note

In the first test case you need to calculate $6!! \mod 100$; $6!! = 6 * 4 * 2 = 48$.

In the second test case you need to calculate $9!! \mod 900$; $9!! = 9 * 7 * 5 * 3 = 945$.

In the third test case you need to calculate $100!! \mod 2$; you can notice that $100!!$ is a multiple of 100 and thus is divisible by 2.
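Setting the mystery language aside, the required computation itself is only a few lines; here is a reference sketch in Python (not the intended "UnknownX" solution):

```python
def double_factorial_mod(n, mod):
    """Compute n!! mod m by multiplying n, n-2, n-4, ... down to 1 or 2."""
    result = 1 % mod          # handles mod == 1 correctly
    for k in range(n, 0, -2):
        result = result * k % mod
    return result

def solve(inp):
    n, mod = divmod(inp, 1000)  # input = 1000 * n + mod
    return double_factorial_mod(n, mod)

print(solve(6100), solve(9900), solve(100002), solve(123456))  # 48 45 0 171
```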
{}
## Weekly Problem, February 1, 2016

Let $A_1$ and $A_2$ be $n\times n$ normal matrices. If $A_1B=BA_2$, show that $A_1^\ast B=BA_2^\ast$.

Proof sketch: since $A_1$ and $A_2$ are normal, they are unitarily diagonalizable,

$A_1=U_1D_1U_1^\ast,~~A_2=U_2D_2U_2^\ast.$

Substituting into $A_1B=BA_2$ gives

$U_1D_1U_1^\ast B=BU_2D_2U_2^\ast,$

and multiplying by $U_1^\ast$ on the left and $U_2$ on the right,

$D_1U_1^\ast BU_2=U_1^\ast BU_2D_2.$

Write $M=U_1^\ast BU_2$, $D_1=\operatorname{diag}(d_1,\ldots,d_n)$ and $D_2=\operatorname{diag}(e_1,\ldots,e_n)$. Then $D_1M=MD_2$ says $(d_i-e_j)m_{ij}=0$ for every entry, so $m_{ij}=0$ whenever $d_i\neq e_j$; but then $(\bar{d}_i-\bar{e}_j)m_{ij}=0$ as well, which is exactly $D_1^\ast M=MD_2^\ast$, i.e.

$D_1^\ast U_1^\ast BU_2=U_1^\ast BU_2D_2^\ast.$

Multiplying back by $U_1$ on the left and $U_2^\ast$ on the right yields $A_1^\ast B=BA_2^\ast$.
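The statement (a form of the Fuglede-Putnam theorem) is easy to check numerically; below is a pure-Python sketch on a hand-picked example, where $A_1$ and $A_2$ are normal diagonal matrices and $B$ swaps the coordinates:

```python
# Verify: A1 B = B A2 implies A1* B = B A2* on a small example.
# A1 = diag(i, 2) and A2 = diag(2, i) are normal; B is the swap matrix.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def adjoint(X):  # conjugate transpose
    return [[X[j][i].conjugate() for j in range(len(X))]
            for i in range(len(X[0]))]

A1 = [[1j, 0], [0, 2]]
A2 = [[2, 0], [0, 1j]]
B  = [[0, 1], [1, 0]]

assert matmul(A1, B) == matmul(B, A2)                    # hypothesis
assert matmul(adjoint(A1), B) == matmul(B, adjoint(A2))  # conclusion
print("verified on this example")
```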
{}
Hello, I was just registering my WD Red drive when I noticed a software update for the drive. Is this a recommended update, or only if the drive is having issues? The other question I have is which one to download. They have three choices listed:

wd5741.exe (x86 and x64)
wd5741x32
wd5741x64

thanks

Hello, if you are using this drive in Windows then you need: wd5741.exe (x86 and x64). If you are using Linux then it is "wd5741x32" or "wd5741x64".
{}
Publication PrePrints

Ensembles of alpha-Trees for Imbalanced Classification Problems

IEEE Transactions on Knowledge and Data Engineering, PrePrint, ISSN: 1041-4347

Yubin Park, The University of Texas at Austin, Austin
Joydeep Ghosh, The University of Texas at Austin, Austin

This paper introduces two kinds of decision tree ensembles for imbalanced classification problems, extensively utilizing properties of $\alpha$-divergence. First, a novel splitting criterion based on $\alpha$-divergence is shown to generalize several well-known splitting criteria such as those used in C4.5 and CART.
When the $\alpha$-divergence splitting criterion is applied to imbalanced data, one can obtain decision trees that tend to be less correlated ($\alpha$-diversification) by varying the value of $\alpha$. This increased diversity in an ensemble of such trees improves AUROC values across a range of minority class priors. The second ensemble uses the same alpha trees as base classifiers, but uses a lift-aware stopping criterion during tree growth. The resultant ensemble produces a set of interpretable rules that provide higher lift values for a given coverage, a property that is much desired in applications such as direct marketing. Experimental results across many class-imbalanced datasets, including the BRFSS and MIMIC datasets from the medical community and several sets from UCI and KEEL, are provided to highlight the effectiveness of the proposed ensembles over a wide range of data distributions and degrees of class imbalance.

Index Terms: data mining, classification, clustering, pattern recognition, database applications, association rules

Citation: Yubin Park, Joydeep Ghosh, "Ensembles of alpha-Trees for Imbalanced Classification Problems," IEEE Transactions on Knowledge and Data Engineering, 31 Dec. 2012. IEEE Computer Society Digital Library. IEEE Computer Society, <http://doi.ieeecomputersociety.org/10.1109/TKDE.2012.255>
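For readers unfamiliar with $\alpha$-divergence, the generic Amari form for discrete distributions is sketched below. This is the standard definition, not necessarily the exact splitting criterion used in the paper; its limits at $\alpha \to 1$ and $\alpha \to 0$ recover the two KL divergences, which is the sense in which varying $\alpha$ interpolates between familiar criteria:

```python
import math

def alpha_divergence(p, q, alpha):
    """Amari alpha-divergence between discrete distributions p and q."""
    if alpha in (0.0, 1.0):
        # Limits: alpha -> 1 gives KL(p||q); alpha -> 0 gives KL(q||p).
        a, b = (p, q) if alpha == 1.0 else (q, p)
        return sum(ai * math.log(ai / bi) for ai, bi in zip(a, b) if ai > 0)
    s = sum(alpha * pi + (1 - alpha) * qi - pi ** alpha * qi ** (1 - alpha)
            for pi, qi in zip(p, q))
    return s / (alpha * (1 - alpha))

print(alpha_divergence([0.9, 0.1], [0.5, 0.5], 0.5))  # positive for p != q
```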
{}
# Partially Ordered Set

Let $P$ be a set and let $\leq$ be a relation on $P$. We say that $\leq$ is a partial order when it is reflexive, antisymmetric, and transitive. In this situation, we say that $(P, \leq)$ is a partially ordered set.
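For instance, divisibility on the positive integers is a partial order, while strict inequality is not (it fails reflexivity). A small Python sketch (an illustration, not part of the original page) checks the three axioms on a finite set:

```python
def is_partial_order(P, leq):
    """Check that leq is reflexive, antisymmetric, and transitive on P."""
    reflexive = all(leq(a, a) for a in P)
    antisymmetric = all(not (leq(a, b) and leq(b, a)) or a == b
                        for a in P for b in P)
    transitive = all(not (leq(a, b) and leq(b, c)) or leq(a, c)
                     for a in P for b in P for c in P)
    return reflexive and antisymmetric and transitive

divides = lambda a, b: b % a == 0
print(is_partial_order(range(1, 13), divides))             # True
print(is_partial_order(range(1, 13), lambda a, b: a < b))  # False: not reflexive
```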
{}
Ufimskii Matematicheskii Zhurnal

Ufimsk. Mat. Zh., 2019, Volume 11, Issue 2, Pages 19–35 (Mi ufa469)

Difference schemes for partial differential equations of fractional order

A. K. Bazzaev, I. D. Tsopanov

Khetagurov North-Ossetia State University, Vatutina str., 44-46, 362025, Vladikavkaz, Russia

Abstract: Nowadays, fractional differential equations arise while describing physical systems with such properties as power nonlocality, long-term memory and fractal property. The order of the fractional derivative is determined by the dimension of the fractal. Fractional mathematical calculus in the theory of fractals and physical systems with memory and non-locality becomes as important as classical analysis in continuum mechanics. In this paper we consider higher order difference schemes of approximation for differential equations with fractional-order derivatives with respect to both spatial and time variables. Using the maximum principle, we obtain a priori estimates and prove the stability and the uniform convergence of the difference schemes.

Keywords: initial-boundary value problem, fractional differential equations, Caputo fractional derivative, stability, slow diffusion equation, difference scheme, maximum principle, uniform convergence, a priori estimate, heat capacity concentrated at the boundary.

Full text: PDF file (473 kB)

English version: Ufa Mathematical Journal, 2019, 11:2, 19–33; https://doi.org/10.13108/2019-11-2-19

UDC: 519.633
MSC: 65M12

Citation: A. K. Bazzaev, I. D. Tsopanov, "Difference schemes for partial differential equations of fractional order", Ufimsk. Mat.
Zh., 11:2 (2019), 19–35; Ufa Math. J., 11:2 (2019), 19–33
{}
Linear Algebra, Water Flow Problem

1. The problem statement, all variables and given/known data

2. Relevant equations

Flow into a node = flow out of a node. Turned the relative flows through each node (A, B, C, D, E, F) into a system of equations, and entered the system of equations into an augmented matrix as shown. Reduced the matrix to REF.

3. The attempt at a solution

I am not certain if I can be more exact with part a, f_5 and f_6. I entered all the pertinent equations into the system possible for each node. (Is there another equation that could be entered, to help reduce the answer?) For part b, I have no visual explanation, as per the diagram, as to why f_1 = f_6.

Okay, this is a lot of work to sift through, so I'll just rework it. If we end up with the same thing, it'll just be a lot of work for nothing. Anyway, here goes: So, as you said, flow in = flow out.
So, I can set up a system of equations: A: $$f_{3} + 200 = f_{1} + 100$$ B: $$f_{1} + 150 = f_{2} + f_{4}$$ C: $$f_{2} + f_{5} = 200 + 100$$ D: $$f_{6} + 100 = f_{3} + 200$$ E: $$f_{4} + f_{7} = f_{6} + 100$$ F: $$150 + 100 = f_{5} + f_{7}$$ Now I can rearrange these to make them easier to put into a matrix: A: $$-f_{1} + f_{3} = -100$$ B: $$f_{1} - f_{2} - f_{4} = -150$$ C: $$f_{2} + f_{5} = 300$$ D: $$-f_{3} + f_{6} = 100$$ E: $$f_{4} - f_{6} + f_{7} = 100$$ F: $$f_{5} + f_{7} = 250$$ Now, writing this as a matrix, we get: $$\left( \begin{array}{cccccccc} -1 & 0 & 1 & 0 & 0 & 0 & 0 & -100 \\ 1 & -1 & 0 & -1 & 0 & 0 & 0 & -150 \\ 0 & 1 & 0 & 0 & 1 & 0 & 0 & 300 \\ 0 & 0 & -1 & 0 & 0 & 1 & 0 & 100 \\ 0 & 0 & 0 & 1 & 0 & -1 & 1 & 100 \\ 0 & 0 & 0 & 0 & 1 & 0 & 1 & 250 \end{array} \right)$$ Now, writing this in reduced row echelon form, we get: $$\left( \begin{array}{cccccccc} 1 & 0 & 0 & 0 & 0 & -1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & -1 & 50 \\ 0 & 0 & 1 & 0 & 0 & -1 & 0 & -100 \\ 0 & 0 & 0 & 1 & 0 & -1 & 1 & 100 \\ 0 & 0 & 0 & 0 & 1 & 0 & 1 & 250 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array} \right)$$ Now, we can rewrite these as equations to see what we get (notice that we have $$f_{6}$$ and $$f_{7}$$ being free variables since the last row is completely zero... therefore $$f_{7} = r$$ and $$f_{6} = s$$ where r and s are any real numbers). I am not sure if we can make both arbitrary since we only have 6 equations and there are 7 unknowns, but I think that's what we have to do.... Assuming this is the correct way to do it, the following work is logical. Now, we get: $$f_{1} = s$$ $$f_{2} = r + 50$$ $$f_{3} = s - 100$$ $$f_{4} = s - r + 100$$ $$f_{5} = -r + 250$$ $$f_{6} = s$$ $$f_{7} = r$$ where $$r,s \geq 0$$ (because there can't be negative flow) part b) since $$f_{1} = s$$ and $$f_{6} = s$$, these two need to be the same; therefore one cannot be 100 and the other be 150.
part c) If $$f_{4} = 0$$, we get: $$s - r + 100 = 0$$ This implies that $$s = r - 100$$ and $$r = s + 100$$. Substituting into the equations, we get: $$f_{1} = s$$ $$f_{2} = s + 150$$ $$f_{3} = s - 100$$ $$f_{4} = 0$$ $$f_{5} = 150 - s$$ $$f_{6} = s$$ $$f_{7} = s + 100$$. Now, knowing that we can't have negative flow, we see that $$f_{5}$$ tells us that s cannot be greater than 150, and $$f_{3}$$ tells us that s cannot be less than 100. Therefore, $$100 \leq s \leq 150$$. Using this, we can find the range of flow on each: $$100 \leq f_{1} \leq 150$$ $$250 \leq f_{2} \leq 300$$ $$0 \leq f_{3} \leq 50$$ $$f_{4} = 0$$ (this was given). $$0 \leq f_{5} \leq 50$$ $$100 \leq f_{6} \leq 150$$ $$200 \leq f_{7} \leq 250$$ We should get the same ranges if we would have written all the equations in terms of r instead of s. I'm not sure if everything I did is correct, but it seems to make sense to me. A second opinion would probably be nice. Anyway, that's my input. Good luck. Wow jacobpm64, thanks sincerely. This statement was particularly helpful: "notice that we have f_6 and f_7 being free variables since the last row is completely zero..." Also, knowing that I'm not totally off base gives me confidence to progress forward... I originally started with two variables, but then I (pointlessly) opted to try to correlate them. Well, I mean, sometimes you'll have extra things that can give you more equations. Especially when you're dealing with circuits: you have Kirchhoff's laws. There are two different things to look for, so you can easily get more equations. I couldn't see anything right off in this problem though to give more equations. If you don't have enough information, heck, just make them arbitrary.
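The parametric solution worked out above is easy to sanity-check numerically. The sketch below (names are mine, not from the thread) plugs arbitrary values of r and s back into the original node equations:

```python
def flows(r, s):
    """Parametric solution derived above (free variables r = f7, s = f6)."""
    return dict(f1=s, f2=r + 50, f3=s - 100, f4=s - r + 100,
                f5=250 - r, f6=s, f7=r)

def satisfies_node_equations(f):
    return (f["f3"] + 200 == f["f1"] + 100 and      # node A
            f["f1"] + 150 == f["f2"] + f["f4"] and  # node B
            f["f2"] + f["f5"] == 300 and            # node C
            f["f6"] + 100 == f["f3"] + 200 and      # node D
            f["f4"] + f["f7"] == f["f6"] + 100 and  # node E
            f["f5"] + f["f7"] == 250)               # node F

print(all(satisfies_node_equations(flows(r, s))
          for r in range(0, 251, 50) for s in range(100, 151, 10)))  # True
```

Note also that f1 and f6 both equal s for every choice of the free variables, which is the algebraic answer to part b.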
{}
# Images and Preimages of Subobjects under the Morphisms in a New Category of Fuzzy Sets-I

Fuzzy Inf. Eng. (2012) 3: 273-291
DOI 10.1007/s12543-012-0116-y
ORIGINAL ARTICLE

Aparna Jain · Naseem Ajmal

Received: 8 July 2010 / Revised: 10 July 2012 / Accepted: 1 August 2012 / © Springer-Verlag Berlin Heidelberg and Fuzzy Information and Engineering Branch of the Operations Research Society of China

Abstract This paper is the third in a sequence of papers on categories by the same authors. In one of the papers, a new category of fuzzy sets was defined and a few results were established pertaining to that special category of fuzzy sets S. Here, the concept of a fuzzy subset of a fuzzy set is defined under the category S. Besides, the notions of images and preimages of fuzzy sets are also defined under morphisms in the category of fuzzy sets, and how smoothly these images and preimages behave under the action of these morphisms is analyzed. Finally, results have been proved on the algebra of morphisms of this category S.

Keywords Category · Fuzzy set · Subobject · Fuzzy subset · Monomorphism · Image · Preimage

1. Introduction

In 1967, Goguen laid a categorical foundation to the theory of fuzzy sets by introducing the category of L-fuzzy sets. Since then, several authors have studied various aspects in the categories of fuzzy sets and fuzzy groups. Most of the work done in this context is based on Goguen's category. To have an overview of a few works done in this regard, readers are referred to [7, 14, 16-19, 23, 26]. M. Winter [23] showed how Goguen categories are a suitable extension of the theory of binary relations to the fuzzy world. F.
Bayomi [7] studied the behaviour of functors to and fro between the category of crisp topological spaces and the category of L-topological spaces with a special reference to topological groups.

Aparna Jain, Department of Mathematics, Shivaji College, University of Delhi, New Delhi, India, email: jainaparna@yahoo.com
Naseem Ajmal, Department of Mathematics, Zakir Hussain College, University of Delhi, New Delhi, India, email: nasajmal@yahoo.com

Sergey A. Solovyov [16, 17] introduced a category X(A), which was a generalization of the category of lattice valued subsets of A. Dan Ralescu [14], in his paper, defined the category of C-sets, again a generalization of Goguen's category of L-fuzzy sets. He replaced a lattice L by an arbitrary category C. Then the degree of membership is no longer a point in a lattice; it is rather an object in a category. Various points in that category of C-sets can thus be compared using morphisms between their membership degrees. Lawrence N. Stout [18, 19] tried to compare fuzzy logic and topos logic, in which the basis was again Goguen's category. Zaidi and Ansari [26] introduced a few subcategories of Goguen's category of L-fuzzy subgroups and studied their properties. It is worthwhile here to mention one of the statements quoted in the works of Lawrence N. Stout: "In Goguen's category, objects have fuzzy boundary with fuzziness measured in L, but the maps are crisp. Goguen suggests that a nicer category may result in taking maps which are fuzzy as well." In [4], A. Jain and N. Ajmal introduced a new category G of fuzzy groups in which the object class was the class of all fuzzy groups in all groups. This category differed from the already known Goguen's category [9] of fuzzy groups in the sense that the two categories comprised different notions of morphisms.
Thus, whereas most of the work done by various authors was based on Goguen's category with the nature of the objects changed, here we propose a category in which the morphisms are changed. Ever since the introduction of the Metatheorem and subdirect product theorem by Tom Head [20] and the work of A. Weinberger [21, 22], it is sufficiently clear that the concept of a fuzzy group comprises a simple fibring of groups. Motivated by the works of Tom Head and Weinberger, these authors felt that there was a need to define the notion of a morphism in the category G of fuzzy groups in such a way that it consisted of a fibring of mappings (homomorphisms), which acted differently on different fibres, and the same was achieved in our paper [5]. Moreover, it was due to the nature of these morphisms that we were able to construct the reflective subcategories in our category of fuzzy groups in [4]. It is to be noticed that such subcategories do not exist in Goguen's framework. Dan Ralescu in [14] defined fuzzy subobjects of usual categories. On the other hand, in our category of fuzzy groups discussed in [5], objects were structured fuzzy sets and we considered the categorical subobjects of these objects. In Goguen's category of fuzzy groups, a morphism between two objects (X, μ) and (Y, η) was simply a group homomorphism f between their underlying groups satisfying the property that μ(x) ≤ η(f(x)), ∀ x ∈ X. Whereas in our category of fuzzy groups G, a morphism between two G-objects is a family of homomorphisms between their level subgroups satisfying a few obvious chain conditions. In [4], the nature of morphisms in the category G was discussed. Consequently, the important notion of a fuzzy subgroup of a fuzzy group was introduced in [5]. It was also demonstrated in [4] that this category G has uncountably many reflective subcategories.
In [5], the category G of fuzzy groups was further studied and a parallel category S of fuzzy sets was defined, whose object class is the class of all fuzzy sets in all sets, and a morphism between two S-objects is not just a single mapping between their parent sets; rather it is a family of mappings between their level subsets, again with a few obvious chain conditions. In [4], the subobjects of the category G were considered, and the characterization of monomorphisms gives rise to the notion of a fuzzy subgroup of a fuzzy group in the same way as the notion of a subgroup of a group arises in the category Grp of ordinary groups. In [5] the notion of a fuzzy subset of a fuzzy set in the category S was defined. The notions of lower well ordered and upper well ordered fuzzy subsets were also defined, and using these notions it was proved that S(μ), the collection of all fuzzy subsets of a fuzzy set μ, is a complete lattice if μ satisfies the property of being lower or upper well ordered. A similar result was proved for L(μ), the collection of all fuzzy subgroups of a fuzzy group μ. In the present paper, we examine the category S in the framework of the algebra of morphisms. The images and preimages of a fuzzy subset of a fuzzy set under an S-morphism are defined. These definitions are carefully formulated using the chain properties of level subsets of a fuzzy set. Following this, we prove some interesting results on the algebra of morphisms in the category S. Then, a subcategory S of the category S is defined with a restricted object class as compared to the object class of S. The object class of S consists of all fuzzy sets with finite range sets, and its morphism class is the same as that of S. Some very important results on the algebra of morphisms can be obtained in the category S (see Propositions 4.1, 4.2, 4.4 and 4.5).
All these results show the compatibility of our definitions of S, and of images and preimages of fuzzy subsets, with the algebraic properties of sets that exist in classical set theory.

2. Preliminaries

Zadeh [25] defined a fuzzy set as a function from a nonempty set to the closed unit interval.

Definition 2.1 [8] Let μ be a fuzzy set in a set X and let t ∈ [0, 1]. Then the t-cut μ_t of μ is defined as: μ_t = {x ∈ X : μ(x) ≥ t}. Observe that if t > s, then μ_t ⊆ μ_s.

Assuming that the reader is familiar with the definition of a category, a few related concepts of category theory are recalled in the following definitions. Here C is any category and A, B are C-objects.

Definition 2.2 [10] A C-morphism f : A → B is said to be a monomorphism in C if for all C-morphisms h and k such that f ◦ h = f ◦ k, it follows that h = k.

Definition 2.3 [10] Let A, B be C-objects and let f : A → B be a monomorphism. Then (A, f) is called a subobject of B.

Definition 2.4 [10] A category F is a subcategory of a category H if

(i) Ob(F) ⊆ Ob(H);

(ii) [A, B]_F ⊆ [A, B]_H ∀ A, B ∈ Ob(F), where [A, B]_F denotes the collection of all F-morphisms from A to B;

(iii) every F-identity is an H-identity;

(iv) the composition function of F is the restriction of the corresponding function of H.

If in addition F satisfies the condition:

(v) [A, B]_F = [A, B]_H ∀ A, B ∈ Ob(F),

then F is called a full subcategory of H. Readers are referred to [1-3, 6, 15, 24] for details on fuzzy sets and categories.

3. Category S of Fuzzy Sets and the Image and Preimage of a Fuzzy Subset under an S-morphism

We first recall the definition of the category S of fuzzy sets from [5]. Notice that the object class of S consists of ordered pairs (X, μ), where X is a set and μ is a fuzzy subset of X. We shall call the pair (X, μ) a fuzzy set in the category S. When there is no likelihood of any confusion about the base set, we shall briefly denote it by μ.
Similarly, the phrase "a fuzzy group in a group G" is used, instead of "a fuzzy subgroup of a group G", for the objects in the category G of fuzzy groups. This is because the objects of our category G are fuzzy groups in all groups, and a fuzzy subgroup of a fuzzy group is defined using the notion of a subobject in the category G of fuzzy groups; similar notions and phrases are used in the category S.

Definition 3.1 S is the quintuple S = (O, M, dom, cod, ◦), where
(i) O is the class of all fuzzy sets in all sets. Members of O are called S-objects.
(ii) M is the class of all S-morphisms, where an S-morphism f between two S-objects μ and θ, written f : μ → θ, is a pair f = ({f_t}_{t∈Im μ}, α) satisfying the following axioms:
(a) α : Im μ → Im θ is an order preserving map.
(b) ∀ t ∈ Im μ, f_t : μ_t → θ_{α(t)} is a mapping.
(c) If t_i > t_j in Im μ, and A and B are subsets of μ_{t_i} and μ_{t_j} respectively such that A ⊆ B, then f_{t_i}(A) ⊆ f_{t_j}(B).
(d) If t_i > t_j in Im μ, and C and D are subsets of θ_{α(t_i)} and θ_{α(t_j)} respectively such that C ⊆ D, then f_{t_i}^{-1}(C) ⊆ f_{t_j}^{-1}(D).
(iii) dom and cod are functions from M to O. If f is a morphism from μ to θ, then dom(f) = μ and cod(f) = θ.
(iv) ◦ is a function from D = {(f, g) : f, g ∈ M and dom(f) = cod(g)} into M, called the composition law of S. Let (f, g) ∈ D, μ = dom(g), η = cod(f) and dom(f) = cod(g) = θ, with f = ({f_r}_{r∈Im θ}, α) and g = ({g_t}_{t∈Im μ}, β). Define the composition of f and g as f ◦ g = ({f_{β(t)} ◦ g_t}_{t∈Im μ}, α ◦ β). Since f ◦ g turns out to be an S-morphism, we set ◦(f, g) = f ◦ g.

It can easily be verified that f ◦ g is in fact an S-morphism. Moreover, identity morphisms exist in S, and the composition of morphisms is associative. Let us briefly recall the definition of the category G of fuzzy groups introduced in [4].
The objects of the category G are ordered pairs (G, μ), where G is a group and μ is a fuzzy subgroup of G. The pair (G, μ) is called a fuzzy group and is briefly denoted by μ. The morphism class of G consists of pairs f = ({f_t}_{t∈Im μ}, α), where f is a morphism from the fuzzy group μ to the fuzzy group θ. Here α is an order preserving map from Im μ to Im θ, and {f_t}_{t∈Im μ} is a family of homomorphisms from the level subgroup μ_t to the level subgroup θ_{α(t)} satisfying the other axioms of Definition 3.1. Readers are referred to [4] for details. Let us recall here the following notion from [4], which gives rise to the concept of a fuzzy subgroup of a fuzzy group in the category G.

Definition 3.2 A G-morphism f = ({f_t}_{t∈Im μ}, α) is said to be an M-morphism if α is injective and each f_t, t ∈ Im μ, is also injective.

Following that is the characterisation of the monomorphisms of G.

Theorem 3.1 [4] A G-morphism f is an M-morphism if and only if f is a monomorphism.

It is clear from Definition 2.3 that in any category, a subobject of an object is a pair consisting of an object and a monomorphism. Most often, a subobject is identified with its associated monomorphism. Moreover, it is well known that in any category whose objects are algebraic structures, subobjects give rise to subalgebras. For example, in the category Grp of ordinary groups, subobjects arising from monomorphisms give rise to the notion of a subgroup of a group. Further, in any category of algebraic structures in which the images and preimages of objects under morphisms are defined, the notion of a subalgebra arises naturally. For example, if (G, f) is a subobject in the category Grp, then f(G) is a subgroup of its codomain. In the reverse direction, if G is a subgroup of a group H, then the pair (G, i) provides a subobject, where i is the inclusion map from G to H.
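On finite data, the M-morphism condition of Definition 3.2 reduces to elementary injectivity checks, which Theorem 3.1 then identifies with being a monomorphism. The following Python sketch (illustrative only; the dict encoding of α and of the level maps f_t is our own assumption, and the group structure is ignored) tests that condition:

```python
# Sketch (illustrative only): the M-morphism test of Definition 3.2 on finite data.
# A morphism is stored as (alpha, {t: f_t}), where alpha and every level map f_t
# are dicts representing functions between finite sets.

def is_injective(mapping):
    """True iff the dict `mapping` represents an injective function."""
    values = list(mapping.values())
    return len(values) == len(set(values))

def is_m_morphism(alpha, level_maps):
    """Definition 3.2: alpha injective and every level map f_t injective."""
    return is_injective(alpha) and all(is_injective(f) for f in level_maps.values())

# An injective alpha together with two injective level maps:
alpha = {0.5: 0.4, 1.0: 0.9}
level_maps = {
    0.5: {"a": "x", "b": "y", "c": "z"},   # f_{0.5} on the 0.5-level subset
    1.0: {"a": "x"},                        # f_{1.0} on the 1.0-level subset
}
assert is_m_morphism(alpha, level_maps)

# Collapsing two points at level 0.5 destroys the property:
level_maps[0.5]["b"] = "x"
assert not is_m_morphism(alpha, level_maps)
```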
Thus, there is a one-to-one correspondence between the subobjects of an object (a group) in the category Grp and the subgroups of that group. In our category G of fuzzy groups, since the images of objects under morphisms are defined [5], a similar treatment is carried out to formulate the notion of a fuzzy subgroup of a fuzzy group. We discuss here the motivation behind the three axioms of Definition 3.3. Let ((G, μ), f) be a subobject in the category G and let (H, θ) be the codomain of f. Then f is the pair f = ({f_t}_{t∈Im μ}, α), where α is an injective order preserving map from Im μ to Im θ and, for each t ∈ Im μ, f_t is an injective homomorphism from μ_t to θ_{α(t)} (in view of Theorem 3.1). Notice that for each t ∈ Im μ, f_t is a group homomorphism from the subgroup μ_t to the subgroup θ_{α(t)}, as in the following result.

Proposition 3.1 Let (X, μ), (Y, θ) be G-objects, let f : (X, μ) → (Y, θ) be a G-morphism, f = ({f_t}_{t∈Im μ}, α), and let Im μ = {t_i}_{i∈Λ}. Then f(μ)_{α(t_i)} = f_{t_i}(μ_{t_i}) ∀ α(t_i) ∈ Im f(μ).

Now, f_t(μ_t) is a subgroup of θ_{α(t)}. Further, due to Axiom (c) of a G-morphism, {f_t(μ_t)}_{t∈Im μ} is an ascending chain of subgroups of H, and thus the union ∪_{t∈Im μ} f_t(μ_t) is a subgroup of H. Therefore, we have the following:
(i) ∪_{t∈Im μ} f_t(μ_t) is a subgroup of H,
(ii) α(Im μ) ⊆ Im θ,
(iii) f(μ)_{α(t)} is a subgroup of θ_{α(t)} ∀ α(t) ∈ Im f(μ).
Thus, any subobject in the category G gives rise to these three properties. These facts motivated us to define in [5] a fuzzy subgroup of a fuzzy group in G, given here as Definition 3.3. Notice that in the following definition, since μ and θ are fuzzy groups in G and H respectively, the level subsets μ_t and θ_t are subgroups of G and H respectively.

Definition 3.3 [5] A fuzzy group (G, μ) is said to be a fuzzy subgroup of a fuzzy group (H, θ) if
(i) G is a subgroup of H,
(ii) Im μ ⊆ Im θ,
(iii) μ_t is a subgroup of θ_t ∀ t ∈ Im μ.
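The three containment conditions above — and their set-level analogues in Definition 3.4 below — are directly checkable on finite data. A Python sketch (illustrative only; fuzzy sets as dicts from elements to grades, with the group structure of the levels deliberately ignored):

```python
# Sketch (illustrative only): the three set-theoretic conditions of
# Definitions 3.3/3.4 for finite fuzzy sets stored as dicts x -> grade:
# (i) carrier containment, (ii) image containment, (iii) levelwise containment.

def t_cut(mu, t):
    return {x for x, grade in mu.items() if grade >= t}

def is_fuzzy_subset(mu1, mu2):
    carrier = set(mu1) <= set(mu2)                 # X' ⊆ X
    images = set(mu1.values()) <= set(mu2.values())  # Im mu' ⊆ Im mu
    levels = all(t_cut(mu1, t) <= t_cut(mu2, t)    # mu'_t ⊆ mu_t, t in Im mu'
                 for t in set(mu1.values()))
    return carrier and images and levels

theta = {"e": 1.0, "a": 0.6, "b": 0.6, "c": 0.3}

mu = {"e": 1.0, "a": 0.6}   # smaller carrier, grades drawn from Im theta
assert is_fuzzy_subset(mu, theta)

nu = {"e": 1.0, "a": 0.8}   # 0.8 is not in Im theta: condition (ii) fails
assert not is_fuzzy_subset(nu, theta)
```

Note that, as remarked after Definition 3.4, condition (iii) alone is equivalent to pointwise domination of the membership functions; condition (ii) is a genuinely extra requirement.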
Now, in the reverse direction, let (G, μ), (H, θ) be G-objects satisfying the above three conditions, i.e., G is a subgroup of H, Im μ ⊆ Im θ, and μ_t is a subgroup of θ_t ∀ t ∈ Im μ. Then we can define a monomorphism I = ({I_t}_{t∈Im μ}, i), where i : Im μ → Im θ is the inclusion map and, for each t ∈ Im μ, I_t : μ_t → θ_{i(t)} = θ_t is the inclusion homomorphism, thus providing us with a subobject ((G, μ), I) in the category G. Notice that ∪_{t∈Im μ} I_t(μ_t) = G.

We now define the notion of a fuzzy subset of a fuzzy set in the category S:

Definition 3.4 A fuzzy set (X′, μ′) is said to be a fuzzy subset of a fuzzy set (X, μ) if
(i) X′ ⊆ X,
(ii) Im μ′ ⊆ Im μ,
(iii) μ′_t ⊆ μ_t ∀ t ∈ Im μ′.

A fuzzy set μ′ which is a fuzzy subset of a fuzzy set μ will be denoted by μ′ ≼ μ or (X′, μ′) ≼ (X, μ). Notice that the third axiom in the above definition is equivalent to saying that μ′(x) ≤ μ(x) ∀ x ∈ X′. We now introduce the concepts of image and preimage of a fuzzy subset of a fuzzy set under an S-morphism f.

Definition 3.5 Let (X, μ), (X′, μ′) and (Y, η) be S-objects such that (X′, μ′) ≼ (X, μ). Let f : (X, μ) → (Y, η) be an S-morphism, f = ({f_t}_{t∈Im μ}, α), and let Im μ′ = {t_i}_{i∈Λ}. We define f(μ′), the image of the fuzzy subset (X′, μ′) of (X, μ) under the morphism f, as a fuzzy set in the union of the family {f_t(μ′_t)}_{t∈Im μ′} as follows:
f(μ′) : ∪_{t∈Im μ′} f_t(μ′_t) → [0, 1],
f(μ′)(y) = α(t_i) if y ∈ f_{t_i}(μ′_{t_i}) − ∪_{t_j>t_i} f_{t_j}(μ′_{t_j}), where t_i, t_j ∈ Im μ′.

Note that (∪_{t∈Im μ′} f_t(μ′_t), f(μ′)) is an S-object.

Definition 3.6 Let (X, μ), (Y, η) and (Y′, η′) be S-objects such that (Y′, η′) ≼ (Y, η).
Let f : (X, μ) → (Y, η) be an S-morphism, f = ({f_t}_{t∈Im μ}, α), and let Im η′ = {p_k}_{k∈Ω}. We define f^{-1}(η′), the preimage of the fuzzy subset (Y′, η′) of (Y, η) under the morphism f, as a fuzzy set in the union of the family {f_{t_k}^{-1}(η′_{p_k})}, where p_k ∈ Im η′ and t_k ∈ α^{-1}(p_k), as follows:
f^{-1}(η′) : ∪_{p_k∈Im η′, t_k∈α^{-1}(p_k)} f_{t_k}^{-1}(η′_{p_k}) → [0, 1],
f^{-1}(η′)(x) = t_i if x ∈ f_{t_i}^{-1}(η′_{p_i}) − ∪_{t_j>t_i, t_j∈α^{-1}(p_j), p_j≥p_i in Im η′} f_{t_j}^{-1}(η′_{p_j}).

Here too, note that (∪_{p_k∈Im η′, t_k∈α^{-1}(p_k)} f_{t_k}^{-1}(η′_{p_k}), f^{-1}(η′)) ∈ Ob(S).

Lemma 3.1 Let (X, μ), (X′, μ′) and (Y, η) be S-objects such that (X′, μ′) ≼ (X, μ), let f : (X, μ) → (Y, η) be an S-morphism, f = ({f_t}_{t∈Im μ}, α), and let Im μ′ = {t_i}_{i∈Λ}. Then f(μ′)_{α(t_i)} = f_{t_i}(μ′_{t_i}) ∀ α(t_i) ∈ Im f(μ′).

Proof Let α(t_i) ∈ Im f(μ′) and let y ∈ f_{t_i}(μ′_{t_i}). Suppose, if possible, f(μ′)(y) < α(t_i); that is, α(t_k) < α(t_i), where f(μ′)(y) = α(t_k). This implies t_k < t_i. Now, since f(μ′)(y) = α(t_k), we have y ∈ f_{t_k}(μ′_{t_k}) and y ∉ f_{t_n}(μ′_{t_n}) ∀ t_n > t_k in Im μ′. This contradicts the fact that t_i > t_k in Im μ′ and y ∈ f_{t_i}(μ′_{t_i}). Hence f(μ′)(y) ≥ α(t_i); that is, y ∈ f(μ′)_{α(t_i)}, and thus f_{t_i}(μ′_{t_i}) ⊆ f(μ′)_{α(t_i)}.

To show the reverse inclusion, let y ∈ f(μ′)_{α(t_i)}. Then f(μ′)(y) ≥ α(t_i); that is, α(t_j) ≥ α(t_i), where f(μ′)(y) = α(t_j).
Case I: α(t_j) = α(t_i). Then f(μ′)(y) = α(t_i), and therefore by Definition 3.5, y ∈ f_{t_i}(μ′_{t_i}).
Case II: α(t_j) ≠ α(t_i). Then α(t_j) > α(t_i). Since α is an order preserving map, this implies t_j > t_i. Now t_j, t_i ∈ Im μ′, and therefore μ′_{t_j} ⊆ μ′_{t_i}. Then by Axiom (c) of an S-morphism, we have f_{t_j}(μ′_{t_j}) ⊆ f_{t_i}(μ′_{t_i}). Since f(μ′)(y) = α(t_j), we have by Definition 3.5, y ∈ f_{t_j}(μ′_{t_j}). Hence y ∈ f_{t_i}(μ′_{t_i}).
Thus f(μ′)_{α(t_i)} ⊆ f_{t_i}(μ′_{t_i}). This gives the required equality.
Proposition 3.2 Let (X, μ), (X′, μ′) and (Y, η) be S-objects such that (X′, μ′) ≼ (X, μ), let f : (X, μ) → (Y, η) be an S-morphism, f = ({f_t}_{t∈Im μ}, α), and let Im μ′ = {t_i}_{i∈Λ}. Then f(μ′) is a fuzzy subset of η; that is, (∪_{t∈Im μ′} f_t(μ′_t), f(μ′)) ≼ (Y, η).

Proof Since μ′ ≼ μ, we have Im μ′ ⊆ Im μ and μ′_t ⊆ μ_t ∀ t ∈ Im μ′. This implies f_t(μ′_t) ⊆ η_{α(t)} ⊆ Y ∀ t ∈ Im μ′. Thus ∪_{t∈Im μ′} f_t(μ′_t) ⊆ Y. Next, let α(t_i) ∈ Im f(μ′). By Definition 3.5, this implies t_i ∈ Im μ′ ⊆ Im μ. Therefore α(t_i) ∈ Im η. Thus Im f(μ′) ⊆ Im η. Finally, to show that f(μ′)_{α(t_i)} ⊆ η_{α(t_i)} ∀ α(t_i) ∈ Im f(μ′), let α(t_i) ∈ Im f(μ′). Then t_i ∈ Im μ′ ⊆ Im μ. Since f_{t_i} is a map from μ_{t_i} to η_{α(t_i)}, we have f_{t_i}(μ′_{t_i}) ⊆ η_{α(t_i)}. Therefore, in view of Lemma 3.1, we have f(μ′)_{α(t_i)} ⊆ η_{α(t_i)} ∀ α(t_i) ∈ Im f(μ′). Hence f(μ′) is a fuzzy subset of η.

Lemma 3.2 Let (X, μ), (Y′, η′) and (Y, η) be S-objects such that (Y′, η′) ≼ (Y, η), and let f : (X, μ) → (Y, η) be an S-morphism. Then for p_k ∈ Im η′ such that α(t_k) = p_k,
f^{-1}(η′)_{t_k} = f_{t_k}^{-1}(η′_{p_k}).

Proof Let p_k ∈ Im η′ be such that α(t_k) = p_k for some t_k ∈ Im μ. Let x ∈ f^{-1}(η′)_{t_k}. Then f^{-1}(η′)(x) ≥ t_k. Now, if f^{-1}(η′)(x) = t_j, then t_j ≥ t_k. By Definition 3.6, f^{-1}(η′)(x) = t_j implies x ∈ f_{t_j}^{-1}(η′_{p_j}). If t_j = t_k, then clearly x ∈ f_{t_k}^{-1}(η′_{p_k}). And if t_j > t_k, then by Axiom (d) of an S-morphism, f_{t_j}^{-1}(η′_{p_j}) ⊆ f_{t_k}^{-1}(η′_{p_k}), which implies x ∈ f_{t_k}^{-1}(η′_{p_k}). Thus in both cases x ∈ f_{t_k}^{-1}(η′_{p_k}); that is, f^{-1}(η′)_{t_k} ⊆ f_{t_k}^{-1}(η′_{p_k}).

To prove the reverse inclusion, let x ∈ f_{t_k}^{-1}(η′_{p_k}). Suppose, if possible, f^{-1}(η′)(x) < t_k. If f^{-1}(η′)(x) = t_i, then t_i < t_k. Also, by Definition 3.6, f^{-1}(η′)(x) = t_i implies x ∈ f_{t_i}^{-1}(η′_{p_i}) and x ∉ f_{t_n}^{-1}(η′_{p_n}) ∀ t_n > t_i, p_n ≥ p_i in Im η′, α(t_n) = p_n. Thus, keeping in mind that t_k > t_i, we get x ∉ f_{t_k}^{-1}(η′_{p_k}). This contradiction establishes that f^{-1}(η′)(x) ≥ t_k. Hence x ∈ f^{-1}(η′)_{t_k}.
Therefore f_{t_k}^{-1}(η′_{p_k}) ⊆ f^{-1}(η′)_{t_k}. We thus get the required equality.

Proposition 3.3 If (X, μ), (Y′, η′) and (Y, η) are S-objects such that (Y′, η′) ≼ (Y, η) and f : (X, μ) → (Y, η) is an S-morphism, then f^{-1}(η′) is a fuzzy subset of μ; that is, (∪_{p_k∈Im η′, t_k∈α^{-1}(p_k)} f_{t_k}^{-1}(η′_{p_k}), f^{-1}(η′)) ≼ (X, μ).

Proof It is easy to verify that
∪_{p_k∈Im η′, t_k∈α^{-1}(p_k)} f_{t_k}^{-1}(η′_{p_k}) ⊆ X.  (1)
Since η′ ≼ η, we have Im η′ ⊆ Im η, and α is a map from Im μ to Im η. Therefore α^{-1}(p_k) ⊆ Im μ ∀ p_k ∈ Im η′. Thus, in view of Definition 3.6,
Im f^{-1}(η′) ⊆ Im μ.  (2)
Finally, to show that f^{-1}(η′)_t ⊆ μ_t ∀ t ∈ Im f^{-1}(η′), let t ∈ Im f^{-1}(η′). Then, by Lemma 3.2, f^{-1}(η′)_t = f_t^{-1}(η′_{α(t)}). Since f_t is a map from μ_t to η_{α(t)}, and η′_{α(t)} ⊆ η_{α(t)} ∀ α(t) ∈ Im η′, we have f_t^{-1}(η′_{α(t)}) ⊆ μ_t. Hence
f^{-1}(η′)_t ⊆ μ_t ∀ t ∈ Im f^{-1}(η′).  (3)
By (1), (2) and (3), we get that f^{-1}(η′) is a fuzzy subset of μ.

The following lemmas are easy to verify:

Lemma 3.3 Let (X, μ), (Y, η), (X′, μ′) ∈ Ob(S), let f : (X, μ) → (Y, η) be an S-morphism, and let (X′, μ′) ≼ (X, μ). If t_i < t_j in Im μ′, then f_{t_j}(μ′_{t_j}) ⊊ f_{t_i}(μ′_{t_i}).

Lemma 3.4 Let (X, μ), (Y′, η′) and (Y, η) ∈ Ob(S), let f : (X, μ) → (Y, η) be an S-morphism such that the maps f_t are surjective ∀ t ∈ Im μ, and let (Y′, η′) ≼ (Y, η). Then for any p_i < p_j in Im η′, we have f_{t_j}^{-1}(η′_{p_j}) ⊊ f_{t_i}^{-1}(η′_{p_i}), where α(t_i) = p_i and α(t_j) = p_j.

4. A Subcategory S_f of S and the Algebra of Morphisms in the Category S_f

We shall now restrict the class of objects in the category S and thus construct a subcategory S_f of S. The object class of S_f consists of all fuzzy sets with finite range sets, and the morphisms considered in S_f are the same as those in S. One can observe that S_f is a full subcategory of S. Some very important results on the algebra of morphisms are achievable for this subcategory.

Proposition 4.1 If (X, μ), (X′, μ′), (Y, η) ∈ Ob(S_f) are such that (X′, μ′) ≼ (X, μ) and f : (X, μ) → (Y, η) is an S_f-morphism, then α : Im μ′ → Im f(μ′) is a bijection.
Proof We first prove that α(t_i) ∈ Im f(μ′) ∀ t_i ∈ Im μ′. For this, let t_i ∈ Im μ′. Then t_i = μ′(x) for some x ∈ X′. This implies x ∈ μ′_{t_i}. Since f_{t_i} is a map from μ_{t_i} to η_{α(t_i)} and μ′_{t_i} ⊆ μ_{t_i}, we have f_{t_i}(x) ∈ f_{t_i}(μ′_{t_i}). Thus f_{t_i}(μ′_{t_i}) ≠ ∅.
Case I: t_i = sup Im μ′. Since f_{t_i}(μ′_{t_i}) ≠ ∅, let y ∈ f_{t_i}(μ′_{t_i}). Then by Definition 3.5, f(μ′)(y) = α(t_i); that is, α(t_i) ∈ Im f(μ′).
Case II: t_i < sup Im μ′. Then t_i < t_j for some t_j ∈ Im μ′. This implies μ′_{t_j} ⊊ μ′_{t_i}. Then by Lemma 3.3, f_{t_j}(μ′_{t_j}) ⊊ f_{t_i}(μ′_{t_i}). This is true ∀ t_j > t_i in Im μ′. Therefore
f_{t_i}(μ′_{t_i}) − ∪_{t_j>t_i, t_i,t_j∈Im μ′} f_{t_j}(μ′_{t_j}) ≠ ∅.
Thus by Definition 3.5, α(t_i) ∈ Im f(μ′).
Now, to prove that α is injective, let t_i, t_j ∈ Im μ′ be such that α(t_i) = α(t_j) in Im f(μ′). Setting α(t_i) = α(t_j) = p, suppose, if possible, t_i > t_j in Im μ′ ⊆ Im μ. Then by Lemma 3.3, f_{t_i}(μ′_{t_i}) ⊊ f_{t_j}(μ′_{t_j}). This by Lemma 3.1 implies f(μ′)_{α(t_i)} ⊊ f(μ′)_{α(t_j)}; that is, f(μ′)_p ⊊ f(μ′)_p. This contradiction implies t_i ≤ t_j. Similarly, we get t_j ≤ t_i. Thus t_i = t_j, which proves that α is injective. Also, by Definition 3.5, it is clear that if p ∈ Im f(μ′), then p = α(t_k) for some t_k ∈ Im μ′. Thus α is surjective.

Proposition 4.2 Let (X, μ), (Y, η) ∈ Ob(S_f) and let f : (X, μ) → (Y, η) be an S_f-morphism. Then (X′_1, μ′_1) ≼ (X′_2, μ′_2) ≼ (X, μ) implies f(μ′_1) ≼ f(μ′_2).

Proof It is easy to verify that
∪_{t∈Im μ′_1} f_t((μ′_1)_t) ⊆ ∪_{t∈Im μ′_2} f_t((μ′_2)_t).
Now, to prove that Im f(μ′_1) ⊆ Im f(μ′_2), let α(t_i) ∈ Im f(μ′_1). Then t_i ∈ Im μ′_1 ⊆ Im μ′_2.
Case I: t_i = sup Im μ′_2. We have α(t_i) ∈ Im f(μ′_1). This implies ∃ y ∈ ∪_{t∈Im μ′_1} f_t((μ′_1)_t) such that f(μ′_1)(y) = α(t_i). Then by Definition 3.5 we have y ∈ f_{t_i}((μ′_1)_{t_i}) ⊆ f_{t_i}((μ′_2)_{t_i}). This implies f_{t_i}((μ′_2)_{t_i}) ≠ ∅. Again by Definition 3.5, f(μ′_2)(y) = α(t_i); that is, α(t_i) ∈ Im f(μ′_2).
Case II: t_i < sup Im μ′_2. Let t_j ∈ Im μ′_2 be such that t_j > t_i. Then (μ′_2)_{t_j} ⊊ (μ′_2)_{t_i}. This by Lemma 3.3 implies f_{t_j}((μ′_2)_{t_j}) ⊊ f_{t_i}((μ′_2)_{t_i}). Now, since μ′_2 has a finite range set,
f_{t_i}((μ′_2)_{t_i}) − ∪_{t_j>t_i, t_i,t_j∈Im μ′_2} f_{t_j}((μ′_2)_{t_j}) ≠ ∅.
By Definition 3.5, this implies α(t_i) ∈ Im f(μ′_2). Hence Im f(μ′_1) ⊆ Im f(μ′_2).
Finally, we show that f(μ′_1)_{α(t_i)} ⊆ f(μ′_2)_{α(t_i)} ∀ α(t_i) ∈ Im f(μ′_1). Let α(t_i) ∈ Im f(μ′_1) and let y ∈ f(μ′_1)_{α(t_i)}. Then f(μ′_1)(y) ≥ α(t_i). Suppose, if possible, f(μ′_2)(y) < α(t_i). Then f(μ′_2)(y) < α(t_i) ≤ f(μ′_1)(y). Setting f(μ′_2)(y) = α(t_k) and f(μ′_1)(y) = α(t_j), we get α(t_k) < α(t_j). Since α is order preserving, t_k < t_j. Now, since f(μ′_1)(y) = α(t_j), we have t_j ∈ Im μ′_1. As μ′_1 ≼ μ′_2, we get (μ′_1)_{t_j} ⊆ (μ′_2)_{t_j}. This implies f_{t_j}((μ′_1)_{t_j}) ⊆ f_{t_j}((μ′_2)_{t_j}). Again, f(μ′_1)(y) = α(t_j) by Definition 3.5 implies that y ∈ f_{t_j}((μ′_1)_{t_j}). Thus
y ∈ f_{t_j}((μ′_2)_{t_j}).  (4)
Since f(μ′_2)(y) = α(t_k), again by Definition 3.5 we get y ∈ f_{t_k}((μ′_2)_{t_k}) and y ∉ f_{t_n}((μ′_2)_{t_n}) ∀ t_n > t_k in Im μ′_2. Therefore y ∉ f_{t_j}((μ′_2)_{t_j}), which contradicts (4). Hence f(μ′_2)(y) ≥ α(t_i). This implies y ∈ f(μ′_2)_{α(t_i)}. Thus f(μ′_1)_{α(t_i)} ⊆ f(μ′_2)_{α(t_i)} ∀ α(t_i) ∈ Im f(μ′_1). Hence f(μ′_1) ≼ f(μ′_2).

In the category S of fuzzy sets, we have the following result, whose proof, being similar to that of Proposition 4.2, is omitted.

Proposition 4.3 Let (X, μ), (Y, η) ∈ Ob(S) and let f : (X, μ) → (Y, η) be an S-morphism. Then (X′_1, μ′_1) ≼ (X′_2, μ′_2) ≼ (X, μ) implies the following:
(i) ∪_{t∈Im μ′_1} f_t((μ′_1)_t) ⊆ ∪_{t∈Im μ′_2} f_t((μ′_2)_t);
(ii) f(μ′_1)_{α(t_i)} ⊆ f(μ′_2)_{α(t_i)} ∀ α(t_i) ∈ Im f(μ′_1).

Proposition 4.4 Let (X, μ), (X′, μ′), (Y, η) ∈ Ob(S_f) be such that (X′, μ′) ≼ (X, μ), and let f : (X, μ) → (Y, η) be an S_f-morphism. Then μ′ ≼ f^{-1}(f(μ′)).

Proof First recall that, by Proposition 3.2, f(μ′) is a fuzzy subset of η.
For the sake of convenience, denote f(μ′) by η′. It can easily be verified that
X′ ⊆ ∪_{p_k∈Im η′, t_k∈α^{-1}(p_k)} f_{t_k}^{-1}(η′_{p_k}).
Now we show that Im μ′ ⊆ Im f^{-1}(η′). Let t_k ∈ Im μ′.
Case I: t_k = sup Im μ′. By Proposition 4.1, α : Im μ′ → Im η′ is a bijection and is order preserving; therefore p_k = α(t_k) = sup Im η′. Since t_k ∈ Im μ′, t_k = μ′(x) for some x ∈ X′. Thus x ∈ μ′_{t_k}. Therefore f_{t_k}(x) ∈ f_{t_k}(μ′_{t_k}) = f(μ′)_{α(t_k)} (by Lemma 3.1) = η′_{p_k}. Thus x ∈ f_{t_k}^{-1}(η′_{p_k}). Since p_k = sup Im η′, f^{-1}(η′)(x) = t_k by Definition 3.6. Therefore t_k ∈ Im f^{-1}(η′).
Case II: t_k < sup Im μ′. Then by Proposition 4.1, p_k = α(t_k) < sup Im η′. Let p_j ∈ Im η′ be such that p_j > p_k, and let α(t_j) = p_j. Then t_k < t_j in Im μ′ ⊆ Im μ. Therefore, by Axiom (d) of an S-morphism,
f_{t_j}^{-1}(η′_{p_j}) ⊆ f_{t_k}^{-1}(η′_{p_k}).
Since p_j > p_k in Im η′, we have η′_{p_j} ⊊ η′_{p_k}. Let y ∈ η′_{p_k} be such that y ∉ η′_{p_j}. By Lemma 3.1, y ∈ η′_{p_k} = f(μ′)_{α(t_k)} = f_{t_k}(μ′_{t_k}). This implies y = f_{t_k}(x) for some x ∈ μ′_{t_k}. Therefore x ∈ f_{t_k}^{-1}(y) ⊆ f_{t_k}^{-1}(η′_{p_k}). Suppose, if possible, x ∈ f_{t_j}^{-1}(η′_{p_j}). Since t_j > t_k in Im μ, by Axiom (d) of an S-morphism we have f_{t_j}^{-1}(η′_{p_j}) ⊆ f_{t_k}^{-1}(η′_{p_j}). Thus x ∈ f_{t_k}^{-1}(η′_{p_j}). This implies y = f_{t_k}(x) ∈ η′_{p_j}. This contradiction implies x ∉ f_{t_j}^{-1}(η′_{p_j}). Hence
f_{t_j}^{-1}(η′_{p_j}) ⊊ f_{t_k}^{-1}(η′_{p_k}) ∀ p_j > p_k in Im η′.
Therefore
f_{t_k}^{-1}(η′_{p_k}) − ∪_{t_j>t_k, t_j∈α^{-1}(p_j), p_j>p_k in Im η′} f_{t_j}^{-1}(η′_{p_j}) ≠ ∅.
By Definition 3.6, this implies t_k ∈ Im f^{-1}(η′). We thus have Im μ′ ⊆ Im f^{-1}(η′).
Finally, we show that μ′_{t_k} ⊆ f^{-1}(η′)_{t_k} ∀ t_k ∈ Im μ′. For this, let t_k ∈ Im μ′ and let x ∈ μ′_{t_k}. Then μ′(x) ≥ t_k. Suppose, if possible, f^{-1}(η′)(x) < t_k. Let f^{-1}(η′)(x) = t_i. Then t_i < t_k in Im μ′ implies p_i < p_k in Im η′, where α(t_i) = p_i.
Also, in view of Definition 3.6 and Proposition 4.1, f^{-1}(η′)(x) = t_i implies
x ∈ f_{t_i}^{-1}(η′_{p_i}) and x ∉ f_{t_n}^{-1}(η′_{p_n}) ∀ p_n > p_i in Im η′, t_n > t_i, such that α(t_n) = p_n.  (5)
Now, x ∈ μ′_{t_k} implies f_{t_k}(x) ∈ f_{t_k}(μ′_{t_k}). By Lemma 3.1 we have f_{t_k}(μ′_{t_k}) = f(μ′)_{α(t_k)} = η′_{p_k}. Thus x ∈ f_{t_k}^{-1}(η′_{p_k}), where p_k > p_i in Im η′. This contradicts (5). Hence we have f^{-1}(η′)(x) ≥ t_k; that is, x ∈ f^{-1}(η′)_{t_k}, and thus μ′_{t_k} ⊆ f^{-1}(η′)_{t_k} ∀ t_k ∈ Im μ′. Hence we arrive at the required conclusion; that is, μ′ ≼ f^{-1}(f(μ′)).

Proposition 4.5 Let (X, μ), (Y, η), (Y′_1, η′_1), (Y′_2, η′_2) ∈ Ob(S_f), and let f : (X, μ) → (Y, η) be an S_f-morphism, f = ({f_t}_{t∈Im μ}, α), such that the maps f_t are surjective ∀ t ∈ Im μ. Then (Y′_1, η′_1) ≼ (Y′_2, η′_2) ≼ (Y, η) implies f^{-1}(η′_1) ≼ f^{-1}(η′_2).

Proof It is easy to verify that
∪_{p_k∈Im η′_1, α(t_k)=p_k} f_{t_k}^{-1}((η′_1)_{p_k}) ⊆ ∪_{p_k∈Im η′_2, α(t_k)=p_k} f_{t_k}^{-1}((η′_2)_{p_k}).
Next we prove that Im f^{-1}(η′_1) ⊆ Im f^{-1}(η′_2). Let t_k ∈ Im f^{-1}(η′_1). Then by Definition 3.6 we have α(t_k) = p_k ∈ Im η′_1. Since η′_1 ≼ η′_2 ≼ η, we have Im η′_1 ⊆ Im η′_2. Thus p_k ∈ Im η′_2.
Case I: p_k = sup Im η′_2. Since p_k ∈ Im η′_2, ∃ y ∈ Y′_2 such that η′_2(y) = p_k; that is, y ∈ (η′_2)_{p_k}. Since f_{t_k} : μ_{t_k} → η_{p_k} is a surjective map and (η′_2)_{p_k} ⊆ η_{p_k}, ∃ x ∈ μ_{t_k} such that f_{t_k}(x) = y ∈ (η′_2)_{p_k}. Therefore x ∈ f_{t_k}^{-1}((η′_2)_{p_k}). Keeping in view that p_k = sup Im η′_2, by Definition 3.6 we have f^{-1}(η′_2)(x) = t_k; that is, t_k ∈ Im f^{-1}(η′_2).
Case II: p_k < sup Im η′_2. Let p_j > p_k in Im η′_2. Then (η′_2)_{p_j} ⊊ (η′_2)_{p_k}. Since Im η′_2 is finite, we have
∪_{p_j>p_k in Im η′_2} (η′_2)_{p_j} ⊊ (η′_2)_{p_k}.
Let y ∈ (η′_2)_{p_k} be such that y ∉ ∪_{p_j>p_k in Im η′_2} (η′_2)_{p_j}. Since f_{t_k} is surjective, choose x ∈ f_{t_k}^{-1}(y) ⊆ f_{t_k}^{-1}((η′_2)_{p_k}). Suppose, if possible, x ∈ ∪_{p_n>p_k in Im η′_2} f_{t_n}^{-1}((η′_2)_{p_n}). Then x ∈ f_{t_n}^{-1}((η′_2)_{p_n}) for some p_n > p_k in Im η′_2. By Axiom (d) of an S-morphism, we have f_{t_n}^{-1}((η′_2)_{p_n}) ⊆ f_{t_k}^{-1}((η′_2)_{p_n}). Thus x ∈ f_{t_k}^{-1}((η′_2)_{p_n}).
This implies y = f_{t_k}(x) ∈ (η′_2)_{p_n} ⊆ ∪_{p_j>p_k in Im η′_2} (η′_2)_{p_j}. This is a contradiction, and hence x ∉ ∪_{p_n>p_k in Im η′_2} f_{t_n}^{-1}((η′_2)_{p_n}). Therefore x ∈ f_{t_k}^{-1}((η′_2)_{p_k}) − ∪_{p_n>p_k in Im η′_2} f_{t_n}^{-1}((η′_2)_{p_n}). Then by Definition 3.6 we have t_k ∈ Im f^{-1}(η′_2). Hence
Im f^{-1}(η′_1) ⊆ Im f^{-1}(η′_2).
Finally, we show that f^{-1}(η′_1)_{t_k} ⊆ f^{-1}(η′_2)_{t_k} ∀ t_k ∈ Im f^{-1}(η′_1). Let t_k ∈ Im f^{-1}(η′_1) and let x ∈ f^{-1}(η′_1)_{t_k}. Then f^{-1}(η′_1)(x) ≥ t_k. We set f^{-1}(η′_1)(x) = t_i. Then by Definition 3.6,
x ∈ f_{t_i}^{-1}((η′_1)_{p_i}) and x ∉ f_{t_j}^{-1}((η′_1)_{p_j}) ∀ p_j ≥ p_i in Im η′_1, t_j > t_i in Im μ.  (6)
Suppose, if possible, f^{-1}(η′_2)(x) < t_k. Then t_j < t_k, where f^{-1}(η′_2)(x) = t_j. Therefore t_j < t_k ≤ t_i. Since α : Im μ → Im f(μ) is order preserving, we get α(t_j) ≤ α(t_i); that is, p_j ≤ p_i in Im η′_2. Since f^{-1}(η′_2)(x) = t_j and t_j < t_i, by Definition 3.6 we get
x ∉ f_{t_i}^{-1}((η′_2)_{p_i}).  (7)
Now, since p_i ∈ Im η′_1, we have (η′_1)_{p_i} ⊆ (η′_2)_{p_i} ⊆ η_{p_i}, as η′_1 ≼ η′_2 ≼ η. Therefore f_{t_i}^{-1}((η′_1)_{p_i}) ⊆ f_{t_i}^{-1}((η′_2)_{p_i}). This by (6) implies x ∈ f_{t_i}^{-1}((η′_2)_{p_i}), which contradicts (7). Hence we have f^{-1}(η′_2)(x) ≥ t_k; that is, x ∈ f^{-1}(η′_2)_{t_k}. Therefore, ∀ t_k ∈ Im f^{-1}(η′_1), we have f^{-1}(η′_1)_{t_k} ⊆ f^{-1}(η′_2)_{t_k}. Hence f^{-1}(η′_1) ≼ f^{-1}(η′_2).

In the category S of fuzzy sets, we have the following result, whose proof, being similar to that of Proposition 4.5, is omitted.

Proposition 4.6 Let (X, μ), (Y, η), (Y′_1, η′_1), (Y′_2, η′_2) ∈ Ob(S), let f : (X, μ) → (Y, η) be an S-morphism, and let (Y′_1, η′_1) ≼ (Y′_2, η′_2) ≼ (Y, η). Then we have the following:
(i) ∪_{p_k∈Im η′_1, α(t_k)=p_k} f_{t_k}^{-1}((η′_1)_{p_k}) ⊆ ∪_{p_k∈Im η′_2, α(t_k)=p_k} f_{t_k}^{-1}((η′_2)_{p_k});
(ii) f^{-1}(η′_1)_{t_k} ⊆ f^{-1}(η′_2)_{t_k} ∀ t_k ∈ Im f^{-1}(η′_1).

Proposition 4.7 Let (X, μ), (Y, η), (Y′, η′) ∈ Ob(S), let f : (X, μ) → (Y, η) be an S-morphism, and let (Y′, η′) ≼ (Y, η). Then f(f^{-1}(η′)) ≼ η′.

Proof It is easy to verify that
∪_{t_k∈Im f^{-1}(η′)} f_{t_k}(f^{-1}(η′)_{t_k}) ⊆ Y′.
Now, to prove that Im f(f^{-1}(η′)) ⊆ Im η′, let α(t_k) ∈ Im f(f^{-1}(η′)). By Definition 3.5, we have t_k ∈ Im f^{-1}(η′). Again, by Definition 3.6, we have α(t_k) ∈ Im η′. Hence
Im f(f^{-1}(η′)) ⊆ Im η′.
Finally, we show that f(f^{-1}(η′))_{α(t_k)} ⊆ η′_{α(t_k)} ∀ α(t_k) ∈ Im f(f^{-1}(η′)). Let α(t_k) ∈ Im f(f^{-1}(η′)) and let y ∈ f(f^{-1}(η′))_{α(t_k)}. Then f(f^{-1}(η′))(y) ≥ α(t_k). Now, if f(f^{-1}(η′))(y) = α(t_i), then
y ∈ f_{t_i}(f^{-1}(η′)_{t_i}) = f_{t_i}(f_{t_i}^{-1}(η′_{α(t_i)})) (by Lemma 3.2).
But f_{t_i}(f_{t_i}^{-1}(η′_{α(t_i)})) ⊆ η′_{α(t_i)}. Therefore y ∈ η′_{α(t_i)}. Since α(t_i) ≥ α(t_k), we have η′_{α(t_i)} ⊆ η′_{α(t_k)}; that is, y ∈ η′_{α(t_k)}. Thus
f(f^{-1}(η′))_{α(t_k)} ⊆ η′_{α(t_k)} ∀ α(t_k) ∈ Im f(f^{-1}(η′)).
Hence we have f(f^{-1}(η′)) ≼ η′.

5. Conclusion

This paper attempts to answer questions raised by Goguen by defining and studying a category whose objects are not only fuzzy but whose morphisms are fuzzy as well. Various significant properties of this category G have been discussed, but the authors would like to add that it is worthwhile to investigate further properties of this category, for example to verify whether products exist in G or whether G is algebraic.

Acknowledgements

The authors are highly indebted to the learned referees for their valuable suggestions regarding the improvement of this paper.

References
1. Ajmal N (1996) Fuzzy groups with sup property. Inform. Sci. 93: 247-264
2. Ajmal N (2000) Fuzzy group theory: a comparison of different notions of product of fuzzy sets. Fuzzy Sets and Systems 110: 437-446
3. Ajmal N and Kumar S (2002) Lattices of subalgebras in the category of fuzzy groups. J. Fuzzy Math. 10(2): 359-369
4. Jain A and Ajmal N (2004) A new approach to the theory of fuzzy groups. J. Fuzzy Math. 12(2): 341-355
5. Jain A and Ajmal N (2006) Categories of fuzzy sets and fuzzy groups and the lattices of subobjects of these categories. J.
Fuzzy Math. 14(3): 573-582
6. Jain A (2006) Fuzzy subgroups and certain equivalence relations. Iranian Journal of Fuzzy Systems 3(2): 75-91
7. Bayoumi F (2005) On initial and final L-topological groups. Fuzzy Sets and Systems 156: 43-54
8. Das P S (1981) Fuzzy groups and level subgroups. J. Math. Anal. Appl. 84: 264-269
9. Goguen J A (1967) L-fuzzy sets. J. Math. Anal. Appl. 18: 145-174
10. Herrlich H and Strecker G E (1973) Category theory. Allyn and Bacon Inc.
11. Mordeson J N and Malik D S (1999) Fuzzy commutative algebra. World Scientific Pub. Co.
12. Malik D S and Mordeson J N (2000) Fuzzy discrete structures. Physica Verlag, Heidelberg
13. Mordeson J N, Malik D S and Kuroki N (2003) Fuzzy semigroups. Springer Verlag, Berlin
14. Ralescu D (1978) Fuzzy subobjects in a category and the theory of image sets. Fuzzy Sets and Systems 1: 193-202
15. Rosenfeld A (1971) Fuzzy groups. J. Math. Anal. Appl. 35: 512-517
16. Solovyov S A (2006) Categories of lattice-valued sets as categories of arrows. Fuzzy Sets and Systems 157: 843-854
17. Solovyov S A (2007) On a generalization of Goguen's category Set(L). Fuzzy Sets and Systems 158(4): 367-385
18. Stout L N (1984) Topoi and categories of fuzzy sets. Fuzzy Sets and Systems 12: 169-184
19. Stout L N, Höhle U (1991) Foundations of fuzzy sets. Fuzzy Sets and Systems 40: 257-296
20. Head T (1995) A metatheorem for deriving fuzzy theorems from crisp versions. Fuzzy Sets and Systems 73: 349-358
21. Weinberger A (1998) Embedding lattices of fuzzy subalgebras into lattices of crisp subalgebras. Information Sciences 108: 51-70
22. Weinberger A (2005) Reducing fuzzy algebra to classical algebra. New Mathematics and Natural Computation 1: 27-64
23. Winter M (2003) Representation theory of Goguen categories. Fuzzy Sets and Systems 138: 85-126
24. Wong C K (1976) Categories of fuzzy sets and fuzzy topological spaces. J. Math. Anal. Appl. 53: 704-711
25. Zadeh L A (1965) Fuzzy sets.
Information and Control 8: 338-353
26. Zaidi S M A and Ansari Q A (1994) Some results on categories of L-fuzzy subgroups. Fuzzy Sets and Systems 64: 249-256

Fuzzy Information and Engineering, Volume 4 (3), Sep 1, 2012. Publisher: Taylor & Francis. Copyright © 2012 Taylor and Francis Group, LLC. ISSN 1616-8666, eISSN 1616-8658.

Fuzzy Inf. Eng. (2012) 3: 273-291
DOI 10.1007/s12543-012-0116-y
ORIGINAL ARTICLE

Images and Preimages of Subobjects under the Morphisms in a New Category of Fuzzy Sets-I

Aparna Jain · Naseem Ajmal

Received: 8 July 2010 / Revised: 10 July 2012 / Accepted: 1 August 2012
© Springer-Verlag Berlin Heidelberg and Fuzzy Information and Engineering Branch of the Operations Research Society of China

Abstract This paper is the third in a sequence of papers on categories by the same authors. In one of the papers, a new category of fuzzy sets was defined and a few results were established pertaining to that special category of fuzzy sets S.
Here, the concept of a fuzzy subset of a fuzzy set is defined in the category S. Besides, the notions of images and preimages of fuzzy subsets are also defined under morphisms in this category of fuzzy sets, and how smoothly these images and preimages behave under the action of these morphisms is analyzed. Finally, results are proved on the algebra of morphisms of this category S.

Keywords Category · Fuzzy set · Subobject · Fuzzy subset · Monomorphism · Image · Preimage

Aparna Jain, Department of Mathematics, Shivaji College, University of Delhi, New Delhi, India; email: jainaparna@yahoo.com
Naseem Ajmal, Department of Mathematics, Zakir Hussain College, University of Delhi, New Delhi, India; email: nasajmal@yahoo.com

1. Introduction

In 1967, Goguen laid a categorical foundation for the theory of fuzzy sets by introducing the category of L-fuzzy sets. Since then, several authors have studied various aspects of the categories of fuzzy sets and fuzzy groups. Most of the work done in this context is based on Goguen's category; for an overview of a few works done in this regard, readers are referred to [7, 14, 16-19, 23, 26]. M. Winter [23] showed how Goguen categories are a suitable extension of the theory of binary relations to the fuzzy world. F. Bayoumi [7] studied the behaviour of the functors back and forth between the category of crisp topological spaces and the category of L-topological spaces, with special reference to topological groups. Sergey A. Solovyov [16, 17] introduced a category X(A), which is a generalization of the category of lattice-valued subsets of A. Dan Ralescu [14], in his paper, defined the category of C-sets, again a generalization of Goguen's category of L-fuzzy sets: he replaced the lattice L by an arbitrary category C, so that the degree of membership is no longer a point in a lattice but an object in a category.
Various points in that category of C-sets can thus be compared using morphisms between their membership degrees. Lawrence N. Stout [18, 19] compared fuzzy logic and topos logic, again on the basis of Goguen's category. Zaidi and Ansari [26] introduced a few subcategories of Goguen's category of L-fuzzy subgroups and studied their properties. It is worthwhile here to quote one of the statements from the works of Lawrence N. Stout: "In Goguen's category, objects have fuzzy boundary with fuzziness measured in L, but the maps are crisp. Goguen suggests that a nicer category may result in taking maps which are fuzzy as well." In [4], A. Jain and N. Ajmal introduced a new category G of fuzzy groups in which the object class is the class of all fuzzy groups in all groups. This category differs from the already known Goguen category [9] of fuzzy groups in the sense that the two categories comprise different notions of morphisms. Thus, whereas most of the work done by various authors was based on Goguen's category with the nature of the objects changed, here we propose a category in which the morphisms are changed. Ever since the introduction of the metatheorem and the subdirect product theorem by Tom Head [20], and the work of A. Weinberger [21, 22], it has been sufficiently clear that the concept of a fuzzy group consists of a simple fibring of groups. Motivated by the works of Tom Head and Weinberger, these authors felt that there was a need to define the notion of a morphism in the category G of fuzzy groups in such a way that it consists of a fibring of mappings (homomorphisms), which act differently on different fibres, and the same was achieved in our paper [5]. Moreover, it was due to the nature of these morphisms that we were able to construct the reflective subcategories in our category of fuzzy groups in [4]. It is to be noticed that subcategories of this type do not exist in Goguen's framework.
Dan Ralescu in [14] defined fuzzy subobjects of usual categories. On the other hand, in our category of fuzzy groups discussed in [5], the objects are structured fuzzy sets and we consider the categorical subobjects of these objects. In Goguen’s category of fuzzy groups, a morphism between two objects (X, μ) and (Y, η) is simply a group homomorphism f between their underlying groups satisfying the property that μ(x) ≤ η(f(x)) for all x ∈ X, whereas in our category of fuzzy groups G, a morphism between two G-objects is a family of homomorphisms between their level subgroups satisfying a few obvious chain conditions. In [4], the nature of morphisms in the category G was discussed. Consequently, the important notion of a fuzzy subgroup of a fuzzy group was introduced in [5]. It was also demonstrated in [4] that this category G has uncountably many reflective subcategories. In [5], the category G of fuzzy groups was studied further and a parallel category S of fuzzy sets was defined, whose object class is the class of all fuzzy sets in all sets; a morphism between two S-objects is not just a single mapping between their parent sets, but rather a family of mappings between their level subsets, again with a few obvious chain conditions. (Fuzzy Inf. Eng. (2012) 3: 273-291, p. 275.) In [4], the subobjects of the category G were considered, and the characterization of monomorphisms gives rise to the notion of a fuzzy subgroup of a fuzzy group in the same way as the notion of a subgroup of a group arises in the category Grp of ordinary groups. In [5] the notion of a fuzzy subset of a fuzzy set in the category S was defined. The notions of lower well ordered and upper well ordered fuzzy subsets were also defined and, using these notions, it was proved that S(μ), the collection of all fuzzy subsets of a fuzzy set μ, is a complete lattice if μ is lower or upper well ordered. A similar result was proved for L(μ), the collection of all fuzzy subgroups of a fuzzy group μ.
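The level subsets that these families of morphisms act on are simple to compute on a finite base set. A minimal Python sketch of the t-cut operation recalled in Definition 2.1 below (the dict representation and the sample grades are our own illustration, not the paper’s notation):

```python
# A fuzzy set mu on a finite base set X, stored as x -> membership grade.
mu = {"a": 0.9, "b": 0.6, "c": 0.6, "d": 0.2}

def t_cut(mu, t):
    """Level subset mu_t = {x in X : mu(x) >= t}."""
    return {x for x, grade in mu.items() if grade >= t}

# The cuts form a descending chain of subsets: t > s implies mu_t ⊆ mu_s.
assert t_cut(mu, 0.7) == {"a"}
assert t_cut(mu, 0.5) == {"a", "b", "c"}
assert t_cut(mu, 0.7) <= t_cut(mu, 0.5) <= t_cut(mu, 0.1)
```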
In the present paper, we examine the category S in the framework of the algebra of morphisms. The images and preimages of a fuzzy subset of a fuzzy set under an S-morphism are defined. These definitions are carefully formulated using the chain properties of the level subsets of a fuzzy set. Following this, we prove some interesting results on the algebra of morphisms in the category S. Then a subcategory S_f of the category S is defined, with a restricted object class as compared to that of S: the object class of S_f consists of all fuzzy sets with finite range sets, and its morphism class is the same as that of S. Some very important results on the algebra of morphisms can be obtained in the category S_f (see Propositions 4.1, 4.2, 4.4 and 4.5). All these results show the compatibility of our definitions of images and preimages of fuzzy subsets with the algebraic properties of images and preimages that exist in classical set theory. 2. Preliminaries Zadeh [25] defined a fuzzy set as a function from a nonempty set to the closed unit interval [0, 1]. Definition 2.1 [8] Let μ be a fuzzy set in a set X and let t ∈ [0, 1]. Then the t-cut μ_t of μ is defined as μ_t = {x ∈ X : μ(x) ≥ t}. Observe that if t > s, then μ_t ⊆ μ_s. Assuming that the reader is familiar with the definition of a category, a few related concepts of category theory are recalled in the following definitions. Here C is any category and A, B are C-objects. Definition 2.2 [10] A C-morphism f : A → B is said to be a monomorphism in C if for all C-morphisms h and k such that f ◦ h = f ◦ k, it follows that h = k. Definition 2.3 [10] Let A, B be C-objects. If f : A → B is a monomorphism, then (A, f) is called a subobject of B. Definition 2.4 [10] A category F is a subcategory of a category H if (i) Ob(F) ⊆ Ob(H); (ii) [A, B]_F ⊆ [A, B]_H for all A, B ∈ Ob(F), where [A, B]_F denotes the collection of all F-morphisms from A to B; (iii) every F-identity is an H-identity.
(iv) the composition function of F is the restriction of the corresponding function of H. If in addition F satisfies the condition (v) [A, B]_F = [A, B]_H for all A, B ∈ Ob(F), then F is called a full subcategory of H. Readers are referred to [1-3, 6, 15, 24] for details on fuzzy sets and categories. 3. Category S of Fuzzy Sets and Image, Preimage of a Fuzzy Subset under an S-morphism We first recall the definition of the category S of fuzzy sets from [5]. Notice that the object class of S consists of ordered pairs (X, μ), where X is a set and μ is a fuzzy subset of X. We shall call the pair (X, μ) a fuzzy set in the category S; when there is no likelihood of any confusion about the base set, we shall briefly denote it by μ. A similar phrase, “a fuzzy group in a group G” instead of “a fuzzy subgroup of a group G”, is used for the objects in the category G of fuzzy groups. This is because the objects in our category G are fuzzy groups in all groups, a fuzzy subgroup of a fuzzy group is defined using the notion of a subobject in the category G of fuzzy groups, and similar notions and phrases are used in the category S. Definition 3.1 S is the quintuple S = (O, M, dom, cod, ◦), where (i) O is the class of all fuzzy sets in all sets. Members of O are called S-objects. (ii) M is the class of all S-morphisms, where an S-morphism is a relation f between two S-objects μ and θ defined as follows: f : μ → θ is a pair f = ({f_t}_{t ∈ Im μ}, α), where the following axioms are satisfied: (a) α : Im μ → Im θ is an order preserving map. (b) For all t ∈ Im μ, f_t : μ_t → θ_{α(t)} is a mapping. (c) If t_i > t_j in Im μ, and A and B are subsets of μ_{t_i} and μ_{t_j} respectively such that A ⊆ B, then f_{t_i}(A) ⊆ f_{t_j}(B). (d) If t_i > t_j in Im μ, and C and D are subsets of θ_{α(t_i)} and θ_{α(t_j)} respectively such that C ⊆ D, then f_{t_i}^{-1}(C) ⊆ f_{t_j}^{-1}(D). (iii) dom and cod are functions from M to O. If f is a morphism from μ to θ, then dom(f) = μ and cod(f) = θ.
(iv) ◦ is a function from D = {(f, g) : f, g ∈ M and dom(f) = cod(g)} into M, called the composition law of S. Let (f, g) ∈ D, μ = dom(g), η = cod(f) and dom(f) = cod(g) = θ, with f = ({f_r}_{r ∈ Im θ}, α) and g = ({g_t}_{t ∈ Im μ}, β). Define the composition of f and g as f ◦ g = ({f_{β(t)} ◦ g_t}_{t ∈ Im μ}, α ◦ β). Since f ◦ g turns out to be an S-morphism, we set ◦(f, g) = f ◦ g. It can easily be verified that f ◦ g is in fact an S-morphism. Moreover, identity morphisms exist in S and the composition of morphisms is associative. Let us briefly recall the definition of the category G of fuzzy groups introduced in [4]. The objects of the category G are ordered pairs (G, μ), where G is a group and μ is a fuzzy subgroup of G. The pair (G, μ) is called a fuzzy group and briefly denoted by μ. The morphism class of G consists of pairs f = ({f_t}_{t ∈ Im μ}, α), where f is a morphism from the fuzzy group μ to the fuzzy group θ. Here α is an order preserving map from Im μ to Im θ and {f_t}_{t ∈ Im μ} is a family of homomorphisms from the level subgroup μ_t to the level subgroup θ_{α(t)} satisfying the other axioms of Definition 3.1. Readers are referred to [4] for details. Let us recall here the following notion from [4], which gives rise to the concept of a fuzzy subgroup of a fuzzy group in the category G. Definition 3.2 A G-morphism f = ({f_t}_{t ∈ Im μ}, α) is said to be an M-morphism if α is injective and each f_t for t ∈ Im μ is also injective. The following is the characterisation of the monomorphisms of G. Theorem 3.1 [4] A G-morphism f is an M-morphism if and only if f is a monomorphism. It is clear from Definition 2.3 that in any category, a subobject of an object is a pair consisting of an object and a monomorphism. Most often, a subobject is identified by its associated monomorphism. Moreover, it is well known that in any category whose objects are algebraic structures, subobjects give rise to subalgebras.
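On finite toy data, the axioms of an S-morphism (Definition 3.1) can be verified mechanically. In the following Python sketch every grade, map, and name is invented purely for illustration; it is a sanity check of the axioms, not part of the paper’s formalism:

```python
# Two finite fuzzy sets, stored as element -> grade.
mu = {"a": 0.8, "b": 0.5}
theta = {"p": 0.9, "q": 0.4}

def cut(nu, t):
    """Level subset nu_t = {x : nu(x) >= t}."""
    return {x for x, g in nu.items() if g >= t}

# Candidate S-morphism f = ({f_t}, alpha) from mu to theta.
alpha = {0.8: 0.9, 0.5: 0.4}              # index map Im(mu) -> Im(theta)
f = {
    0.8: {"a": "p"},                      # f_0.8 : mu_0.8 -> theta_0.9
    0.5: {"a": "p", "b": "q"},            # f_0.5 : mu_0.5 -> theta_0.4
}

# Axiom (a): alpha is order preserving.
levels = sorted(alpha)
assert all(alpha[s] <= alpha[t] for s, t in zip(levels, levels[1:]))

# Axiom (b): each f_t maps mu_t into theta_{alpha(t)}.
for t, ft in f.items():
    assert set(ft) == cut(mu, t)
    assert set(ft.values()) <= cut(theta, alpha[t])

# Axiom (c), simplest instance: t_i > t_j and mu_{t_i} ⊆ mu_{t_j}
# imply f_{t_i}(mu_{t_i}) ⊆ f_{t_j}(mu_{t_j}).
assert {f[0.8][x] for x in cut(mu, 0.8)} <= {f[0.5][x] for x in cut(mu, 0.5)}
```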
For example, in the category Grp of ordinary groups, subobjects of objects arising from monomorphisms give rise to the notion of a subgroup of a group. Further, in any category of algebraic structures in which the images and preimages of objects under morphisms are defined, the notion of a subalgebra arises naturally. For example, if (G, f) is a subobject in the category Grp, then f(G) is a subgroup of its codomain. In the reverse direction, if G is a subgroup of a group H, then the pair (G, i) provides a subobject, where i is the inclusion map from G to H. Thus, there is a one-to-one correspondence between the subobjects of an object (group) in the category Grp and the subgroups of that group. In our category G of fuzzy groups, since the images of objects under morphisms are defined [5], a similar treatment is carried out to formulate the notion of a fuzzy subgroup of a fuzzy group. We discuss here the motivation behind the three axioms in Definition 3.3. Let ((G, μ), f) be a subobject in the category G and let (H, θ) be the codomain of f. Then f is the pair f = ({f_t}_{t ∈ Im μ}, α), where α is an injective order preserving map from Im μ to Im θ and, for each t ∈ Im μ, f_t is an injective homomorphism from μ_t to θ_{α(t)} (in view of Theorem 3.1). Notice, for each t ∈ Im μ, that f_t is a group homomorphism from the subgroup μ_t to the subgroup θ_{α(t)}; this is recorded in the following result. Proposition 3.1 Let (X, μ), (Y, θ) be G-objects, f : (X, μ) → (Y, θ) be a G-morphism, f = ({f_t}_{t ∈ Im μ}, α) and Im μ = {t_i}_{i ∈ Λ}. Then f(μ)_{α(t_i)} = f_{t_i}(μ_{t_i}) for all α(t_i) ∈ Im f(μ). Now, f_t(μ_t) is a subgroup of θ_{α(t)}. Further, due to Axiom 3 of a G-morphism, {f_t(μ_t)}_{t ∈ Im μ} is an ascending chain of subgroups of H, and thus the union ∪_{t ∈ Im μ} f_t(μ_t) is a subgroup of H. Therefore, we have the following: (i) ∪_{t ∈ Im μ} f_t(μ_t) is a subgroup of H; (ii) α(Im μ) ⊆ Im θ; (iii) f(μ)_{α(t)} is a subgroup of θ_{α(t)} for all α(t) ∈ Im f(μ).
Thus, any subobject in the category G gives rise to these three properties. These facts motivated us to define a fuzzy subgroup of a fuzzy group in G in [5], given here as Definition 3.3. Notice that in the following definition, since μ and θ are fuzzy groups in G and H respectively, the level subsets μ_t and θ_t are subgroups of G and H respectively. Definition 3.3 [5] A fuzzy group (G, μ) is said to be a fuzzy subgroup of a fuzzy group (H, θ) if (i) G is a subgroup of H; (ii) Im μ ⊆ Im θ; (iii) μ_t is a subgroup of θ_t for all t ∈ Im μ. Now, in the reverse direction, let (G, μ), (H, θ) be G-objects satisfying the above three conditions, i.e., G is a subgroup of H, Im μ ⊆ Im θ, and μ_t is a subgroup of θ_t for all t ∈ Im μ. Then we can define a monomorphism I = ({I_t}_{t ∈ Im μ}, i), where i : Im μ → Im θ is the inclusion map and, for each t ∈ Im μ, I_t : μ_t → θ_{i(t)} = θ_t is the inclusion homomorphism, thus providing us with a subobject ((G, μ), I) in the category G. Notice that ∪_{t ∈ Im μ} I_t(μ_t) = G. We now define the notion of a fuzzy subset of a fuzzy set in the category S: Definition 3.4 A fuzzy set (X′, μ′) is said to be a fuzzy subset of a fuzzy set (X, μ) if (i) X′ ⊆ X; (ii) Im μ′ ⊆ Im μ; (iii) μ′_t ⊆ μ_t for all t ∈ Im μ′. A fuzzy set μ′ which is a fuzzy subset of a fuzzy set μ will be denoted by μ′ ≼ μ or (X′, μ′) ≼ (X, μ). Notice that the third axiom in the above definition is equivalent to saying that μ′(x) ≤ μ(x) for all x ∈ X′. We now introduce the concepts of the image and preimage of a fuzzy subset of a fuzzy set under an S-morphism f. Definition 3.5 Let (X, μ), (X′, μ′) and (Y, η) be S-objects such that (X′, μ′) ≼ (X, μ). Let f : (X, μ) → (Y, η) be an S-morphism, f = ({f_t}_{t ∈ Im μ}, α), and let Im μ′ = {t_i}_{i ∈ Λ}. We define f(μ′), the image of the fuzzy subset (X′, μ′) of (X, μ) under the morphism f, as a fuzzy set on the union of the family {f_t(μ′_t)}_{t ∈ Im μ′}, as follows: f(μ′) : ∪_{t ∈ Im μ′} f_t(μ′_t) → [0, 1], f(μ′)(y) = α(t_i) if y ∈ f_{t_i}(μ′_{t_i}) − ∪_{t_j > t_i} f_{t_j}(μ′_{t_j}), where t_i, t_j ∈ Im μ′.
Note that (∪_{t ∈ Im μ′} f_t(μ′_t), f(μ′)) is an S-object. Definition 3.6 Let (X, μ), (Y, η) and (Y′, η′) be S-objects such that (Y′, η′) ≼ (Y, η). Let f : (X, μ) → (Y, η) be an S-morphism, f = ({f_t}_{t ∈ Im μ}, α), and let Im η′ = {p_k}_{k ∈ Ω}. We define f^{-1}(η′), the preimage of the fuzzy subset (Y′, η′) of (Y, η) under the morphism f, as a fuzzy set on the union of the family {f_{t_k}^{-1}(η′_{p_k})}, where p_k ∈ Im η′ and t_k ∈ α^{-1}(p_k), as follows: f^{-1}(η′) : ∪_{p_k ∈ Im η′, t_k ∈ α^{-1}(p_k)} f_{t_k}^{-1}(η′_{p_k}) → [0, 1], f^{-1}(η′)(x) = t_k if x ∈ f_{t_k}^{-1}(η′_{p_k}) − ∪ f_{t_j}^{-1}(η′_{p_j}), the union being over t_j > t_k with t_j ∈ α^{-1}(p_j) and p_j ≥ p_k in Im η′. Here too, note that (∪_{p_k ∈ Im η′, t_k ∈ α^{-1}(p_k)} f_{t_k}^{-1}(η′_{p_k}), f^{-1}(η′)) ∈ Ob(S). Lemma 3.1 Let (X, μ), (X′, μ′) and (Y, η) be S-objects such that (X′, μ′) ≼ (X, μ), let f : (X, μ) → (Y, η) be an S-morphism, f = ({f_t}_{t ∈ Im μ}, α), and let Im μ′ = {t_i}_{i ∈ Λ}. Then f(μ′)_{α(t_i)} = f_{t_i}(μ′_{t_i}) for all α(t_i) ∈ Im f(μ′). Proof Let α(t_i) ∈ Im f(μ′) and y ∈ f_{t_i}(μ′_{t_i}). Suppose, if possible, f(μ′)(y) < α(t_i); that is, α(t_k) < α(t_i), where f(μ′)(y) = α(t_k). This implies t_k < t_i. Now, since f(μ′)(y) = α(t_k), we have y ∈ f_{t_k}(μ′_{t_k}) and y ∉ f_{t_n}(μ′_{t_n}) for all t_n > t_k in Im μ′. This contradicts the fact that t_i > t_k in Im μ′ and y ∈ f_{t_i}(μ′_{t_i}). Hence f(μ′)(y) ≥ α(t_i); that is, y ∈ f(μ′)_{α(t_i)}, and thus f_{t_i}(μ′_{t_i}) ⊆ f(μ′)_{α(t_i)}. To show the reverse inclusion, let y ∈ f(μ′)_{α(t_i)}. Then f(μ′)(y) ≥ α(t_i); that is, α(t_j) ≥ α(t_i), where f(μ′)(y) = α(t_j). Case I: α(t_j) = α(t_i). Then f(μ′)(y) = α(t_i), and therefore, by Definition 3.5, y ∈ f_{t_i}(μ′_{t_i}). Case II: α(t_j) ≠ α(t_i). Then α(t_j) > α(t_i). Since α is an order preserving map, this implies t_j > t_i. Now t_j, t_i ∈ Im μ′, so μ′_{t_j} ⊆ μ′_{t_i}. Then, by Axiom (iii) of an S-morphism, we have f_{t_j}(μ′_{t_j}) ⊆ f_{t_i}(μ′_{t_i}). Since f(μ′)(y) = α(t_j), we have, by Definition 3.5, y ∈ f_{t_j}(μ′_{t_j}). Hence y ∈ f_{t_i}(μ′_{t_i}). Thus f(μ′)_{α(t_i)} ⊆ f_{t_i}(μ′_{t_i}).
This gives the required equality. Proposition 3.2 Let (X, μ), (X′, μ′) and (Y, η) be S-objects such that (X′, μ′) ≼ (X, μ), let f : (X, μ) → (Y, η) be an S-morphism, f = ({f_t}_{t ∈ Im μ}, α), and let Im μ′ = {t_i}_{i ∈ Λ}. Then f(μ′) is a fuzzy subset of η; that is, (∪_{t ∈ Im μ′} f_t(μ′_t), f(μ′)) ≼ (Y, η). Proof Since μ′ ≼ μ, we have Im μ′ ⊆ Im μ and μ′_t ⊆ μ_t for all t ∈ Im μ′. This implies f_t(μ′_t) ⊆ η_{α(t)} ⊆ Y for all t ∈ Im μ′. Thus ∪_{t ∈ Im μ′} f_t(μ′_t) ⊆ Y. Next, let α(t_i) ∈ Im f(μ′). By Definition 3.5, this implies t_i ∈ Im μ′ ⊆ Im μ. Therefore α(t_i) ∈ Im η. Thus Im f(μ′) ⊆ Im η. Finally, to show that f(μ′)_{α(t_i)} ⊆ η_{α(t_i)} for all α(t_i) ∈ Im f(μ′), let α(t_i) ∈ Im f(μ′). Then t_i ∈ Im μ′ ⊆ Im μ. Since f_{t_i} is a map from μ_{t_i} to η_{α(t_i)}, we have f_{t_i}(μ′_{t_i}) ⊆ η_{α(t_i)}. Therefore, in view of Lemma 3.1, we have f(μ′)_{α(t_i)} ⊆ η_{α(t_i)} for all α(t_i) ∈ Im f(μ′). Hence f(μ′) is a fuzzy subset of η. Lemma 3.2 Let (X, μ), (Y′, η′) and (Y, η) be S-objects such that (Y′, η′) ≼ (Y, η), and let f : (X, μ) → (Y, η) be an S-morphism. Then, for p_k ∈ Im η′ such that α(t_k) = p_k, f^{-1}(η′)_{t_k} = f_{t_k}^{-1}(η′_{p_k}). Proof Let p_k ∈ Im η′ with α(t_k) = p_k for some t_k ∈ Im μ. Let x ∈ f^{-1}(η′)_{t_k}. Then f^{-1}(η′)(x) ≥ t_k. Now, if f^{-1}(η′)(x) = t_j, then t_j ≥ t_k. By Definition 3.6, f^{-1}(η′)(x) = t_j implies x ∈ f_{t_j}^{-1}(η′_{p_j}). If t_j = t_k, then clearly x ∈ f_{t_k}^{-1}(η′_{p_k}). And if t_j > t_k, then by Axiom 4 of an S-morphism, f_{t_j}^{-1}(η′_{p_j}) ⊆ f_{t_k}^{-1}(η′_{p_k}), and so x ∈ f_{t_k}^{-1}(η′_{p_k}). Thus, in both cases, x ∈ f_{t_k}^{-1}(η′_{p_k}); that is, f^{-1}(η′)_{t_k} ⊆ f_{t_k}^{-1}(η′_{p_k}). To prove the reverse inclusion, let x ∈ f_{t_k}^{-1}(η′_{p_k}). Suppose, if possible, f^{-1}(η′)(x) < t_k. If f^{-1}(η′)(x) = t_i, then t_i < t_k. Also, by Definition 3.6, f^{-1}(η′)(x) = t_i implies x ∈ f_{t_i}^{-1}(η′_{p_i}) and x ∉ f_{t_n}^{-1}(η′_{p_n}) for all t_n > t_i, p_n ≥ p_i in Im η′ with α(t_n) = p_n. Thus, keeping in consideration that t_k > t_i, we get x ∉ f_{t_k}^{-1}(η′_{p_k}).
This contradiction establishes that f^{-1}(η′)(x) ≥ t_k. Hence x ∈ f^{-1}(η′)_{t_k}. Therefore f_{t_k}^{-1}(η′_{p_k}) ⊆ f^{-1}(η′)_{t_k}. We thus get the required equality. Proposition 3.3 If (X, μ), (Y′, η′) and (Y, η) are S-objects such that (Y′, η′) ≼ (Y, η) and f : (X, μ) → (Y, η) is an S-morphism, then f^{-1}(η′) is a fuzzy subset of μ. That is, (∪_{p_k ∈ Im η′, t_k ∈ α^{-1}(p_k)} f_{t_k}^{-1}(η′_{p_k}), f^{-1}(η′)) ≼ (X, μ). Proof It is easy to verify that ∪_{p_k ∈ Im η′, t_k ∈ α^{-1}(p_k)} f_{t_k}^{-1}(η′_{p_k}) ⊆ X. (1) Since η′ ≼ η, we have Im η′ ⊆ Im η, and α is a map from Im μ to Im η. Therefore, for all p_k ∈ Im η′, α^{-1}(p_k) ⊆ Im μ. Thus, in view of Definition 3.6, Im f^{-1}(η′) ⊆ Im μ. (2) Finally, to show that f^{-1}(η′)_t ⊆ μ_t for all t ∈ Im f^{-1}(η′), let t ∈ Im f^{-1}(η′). Then, by Lemma 3.2, f^{-1}(η′)_t = f_t^{-1}(η′_{α(t)}). Since f_t is a map from μ_t to η_{α(t)} and η′_{α(t)} ⊆ η_{α(t)} for all α(t) ∈ Im η′, we have f_t^{-1}(η′_{α(t)}) ⊆ μ_t. Hence f^{-1}(η′)_t ⊆ μ_t for all t ∈ Im f^{-1}(η′). (3) By (1), (2) and (3), we get that f^{-1}(η′) is a fuzzy subset of μ. The following lemmas are easy to verify: Lemma 3.3 Let (X, μ), (Y, η), (X′, μ′) ∈ Ob(S), let f : (X, μ) → (Y, η) be an S-morphism and let (X′, μ′) ≼ (X, μ). If t_i < t_j in Im μ′, then f_{t_j}(μ′_{t_j}) ⊊ f_{t_i}(μ′_{t_i}). Lemma 3.4 Let (X, μ), (Y′, η′) and (Y, η) ∈ Ob(S), let f : (X, μ) → (Y, η) be an S-morphism and let (Y′, η′) ≼ (Y, η), such that f_t is surjective for all t ∈ Im μ. Then, for any p_i < p_j in Im η′, we have f_{t_j}^{-1}(η′_{p_j}) ⊊ f_{t_i}^{-1}(η′_{p_i}), where α(t_i) = p_i and α(t_j) = p_j. 4. A Subcategory S_f of S and Algebra of Morphisms in the Category S_f We shall now restrict the class of objects in the category S and hence construct a subcategory S_f of S. The object class of S_f consists of all fuzzy sets with finite range sets, and the morphisms considered in S_f are the same as in S. One can observe that S_f is a full subcategory of S. Some very important results on the algebra of morphisms are achievable for this subcategory.
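The fuzzy-subset order of Definition 3.4, which the propositions of this section repeatedly compare, is directly computable on finite data. A hedged Python sketch (the dict representation and the function name are ours; note that a fuzzy set stored as a finite dict automatically has a finite range set, so it already lies in the object class of S_f):

```python
def is_fuzzy_subset(mu1, mu2):
    """Definition 3.4 on finite data: base set contained, image contained,
    and every grade of mu1 dominated by the corresponding grade of mu2."""
    return (set(mu1) <= set(mu2)
            and set(mu1.values()) <= set(mu2.values())
            and all(mu1[x] <= mu2[x] for x in mu1))

mu = {"a": 0.9, "b": 0.6, "c": 0.2}
assert is_fuzzy_subset({"a": 0.9, "b": 0.2}, mu)
# Pointwise domination alone is not enough: 0.7 <= 0.9, but 0.7 is not in Im(mu).
assert not is_fuzzy_subset({"a": 0.7}, mu)
```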
Proposition 4.1 If (X,μ), (X ,μ ), (Y,η) ∈ Ob(S ), such that (X ,μ )  (X,μ) and f :(X,μ) → (Y,η) is anS -morphism, then α :Imμ → Im f (μ ) is a bijection. Proof We first prove that ∀ t ∈ Imμ ,α(t ) ∈ Im f (μ ). For this let t ∈ Imμ . Then i i i t = μ (x) for some x ∈ X . This implies x ∈ μ . Since f is a map from μ to η i t t α(t ) t i i i andμ ⊆ μ , therefore we have t i f (x) ∈ f (μ ). t t i i t Thus f (μ )  φ. i t Case I: t = sup Imμ . Now since f (μ )  φ, let y ∈ f (μ ). Then by Definition 3.5, i t t i t i t i i f (μ )(y) = α(t ). That is,α(t ) ∈ Im f (μ ). i i Fuzzy Inf. Eng. (2012) 3: 273-291 283 Case II: t < sup Imμ . Then t < t for some t ∈ Imμ . This implies i i j j μ  μ . t t j i Then by Lemma 3.3 f (μ )  f (μ ). t t j t i t j i This is true∀ t > t in Imμ . Therefore j i f (μ )− f (μ )  φ. t t i t j t i j t >t j i t ,t ∈Imμ i j Thus by Definition 3.5, α(t ) ∈ Im f (μ ). Now to prove thatα is injective, let t, t ∈ Imμ such thatα(t ) = α(t )inIm f (μ ). i j i j Setting α(t ) = α(t ) = p, suppose if possible t > t in Imμ ⊆ Imμ. Then by i j i j Lemma 3.3, f (μ )  f (μ ). t t i t j t i j This by Lemma 3.1 implies f (μ )  f (μ ) . α ti) α(t ) ( j That is f (μ )  f (μ ) . p p This contradiction implies t ≤ t . Similarly, we shall get t ≤ t . Thus t = t which i j j i i j proves thatα is injective. Also, by Definition 3.5, it is clear that if p ∈ Im f (μ ), then p = α(t ) for some t ∈ Imμ . Thus we get that α is surjective. k k k Proposition 4.2 Let (X,μ), (Y,η) ∈ Ob(S ) and f :(X,μ) → (Y,η) be an S - f f morphism. Then (X ,μ )  (X ,μ )  (X,μ) imply that f (μ )  f (μ ). 1 1 2 2 1 2 Proof It is easy to verify that f (μ ) ⊆ f (μ ). t 1 t 2 t t t∈Imμ t∈Imμ 1 2 Now to prove that Im f (μ ) ⊆ Im f (μ ), let α(t ) ∈ Im f (μ ). Then 1 2 i 1 t ∈ Imμ ⊆ Imμ . i 1 2 Case I: t = sup Imμ .We have α(t ) ∈ Im f (μ ). This implies ∃ y ∈ f (μ ), i 2 i 1 t 1 t∈Imμ such that f (μ )(y) = α(t ). Then by Definition 3.5, we have y ∈ f (μ ) ⊆ f (μ ). 
1 i t 1 t 2 i t i t i i This implies f (μ )  ∅. Again by Definition 3.5, f (μ )(y) = α(t ). That is t 2 2 i i t α(t ) ∈ Im f (μ ). i 2 284 Aparna Jain· Naseem Ajmal (2012) Case II: t < sup Imμ . Let t ∈ Imμ such that t > t . Then i 2 j 2 j i μ  μ . 2 2 t t j i This by Lemma 3.3 implies f (μ )  f (μ ). t 2 t 2 j t i ti Now since μ has finite range set, f (μ )− f (μ )  ∅. t 2 t 2 i t j t i j t >t j i t ,t ∈Imμ i j 2 By Definition 3.5, this implies α(t ) ∈ Im f (μ ). i 2 Hence Im f (μ ) ⊆ Im f (μ ). 1 2 Finally, we show that f (μ ) ⊆ f (μ ) ∀ α(t ) ∈ Im f (μ ). 1 α(t ) 2 a(t ) i 1 i i Let α(t ) ∈ Im f (μ ) and y ∈ f (μ ) . Then f (μ )(y) ≥ α(t ). Suppose, if possible i 1 1 a(t ) 1 i f (μ )(y)<α(t ). Then 2 i f (μ )(y)<α(t ) ≤ f (μ )(y). 2 i 1 Setting f (μ )(y) = α(t ) and f (μ )(y) = α(t ), we get that 2 k 1 j α(t )<α(t ). k j Since α is order preserving, t < t . k j Now since f (μ )(y) = α(t ), we have t ∈ Imμ .Asμ  μ , we get 1 j j 1 1 2 μ ⊆ μ . 1 2 t t j j This implies f (μ ) ⊆ f (μ ). t 1 t 2 j t j t j j Again, f (μ )(y) = α(t ) by Definition 3.5 implies that y ∈ f (μ ). Thus, 1 j t 1 j t y ∈ f (μ ). (4) t 2 j t j Fuzzy Inf. Eng. (2012) 3: 273-291 285 Since f (μ )(y) = α(t ), again by Definition 3.5 we get y ∈ f (μ ) and y  f (μ ) ∀ 2 k t 2 t 2 k t n t t > t in Imμ . Therefore y  f (μ ) which contradicts (4). Hence n k 2 t 2 j t f (μ )(y) ≥ α(t ). 2 i This implies y ∈ f (μ ) . 2 α(t ) Thus f (μ ) ⊆ f (μ ) , ∀ α(t ) ∈ Im f (μ ). 1 α(t ) 2 α(t ) i 1 i i Hence f (μ )  f (μ ). 1 2 In the categoryS of fuzzy sets, we have the following result, the proof being similar to that of Proposition 4.2 is omitted. Proposition 4.3 Let (X,μ), (Y,η) ∈ Ob(S) and f :(X,μ) → (Y,η) be anS-morphism. Then (X ,μ )  (X ,μ )  (X,μ) imply the following: 1 1 2 2 (i) f (μ ) ⊆ f (μ ). t 1 t 2 t t t∈Imμ t∈Imμ 1 2 (ii) f (μ ) ⊆ f (μ ) ∀α(t ) ∈ Im f (μ ). 1 α(t ) 2 α(t ) i 1 i i Proposition 4.4 Let (X,μ), (X ,μ ), (Y,η) ∈ Ob(S ) such that (X ,μ )  (X,μ) and −1 f :(X,μ) → (Y,η) be an S -morphism. 
Then μ  f ( f (μ )). Proof First recall by Proposition 3.2, f (μ ) is a fuzzy subset of η. For the sake of convenience, denote f (μ )by η . It can be easily verified that −1 X ⊆ f (η ). t p k k −1 t ∈α (p ) k k p ∈Imη −1 Now we show that Imμ ⊆ Im f (η ). Let t ∈ Imμ . Case I: t = sup Imμ . By Proposition 4.1, α :Imμ → Imη is a bijection and is order preserving, therefore p = α(t ) = sup Imη . Since t ∈ Imμ , t = μ (x) for k k k k some x ∈ X . Thus x ∈ μ . Therefore f (x) ∈ f (μ ) = f (μ ) (by Lemma 3.1) t t α(t ) k k t k = η . −1 −1 Thus x ∈ f (η ). Since p = sup Imη , f (η )(x) = t by Definition 3.6. Therefore k k t p k k −1 t ∈ Im f (η ). Case II: t < sup Imμ . Then by Proposition 4.1, p = α(t ) < sup Imη . Let k k k p ∈ Imη such that p < p and letα(t ) = p . Then we have t < t in Imμ ⊆ Imμ. j k j j j k j Therefore by Axiom 4 of anS-morphism, −1 −1 f (η ) ⊆ f (η ). t p t p j j k k 286 Aparna Jain· Naseem Ajmal (2012) Since p > p in Imη ,η  η . Let y ∈ η such that y  η . By Lemma 3.1 j k p p p p j k k j y ∈ η = f (μ ) = f (μ ). p α(t ) t t k k k k This implies y = f (x) for some x ∈ μ . k k Therefore −1 −1 x ∈ f (y) ⊆ f (η ). t t p k k k −1 Suppose, if possible x ∈ f (η ). Since t > t in Imμ, by Axiom 4 of an j k t p j j S-morphism, we have −1 −1 f (η ) ⊆ f (η ). t p t p j j k j Thus −1 x ∈ f (η ). t p k j This implies y = f (x) ∈ η . k p This contradiction implies −1 x  f (η ). t p j j Hence −1 −1 f (η )  f (η ) ∀ p > p in Imη . j k t p t p j j k k Therefore −1 −1 f (η )− f (η )  φ. t p t p k k j j t >t j k −1 t ∈α (p ) j j p >p in Imη j k By Definition 3.6, this implies −1 t ∈ Im f (η ). We thus have −1 Imμ ⊆ Im f (η ). −1 Finally, we show that μ ⊆ f (η ) ∀ t ∈ Imμ . For this let t ∈ Imμ and let t k k t k −1 −1 x ∈ μ . Then μ (x) ≥ t . Suppose, if possible f (η )(x) < t . Let f (η )(x) = t . k k i k Fuzzy Inf. Eng. (2012) 3: 273-291 287 Then t < t in Imμ implies p < p in Imη , where α(t ) = p . 
Also in view of i k i k i i −1 Definition 3.6 and Proposition 4.1, f (η )(x) = t implies −1 −1 x ∈ f (η ) and x  f (η ), t p t p i i n n ∀ p > p in Imη , t > t, such thatα(t ) = p . (5) n i n i n n Now, x ∈ μ implies f (x) ∈ f (μ ). t t t k k k By Lemma 3.1 we have f (μ ) = f (μ ) = η . t α(t ) k t k p k k Thus −1 x ∈ f (η ), where p > p in Imη . k i t p k k This contradicts (5). Hence we have −1 f (η )(x) ≥ t . −1 That is x ∈ f (η ) and thus −1 μ ⊆ f (η ) ∀ t ∈ Imμ . t k t k Hence we arrive at the required conclusion. That is −1 μ  f ( f (μ )). Proposition 4.5 Let (X,μ), (Y,η), (Y ,η ), (Y ,η ) ∈ Ob(S ), f :(X,μ) → (Y,η) be 1 1 2 2 f an S -morphism, f = ({ f } ,α) such that f ’s are surjective ∀ t ∈ Imμ. Then f t t∈Imμ t −1 −1 (Y ,η )  (Y ,η )  (Y,η) imply that f (η )  f (η ). 1 1 2 2 1 2 Proof It is easy to verify that −1 −1 f (η ) ⊆ f (η ). t 1p t 2p k k k k p ∈Imη p ∈Imη k 1 k 2 α(t )=p α(t )=p k k k k −1 −1 −1 Next we prove that Im f (η ) ⊆ Im f (η ). Let t ∈ Im f (η ). Then by 1 2 k 1 Definition 3.6, we have α(t ) = p ∈ Imη . Since η  η  η, Imη ⊆ Imη . k k 1 1 2 1 2 Thus p ∈ Imη . k 2 Case I: p = sup Imη . Since p ∈ Imη , ∃ y ∈ Y such that η (y) = p . That is k 2 k 2 2 2 k y ∈ η . Since f : μ → η is a surjective map and η ⊆ η , ∃ x ∈ μ such that 2p t t p 2p p t k k k k k k k −1 f (x) = y ∈ η . Therefore x ∈ f (η ). Keeping in view that p = sup Imη ,by t 2p 2p k 2 k k t k −1 −1 Definition 3.6 we have f (η )(x) = t . That is t ∈ Im f (η ). 2 k k 2 Case II: p < sup Imη . Let p < p in Imη . Then η  η . Since Imη is finite, k 2 k j 2 2p 2p j k we have η  η . 2p 2p j k p >p in Imη j k 2 288 Aparna Jain· Naseem Ajmal (2012) Let y ∈ η such that y  η . Since f is surjective, choose 2p 2p t k j k p >p in Imη j k 2 −1 −1 x ∈ f (y) ⊆ f (η ). t t 2p k k Suppose, if possible −1 x ∈ f (η ). 2p n n p >p in Imη n k 2 −1 Then, x ∈ f (η ) for some p > p in Imη . By Axiom 4 of an S-morphism, we 2p n k 2 t n −1 −1 −1 have f (η ) ⊆ . f (η ). Thus x ∈ f (η ). 
This implies t 2p t 2p t 2p n n k n k n y = f (x) ∈ η ⊆ η . t 2p 2 k n p p >p in Imη j k 2 −1 This is a contradiction. And hence x  f (η ). 2p t n p >p in Imη n k 2 −1 −1 Therefore x ∈ f (η ) − f (η ). Then by Definition 3.6 we have 2p 2p t k t n k n p >p in Imη n k 2 −1 t ∈ Im f (η ). k 2 Hence −1 −1 Im f (η ) ⊆ Im f (η ). 1 2 −1 −1 −1 −1 Finally, we show that f (η ) ⊆ f (η ) ∀ t ∈ Im f (η ). Let t ∈ Im f (η ) 1 t 2 t k 1 k 1 k k −1 −1 −1 and let x ∈ f (η ) . Then f (η )(x) ≥ t . We set f (η )(x) = t . Then by 1 t 1 k 1 i Definition 3.6 −1 −1 x ∈ f (η )andx  f (η ). (6) 1p 1p t i t j i j p ≥p in Imη j i 1 t >t in Imμ j i −1 −1 Suppose, if possible f (η )(x) < t . Then t < t , where f (η )(x) = t . Therefore 2 k j k 2 j t < t ≤ t. j k i Since α :Imμ → Im f (μ) is order preserving, we get α(t ) ≤ α(t ). That is p ≤ p in j i j i −1 Imη . Since f (η )(x) = t and t < t , by Definition 3.6 we get 2 2 j j i −1 x  f (η ). (7) t 2p Now, since p ∈ Imη ,η ⊆ η ⊆ η as η  η  η. i 1 1 2 p 1 2 p p i i i Therefore −1 −1 f (η ) ⊆ f (η ). 1p 2p t i t i i i This by (6) implies −1 x ∈ f (η ), 2p t i −1 −1 which contradicts (7). Hence we have f (η )(x) ≥ t . That is x ∈ f (η ) . 2 k 2 t −1 Therefore∀ t ∈ Im f (η ), we have k 1 −1 −1 f (η ) ⊆ f (η ) . 1 t 2 t k k Fuzzy Inf. Eng. (2012) 3: 273-291 289 Hence −1 −1 f (η )  f (η ). 1 2 In the category S of fuzzy sets, we have the following result, the proof being similar to that of Proposition 4.5 is omitted. Proposition 4.6 Let (X,μ), (Y,η), (Y ,η ), (Y ,η ) ∈ Ob(S), f :(X,μ) → (Y,η) be an 1 1 2 2 S-morphism and (Y ,η )  (Y ,η )  (Y,η). Then we have the following: 1 1 2 2 −1 −1 (i) f (η ) ⊆ f (η ). 1p 2p t k t k k k p ∈Imη p ∈Imη k i k 2 α(t )=p α(t )=p k k k k −1 −1 −1 (ii) f (η ) ⊆ f (η ) ∀ t ∈ Im f (η ). 1 t 2 t k 1 k k Proposition 4.7 Let (X,μ), (Y,η), (Y ,η ) ∈ Ob(S), f :(X,μ) → (Y,η) be an S- −1 morphism and (Y ,η )  (Y,η). Then f ( f (η ))  η . Proof It is easy to verify that −1 f ( f (η ) ) ⊆ Y . 
t t k k −1 t ∈Im f (η ) −1 −1 Now to prove that Im f ( f (η )) ⊆ Imη , let α(t ) ∈ Im f ( f (η )). By Definition −1 3.5, we have t ∈ Im f (η ). Again by Definition 3.6, we haveα(t ) ∈ Imη . Hence, k k −1 Im f ( f (η )) ⊆ Imη . −1 −1 Finally, we show that f ( f (η )) ⊆ η ∀ α(t ) ∈ Im f ( f (η )). Let α(t ) k α(t ) −1 −1 α(t ) ∈ Im f ( f (η )) and let y ∈ f ( f (η )) . Then k α(t ) −1 f ( f (η ))(y) ≥ α(t ). −1 Now, if f ( f (η ))(y) = α(t ), then −1 −1 y ∈ f ( f (η ) ) = f ( f (η )) (by Lemma 3.2). t t t i i i t α(t ) i i But −1 f ( f (η )) ⊆ η . i t α(t ) α(t ) i i i Therefore y ∈ η . α(t ) Sinceα(t ) ≥ α(t ), we haveη ⊆ η . That is i k α(t ) α(t ) i k y ∈ η . α(t ) Thus −1 −1 f ( f (η )) ⊆ η ∀ (t ) ∈ Im f ( f (η )). α(t ) k α(t ) k 290 Aparna Jain· Naseem Ajmal (2012) −1 Hence we have f ( f (η ))  η . 5. Conclusion This paper attempts to answer questions raised by Goguen by defining and studying a category which has not only its objects as fuzzy but having morphism which are fuzzy as well. Various significant properties of this category G have been discussed, but the authors would like to add that it is worthwhile to investigate further properties of this category. Some of these properties would be to verify if products exists in G or if G is algebraic. Acknowledgements The authors are highly indebted to the learned referees for their valuable suggestions regarding the improvements of this paper. References 1. Ajmal N (1996) Fuzzy groups with sup property. Inform. Sci. 93: 247-264 2. Ajmal N (2000) Fuzzy group theory: a comparison of different notions of product of fuzzy sets. Fuzzy Sets and Systems 110: 437-446 3. Ajmal N and Kumar S (2002) Lattices of subalgebras in the category of fuzzy groups. J. Fuzzy Math. 10(2): 359-369 4. Jain A and Ajmal N (2004) A new approach to the theory of fuzzy groups. J. Fuzzy Math. 12(2): 341-355 5. Jain A and Ajmal N (2006) Categories of fuzzy sets and fuzzy groups and the lattices of subobjects of these categories. J. 
Fuzzy Math. 14(3): 573-582 6. Jain A (2006) Fuzzy subgroups and certain equivalence relations. Iranian Journal of Fuzzy Systems 3(2): 75-91 7. Bayoumi F (2005) On initial and final L-topological groups. Fuzzy Sets and Systems 156: 43-54 8. Das P S (1981) Fuzzy groups and level subgroups. J. Math. Anal. Appl. 84: 264-269 9. Goguen J A (1967) L-fuzzy sets. J. Math. Anal. Appl. 18: 145-174 10. Herrlich H and Strecker G E (1973) Category theory. Allyn and Bacon Inc. 11. Mordeson J N and Malik D S (1999) Fuzzy commutative algebra. World Scientific Pub. Co. 12. Malik D S and Mordeson J N (2000) Fuzzy discrete structures. Physica Verlag, Heidelberg 13. Mordeson J N, Malik D S and Kuroki N (2003) Fuzzy semigroups. Springer Verlag, Berlin 14. Ralescu D (1978) Fuzzy subobjects in a category and the theory of image sets. Fuzzy Sets and Systems 1: 193-202 15. Rosenfeld A (1971) Fuzzy groups. J. Math. Anal. Appl. 35: 512-517 16. Solovyov S A (2006) Categories of lattice-valued sets as categories of arrows. Fuzzy Sets and Systems 157: 843-854 17. Solovyov S A (2007) On a generalization of Goguen's category Set(L). Fuzzy Sets and Systems 158(4): 367-385 18. Stout L N (1984) Topoi and categories of fuzzy sets. Fuzzy Sets and Systems 12: 169-184 19. Stout L N, Höhle U (1991) Foundations of fuzzy sets. Fuzzy Sets and Systems 40: 257-296 20. Head T (1995) A metatheorem for deriving fuzzy theorems from crisp versions. Fuzzy Sets and Systems 73: 349-358 21. Weinberger A (1998) Embedding lattices of fuzzy subalgebras into lattices of crisp subalgebras. Information Sciences 108: 51-70 22. Weinberger A (2005) Reducing fuzzy algebra to classical algebra. New Mathematics and Natural Computation 1: 27-64 23. Winter M (2003) Representation theory of Goguen categories. Fuzzy Sets and Systems 138: 85-126 24. Wong C K (1976) Categories of fuzzy sets and fuzzy topological spaces. J. Math. Anal. Appl. 53: 704-711 25. Zadeh L A (1965) Fuzzy sets.
Information and Control 8: 338-353 26. Zaidi S M A and Ansari Q A (1994) Some results on categories of L-fuzzy subgroups. Fuzzy Sets and Systems 64: 249-256 Journal: Fuzzy Information and Engineering (Taylor & Francis). Published: Sep 1, 2012. Keywords: Category; Fuzzy set; Subobject; Fuzzy subset; Monomorphism; Image; Preimage
# How can the position operator be displacement invariant? I am reading chapter 3 of Quantum Mechanics - A Modern Development by Leslie E Ballentine, where he derives the operators for the common dynamical variables from space-time symmetry considerations. At the start, he states that for each space-time transformation there must be a transformation of observables, $$A \to A'$$, and of states, $$|\Psi\rangle \to |\Psi'\rangle$$, following certain relations: 1. If $$A|\phi_n\rangle = a_n|\phi_n\rangle$$, then $$A'|\phi'_n\rangle = a_n|\phi'_n\rangle$$. 2. $$|\psi\rangle = \sum_n c_n|\phi_n\rangle \to |\psi'\rangle = \sum_n c'_n|\phi'_n\rangle$$, where $$\left\{|\phi_n\rangle\right\}$$ and $$\left\{|\phi'_n\rangle\right\}$$ are the eigenvectors of $$A$$ and $$A'$$ respectively. The two state vectors must obey $$|c_n|^2 = |c_n'|^2$$; that is, $$|\langle\phi_n|\psi\rangle|^2 = |\langle\phi'_n|\psi'\rangle|^2$$. He then continues with Wigner's theorem, and so on. My issues begin with point 1. For some operators and transformations this makes intuitive sense to me, but not for others. Take for example the position operator $$Q$$ and a space translation $$\mathbf x \to \mathbf x' = \mathbf x + \mathbf a$$. If a particle was localized about $$\mathbf x$$ before the translation, it would be localized about $$\mathbf x' = \mathbf x + \mathbf a$$ after it. How does that correspond to $$Q'|\mathbf x'\rangle = \mathbf x |\mathbf x'\rangle,$$ as implied by point 1 above? (Now, I know $$|\mathbf x\rangle$$ does not represent a particle at $$\mathbf x$$, but still.) My intuition would instead tell me that $$Q'|\mathbf x'\rangle = \mathbf x' |\mathbf x'\rangle$$, so apparently I am missing something. • You could think of it as $$|x'\rangle=T|x\rangle$$ with some translation operator $$T$$ that maps $$|x\rangle$$ onto $$|x'\rangle$$ and $$T^{-1}$$ the mapping back $$T^{-1}|x'\rangle=|x\rangle$$. 
We could then take $$Q'=TQT^{-1}$$ and evaluate the action of $$Q'$$ on a state $$|x'\rangle$$ as $$Q'|x'\rangle=TQT^{-1}T|x\rangle=TQ|x\rangle=xT|x\rangle=x|x'\rangle.$$ • So under the symmetry transformation you change your states $$|x\rangle\rightarrow|x'\rangle$$, but you also change your operators (this is the important point). • This does not mean that $$Q$$ is invariant under the transformation in our case, as it is modified to $$Q'$$. • An operator $$A$$ would be invariant under a symmetry transformation with operator $$\Omega$$ if $$A\psi=A'\psi$$, or in other words $$A\Omega\psi=\Omega A\psi$$. • As you've correctly stated, the position operator is not invariant under translations. • We could show, for example, that the momentum operator is invariant under translations $$x'=x+a$$ using the plane-wave basis of momentum states $$e^{ikx}$$: $$pTe^{ikx}=-i\hbar \nabla T e^{ikx}=-i\hbar \nabla e^{ik(x+a)}=\hbar k e^{ik(x+a)}=T\hbar k e^{ikx}=Tpe^{ikx}.$$ • Sorry for responding to your answer so late. I pretty much realized what my main misunderstandings were and then forgot that I had posted this question. Anyhow, I will keep the question up despite its flaws, so that you get credit for your answer. Thank you for responding! – ummg Sep 23 '20 at 1:11
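The operator-transformation rule in the answer above can be checked numerically in a small toy model. The sketch below is an illustrative assumption (not from Ballentine): positions live on a ring of $N$ sites so that translation by $a$ is a cyclic-shift matrix $T$. It verifies both claims — the transformed position operator $Q'=TQT^{-1}$ assigns the old eigenvalue $x$ to the shifted state $|x'\rangle$, while a discrete stand-in for the momentum operator commutes with $T$ and is therefore translation invariant.

```python
import numpy as np

# Toy model (an illustrative assumption): N positions on a ring, so
# translation by a is the cyclic-shift matrix T with T|x> = |x+a mod N>.
N, a = 8, 3
Q = np.diag(np.arange(N, dtype=float))   # position operator, Q|x> = x|x>
T = np.roll(np.eye(N), a, axis=0)        # translation operator

Qp = T @ Q @ np.linalg.inv(T)            # transformed operator Q' = T Q T^{-1}

x = 2
ket_x  = np.eye(N)[:, x]                 # |x>
ket_xp = T @ ket_x                       # |x'> = |x+a>

# Point 1 of the question: Q'|x'> = x|x'> (old eigenvalue, new eigenvector)...
assert np.allclose(Qp @ ket_xp, x * ket_xp)
# ...while the untransformed Q sees the new position x+a:
assert np.allclose(Q @ ket_xp, (x + a) * ket_xp)

# Discrete stand-in for momentum: the central-difference derivative is a
# circulant matrix, so it commutes with T and is translation invariant.
P = (np.roll(np.eye(N), 1, axis=0) - np.roll(np.eye(N), -1, axis=0)) / 2
assert np.allclose(T @ P @ np.linalg.inv(T), P)
print("Q'|x'> = x|x'>  and  T P T^{-1} = P  both hold")
```

The key point the assertions make concrete: the *pair* (new operator, new state) reproduces the old eigenvalue, which is exactly condition 1 in the question.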
{}
# FP php:S1808 - foreach - space before as (Steve Seypt) #1 Hello. Consider the following PHP code: ``````foreach (
    $varXXXXXXXXXXXXXXXXXXXXX_long_variable_XXXXXXXXXXXXXXXXXXXXX
    as $keyXXXXXXXXXXXXXXXXXXXXX_long_variable_XXXXXXXXXXXXXXXXXXXXX
    => $valueXXXXXXXXXXXXXXXXXXXXX_long_variable_XXXXXXXXXXXXXXXXXXXXX
) {
}
`````` This reports php:S1808 ("Put exactly one space after and before 'as' in 'foreach' statement."). I don't understand why this is not PSR-2 conformant, although it's OK to split argument lists and variable lists across multiple lines in the case of long lines. From psr-2:method-arguments: In the argument list, there MUST NOT be a space before each comma, and there MUST be one space after each comma. Argument lists MAY be split across multiple lines, where each subsequent line is indented once. Best regards, Steve (Michael Gumowski) #2 Hello Steve, Thanks for the feedback. It does indeed seem to be a FP in the implementation of S1808 for PHP. The rule should allow spreading arguments over dedicated lines. I consequently created the following ticket to handle it: SONARPHP-795 Cheers, Michael
{}
#### Volume 19 (2015)

Open book decompositions versus prime factorizations of closed, oriented $3$–manifolds

### Paolo Ghiggini and Paolo Lisca

Geometry & Topology Monographs 19 (2015) 145–155

arXiv: 1407.2148

##### Abstract

Let $M$ be a closed, oriented, connected $3$–manifold and $\left(B,\pi \right)$ an open book decomposition on $M$ with page $\Sigma$ and monodromy $\phi$. It is easy to see that the first Betti number of $\Sigma$ is bounded below by the number of ${S}^{2}×{S}^{1}$–factors in the prime factorization of $M$. Our main result is that equality is realized if and only if $\phi$ is trivial and $M$ is a connected sum of copies of ${S}^{2}×{S}^{1}$. We also give some applications of our main result, such as a new proof of the fact that if the closure of a braid with $n$ strands is the unlink with $n$ components then the braid is trivial.

##### Keywords

open book decomposition, prime factorization, $3$–manifold

Primary: 57N10
Secondary: 57M25
{}
# Windows XP Product Key Not Working (SP3)

I've always been under the impression that the new service pack came with an updated blacklisted-key database built right in. Glad you got it sorted. If it works, then you'll know that SP3 being slipstreamed into the installation is the problem. (asked by Steve314, Jan 27 '14)

According to the license agreement, the product keys for the retail edition of XP can only be used on one PC, but the ones for the VOL (volume license) edition can be supplied for more. Click on Accept or OK if it asks for administrator permissions. The installer simply doesn't allow it.

Maybe it's possible in XP Pro but not XP Home? The basis of the slipstream disc is the Windows XP Home SP1 retail disc.

1. Or is it?
2. Doug says: November 17, 2012 at 3:04 PM Thank you very much for this information.
3. I guess I got a wrong image which is not actually SP3 at all.
5. Try talking with someone at the bookstore.
6. And it's only suitable for the VOL edition, with the only effect being to prove the product is legal, licensed under VOL.
9. Tested successfully with HP, Gateway, and Toshiba OEM product keys.

Who knows what to believe.
But I got a new image which should be working now. – Ham, Dec 18 '09 at 10:02

You can install Windows without a … The machine has all network connectivity turned off.

The full-version SP3 installation disc is not an OEM disc, and will not work. Are you saying to yourself, "What do you mean, invalid!?" Enjoy.

Source: http://www.winsupersite.com/faq/xp_sp3.asp. We regularly use this method at work to reinstall Windows XP for customers who have no restore discs.

Copy the Windows XP product key from the list given above and paste it into the activation box where it is required. Use these Windows XP license keys to install a genuine Windows on your computer.

This time I tried to use it … and it wouldn't work. 🙁 G says: May 15, 2014 at 7:24 PM You are awesome!

The key is valid; I have re-entered it several times, and I have double-checked it against the yellow sticker on the original cardboard folder with the documentation. This tool seems to be far more forgiving about the particular installation version vs. … What is going on?
Chris says: March 23, 2014 at 7:06 PM What's going on in Microsoft's head is anyone's guess, but I suspect the key you were trying to use during the "refresh install" … I have had this happen on maybe 1 in 25 different machines, and it has happened with SP1, SP2, and SP3 installations. So, if …

http://tinyapps.org/blog/windows/200906190700_convert_windows_xp_retail_to_oem.html ("Unlocking WinXP's setupp.ini") explains how to force Windows XP to accept … – answered Jan 6 '13 by kinokijuf

boody says: November 25, 2012 at 1:56 PM Thank you very much… my father's laptop wouldn't install XP without a product key, but when I saw these easy tips I was …

This Notebook was erased completely once, and now I want to install a new Windows XP Professional system on it using the key that's on the sticker.
Wednesday, February 10, 2010 4:09 PM: If my understanding of what you're saying is correct, because I don't know exactly what you mean by "An OEM …"

Customers don't care that they lost the disc etc.; they just want it fixed ASAP, and when you tell them they need to order a replacement disc from the OEM, they …

I downloaded and ran the key update tool, and it told me the code was correct, but earlier I was told it was wrong when I tried to do the …

I bought a new motherboard last week. Right-click on "My Computer", then go to the "Properties" tab and click it. Once the key is updated, the system needs to reboot for the change to take effect. After the reboot you should see confirmation of the change before your interactive shell (e.g. …).

With a Windows XP SP3 Home or Professional disc, you can press Next when asked for the … – Pete H., Dec 15 '09 at 15:53
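The "Unlocking WinXP's setupp.ini" trick linked above reportedly works by changing the three-character channel suffix of the Pid value in i386\setupp.ini, so that a retail install source will accept OEM-channel keys. Below is a minimal sketch of that edit, assuming the commonly reported convention ("000" = retail, "OEM" = OEM); the sample Pid value is hypothetical, and the convention is taken from the linked article rather than verified against every XP release.

```python
import configparser

# Assumed convention (from the tinyapps article linked above, not verified
# against every XP release): the Pid value in i386\setupp.ini ends in a
# three-character channel code -- "000" for retail, "OEM" for OEM media.
RETAIL_SUFFIX, OEM_SUFFIX = "000", "OEM"

def convert_pid(pid: str) -> str:
    """Swap a retail channel suffix for the OEM one; leave anything else alone."""
    if pid.endswith(RETAIL_SUFFIX):
        return pid[:-len(RETAIL_SUFFIX)] + OEM_SUFFIX
    return pid

def rewrite_setupp(path: str) -> None:
    """Rewrite the Pid entry of a copied setupp.ini in place."""
    ini = configparser.ConfigParser()
    ini.optionxform = str                 # preserve the case of the "Pid" key
    ini.read(path)
    ini["Pid"]["Pid"] = convert_pid(ini["Pid"]["Pid"])
    with open(path, "w") as fh:
        ini.write(fh)

print(convert_pid("55274000"))  # hypothetical retail Pid -> 55274OEM
```

Work on a copied install source (for example, one being slipstreamed with nLite), never on original media, and keep a backup of the untouched setupp.ini.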
{}
# Unofficial magicJack Forum Your Unofficial magicJack and magicJack Plus phone service information resource Father Luke MagicJack Newbie Joined: 28 Mar 2010 Posts: 7 Posted: Sun Mar 28, 2010 6:16 pm    Post subject: 000-000-0000 Phone number just says 000-000-0000 - cannot make calls. Live chat was no help (not a surprise, I'm sure). Dan at Magic Jack was no help. bitstopjoe Future magicJack CEO Joined: 13 Sep 2008 Posts: 2844 Location: North East Pennsylvania Posted: Sun Mar 28, 2010 6:27 pm    Post subject: Have you tried plugging the MJ into a different PC using a different ISP? If it still fails the same way then there is a good chance your dongle (ok, not yours, but the MJ) is bad. How long have you had it? Did you buy it locally? If so maybe you can exchange it. Also have you tried stopping all programs running (other than your firewall) prior to plugging it in? Any chance you have Nero or another CD burning program installed? MJ doesn't play nice with them. Just some thoughts off the top of my head. Joe Sica Father Luke MagicJack Newbie Joined: 28 Mar 2010 Posts: 7 Posted: Sun Mar 28, 2010 9:13 pm    Post subject: bitstopjoe wrote: Have you tried plugging the MJ into a different PC using a different ISP? Some of us have only one computer/ISP. Quote: If it still fails the same way then there is a good chance your dongle (ok, not yours, but the MJ) is bad. How long have you had it? Did you buy it locally? If so maybe you can exchange it. It's happened many times. I've finally just filed a complaint with the Florida BBB. Quote: Also have you tried stopping all programs running (other than your firewall) prior to plugging it in? Clean boot - nothing running. Quote: Any chance you have Nero Nope. Quote: or another CD burning program installed? Windows (all versions) come standard, I believe? Quote: MJ doesn't play nice with them. It worked fine until today. So, sadly, no. The answer is just not that simple.
Quote: Just some thoughts off the top of my head. Joe Sica bitstopjoe Future magicJack CEO Joined: 13 Sep 2008 Posts: 2844 Location: North East Pennsylvania Posted: Mon Mar 29, 2010 6:07 am    Post subject: I only have 1 PC and ISP as well, BUT I do have friends and family if I had to try it elsewhere. This will tell you if the MJ itself is bad or there is something on your PC causing the issue. Joe Sica Father Luke MagicJack Newbie Joined: 28 Mar 2010 Posts: 7 Posted: Mon Mar 29, 2010 10:03 am    Post subject: bitstopjoe wrote: I only have 1 PC and ISP as well, BUT I do have friends and family if I had to try it elsewhere. This will tell you if the MJ itself is bad or there is something on your PC causing the issue. Joe Sica When I bought Magic Jack this was not part of the agreement. Last edited by Father Luke on Mon Mar 29, 2010 12:02 pm; edited 1 time in total bitstopjoe Future magicJack CEO Joined: 13 Sep 2008 Posts: 2844 Location: North East Pennsylvania Posted: Mon Mar 29, 2010 10:06 am    Post subject: I'm sorry but when you bought your car was a flat tire part of the agreement? Give me a break ok.. I guess NOTHING you buy ever goes bad or needs tweaking...Amazing...Sure hope you don't use Windows! You are the kind of person tech support hates.. A know-it-all who doesn't really WANT help but demands it never breaks.. Good luck with that ok?? Joe Sica Father Luke MagicJack Newbie Joined: 28 Mar 2010 Posts: 7 Posted: Mon Mar 29, 2010 12:09 pm    Post subject: bitstopjoe wrote: I'm sorry but when you bought your car was a flat tire part of the agreement? Yes, Joe. A standard warranty was part of the agreement. Quote: Give me a break ok.. I guess NOTHING you buy ever goes bad or needs tweaking... Fairly rude response. Short fuse Joe? Maybe your Magic Jack performs badly. I know mine does. Quote: Amazing...Sure hope you don't use Windows! There was nothing in the agreement in buying Magic Jack that it wouldn't work with Windows.
So, I am to understand that Windows is incompatible with Magic Jack? Quote: You are the kind of person tech support hates.. A know it all who doesn't really WANT help but demands it never breaks.. Here is the kind of person I am with Tech Support: Quote: Please wait for a site operator to respond. You are now chatting with 'Zack' Your Issue ID for this chat is LTK4190105357573X Zack: Hello, how may I help you? Father Luke: I haven't used my phone for a while, and when I plugged it in today the phone number shows all zeros. What's up Zack: Okay. Zack: Let me assist you with that but first may I have your magicJack phone number please? Father Luke: 000-000-0000 Zack: May I ask for your magicJack phone number before? Father Luke: That's all I know. It's what the phone says. Zack: Okay. Zack: Are you using Windows 7? Zack: Is your magicJack plugged in right now with the blue light on? Father Luke: yup Father Luke: yup Zack: Do you see the dial pad on the screen? Father Luke: Yup Zack: I will refresh your magicJack here in our end. Please unplug the device as of the moment. Do not plug it until I told you to do so. Father Luke: Unplug it? Father Luke: k Unplugged. Zack: Is it unplugged now? Father Luke: Unplugged. Yup Zack: One moment please... Father Luke: k Zack: Thank you for waiting. Zack: Please plug back in your magicJack now. Father Luke: sec Zack: Sure. Father Luke: k Zack: Do you see the "READY TO CALL" message on the dial pad? Father Luke: sec Father Luke: says: Father Luke: Your contacts were not downloaded ATTENTION You are receivi very importan upgrade within next 60 secon Father Luke: 000-000-0000 Father Luke: same as before Father Luke: no change. Zack: Okay. Zack: Please disable your firewall for the mean time. Zack: Please check if your system clock has the same current time and date in your end. Father Luke: 10 minutes of my life lost Zack: http://upgrades.talk4free.com/upgrade/20091130000000/upgrade.exe Father Luke: Clock is fine. Phone is bad. 
Zack: Please click on the underlined link above and run the upgrade for the magicJack device. Make sure that you will successfully run it. Father Luke: Lucky me Upgrading now Zack: Okay. Father Luke: Now what Zack: Please wait for it to finish. Zack: Please disable any antivirus software that may be running on your system and try again. If you are behind a network/external firewall or router, please open the following ports that magicJack uses: UDP 5060 & 5070, TCP 80 and TCP 443. Father Luke: It's done Zack: Do you see the dial pad now? Father Luke: You know, asking me to disable my virus program is pretty gutsy. I won't do that. but nice try. Father Luke: No dial pad Zack: Okay. Zack: Which of the following Antivirus/Firewall do you have? : Norton, McAfee, ZoneAlarm, AVG, Comodo, Avast, AOL Safety and Security Center, Verizon Internet Security Suite, Sygate Personal Firewall, Armor2Net, Webroot, EZ Firewall, CA Personal Firewall, Trend Micro Internet Security.? Father Luke: I use NOD Zack: Thank you. Father Luke: Windows 7 64 bit Zack: Please unplugged your magicJack and wait for 1 minute then plug it in to the same USB port. Thank you. Father Luke: unplugged Father Luke: what's this: "magicJack uses: UDP 5060 & 5070, TCP 80 and TCP 443." Zack: Yes. magicJack uses those ports. Father Luke: Want I should opne them? Zack: Yes. Father Luke: open Father Luke: done. Zack: Okay. Father Luke: start port 5060 end 5070 - right/ Father Luke: ? Zack: Yes. Zack: Is your magicJack plug in now? Father Luke: k. done. Father Luke: so... wut's nxt Zack: Did you plug in your magicJack? Father Luke: nope. Father Luke: did now Zack: Please plug it back in now. Father Luke: And? Zack: Do you see the dial pad on the screen? Father Luke: no Zack: Okay. Zack: I am transferring you to one of our top 10% agents as rated by our customers. Please hold while I transfer you. Father Luke: sure Please wait while I transfer the chat to the best suited site operator. 
You are now chatting with 'Christina' Your Issue ID for this chat is LTK4190105357573X Christina: Hi, there. I'll be assisting you. Christina: Please hold for a moment while I'll review your previous chats. Thanks! Father Luke: np Father Luke: To bring you up to speed... I just went into \AppData \Roaming\mjusbsp\in00000 clicked startup.exe and I have the dial pad, but the numberts still say: 000-000-0000 Father Luke: 12:30 - noting time. Father Luke: Still there? - 12:35 Christina: Yes. Father Luke: Well. I am too. - 12:40 Christina: can email address please. Father Luke: [email protected] Father Luke: 12:45 Christina: is this your magicjack number xxx Father Luke: Looks about right. Father Luke: 12:50 - 20 minutes to tell me my phone number. This is getting tedious. What kind of progress may I expect in the next 20 minutes? Father Luke: Hello? Christina: Thank you for waiting. Your concern has already been forwarded to the engineers who are working with the international servers and they will be fixing this issue. Christina: You may try calling in the next few hours or at least 12 hours from now. Thank you for the kind understanding and patience. Christina: For the meantime, please stand by for updates until you will be able to receive a follow up email in regards of the concern. You shall be notify for any update in regards of the issue. Father Luke: Why am I able to get voice mail but not call? Father Luke: 12:55 Father Luke: I CANNOT CALL! I HAVE NO outgoing phone. My numbers say 000-000-0000 Father Luke: HELLO! Christina: One moment please... Christina: Thank you for waiting. Your concern has already been forwarded to the engineers who are working with the international servers and they will be fixing this issue. Father Luke: I have been here holding for a half an hour! Father Luke: I am in America! Father Luke: HELLO! Father Luke: HELL OH! Father Luke: Unsatisfactory. Christina: Yes, I understand your concern. Father Luke: So. Nothing to do. 
No satisfaction, no explanation. Just: Sorry. You may use your phone in 12 hours maybe. Is that it? Father Luke: 1:00 Father Luke: Hello? Christina: Would you like us to log onto your computer remotely and resolve this issue for you? If yes, I will transfer you to a higher level of support that will take care of this for you. Father Luke: Allow you into my computer? No. I really don't think so. Please wait while I transfer the chat to the best suited site operator. Father Luke: LAWL! bitstopjoe wrote: Good luck with that ok?? No luck so far, unless you count bad luck Quote: Joe Sica - - Okay, Father Luke Mishap64 MagicJack Newbie Joined: 29 Mar 2010 Posts: 3 Posted: Mon Mar 29, 2010 1:26 pm    Post subject: Damn, I feel your pain. MJ customer support sucks. I'm going to try NetTalk. bitstopjoe Future magicJack CEO Joined: 13 Sep 2008 Posts: 2844 Location: North East Pennsylvania Posted: Mon Mar 29, 2010 1:58 pm    Post subject: My MJ has worked flawlessly since day 1, zero complaints going on close to 2 years now. So well in fact I cancelled my landline shortly thereafter. My Windows comment had nothing to do with MJ, it was said in jest since you seem to expect everything to work flawlessly and there are times Windows does not. So you had a standard agreement with the dealer when you bought your car you would never have a flat tire?? WOW great deal. Even with the best car warranty you will always find something to go wrong which the dealership will have questions about. Heck they may even want to keep it for a few days. And yes the problem may still be there after you get it back (look at Toyota). BUT we are talking about a BIG ticket item as opposed to a $40 item. All of us here know MJ "support" is bad, that is why this forum is a Godsend. My comment about you and support again was tongue in cheek as the impression I got was you expect everything to work every time without failure.
In a perfect world yes, but then in a perfect world I would be taller and have all my hair and no Obamacare... The reason I suggested you try it on another ISP and PC is to isolate the problem so we can help you. If the exact same problem follows your MJ then you know it is most likely the MJ itself which is defective, and then you know what your next step(s) are. I still stand by my suggestion to try it elsewhere, otherwise you have a pretty blue light paperweight. If you lived near me I would gladly have you come over and try it on my PC and then help you troubleshoot it. Joe Sica Last edited by bitstopjoe on Mon Mar 29, 2010 2:23 pm; edited 3 times in total Buttafuoco magicJack Apprentice Joined: 09 Sep 2008 Posts: 24 Posted: Mon Mar 29, 2010 2:13 pm Post subject: Father Luke, are you a priest? Intimmy98 MagicJack Contributor Joined: 14 Nov 2009 Posts: 51 Location: Idaho Posted: Tue Mar 30, 2010 12:42 am Post subject: Father Luke; For your consideration. I have noticed with my MJ that typically after plugging it in, the program executes, the soft phone comes up and it initially indicates 000-000-0000, until it receives my account info from my MJ's proxy (i.e. contacts, call logs, 911 info, etc.). Yours apparently stalls at this initial point and cannot proceed further. Unfortunately, many things could cause this scenario. Some possible troubleshooting items to try if you so desire: A. I could not ascertain from reading your posts how long you had the MJ working before it failed. If recently purchased, did you receive and respond back to the confirmation email from MJ? I only ask because in my case, my spam filter redirected mine. Symptom: MJ worked fine for about 24 hours after registration, then it quit. Once I found the confirmation email and responded by clicking the link in it, all was good. B. Are you able to log into your MJ account page and verify your account info at https://web08.magicjack.com/my/login.html ?
Make sure to scroll to bottom of page, past all the advertising hubbub, and click on the grey "My Magicjack" box. Check to see that all your account info appears up to snuff and that your account is active. C. Run "Windows Update" and make sure all of Microsoft's latest and greatest updates are applied. D. You might try removing the MJ program then doing a clean reinstall. It is a relatively quick process, and may clear up any bottlenecks your system or router may have experienced. If you decide to give this a whirl, use the mjRemover to completely uninstall the MJ. It can be downloaded at http://upgrades.talk4free.com/tools/mjRemover.exe. Removal/Reinstall Steps:
- Disconnect your MJ dongle
- Run "mjRemover.exe". Once this is complete and your MJ is uninstalled…
- Perform a complete system shutdown.
- Unplug your router. (Let both systems sit idle for a few minutes to clear any buffers).
- Meanwhile, check that your USB connections are solid (don't use the short cord that came with your MJ as it can cause issues). Make sure to use a USB port on the back of your computer. *
- Plug your router back in then boot up your computer
- Once your system has come up completely (wait a few minutes after you log on), then plug your MJ in and let it reinstall. You will be asked to enter your user email address and account password.
* If you don't already use a powered USB hub, you might consider getting one. I hope the suggestions above can help remedy your situation. By the way, I'm a different Dan than the one you already tried to contact bitstopjoe Future magicJack CEO Joined: 13 Sep 2008 Posts: 2844 Location: North East Pennsylvania Posted: Tue Mar 30, 2010 4:41 am Post subject: I guess you missed the part when he said "When I bought Magic Jack this was not part of the agreement." when I suggested he try it in another PC. I guess if it is not plug and play (which it is for most) he is not interested. But good troubleshooting points there. Maybe he will try them.
Joe Sica Intimmy98 MagicJack Contributor Joined: 14 Nov 2009 Posts: 51 Location: Idaho Posted: Tue Mar 30, 2010 2:22 pm Post subject: No, I didn't miss that part. I just didn't have enough of the "juicy juice" on hand last night, to take that issue on as well. I was hopeful that since Father Luke came to this forum seeking assistance, that perhaps he might also find some extra patience and work our suggestions towards finding a solution to his situation. Yes, the troubleshooting process takes considerable time as well. However, the payoff is well worth the effort for sure. bitstopjoe Future magicJack CEO Joined: 13 Sep 2008 Posts: 2844 Location: North East Pennsylvania Posted: Tue Mar 30, 2010 2:30 pm Post subject: Intimmy98 wrote: No, I didn't miss that part. I just didn't have enough of the "juicy juice" on hand last night, to take that issue on as well. I was hopeful that since Father Luke came to this forum seeking assistance, that perhaps he might also find some extra patience and work our suggestions towards finding a solution to his situation. Yes, the troubleshooting process takes considerable time as well. However, the payoff is well worth the effort for sure. Yep I agree, but then for me tweaking is half the fun AND a great learning process to boot. Would be a shame a man of the cloth ( if in fact he is one, as we can only go by his handle) would in fact not have the patience to try some of the things we suggested. Just as one needs to know what to do if you turn your key and your car doesn't start. Of course I can hear the Father's response, "why son, that is why God invented AAA" And of course for most he would be correct. I hope he tried some of our suggestions and lets us know how he made out.. Joe Sica Buttafuoco magicJack Apprentice Joined: 09 Sep 2008 Posts: 24 Posted: Wed Mar 31, 2010 2:28 pm Post subject: For a$20 a year phone line, I can personally justify minimal support fom the MJ side. 
Thank you phoneservicesupport.com for providing these support forums! "I have already been judged, and I paid my debt. Please cut me a little slack and perhaps give me the benefit of the doubt. Thank You!" cdwaldron Dan isn't smart enough to hire me Joined: 26 Jul 2008 Posts: 107 Location: Tucson, AZ Posted: Wed Mar 31, 2010 10:35 pm    Post subject: "Father Luke" is just a troll. According to his 'chat' he doesn't know the phone # of his MJ. Is his account active? He never answered Joe's questions about when and where he purchased his MJ. Did he ever activate the MJ? Did he buy it on eBay for $3 and now wants it to work? You can't believe a word he says. He should have stayed in Rants & Raves. Kenny2469 MagicJack Contributor Joined: 22 Feb 2010 Posts: 70 Posted: Thu Apr 01, 2010 2:02 am    Post subject: What a sad, pathetic individual... Want help, ask and take in the replies, don't knock em... This forum IS the AAA / CAA of the MJ world... Best of all, help is FREE and you don't have a greasy tow truck driver grunging up your junk... =) magicFox MagicJack Expert Joined: 02 Nov 2009 Posts: 86 Location: Mesa, AZ Posted: Thu Apr 01, 2010 2:09 am    Post subject: What 000-* means is MJ can't connect to its server. We all see the 000-* every time before our MJ connects at startup. Buttafuoco magicJack Apprentice Joined: 09 Sep 2008 Posts: 24 Posted: Thu Apr 01, 2010 2:18 pm    Post subject: I hope there are no young children in Father Luke's parish, particularly young boys. bitstopjoe Future magicJack CEO Joined: 13 Sep 2008 Posts: 2844 Location: North East Pennsylvania Posted: Thu Apr 01, 2010 2:21 pm    Post subject: Buttafuoco wrote: I hope there are no young children in Father Luke's parish, particularly young boys. Now now....... no need for a cheap shot like that.............. Joe Sica Buttafuoco magicJack Apprentice Joined: 09 Sep 2008 Posts: 24 Posted: Thu Apr 01, 2010 2:22 pm    Post subject: Sorry, you are right.
You're a good guy, Joe, for patiently trying to help the guy. bitstopjoe Future magicJack CEO Joined: 13 Sep 2008 Posts: 2844 Location: North East Pennsylvania Posted: Thu Apr 01, 2010 2:34 pm    Post subject: I try, thank you. All I can do is make suggestions here, it is up to the person if they want to try it or not. Something about a horse comes to mind and making it drink. I really did try to help him, but it was very hard to get through to him. I hope he checks back in to say he did try a few of the suggestions posted here by me and others. I TRY not to get personal or take things personally on here. I have had more than my share of cheap shots on here by simply telling people to do a search first. Especially when the question they are asking has been discussed to death on here. For one they will get their answer quick and for two they might learn something by reading this forum first. Heck I did. I read this forum close to a year before I bought my MJ in Aug 2008. Joe Sica Buttafuoco magicJack Apprentice Joined: 09 Sep 2008 Posts: 24 Posted: Thu Apr 01, 2010 2:44 pm    Post subject: Joe, you no doubt set a very good example. I wonder sometimes if the people who pass through here solely because they have an issue needing resolution realize that these forums are strictly user-supported and "unofficial", as this website is not paid for or even sanctioned by Ymax Communications. Nobody knows better than I about being the butt of the cheap shot. Thanks for setting me straight on that. I try but sometimes forget the golden rule! We need more like you Joe in forums web-wide! bitstopjoe Future magicJack CEO Joined: 13 Sep 2008 Posts: 2844 Location: North East Pennsylvania Posted: Thu Apr 01, 2010 3:01 pm    Post subject: Awwwwwwwwwwwwwwwwwwwwwww thank you for the kind words. But I promise you my halo does come off at times on here when I get my buttons pushed.
Joe Sica

mufon | Dan isn't smart enough to hire me | Joined: 25 Jan 2008 | Posts: 296 | Location: Highland Village, Texas
Posted: Thu Apr 01, 2010 3:51 pm    Post subject:

Hey Joe, ur "w" key is stickey. You can buy a new keyboard, but me being a cheapskate I would just slow my typematic rate and then just proofread before posting, LOL! Have a great day everybody!

Intimmy98 | MagicJack Contributor | Joined: 14 Nov 2009 | Posts: 51 | Location: Idaho
Posted: Thu Apr 01, 2010 4:22 pm    Post subject:

I think it's that halo coming off that's making that 'w' stick...

bitstopjoe | Future magicJack CEO | Joined: 13 Sep 2008 | Posts: 2844 | Location: North East Pennsylvania
Posted: Thu Apr 01, 2010 4:43 pm    Post subject:

That's what happens when you live alone for 7 years. Sometimes keys get sticky. Joe Sica

tsmith | Dan isn't smart enough to hire me | Joined: 18 Jan 2010 | Posts: 420 | Location: Utah
Posted: Fri Apr 02, 2010 12:16 am    Post subject:

Think I'll file that under "TMI"....

bitstopjoe | Future magicJack CEO | Joined: 13 Sep 2008 | Posts: 2844 | Location: North East Pennsylvania
Posted: Fri Apr 02, 2010 6:01 am    Post subject:

My halo slipped...................apologies. I have reaffixed it using SuperGlue. Should be good for a while.. Joe Sica

tsmith | Dan isn't smart enough to hire me | Joined: 18 Jan 2010 | Posts: 420 | Location: Utah
Posted: Fri Apr 02, 2010 11:16 am    Post subject:

Heh, I thought your joke was pretty good Joe. No need to re-affix your halo.

bitstopjoe | Future magicJack CEO | Joined: 13 Sep 2008 | Posts: 2844 | Location: North East Pennsylvania
Posted: Fri Apr 02, 2010 2:11 pm    Post subject:

tsmith wrote: Heh, I thought your joke was pretty good Joe. No need to re-affix your halo.

Sure NOW you tell me.. Any idea how hard it is to get Super Glue off a halo!!!!! Joe Sica

Buttafuoco | magicJack Apprentice | Joined: 09 Sep 2008 | Posts: 24
Posted: Fri Apr 02, 2010 2:22 pm    Post subject:

Quote: Sure NOW you tell me.. Any idea how hard it is to get Super Glue off a halo!!!!!
Joe Sica

I'll take a guess, less hard than removing the sticky "w" key!

bitstopjoe | Future magicJack CEO | Joined: 13 Sep 2008 | Posts: 2844 | Location: North East Pennsylvania
Posted: Fri Apr 02, 2010 2:35 pm    Post subject:

Well that won't be an issue any longer because in the process of Super Gluing my halo back on I stuck a bunch of fingers together. I think we better stop as I am sure the moderator and others are rolling their eyes at this point saying WHAT THE HELL DOES THIS HAVE TO DO WITH MJ!! Nothing of course, but always good to have a laugh or two.. Unless for some reason they do a search for sticky keys!! Joe Sica

Kenny2469 | MagicJack Contributor | Joined: 22 Feb 2010 | Posts: 70
Posted: Sat Apr 03, 2010 1:12 am    Post subject:

bitstopjoe wrote: My halo slipped...................apologies. I have reaffixed it using SuperGlue. Should be good for a while.. Joe Sica

Are you sure that's a halo in your lap? Ohhh, too soon?? lol...

bitstopjoe | Future magicJack CEO | Joined: 13 Sep 2008 | Posts: 2844 | Location: North East Pennsylvania
Posted: Sat Apr 03, 2010 6:04 am    Post subject:

You forgot the rest.... OR are you happy to see me. YES too soon. Too soon..... Joe Sica

Athanasian_Creed | MagicJack Newbie | Joined: 03 Mar 2010 | Posts: 9
Posted: Sun Apr 04, 2010 11:18 pm    Post subject:

OK, I'll help get Joe out of a 'sticky' situation by bringing the thread back on topic. When I first got my MJ, I spent almost 6 HOURS with tech support trying to set the darn thing up. Nada, nothing - no joy! I decided to try a friend's computer and voila, was up and about in 10 mins! Came home, plugged the MJ into my computer, no problemo, still working to this day. Only thing is, I can't log into http://my.magicjack.com to access my account - tried on my friend's computer again and was able to no problem!!?? Any idea why this would be the case - I'm behind a router btw. It really has me stumped as to why I couldn't set up MJ nor can I access my account via the website on my computer!
TIA, Ray

Intimmy98 | MagicJack Contributor | Joined: 14 Nov 2009 | Posts: 51 | Location: Idaho
Posted: Sun Apr 04, 2010 11:50 pm    Post subject:

Try the "My" link provided on the softphone just below the MJ "status area" and above the "End" call link. Here is the web address: https://web08.magicjack.com/my/login.html

Sounds like Joe is doing just fine... by himself!!!

Intimmy98 | MagicJack Contributor | Joined: 14 Nov 2009 | Posts: 51 | Location: Idaho
Posted: Mon Apr 05, 2010 12:21 am    Post subject:

BTW Ray, welcome to the forum! Some routers do create their own sets of issues. There are so many variables at play here. I am not sure you have seen this info before; if not, you might save the link as a favorite as it is chock full of MJ info, tips and tricks. http://en.wikibooks.org/wiki/MagicJack

In regards to your situation, keep in mind that once you successfully registered your MJ on your friend's system, it downloaded all the current updates to your MJ and was essentially good to go. You were fortunate in getting it to work on your home system shortly thereafter. Others such as myself had even more issues to untangle once we got the silly thing to communicate with the outside world. Hopefully the new web link provided in the previous post will get you all squared away.

bitstopjoe | Future magicJack CEO | Joined: 13 Sep 2008 | Posts: 2844 | Location: North East Pennsylvania
Posted: Mon Apr 05, 2010 8:11 am    Post subject:

Athanasian_Creed wrote: OK, i'll help get Joe out of a 'sticky' situation by bringing the thread back on topic When i first got my MJ, i spent 6 HOURS almost with tech support trying to set the darn thing up. Nada, nothing - no joy! I decided to try a friend's computer and voila, was up and about in 10 mins! Came home, plugged the MJ into my computer, no problemo, still working to this day. Only thing is, i can't log into http://my.magicjack.com to access my account - tried on my friend's computer again and was able to no problem!!??
Any idea why this would be the case - i'm behind a router btw. It really has me stumped as to why i couldn't set up MJ nor can i access my account via the website on my computer! TIA, Ray

A trick to get into your account: on your MJ softphone, click under the ad where it says "click here to order". Ignore what they are trying to sell you and just click on the gray bar below. And if you noticed, no password to get into your account. Pretty cool, unless you are not the only one to use your PC. As for why you can not get there via the web page, I can only guess MAYBE you have it blocked in your IE options, as there is a setting there to block sites. MAYBE it was added by accident. See if that works. Joe Sica

Kenny2469 | MagicJack Contributor | Joined: 22 Feb 2010 | Posts: 70
Posted: Mon Apr 05, 2010 7:09 pm    Post subject:

Definitely sounds like a firewall or router issue here... have you tried turning off your firewall or bypassing the router and plugging your computer directly into the cable or DSL modem, just to rule those out???

Athanasian_Creed | MagicJack Newbie | Joined: 03 Mar 2010 | Posts: 9
Posted: Tue Apr 06, 2010 5:31 pm    Post subject:

Intimmy98 wrote: Try the "My" link provided on the softphone just below the MJ "status area" and above "End" call link. Here is the web address: https://web08.magicjack.com/my/login.html

Thanks for the suggestion - tried that - again, no joy! It must surely have to do with my router since I can access it on a friend's comp (sans router).

Intimmy98 wrote: Sounds like Joe is doing just fine... by himself!!!

Oh my virgin eyes have been violated! Ray

Athanasian_Creed | MagicJack Newbie | Joined: 03 Mar 2010 | Posts: 9
Posted: Tue Apr 06, 2010 5:42 pm    Post subject:

Intimmy98 wrote: BTW Ray, welcome to the forum!

Thanks Dan - you all are a wealth of knowledge - a sight for sore newbie eyes! I did a lot of perusing the forum even before I took the plunge and bought the MJ.
Intimmy98 wrote: I am not sure you have seen this info before, if not you might save the link as a favorite as it is chock full of MJ info, tips and tricks. http://en.wikibooks.org/wiki/MagicJack

Yes, thanks, in my perusing the forums previously I saw the mention of that site - will keep it in mind.

Intimmy98 wrote: In regards to your situation, keep in mind that once you successfully registered your MJ on your friends system, it downloaded all the current updates to your MJ and was essentially good to go. You were fortunate in getting it to work on your home system shortly thereafter. Others such as myself had even more issues to untangle once we got the silly thing to communicate with the outside world.

Yes, thank God - it was no problem whatsoever after using another computer to register. Even after I recently reinstalled Win7 from scratch, no problems. I did have my doubts originally if I'd ever get it set up - I'm sure others, as you say, have had it worse than I!

Intimmy98 wrote: Hopefully the new web link provided in previous post will get you all squared away.

Nope - all I get is a "trying to connect to..." message. This after using not only IE, but Firefox & Opera as well. C'est la vie, I guess my friend will come in handy should I need to access my account. One thing I did while on line with tech support was have them disable auto-renew. Thank you to the person who alerted us to the policy to auto-renew! Ray

Athanasian_Creed | MagicJack Newbie | Joined: 03 Mar 2010 | Posts: 9
Posted: Tue Apr 06, 2010 5:47 pm    Post subject:

bitstopjoe wrote: A trick to get into your account. On your MJ softphone click under the ad where it says "click here to order" Ignore what they are trying to sell you and just click on the gray bar below. And if you noticed no password to get into your account. Pretty cool, unless you are not the only one to use your PC.
As for why you can not get there via the web page, I can only guess MAYBE you have it blocked in your IE options as there is a setting there to block sites. MAYBE it was added by accident. Joe Sica See if that works.

Thanks Joe, that just might work. As for the IE options, the tech had me change all sorts of things - add this, delete that, stand on your head and spit nickels, that sort of stuff. It was an exercise in futility & a test of my not-so-up-to-snuff patience! Ray

Athanasian_Creed | MagicJack Newbie | Joined: 03 Mar 2010 | Posts: 9
Posted: Tue Apr 06, 2010 5:50 pm    Post subject:

Kenny2469 wrote: definately sounds like a firewall or router issue here... have you tried turning off your firewall or bypassing the router and plug your computer directly into the cable or dsl modem, just to rule those out???

Turned off Windows Firewall as per tech support - didn't help. I have thought of plugging directly into the computer and checking the results - I haven't as of yet because all the changes I needed to make for now have been made via my friend's computer. Thanks for the suggestion though Kenny, much appreciated! Ray

bitstopjoe | Future magicJack CEO | Joined: 13 Sep 2008 | Posts: 2844 | Location: North East Pennsylvania
Posted: Tue Apr 06, 2010 5:51 pm    Post subject:

At least he didn't have you put your MJ in a bag and swing it over your head and cluck like a chicken.... YET. Actually that was something from an OLD Dick Van Dyke show... Big fan and still funny after all this time. Joe Sica

Intimmy98 | MagicJack Contributor | Joined: 14 Nov 2009 | Posts: 51 | Location: Idaho
Posted: Tue Apr 06, 2010 8:21 pm    Post subject:

Athanasian_Creed: You're a great sport for sure! Sounds to me like you have discovered you do in fact have plenty of patience, and obviously lots of nickels too! BTW, sorry about any unintended damage to your eyes! Which brand of router are you using, and have you checked the manufacturer's website for any possible firmware updates?
As Kenny2469 mentioned, bypass the router completely and see if you are able to connect normally to MJ's site?

Athanasian_Creed | MagicJack Newbie | Joined: 03 Mar 2010 | Posts: 9
Posted: Tue Apr 06, 2010 8:59 pm    Post subject:

Intimmy98 wrote: Athanasian_Creed: You're a great sport for sure! Sounds to me like you have discovered you do in fact have plenty of patience, and obviously lots of nickels too!

I just wanted to get the flippin' thing workin' so I was willing to listen to any recommendations by tech support (even though I swore I knew more than they did!)

Intimmy98 wrote: BTW, sorry about any unintended damage to your eyes!

Hehe - back to normal so no real damage done!

Intimmy98 wrote: Which brand of router are you using and have you checked the manufacturer's website for any possible firmware updates? As Kenny2469 mentioned, bypass the router completely and see if you are able to connect normally to MJ's site?

I have a cheesy, cheapo TrendNET wireless router which decided after XP & Vista Service Pack installs not to operate as a wireless, so it's working OK as a wired router. Firmware updates - I think there is one but to be honest, I don't want to hose the router by updating the firmware (slight chance but still possible). I run a SlingBox through the router so I need the router to work. I'd phone to get the router issues straightened out, but to tell you the truth, I don't know which 'tech support' is worse - that of MJ or TrendNET!! Ray

Kenny2469 | MagicJack Contributor | Joined: 22 Feb 2010 | Posts: 70
Posted: Tue Apr 06, 2010 9:11 pm    Post subject:

Man, TrendNet has got to be up there in the category of worst... I had one of those routers (cuz it was cheap and hey, so am I) but the troubles outweighed my cheapness and I went out and bought a D-Link... The best thing I ever did with the TrendNet router was to back over it with a 5 ton car hauler....
=)
# Risk Parameters

Each asset in the Aave protocol has specific values related to its risk, which influence how it is loaned and borrowed. The table below shows a summary of the latest values.

| Name | Symbol | Collateral | Loan To Value | Liquidation Threshold | Liquidation Bonus |
| --- | --- | --- | --- | --- | --- |
| **Stablecoins** | | | | | |
| Binance USD | BUSD | no | - | - | - |
| DAI | DAI | yes | 75% | 80% | 5% |
| Synthetix USD | SUSD | no | - | - | - |
| True USD | TUSD | yes | 75% | 80% | 5% |
| USDC | USDC | yes | 75% | 80% | 5% |
| Tether | USDT | no | - | - | - |
| **Other Assets** | | | | | |
| Basic Attention Token | BAT | yes | 60% | 65% | 10% |
| Enjin | ENJ | yes | 55% | 65% | 10% |
| Ethereum | ETH | yes | 75% | 80% | 5% |
| Kyber Network | KNC | yes | 60% | 65% | 10% |
| Aave | AAVE | yes | 50% | 65% | 10% |
| Chainlink | LINK | yes | 65% | 70% | 10% |
| Decentraland | MANA | yes | 60% | 65% | 10% |
| Maker | MKR | yes | 50% | 65% | 10% |
| Republic Protocol | REN | yes | 50% | 65% | 10% |
| Augur | REP | yes | 35% | 65% | 10% |
| Synthetix | SNX | yes | 15% | 40% | 10% |
| Uniswap | UNI | yes | 40% | 65% | 15% |
| Wrapped BTC | WBTC | yes | 60% | 65% | 15% |
| Yearn YFI | YFI | yes | 40% | 65% | 15% |
| 0x | ZRX | yes | 60% | 65% | 10% |

The table above results from the asset risk assessment relating to security, governance and the markets. Tokens with security concerns around their smart contract cannot be considered for integration, since these risks are impossible to control. Similarly, tokens with risk exposure to single counter-parties cannot be used as collateral.

# Risk Parameters Change

When market conditions change, risks change, and so we continuously monitor the assets integrated into the protocol, which sometimes requires quickly adapting the risk parameters. The table below tracks parameter changes, which are shown in bold.
| Date | Asset | LTV | Liquidation Threshold | Liquidation Bonus | Comment |
| --- | --- | --- | --- | --- | --- |
| 21/10/2020 | MKR | 50% | 65% | 10% | Decreased volatility |
| 21/10/2020 | TUSD | 75% | 80% | 5% | Following review of the smart contracts |
| 22/07/2020 | LEND | 50% | 65% | 10% | LEND cannot be borrowed due to incoming migration |
| 16/07/2020 | LEND | 50% | 65% | 10% | Improved risk parameters |
| 16/07/2020 | SNX | 15% | 40% | 10% | New Collateral |
| 16/07/2020 | ENJ | 55% | 65% | 10% | New Asset |
| 16/07/2020 | REN | 50% | 65% | 10% | New Asset |
| 19/06/2020 | TUSD | 1% | 80% | 5% | Unaudited Update |

# Risk Parameters Analysis

The risk parameters make it possible to mitigate the market risks of the currencies supported by the protocol. Each loan is guaranteed by collateral that may be subject to volatility. Sufficient margin and incentives are needed for the loan to remain collateralized in adverse market conditions. If the value of the collateral falls below a threshold, part of it is auctioned to repay part of the loan and keep the ongoing loan collateralized.

## Collaterals

USDT, sUSD and SNX are strongly exposed to the risk of a single point of failure in their governance. Their counter-party risk is too high, both in terms of centralization and trust. For this reason, we cannot consider them guarantors of the solvency of the protocol, and USDT, sUSD and SNX cannot be used as collateral. Similarly, BUSD is fairly new with few transactions. This leads to a high smart contract risk, so it is excluded as collateral.

Overall, stablecoins are mostly used for borrowing, while volatile assets that users are long on are mostly used as collateral. Hence, the users of the protocol still gain great benefits from the addition of these stablecoins. Their risks are mitigated by the fact that they cannot be used as collateral.

Market risks can be mitigated through Aave's risk parameters, which define collateralization and liquidation rules. These parameters are calibrated per currency to account for the specific risks identified, as shown in Figure 2.
## Loan to Value

The Loan to Value (LTV) ratio defines the maximum amount of currency that can be borrowed with a specific collateral. It is expressed as a percentage: at LTV = 75%, for every 1 ETH worth of collateral, borrowers can borrow up to 0.75 ETH worth of the corresponding currency. Once a loan is taken, the effective loan-to-value evolves with market conditions.

## Liquidation Threshold

The liquidation threshold is the percentage at which a loan is defined as undercollateralized. For example, a liquidation threshold of 80% means that if the borrowed value rises above 80% of the collateral value, the loan is undercollateralized and can be liquidated. The delta between the Loan-To-Value and the Liquidation Threshold is a safety cushion for borrowers.

## Liquidation Bonus

The bonus on the price of the collateral assets that liquidators receive when purchasing collateral as part of the liquidation of a loan that has passed the liquidation threshold.

## Health Factor

For each loan, these risk parameters enable the calculation of the health factor:

$H_f = \frac{ \sum{Collateral_i \: in \: ETH \: \times \: Liquidation \: Threshold_i}}{Total \: Borrows \: in \: ETH \: + \: Total \: Fees \: in \: ETH}$

When $H_f < 1$, the loan is undercollateralized and may be liquidated to maintain solvency, as described in the diagram below.

## From Market Risks to Risk Parameters

Market risks are assessed on 3 levels, which have different effects on the risk parameters:

### Liquidity

Liquidity is based on the volume on the markets, which is key for the liquidation process. Low liquidity can be mitigated through the liquidation parameters: the lower the liquidity, the higher the incentives.

### Volatility

Price volatility can negatively affect the collateral, which safeguards the solvency of the protocol and must cover the loan liabilities. The risk of the collateral falling below the borrowed amounts can be mitigated through the level of coverage required, the Loan-To-Value.
It also affects the liquidation process, as the margin for liquidators needs to allow for profit. The least volatile currencies are the stablecoins followed by ETH; they have the highest LTV at 75% and the highest liquidation threshold at 80%. The most volatile currencies, REP and LEND, have the lowest LTV at 35% and 40%. Their liquidation thresholds are set at 65% to protect our users from a sharp drop in price, which could lead to undercollateralization followed by liquidation.

### Market Capitalization

The market capitalization represents the size of the market, which is important when it comes to liquidating collateral. A small market capitalization can be mitigated through the liquidation parameters: the smaller the market cap, the higher the incentives.
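The health factor formula above can be computed directly. The following sketch is purely illustrative (it is not Aave's contract code), and all positions, prices and thresholds in it are hypothetical:

```python
# Illustrative sketch of the Aave health factor formula -- not protocol code.
# All positions and parameter values below are hypothetical examples.

def health_factor(collaterals, total_borrows_eth, total_fees_eth):
    """collaterals: list of (value_in_eth, liquidation_threshold) pairs."""
    weighted = sum(value * threshold for value, threshold in collaterals)
    return weighted / (total_borrows_eth + total_fees_eth)

# Hypothetical position: 10 ETH of ETH collateral (threshold 80%)
# and 5 ETH worth of LINK (threshold 70%), borrowing 9 ETH plus 0.5 ETH fees.
hf = health_factor([(10, 0.80), (5, 0.70)], total_borrows_eth=9, total_fees_eth=0.5)
print(round(hf, 3))  # 11.5 / 9.5, just above 1, so the loan is still collateralized
```

If the borrowed amount grew (or the collateral value fell) until the ratio dropped below 1, the loan would become eligible for liquidation.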
## Product Rule for Logarithms

### Learning Outcomes

• Define properties of logarithms, and use them to solve equations
• Define and use the product rule for logarithms

Recall that we can express the relationship between logarithmic form and its corresponding exponential form as follows:

${\mathrm{log}}_{b}\left(x\right)=y\Leftrightarrow {b}^{y}=x,\text{ }b>0,b\ne 1$

Note that the base b is always positive and that the logarithmic and exponential functions “undo” each other. This means that logarithms have properties similar to exponents. Some important properties of logarithms are given in this section. First, we will introduce some basic properties of logarithms followed by examples with integer arguments to help you get familiar with the relationship between exponents and logarithms.

### Zero and Identity Exponent Rule for Logarithms and Exponentials

${\mathrm{log}}_{b}1=0$, $b>0$

${\mathrm{log}}_{b}b=1$, $b>0$

### Example

Use the fact that exponentials and logarithms are inverses to prove the zero and identity exponent rule for the following:

1. ${\mathrm{log}}_{5}1=0$
2. ${\mathrm{log}}_{5}5=1$

Exponential and logarithmic functions are inverses of each other, and we can take advantage of this to evaluate and solve expressions and equations involving logarithms and exponentials. The inverse property of logarithms and exponentials gives us an explicit way to rewrite an exponential as a logarithm or a logarithm as an exponential.

### Inverse Property of Logarithms and Exponentials

${\mathrm{log}}_{b}\left({b}^{x}\right)=x$

${b}^{{\mathrm{log}}_{b}x}=x, \; x>0, \; b>0, \; b\ne1$

### Example

Evaluate:

1. $\mathrm{log}\left(100\right)$
2. ${e}^{\mathrm{ln}\left(7\right)}$

Another property that can help us simplify logarithms is the one-to-one property. Essentially, this property states that if two logarithms have the same base, then their arguments – the stuff inside – are also equal to each other.
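Before moving on, the zero, identity, and inverse properties above can be spot-checked numerically. This is a quick sketch using only Python's standard library:

```python
import math

# Zero and identity rules: log_b(1) = 0 and log_b(b) = 1
assert math.log(1, 5) == 0                # log_5(1) = 0
assert math.isclose(math.log(5, 5), 1)    # log_5(5) = 1

# Inverse properties: log_b(b^x) = x and b^(log_b x) = x
assert math.isclose(math.log10(100), 2)        # log(100) = 2, since 10^2 = 100
assert math.isclose(math.exp(math.log(7)), 7)  # e^(ln 7) = 7
```

The `isclose` comparisons account for floating-point rounding; the identities themselves are exact.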
### The One-To-One Property of Logarithms

${\mathrm{log}}_{b}M={\mathrm{log}}_{b}N\text{ if and only if }M=N$

### Example

Solve the equation ${\mathrm{log}}_{3}\left(3x\right)={\mathrm{log}}_{3}\left(2x+5\right)$ for $x$.

What if we had started with the equation ${\mathrm{log}}_{3}\left(3x\right)+{\mathrm{log}}_{3}\left(2x+5\right)=2$? The one-to-one property does not help us in this instance. Before we can solve an equation like this, we need a method for combining terms on the left side of the equation.

To recap, the properties of logarithms and exponentials that can help us understand, simplify, and solve these types of functions more easily include:

• Zero and Identity Exponent Rule: ${\mathrm{log}}_{b}1=0$, $b\gt0$ and ${\mathrm{log}}_{b}b=1$, $b\gt0$
• Inverse Property: ${\mathrm{log}}_{b}\left({b}^{x}\right)=x$ and ${b}^{{\mathrm{log}}_{b}x}=x, \; x>0$
• One-To-One Property: ${\mathrm{log}}_{b}M={\mathrm{log}}_{b}N\text{ if and only if }M=N$

## The Product Rule for Logarithms

Recall that we use the product rule of exponents to combine the product of like bases raised to exponents by adding the exponents: ${x}^{a}{x}^{b}={x}^{a+b}$. We have a similar property for logarithms, called the product rule for logarithms, which says that the logarithm of a product is equal to a sum of logarithms. Because logs are exponents, and we multiply like bases, we can add the exponents. We will use the inverse property to derive the product rule below.

### The Product Rule for Logarithms

The product rule for logarithms can be used to simplify a logarithm of a product by rewriting it as a sum of individual logarithms.

${\mathrm{log}}_{b}\left(MN\right)={\mathrm{log}}_{b}\left(M\right)+{\mathrm{log}}_{b}\left(N\right)\text{ for }b>0$

Let $m={\mathrm{log}}_{b}M$ and $n={\mathrm{log}}_{b}N$. In exponential form, these equations are ${b}^{m}=M$ and ${b}^{n}=N$.
It follows that

$\begin{array}{lll}{\mathrm{log}}_{b}\left(MN\right) & ={\mathrm{log}}_{b}\left({b}^{m}{b}^{n}\right) & \text{Substitute for }M\text{ and }N.\\ & ={\mathrm{log}}_{b}\left({b}^{m+n}\right) & \text{Apply the product rule for exponents}.\\ & =m+n & \text{Apply the inverse property of logs}.\\ & ={\mathrm{log}}_{b}\left(M\right)+{\mathrm{log}}_{b}\left(N\right) & \text{Substitute for }m\text{ and }n.\end{array}$

Repeated applications of the product rule for logarithms allow us to simplify the logarithm of the product of any number of factors. Consider the following example:

### Example

Using the product rule for logarithms, rewrite the logarithm of a product as the sum of logarithms of its factors.

${\mathrm{log}}_{b}\left(wxyz\right)$

In our next example, we will first factor the argument of a logarithm before expanding it with the product rule.

### Example

Expand ${\mathrm{log}}_{3}\left(30x\left(3x+4\right)\right)$.

### Analysis of the Solution

It is tempting to use the distributive property when you see an expression like $\left(30x\left(3x+4\right)\right)$, but in this case, it is better to leave the argument of this logarithm as a product, since you can then use the product rule for logarithms to simplify the expression. The following video provides more examples of using the product rule to expand logarithms.

## Summary

Logarithms have properties that can help us simplify and solve expressions and equations that contain logarithms. Exponentials and logarithms are inverses of each other; therefore, we can define the product rule for logarithms. We can use this to simplify or solve expressions with logarithms. Given the logarithm of a product, use the product rule of logarithms to write an equivalent sum of logarithms as follows:

1. Factor the argument completely, expressing each whole number factor as a product of primes.
2. Write the equivalent expression by summing the logarithms of each factor.
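As a numerical sanity check of the product rule, the sketch below (standard-library Python, using change of base) confirms that the logarithm of a product equals the sum of the logarithms of its factors:

```python
import math

def log_b(x, b):
    # Change-of-base formula: log_b(x) = ln(x) / ln(b)
    return math.log(x) / math.log(b)

M, N, b = 8.0, 32.0, 2.0
lhs = log_b(M * N, b)            # log_2(256) = 8
rhs = log_b(M, b) + log_b(N, b)  # log_2(8) + log_2(32) = 3 + 5 = 8
assert math.isclose(lhs, rhs)
```

The check passes for any positive M, N and any valid base b; the values 8, 32, and base 2 are chosen only so the exponents are easy to see.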
# Mixture Design

## Introduction

When a product is formed by mixing together two or more ingredients, the product is called a mixture, and the ingredients are called mixture components. In a general mixture problem, the measured response is assumed to depend only on the proportions of the ingredients in the mixture, not on the amount of the mixture. For example, the taste of a fruit punch recipe (i.e., the response) might depend on the proportions of watermelon, pineapple and orange juice in the mixture. The taste of a small cup of fruit punch will obviously be the same as a big cup.

Sometimes the responses of a mixture experiment depend not only on the proportions of ingredients, but also on the settings of variables in the process of making the mixture. For example, the tensile strength of stainless steel is not only affected by the proportions of iron, copper, nickel and chromium in the alloy; it is also affected by process variables such as temperature, pressure and curing time used in the experiment.

One of the purposes of conducting a mixture experiment is to find the best proportion of each component and the best value of each process variable, in order to optimize a single response or multiple responses simultaneously. In this chapter, we will discuss how to design effective mixture designs and how to analyze data from mixture experiments with and without process variables.

## Mixture Design Types

There are several different types of mixture designs. The most common ones are simplex lattice, simplex centroid, simplex axial and extreme vertex designs, each of which is used for a different purpose.

• If there are many components in a mixture, the first choice is to screen out the most important ones. Simplex axial and simplex centroid designs are used for this purpose.
• If the number of components is not large, but a high order polynomial equation is needed in order to accurately describe the response surface, then a simplex lattice design can be used.

• Extreme vertex designs are used for the cases where there are constraints on one or more components (e.g., if the proportion of watermelon juice in a fruit punch recipe is required to be less than 30%, and the combined proportion of watermelon and orange juice should always be between 40% and 70%).

### Simplex Plot

Since the sum of all the mixture components is always 100%, the experiment space is usually given by a plot. The experiment space for the fruit punch experiment is given in the following triangle or simplex plot.

The triangle area in the above plot is defined by the fact that the sum of the three ingredients is 1 (100%). For the points that are on the vertices, the punch has only one ingredient. For instance, point 1 has only watermelon. The line opposite point 1 represents a mixture with no watermelon. The coordinate system used for the value of each ingredient $x_i$, $i=1,2,...,q$ is called a simplex coordinate system, where q is the number of ingredients.

The simplex plot can only visually display three ingredients. If there are more than three ingredients, the values for the other ingredients must be provided. For the fruit punch example, the coordinate for point 1 is (1, 0, 0). The interior points of the triangle represent mixtures in which none of the three components is absent, meaning all $x_i > 0$, $i=1,2,3$. Point 0 in the middle of the triangle is called the center point. In this case, it is the centroid of a face/plane. The coordinate for point 0 is (1/3, 1/3, 1/3). Points 2, 4 and 6 are each called a centroid of an edge. Their coordinates are (0.5, 0.5, 0), (0, 0.5, 0.5), and (0.5, 0, 0.5).
### Simplex Lattice Design

The response in a mixture experiment is usually described by a polynomial function. This function represents how the components affect the response. To better study the shape of the response surface, the natural choice for a design would be one whose points are spread evenly over the whole simplex. An ordered arrangement consisting of a uniformly spaced distribution of points on a simplex is known as a lattice.

A {q, m} simplex lattice design for q components consists of points defined by the following coordinate settings: the proportions assumed by each component take the m+1 equally spaced values from 0 to 1,

$x_i = 0, \frac{1}{m}, \frac{2}{m}, ..., 1 \quad i = 1, 2, ..., q$

and the design space consists of all the reasonable combinations of all the values for each factor. m is usually called the degree of the lattice. For example, for a {3, 2} design, $x_i = 0, \frac{1}{2}, 1$ and its design space has 6 points. For a {3, 3} design, $x_i = 0, \frac{1}{3}, \frac{2}{3}, 1$, and its design space has 10 points.

For a simplex design with degree m, each component has m + 1 different values; therefore, the experiment results can be used to fit a polynomial equation up to order m. A {3, 3} simplex lattice design can be used to fit the following model:

$\begin{aligned}y &= \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_3 + \beta_{12} x_1 x_2 + \beta_{13} x_1 x_3 + \beta_{23} x_2 x_3 \\ &\quad + \delta_{12} x_1 x_2 \left(x_1 - x_2\right) + \delta_{13} x_1 x_3 \left(x_1 - x_3\right) + \delta_{23} x_2 x_3 \left(x_2 - x_3\right) \\ &\quad + \beta_{123} x_1 x_2 x_3 \end{aligned}$
Note that the intercept term is not included in the model due to the correlation between all the components (their sum is 100%).

A simplex lattice design includes all the component combinations. For a {q, m} design, the total number of runs is {\displaystyle \left({\begin{aligned}&q+m-1\\&m\\\end{aligned}}\right)\,\!}. Therefore, to reduce the number of runs and still be able to fit a high order polynomial model, we can sometimes use a simplex centroid design, which is explained next.

### Simplex Centroid Design

A simplex centroid design only includes the centroid points. In a simplex centroid design, the components that appear in a run all have the same value. In the above simplex plot, points 2, 4 and 6 are 2nd degree centroids. Each of them has two non-zero components with equal values. Point 0 is a 3rd degree centroid, and all three of its components have the same value. For a design with q components, the highest degree of centroid is q. It is called the overall centroid, or the center point of the design.

For a q component simplex centroid design with a degree of centroid of q, the total number of runs is ${\displaystyle {{2}^{q}}-1\,\!}$. The runs correspond to the q permutations of (1, 0, 0, …, 0), the {\displaystyle \left({\begin{aligned}&q\\&2\\\end{aligned}}\right)\,\!} permutations of (1/2, 1/2, 0, 0, …, 0), the {\displaystyle \left({\begin{aligned}&q\\&3\\\end{aligned}}\right)\,\!} permutations of (1/3, 1/3, 1/3, 0, 0, …, 0), …, and the overall centroid (1/q, 1/q, …, 1/q). If the degree of centroid is defined as ${\displaystyle m\,\!}$ (m < q), then the total number of runs will be {\displaystyle \left({\begin{aligned}&q\\&1\\\end{aligned}}\right)+\left({\begin{aligned}&q\\&2\\\end{aligned}}\right)+...+\left({\begin{aligned}&q\\&m\\\end{aligned}}\right)\,\!}. Since a simplex centroid design usually has fewer runs than a simplex lattice design with the same degree, a polynomial model with fewer terms should be used.
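The counting argument above can be sketched in Python by choosing which components are non-zero and giving each of them an equal share (the helper `simplex_centroid` is our own illustration, not a library function):

```python
from itertools import combinations
from fractions import Fraction

def simplex_centroid(q, m=None):
    """Enumerate a simplex centroid design for q components.

    A degree-k centroid has k non-zero components, each equal to 1/k.
    With the default m = q (overall centroid included), the design
    has 2**q - 1 runs.
    """
    m = q if m is None else m
    points = []
    for k in range(1, m + 1):
        for nonzero in combinations(range(q), k):
            pt = [Fraction(0)] * q
            for i in nonzero:
                pt[i] = Fraction(1, k)
            points.append(tuple(pt))
    return points

print(len(simplex_centroid(3)))     # 2**3 - 1 = 7
print(len(simplex_centroid(5, 2)))  # C(5,1) + C(5,2) = 15
```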
A 3-component simplex centroid design (degree of centroid 3) can be used to fit the following model:

${\displaystyle y={{\beta }_{1}}{{x}_{1}}+{{\beta }_{2}}{{x}_{2}}+{{\beta }_{3}}{{x}_{3}}+{{\beta }_{12}}{{x}_{1}}{{x}_{2}}+{{\beta }_{13}}{{x}_{1}}{{x}_{3}}+{{\beta }_{23}}{{x}_{2}}{{x}_{3}}+{{\beta }_{123}}{{x}_{1}}{{x}_{2}}{{x}_{3}}\,\!}$

The above model is called the special cubic model. Note that the intercept term is not included due to the correlation between all the components (their sum is 100%).

### Simplex Axial Design

The simplex lattice and simplex centroid designs are boundary designs, since their points are positioned on the boundaries (vertices, edges, faces, etc.) of the simplex factor space, with the exception of the overall centroid. Axial designs, on the other hand, consist mainly of points positioned inside the simplex. Axial designs have been recommended for measuring component effects in a screening experiment, particularly when first degree models are to be fitted.

Definition of axis: the axis of component ${\displaystyle i\,\!}$ is defined as the imaginary line extending from the base point ${\displaystyle {{x}_{i}}=0\,\!}$, ${\displaystyle {{x}_{j}}=1/\left(q-1\right)\,\!}$ for all ${\displaystyle j\neq i\,\!}$, to the vertex where ${\displaystyle {{x}_{i}}=1,{{x}_{j}}=0\,\!}$ for all ${\displaystyle j\neq i\,\!}$ [John Cornell].

In a simplex axial design, all the points are on the axes. The simplest form of axial design is one whose points are positioned equidistant from the overall centroid ${\displaystyle \left({1}/{q,{1}/{q,}\;{1}/{q,}\;...}\;\right)\,\!}$. Traditionally, points located at half the distance from the overall centroid to a vertex are called axial points/blends. This is illustrated in the following plot. Points 4, 5 and 6 are the axial blends.

By default, a simplex axial design in a DOE folio has only the vertices, the axial blends, the centroids of the constraint planes and the overall centroid.
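The default point set just described — vertices, axial blends, constraint-plane centroids and the overall centroid — can be generated directly. This is only an illustrative sketch of the geometry, not the folio's algorithm:

```python
from fractions import Fraction

def simplex_axial(q):
    """Default simplex axial design points for q components (3q + 1 runs).

    An axial blend lies halfway between the overall centroid and a vertex;
    a constraint-plane centroid sets one component to 0 and the others
    to 1/(q - 1).
    """
    centroid = tuple([Fraction(1, q)] * q)
    points = []
    for i in range(q):
        vertex = tuple(Fraction(int(j == i)) for j in range(q))
        axial = tuple((c + v) / 2 for c, v in zip(centroid, vertex))
        face = tuple(Fraction(0) if j == i else Fraction(1, q - 1)
                     for j in range(q))
        points += [vertex, axial, face]
    points.append(centroid)
    return points

# For 3 components: 3*3 + 1 = 10 points; the axial blend toward
# vertex (1, 0, 0) is (2/3, 1/6, 1/6).
print(len(simplex_axial(3)))   # 10
```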
For a design with q components, the constraint plane centroids are the center points of the (q-1)-dimensional faces of the simplex: one component is 0, and the remaining components have equal values. The number of constraint plane centroids is therefore the number of components, q. The total number of runs in a simplex axial design is 3q + 1: q vertex runs, q centroids of constraint planes, q axial blends and 1 overall centroid. A simplex axial design for 3 components has 10 points, as given below. Points 1, 2 and 3 are the three vertices; points 4, 5 and 6 are the axial blends; points 7, 8 and 9 are the centroids of constraint planes; and point 0 is the overall center point.

### Extreme Vertex Design

Extreme vertex designs are used when both lower and upper bound constraints on the components are present, or when linear constraints are placed on several components. For example, assume a mixture design with 3 components has the following constraints:

• ${\displaystyle {{x}_{2}}\leq 0.7\,\!}$
• ${\displaystyle -2{{x}_{1}}+2{{x}_{2}}+3{{x}_{3}}\geq 0\,\!}$
• ${\displaystyle 48{{x}_{1}}+13{{x}_{2}}-{{x}_{3}}\geq 0\,\!}$

Then the feasible region is defined by the six points in the following simplex plot. To meet the above constraints, all the runs conducted in the experiment should be in the feasible region or on its boundary.

The CONSIM method described in [Snee 1979] is used in a Weibull++ DOE folio to check the consistency of all the constraints and to get the vertices defined by them. Extreme vertex designs by default use the vertices on the boundary. Additional points, such as the centroids of spaces of different dimensions, axial points and the overall center point, can be added. In extreme vertex designs, axial points are between the overall center point and the vertices.
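A quick way to see whether a candidate blend satisfies the three example constraints is a feasibility check like the sketch below. This is only an illustration of the constraint arithmetic; it is not the CONSIM method, which additionally computes the vertices of the feasible region:

```python
def feasible(x1, x2, x3, tol=1e-9):
    """Check the example's constraints for a candidate blend (proportions).

    The blend must lie on the simplex (x1 + x2 + x3 = 1) and satisfy the
    three linear constraints from the text.
    """
    return (abs(x1 + x2 + x3 - 1) < tol
            and x2 <= 0.7 + tol
            and -2 * x1 + 2 * x2 + 3 * x3 >= -tol
            and 48 * x1 + 13 * x2 - x3 >= -tol)

print(feasible(0.2, 0.7, 0.1))   # True: on the x2 <= 0.7 boundary
print(feasible(0.1, 0.9, 0.0))   # False: violates x2 <= 0.7
print(feasible(1.0, 0.0, 0.0))   # False: violates -2*x1 + 2*x2 + 3*x3 >= 0
```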
For the above example, if the axial points and the overall center point are added, then all the runs in the experiment will be:

Point 0 in the center of the feasible region is the overall centroid. The other red points are the axial points. They are at the middle of the lines connecting the center point with the vertices.

## Mixture Design Data Analysis

In the following sections, we will discuss the most popular regression models in mixture design data analysis. Due to the correlation between all the components in mixture designs, the intercept term is usually not included in the regression model.

### Models Used in Mixture Design

For a design with three components, the following models are commonly used.

• Linear model:

${\displaystyle y={{\beta }_{1}}{{x}_{1}}+{{\beta }_{2}}{{x}_{2}}+{{\beta }_{3}}{{x}_{3}}\,\!}$

If the intercept were included in the model, then the linear model would be

${\displaystyle y=\beta _{0}^{'}+\beta _{1}^{'}{{x}_{1}}+\beta _{2}^{'}{{x}_{2}}+\beta _{3}^{'}{{x}_{3}}\,\!}$

However, since ${\displaystyle {{x}_{1}}+{{x}_{2}}+{{x}_{3}}=1\,\!}$ (the sum can be another constant as well), the above equation can be written as

{\displaystyle {\begin{aligned}&y=\beta _{0}^{'}\left({{x}_{1}}+{{x}_{2}}+{{x}_{3}}\right)+\beta _{1}^{'}{{x}_{1}}+\beta _{2}^{'}{{x}_{2}}+\beta _{3}^{'}{{x}_{3}}\\&=\left(\beta _{0}^{'}+\beta _{1}^{'}\right){{x}_{1}}+\left(\beta _{0}^{'}+\beta _{2}^{'}\right){{x}_{2}}+\left(\beta _{0}^{'}+\beta _{3}^{'}\right){{x}_{3}}\\&={{\beta }_{1}}{{x}_{1}}+{{\beta }_{2}}{{x}_{2}}+{{\beta }_{3}}{{x}_{3}}\end{aligned}}\,\!}

The equation has thus been reformatted to omit the intercept.

• Quadratic model:

${\displaystyle y={{\beta }_{1}}{{x}_{1}}+{{\beta }_{2}}{{x}_{2}}+{{\beta }_{3}}{{x}_{3}}+{{\beta }_{12}}{{x}_{1}}{{x}_{2}}+{{\beta }_{13}}{{x}_{1}}{{x}_{3}}+{{\beta }_{23}}{{x}_{2}}{{x}_{3}}\,\!}$

There are no classic quadratic terms such as ${\displaystyle x_{1}^{2}\,\!}$.
This is because ${\displaystyle x_{1}^{2}={{x}_{1}}\left(1-{{x}_{2}}-{{x}_{3}}\right)={{x}_{1}}-{{x}_{1}}{{x}_{2}}-{{x}_{1}}{{x}_{3}}\,\!}$

• Full cubic model:

{\displaystyle {\begin{aligned}&y={{\beta }_{1}}{{x}_{1}}+{{\beta }_{2}}{{x}_{2}}+{{\beta }_{3}}{{x}_{3}}+{{\beta }_{12}}{{x}_{1}}{{x}_{2}}+{{\beta }_{13}}{{x}_{1}}{{x}_{3}}+{{\beta }_{23}}{{x}_{2}}{{x}_{3}}\\&+{{\delta }_{12}}{{x}_{1}}{{x}_{2}}\left({{x}_{1}}-{{x}_{2}}\right)+{{\delta }_{13}}{{x}_{1}}{{x}_{3}}\left({{x}_{1}}-{{x}_{3}}\right)+{{\delta }_{23}}{{x}_{2}}{{x}_{3}}\left({{x}_{2}}-{{x}_{3}}\right)\\&+{{\beta }_{123}}{{x}_{1}}{{x}_{2}}{{x}_{3}}\end{aligned}}\,\!}

• Special cubic model: the ${\displaystyle {{\delta }_{ij}}{{x}_{i}}{{x}_{j}}\left({{x}_{i}}-{{x}_{j}}\right)\,\!}$ terms are removed from the full cubic model.

{\displaystyle {\begin{aligned}&y={{\beta }_{1}}{{x}_{1}}+{{\beta }_{2}}{{x}_{2}}+{{\beta }_{3}}{{x}_{3}}+{{\beta }_{12}}{{x}_{1}}{{x}_{2}}+{{\beta }_{13}}{{x}_{1}}{{x}_{3}}+{{\beta }_{23}}{{x}_{2}}{{x}_{3}}\\&+{{\beta }_{123}}{{x}_{1}}{{x}_{2}}{{x}_{3}}\end{aligned}}\,\!}

The above types of models are called Scheffé type models. They can be extended to designs with more than three components.

In regular regression analysis, the effect of an explanatory variable or factor is represented by the value of its coefficient. The ratio of the estimated coefficient to its standard error is used for the t-test, which tells us whether a coefficient is 0 or not. If a coefficient is statistically 0, then the corresponding factor has no significant effect on the response. However, for Scheffé type models, since the intercept term is not included in the model, we cannot use the regular t-test to test each individual main effect. In other words, we cannot test whether the coefficient for each component is 0 or not. Similarly, in the ANOVA analysis, the linear effects of all the components are tested together as a single group. The main effect test for each individual component is not conducted.
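Both facts used above — the squared term collapsing into linear and interaction terms, and the intercept being absorbed into the linear coefficients — can be checked numerically at random points on the simplex. A minimal sketch (the coefficient values are arbitrary):

```python
import random

random.seed(1)

def random_blend():
    """Draw a random point (x1, x2, x3) on the 3-component simplex."""
    a, b = sorted(random.random() for _ in range(2))
    return a, b - a, 1 - b

for _ in range(100):
    x1, x2, x3 = random_blend()
    # On the simplex, x1**2 is not a new term:
    # x1**2 = x1*(1 - x2 - x3) = x1 - x1*x2 - x1*x3
    assert abs(x1 ** 2 - (x1 - x1 * x2 - x1 * x3)) < 1e-12
    # An intercept b0 is absorbed into the linear coefficients:
    b0, b1, b2, b3 = 0.5, 1.0, 2.0, 3.0
    with_intercept = b0 + b1 * x1 + b2 * x2 + b3 * x3
    absorbed = (b0 + b1) * x1 + (b0 + b2) * x2 + (b0 + b3) * x3
    assert abs(with_intercept - absorbed) < 1e-12

print("both identities hold on the simplex")
```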
To perform the ANOVA analysis, the Scheffé type model needs to be reformatted to include the hidden intercept. For example, the linear model ${\displaystyle y={{\beta }_{1}}{{x}_{1}}+{{\beta }_{2}}{{x}_{2}}+{{\beta }_{3}}{{x}_{3}}\,\!}$ can be rewritten as

{\displaystyle {\begin{aligned}&y={{\beta }_{1}}{{x}_{1}}+{{\beta }_{2}}{{x}_{2}}+{{\beta }_{3}}{{x}_{3}}\\&={{\beta }_{1}}{{x}_{1}}+{{\beta }_{2}}{{x}_{2}}+{{\beta }_{3}}\left(1-{{x}_{1}}-{{x}_{2}}\right)\\&={{\beta }_{3}}+\left({{\beta }_{1}}-{{\beta }_{3}}\right){{x}_{1}}+\left({{\beta }_{2}}-{{\beta }_{3}}\right){{x}_{2}}\\&={{\beta }_{0}}+\beta _{1}^{'}{{x}_{1}}+\beta _{2}^{'}{{x}_{2}}\end{aligned}}\,\!}

where ${\displaystyle {{\beta }_{0}}={{\beta }_{3}}\,\!}$, ${\displaystyle \beta _{1}^{'}={{\beta }_{1}}-{{\beta }_{3}}\,\!}$ and ${\displaystyle \beta _{2}^{'}={{\beta }_{2}}-{{\beta }_{3}}\,\!}$. All the other models, such as the quadratic, cubic and special cubic models, can be reformatted using the same procedure. By including the intercept in the model, the correct sums of squares can be calculated in the ANOVA table. If the ANOVA analysis were conducted directly on the Scheffé type models, the results would be incorrect.

### L-Pseudocomponent, Proportion, and Actual Values

In mixture designs, the total amount of the mixture is usually given. For example, we can make either a one-pound or a two-pound cake. Regardless of whether the cake is one or two pounds, the proportion of each ingredient is the same. When the total amount is given, the upper and lower limits for each ingredient are usually given in amounts, which is easier for the experimenter to understand. Of course, if the limits or other constraints are given in terms of proportions, these proportions need to be converted to the actual amount values when conducting the experiment. To keep everything consistent, all the constraints in a DOE folio are treated as amounts.
In regular factorial designs and response surface method designs, the regression model is calculated using coded values. Coded values scale all the factors to the same magnitude, which makes the analysis much easier and reduces convergence error. Similarly, the analysis in mixture design is conducted using so-called L-pseudocomponent values. L-pseudocomponent values scale all the components' values to within 0 and 1. In a DOE folio, all the designs and calculations for mixture factors are based on L-pseudocomponent values. The relationship between L-pseudocomponent values, proportions and actual amounts is explained next.

#### Example for L-Pseudocomponent Value

We are going to make one gallon (about 3.8 liters) of fruit punch. Three ingredients will be in the punch, with the following constraints:

${\displaystyle 1.2\leq A\leq 3.8\,\!}$, ${\displaystyle 1.5\leq B\leq 3\,\!}$, ${\displaystyle 0\leq C\leq 3.8\,\!}$

Let ${\displaystyle x_{i}^{A}\,\!}$ (i = 1, 2, 3) be the actual amount value, ${\displaystyle {{x}_{i}}\,\!}$ be the L-pseudocomponent value and ${\displaystyle x_{i}^{R}\,\!}$ be the proportion value. Let ${\displaystyle {{l}_{i}}\,\!}$ be the lower limit for component i and T be the total amount. Then the equations for the conversion between them are:

${\displaystyle {{x}_{i}}={\frac {x_{i}^{A}-{{l}_{i}}}{T-\sum \limits _{j=1}^{q}{{l}_{j}}}}\,\!}$, ${\displaystyle x_{i}^{A}={{l}_{i}}+\left(T-\sum \limits _{j=1}^{q}{{l}_{j}}\right){{x}_{i}}\,\!}$, ${\displaystyle x_{i}^{R}={\frac {x_{i}^{A}}{T}}\,\!}$

where ${\displaystyle {{x}_{1}}\,\!}$, ${\displaystyle x_{1}^{A}\,\!}$ and ${\displaystyle x_{1}^{R}\,\!}$ are for component A; ${\displaystyle {{x}_{2}}\,\!}$, ${\displaystyle x_{2}^{A}\,\!}$ and ${\displaystyle x_{2}^{R}\,\!}$ are for component B; and ${\displaystyle {{x}_{3}}\,\!}$, ${\displaystyle x_{3}^{A}\,\!}$ and ${\displaystyle x_{3}^{R}\,\!}$ are for component C. Since the components in this example have both lower and upper limit constraints, an extreme vertex design is used. The design settings are given below.
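For this example (T = 3.8 and lower limits 1.2, 1.5 and 0), the three conversion equations can be written out directly. The function names below are ours, for illustration only:

```python
T = 3.8                    # total amount (about one gallon, in liters)
lower = [1.2, 1.5, 0.0]    # lower limits l_i, in amount units
L = sum(lower)             # sum of the lower limits

def to_pseudo(amounts):
    """Amounts -> L-pseudocomponents: x_i = (x_i^A - l_i) / (T - sum l_j)."""
    return [(a - l) / (T - L) for a, l in zip(amounts, lower)]

def to_amount(pseudo):
    """L-pseudocomponents -> amounts: x_i^A = l_i + (T - sum l_j) * x_i."""
    return [l + (T - L) * x for l, x in zip(lower, pseudo)]

def to_proportion(amounts):
    """Amounts -> proportions of the total: x_i^R = x_i^A / T."""
    return [a / T for a in amounts]

# The pseudocomponent vertex (1, 0, 0) in amount values is approximately
# (2.3, 1.5, 0): component A takes its lower limit plus all of the free
# amount T - sum(l_j) = 1.1, while B and C stay at their lower limits.
amounts = to_amount([1.0, 0.0, 0.0])
print(amounts)
print(to_proportion(amounts))   # proportions sum to 1
```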
Displayed in amount values, it is:

Displayed in proportion values, it is:

#### Check Constraint Consistency

In the above example, all the constraints are consistent. However, if we set the constraints to ${\displaystyle 1.2\leq A\leq 3.8\,\!}$, ${\displaystyle 1.5\leq B\leq 3\,\!}$, ${\displaystyle 2\leq C\leq 3.8,\,\!}$ then they are not consistent. This is because the total is only 3.8, but the sum of all the lower limits is 4.7. Therefore, not all the lower limits can be satisfied at the same time.

If only lower and upper limits are present for all the components, then we can adjust the bounds to make the constraints consistent. The method given by [Piepel 1983] is used and summarized below. Define the range of a component to be ${\displaystyle {{R}_{i}}={{U}_{i}}-{{L}_{i}}\,\!}$, where ${\displaystyle {{U}_{i}}\,\!}$ and ${\displaystyle {{L}_{i}}\,\!}$ are the upper and lower limits for component i. The implied range of component i is ${\displaystyle R_{i}^{*}=U_{i}^{*}-L_{i}^{*}\,\!}$, where ${\displaystyle L_{i}^{*}=T-\sum \limits _{j\neq i}^{q}{{U}_{j}}\,\!}$ and ${\displaystyle U_{i}^{*}=T-\sum \limits _{j\neq i}^{q}{{L}_{j}}\,\!}$. T is the total amount. The steps for checking and adjusting the bounds are given below.

Step 1: Check whether ${\displaystyle L_{i}^{*}\,\!}$ and ${\displaystyle U_{i}^{*}\,\!}$ are greater than 0. If they are, then these constraints meet the basic requirement to be consistent, and we can move forward to step 2. If not, these constraints cannot be adjusted to be consistent, and we should stop.

Step 2: For each component, check whether ${\displaystyle {{L}_{i}}\geq L_{i}^{*}\,\!}$ and ${\displaystyle {{U}_{i}}\leq U_{i}^{*}\,\!}$. If they are, then this component's constraints are consistent. Otherwise, if ${\displaystyle {{L}_{i}}<L_{i}^{*}\,\!}$, then set ${\displaystyle {{L}_{i}}=L_{i}^{*}\,\!}$, and if ${\displaystyle {{U}_{i}}>U_{i}^{*}\,\!}$, then set ${\displaystyle {{U}_{i}}=U_{i}^{*}\,\!}$.
Step 3: Whenever a bound is changed, restart from step 1 and use the new bound to check whether all the constraints are consistent. Repeat this until all the limits are consistent.

For extreme vertex designs, where linear constraints are allowed, the DOE folio will give a warning and stop creating the design if inconsistent linear combination constraints are found. No adjustment is conducted for linear constraints.

### Response Trace Plot

Due to the correlation between all the components, the regular t-test is not used to test the significance of each component. A special plot called the response trace plot can be used to see how the response changes as each component moves away from a reference point [John Cornell]. A reference point can be any point inside the experiment space. An imaginary line can be drawn from this reference point to each vertex ${\displaystyle {{x}_{i}}=1\,\!}$, ${\displaystyle {{x}_{j}}=0\,\!}$ (${\displaystyle i\neq j\,\!}$). This line is the direction along which component i changes. Component i can either increase or decrease its value along this line, while the ratios of the other components ${\displaystyle {{x}_{j}}/{{x}_{k}}\,\!}$ (${\displaystyle j,k\neq i\,\!}$) are kept constant. If the simplex plot is defined in terms of proportions, then the direction is called Cox's direction, and ${\displaystyle {{x}_{j}}/{{x}_{k}}\,\!}$ is the ratio of proportions. If the simplex plot is defined in terms of pseudocomponent values, then the direction is called Piepel's direction, and ${\displaystyle {{x}_{j}}/{{x}_{k}}\,\!}$ is the ratio of pseudocomponent values.

Assume the reference point in terms of proportions is ${\displaystyle s=\left({{s}_{1}},{{s}_{2}},...,{{s}_{q}}\right)\,\!}$, where ${\displaystyle {{s}_{1}}+{{s}_{2}}+...+{{s}_{q}}=1\,\!}$.
Suppose the proportion of component ${\displaystyle i\,\!}$ at ${\displaystyle {{s}_{i}}\,\!}$ is now changed by ${\displaystyle {{\Delta }_{i}}\,\!}$ (${\displaystyle {{\Delta }_{i}}\,\!}$ can be greater than or less than 0) in Cox's direction, so that the new proportion becomes

${\displaystyle {{x}_{i}}={{s}_{i}}+{{\Delta }_{i}}\,\!}$

Then the proportions of the remaining ${\displaystyle q-1\,\!}$ components resulting from the change from ${\displaystyle {{s}_{i}}\,\!}$ will be

${\displaystyle {{x}_{j}}={{s}_{j}}-{\frac {{{\Delta }_{i}}{{s}_{j}}}{1-{{s}_{i}}}}\,\!}$

After the change, the ratio of components j and k is unchanged. This is because

${\displaystyle {\frac {{{x}_{j}}}{{{x}_{k}}}}={\frac {{{s}_{j}}-{\frac {{{\Delta }_{i}}{{s}_{j}}}{1-{{s}_{i}}}}}{{{s}_{k}}-{\frac {{{\Delta }_{i}}{{s}_{k}}}{1-{{s}_{i}}}}}}={\frac {{{s}_{j}}\left(1-{\frac {{{\Delta }_{i}}}{1-{{s}_{i}}}}\right)}{{{s}_{k}}\left(1-{\frac {{{\Delta }_{i}}}{1-{{s}_{i}}}}\right)}}={\frac {{{s}_{j}}}{{{s}_{k}}}}\,\!}$

While ${\displaystyle {{x}_{i}}\,\!}$ is changed along Cox's direction, we can use a fitted regression model to get the response value y. A response trace plot for a mixture design with three components will look like the following.

The x-axis is the deviation from the reference point, and the y-value is the fitted response. Each component has one curve. Since the red curve for component A changes significantly, component A has a significant effect along its axis. The blue curve for component C is almost flat; this means that when C changes along Cox's direction while the other components keep the same ratio, the response Y does not change very much. The effect of component B is between those of components A and C.

### Example

Watermelon (A), pineapple (B) and orange juice (C) are used for making 3.8 liters of fruit punch. At least 30% of the fruit punch must be watermelon.
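The move along Cox's direction is easy to verify numerically. The sketch below (function name ours) applies the two equations above and confirms that the proportions still sum to 1 and that the ratio of the untouched components is preserved:

```python
def cox_move(s, i, delta):
    """Move component i of reference blend s by delta along Cox's direction.

    x_i = s_i + delta, and every other component j becomes
    x_j = s_j - delta * s_j / (1 - s_i), so all ratios x_j / x_k
    (j, k != i) are unchanged.
    """
    x = [s_j - delta * s_j / (1 - s[i]) for s_j in s]
    x[i] = s[i] + delta
    return x

s = [0.2, 0.3, 0.5]
x = cox_move(s, 0, 0.1)
print(sum(x))                     # still 1 (up to rounding)
print(x[1] / x[2], s[1] / s[2])   # the ratio 0.3/0.5 = 0.6 is preserved
```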
Therefore the constraints are ${\displaystyle 1.14\leq A\leq 3.8\,\!}$, ${\displaystyle 0\leq B\leq 3.8\,\!}$, ${\displaystyle 0\leq C\leq 3.8\,\!}$

Different blends of the three-juice recipe were evaluated by a panel. A value from 1 (extremely poor) to 9 (very good) is used for the response [John Cornell, page 74]. A {3, 2} simplex lattice design is used with one center point and three axial points. Three replicates were conducted for each ingredient combination. The settings for creating this design in a DOE folio are:

The generated design in L-pseudocomponent values and the response values from the experiment are:

The simplex design point plot is:

Main effects and 2-way interactions are included in the regression model. The resulting regression model in terms of L-pseudocomponents is

${\displaystyle y=4.81{{x}_{1}}+6.03{{x}_{2}}+6.16{{x}_{3}}+1.13{{x}_{1}}{{x}_{2}}+2.45{{x}_{1}}{{x}_{3}}+1.69{{x}_{2}}{{x}_{3}}\,\!}$

The regression information table is:

Regression Information

| Term | Coefficient | Standard Error | Low Confidence | High Confidence | T Value | P Value | Variance Inflation Factor |
|---|---|---|---|---|---|---|---|
| A: Watermelon | 4.8093 | 0.3067 | 4.2845 | 5.3340 | * | * | 1.9636 |
| B: Pineapple | 6.0274 | 0.3067 | 5.5027 | 6.5522 | * | * | 1.9636 |
| C: Orange | 6.1577 | 0.3067 | 5.6330 | 6.6825 | * | * | 1.9636 |
| A • B | 1.1253 | 1.4137 | -1.2934 | 3.5439 | 0.7960 | 0.4339 | 1.9819 |
| A • C | 2.4525 | 1.4137 | 0.0339 | 4.8712 | 1.7348 | 0.0956 | 1.9819 |
| B • C | 1.6889 | 1.4137 | -0.7298 | 4.1075 | 1.1947 | 0.2439 | 1.9819 |

The result shows that, at the 0.1 significance level, the taste of the fruit punch is significantly affected by the interaction between watermelon and orange (P value = 0.0956).
The ANOVA table is:

ANOVA Table

| Source of Variation | Degrees of Freedom | Sum of Squares [Partial] | Mean Squares [Partial] | F Ratio | P Value |
|---|---|---|---|---|---|
| Model | 5 | 6.5517 | 1.3103 | 4.3181 | 0.0061 |
| Linear | 2 | 3.6513 | 1.8256 | 6.0162 | 0.0076 |
| A • B | 1 | 0.1923 | 0.1923 | 0.6336 | 0.4339 |
| A • C | 1 | 0.9133 | 0.9133 | 3.0097 | 0.0956 |
| B • C | 1 | 0.4331 | 0.4331 | 1.4272 | 0.2439 |
| Residual | 24 | 7.2829 | 0.3035 | | |
| Lack of Fit | 4 | 4.4563 | 1.1141 | 7.8825 | 0.0006 |
| Pure Error | 20 | 2.8267 | 0.1413 | | |
| Total | 29 | 13.8347 | | | |

The simplex contour plot in L-pseudocomponent values is:

From this plot we can see that as the amount of watermelon is reduced, the taste of the fruit punch becomes better. To find the best proportion of each ingredient, the optimization tool in the Weibull++ DOE folio can be utilized, with the following settings:

The resulting optimal plot is:

This plot shows that when the amounts of watermelon, pineapple and orange juice are 1.141, 1.299 and 1.359, respectively, the rated taste of the fruit punch is highest.

## Mixture Design with Process Variables

Process variables often play very important roles in mixture experiments. A simple example is baking a cake. Even with the same ingredients, different baking temperatures and baking times can produce completely different results. In order to study the effects of process variables and find their best settings, we need to consider them when conducting a mixture experiment. An easy way to do this is to make mixtures with the same ingredients under different combinations of process variable settings. If all the process variables are independent, then we can plan a regular factorial design for these process variables. By crossing this design with a separate mixture design, the effects of the mixture components and the effects of the process variables can both be studied.

For example, a {3, 2} simplex lattice design is used for a mixture with 3 components. Together with the center point, it has a total of 7 runs, or 7 different ingredient combinations.
Assume 2 process variables are potentially important and a two level factorial design is used for them. It has a total of 4 combinations of the 2 process variables. If the 7 different mixtures are made under each of the 4 process variable combinations, then the experiment has a total of 28 runs. This is illustrated in the figure below. Of course, if it is possible, all 28 runs should be conducted in a random order.

### Model with Process Variables

In a DOE folio, regression models including both mixture components and process variables are available. For the mixture components we use L-pseudocomponent values, and for the process variables coded values are used. Assume a design has 3 mixture components and 2 process variables, as illustrated in the above figure. We can use the following models for them.

• For the 3 mixture components, the following special cubic model is used:

${\displaystyle y={{\beta }_{1}}{{x}_{1}}+{{\beta }_{2}}{{x}_{2}}+{{\beta }_{3}}{{x}_{3}}+{{\beta }_{12}}{{x}_{1}}{{x}_{2}}+{{\beta }_{13}}{{x}_{1}}{{x}_{3}}+{{\beta }_{23}}{{x}_{2}}{{x}_{3}}+{{\beta }_{123}}{{x}_{1}}{{x}_{2}}{{x}_{3}}\,\!}$

• For the 2 process variables, the following model is used:

${\displaystyle y={{\alpha }_{0}}+{{\alpha }_{1}}{{z}_{1}}+{{\alpha }_{2}}{{z}_{2}}+{{\alpha }_{12}}{{z}_{1}}{{z}_{2}}\,\!}$

• The combined model with both mixture components and process variables is

{\displaystyle {\begin{aligned}&y=\sum \limits _{i=1}^{3}{\gamma _{i}^{0}{{x}_{i}}}+\sum {\sum \limits _{i<j}^{3}{\gamma _{ij}^{0}{{x}_{i}}{{x}_{j}}}}+\gamma _{123}^{0}{{x}_{1}}{{x}_{2}}{{x}_{3}}\\&+\sum \limits _{k=1}^{2}{\left[\sum \limits _{i=1}^{3}{\gamma _{i}^{k}{{x}_{i}}{{z}_{k}}}+\sum {\sum \limits _{i<j}^{3}{\gamma _{ij}^{k}{{x}_{i}}{{x}_{j}}{{z}_{k}}}}+\gamma _{123}^{k}{{x}_{1}}{{x}_{2}}{{x}_{3}}{{z}_{k}}\right]}\\&+\sum \limits _{i=1}^{3}{\gamma _{i}^{12}{{x}_{i}}{{z}_{1}}{{z}_{2}}}+\sum {\sum \limits _{i<j}^{3}{\gamma _{ij}^{12}{{x}_{i}}{{x}_{j}}{{z}_{1}}{{z}_{2}}}}+\gamma _{123}^{12}{{x}_{1}}{{x}_{2}}{{x}_{3}}{{z}_{1}}{{z}_{2}}\end{aligned}}\,\!}

The above combined model has a total of 7 × 4 = 28 terms.
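Crossing the 7 mixture blends with the 4 process settings can be sketched as follows (the blend list is the {3, 2} lattice plus the overall centroid, in L-pseudocomponent values):

```python
from itertools import product

# {3, 2} simplex lattice plus the overall centroid: 7 blends
blends = [(1, 0, 0), (0, 1, 0), (0, 0, 1),
          (0.5, 0.5, 0), (0.5, 0, 0.5), (0, 0.5, 0.5),
          (1/3, 1/3, 1/3)]

# 2-level full factorial for the 2 process variables (coded values)
process = list(product([-1, 1], repeat=2))

# Every blend is made at every process setting
runs = [blend + setting for blend, setting in product(blends, process)]
print(len(runs))   # 7 * 4 = 28 runs
```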
By expanding it, we get the following model: {\displaystyle {\begin{aligned}&y=\gamma _{1}^{0}{{x}_{1}}+\gamma _{2}^{0}{{x}_{2}}+\gamma _{3}^{0}{{x}_{3}}+\gamma _{12}^{0}{{x}_{1}}{{x}_{2}}+\gamma _{13}^{0}{{x}_{1}}{{x}_{3}}+\gamma _{23}^{0}{{x}_{2}}{{x}_{3}}+\gamma _{123}^{0}{{x}_{1}}{{x}_{2}}{{x}_{3}}\\&+\gamma _{1}^{1}{{x}_{1}}{{z}_{1}}+\gamma _{2}^{1}{{x}_{2}}{{z}_{1}}+\gamma _{3}^{1}{{x}_{3}}{{z}_{1}}+\gamma _{12}^{1}{{x}_{1}}{{x}_{2}}{{z}_{1}}+\gamma _{13}^{1}{{x}_{1}}{{x}_{3}}{{z}_{1}}+\gamma _{23}^{1}{{x}_{2}}{{x}_{3}}{{z}_{1}}+\gamma _{123}^{1}{{x}_{1}}{{x}_{2}}{{x}_{3}}{{z}_{1}}\\&+\gamma _{1}^{2}{{x}_{1}}{{z}_{2}}+\gamma _{2}^{2}{{x}_{2}}{{z}_{2}}+\gamma _{3}^{2}{{x}_{3}}{{z}_{2}}+\gamma _{12}^{2}{{x}_{1}}{{x}_{2}}{{z}_{2}}+\gamma _{13}^{2}{{x}_{1}}{{x}_{3}}{{z}_{2}}+\gamma _{23}^{2}{{x}_{2}}{{x}_{3}}{{z}_{2}}+\gamma _{123}^{2}{{x}_{1}}{{x}_{2}}{{x}_{3}}{{z}_{2}}\\&+\gamma _{1}^{12}{{x}_{1}}{{z}_{1}}{{z}_{2}}+\gamma _{2}^{12}{{x}_{2}}{{z}_{1}}{{z}_{2}}+\gamma _{3}^{12}{{x}_{3}}{{z}_{1}}{{z}_{2}}+\gamma _{12}^{12}{{x}_{1}}{{x}_{2}}{{z}_{1}}{{z}_{2}}+\gamma _{13}^{12}{{x}_{1}}{{x}_{3}}{{z}_{1}}{{z}_{2}}+\gamma _{23}^{12}{{x}_{2}}{{x}_{3}}{{z}_{1}}{{z}_{2}}+\gamma _{123}^{12}{{x}_{1}}{{x}_{2}}{{x}_{3}}{{z}_{1}}{{z}_{2}}\end{aligned}}\,\!} The combined model basically crosses every term in the mixture components model with every term in the process variables model. From a mathematical point of view, this model is just a regular regression model. Therefore, the traditional regression analysis method can still be used for obtaining the model coefficients and calculating the ANOVA table. ### Example Three kinds of meats (beef, pork and lamb) are mixed together to form burger patties. The meat comprises 90% of the total mixture, with the remaining 10% reserved for flavoring ingredients. A {3, 2} simplex design with the center point is used for the experiment. The design has 7 meat combinations, which are given below using L-pseudocomponent values. 
| A: Beef | B: Pork | C: Lamb |
|---|---|---|
| 1 | 0 | 0 |
| 0.5 | 0.5 | 0 |
| 0.5 | 0 | 0.5 |
| 0 | 1 | 0 |
| 0 | 0.5 | 0.5 |
| 0 | 0 | 1 |
| 0.333333 | 0.333333 | 0.333333 |

Two process variables for making the patties are also studied: cooking temperature and cooking time. The low and high temperature values are 375°F and 425°F, and the low and high time values are 25 and 40 minutes. A two level full factorial design is used and displayed below with coded values.

| Temperature | Time |
|---|---|
| -1 | -1 |
| -1 | 1 |
| 1 | -1 |
| 1 | 1 |

One of the properties of the burger patties is texture. The texture is measured by a compression test that measures the grams of force required to puncture the surface of the patty. Combining the simplex design and the factorial design together, we get the following 28 runs. The corresponding texture reading for each blend is also provided.

| Standard Order | A: Beef | B: Pork | C: Lamb | Z1: Temperature | Z2: Time | Texture (${\displaystyle 10^{3}\,\!}$ gram) |
|---|---|---|---|---|---|---|
| 1 | 1 | 0 | 0 | -1 | -1 | 1.84 |
| 2 | 0.5 | 0.5 | 0 | -1 | -1 | 0.67 |
| 3 | 0.5 | 0 | 0.5 | -1 | -1 | 1.51 |
| 4 | 0 | 1 | 0 | -1 | -1 | 1.29 |
| 5 | 0 | 0.5 | 0.5 | -1 | -1 | 1.42 |
| 6 | 0 | 0 | 1 | -1 | -1 | 1.16 |
| 7 | 0.333 | 0.333 | 0.333 | -1 | -1 | 1.59 |
| 8 | 1 | 0 | 0 | 1 | -1 | 2.86 |
| 9 | 0.5 | 0.5 | 0 | 1 | -1 | 1.1 |
| 10 | 0.5 | 0 | 0.5 | 1 | -1 | 1.6 |
| 11 | 0 | 1 | 0 | 1 | -1 | 1.53 |
| 12 | 0 | 0.5 | 0.5 | 1 | -1 | 1.81 |
| 13 | 0 | 0 | 1 | 1 | -1 | 1.5 |
| 14 | 0.333 | 0.333 | 0.333 | 1 | -1 | 1.68 |
| 15 | 1 | 0 | 0 | -1 | 1 | 3.01 |
| 16 | 0.5 | 0.5 | 0 | -1 | 1 | 1.21 |
| 17 | 0.5 | 0 | 0.5 | -1 | 1 | 2.32 |
| 18 | 0 | 1 | 0 | -1 | 1 | 1.93 |
| 19 | 0 | 0.5 | 0.5 | -1 | 1 | 2.57 |
| 20 | 0 | 0 | 1 | -1 | 1 | 1.83 |
| 21 | 0.333 | 0.333 | 0.333 | -1 | 1 | 1.94 |
| 22 | 1 | 0 | 0 | 1 | 1 | 4.13 |
| 23 | 0.5 | 0.5 | 0 | 1 | 1 | 1.67 |
| 24 | 0.5 | 0 | 0.5 | 1 | 1 | 2.57 |
| 25 | 0 | 1 | 0 | 1 | 1 | 2.26 |
| 26 | 0 | 0.5 | 0.5 | 1 | 1 | 3.15 |
| 27 | 0 | 0 | 1 | 1 | 1 | 2.22 |
| 28 | 0.333 | 0.333 | 0.333 | 1 | 1 | 2.6 |

Using a quadratic model for the mixture components and a 2-way interaction model for the process variables, we get the following results.
| Term | Coefficient | Standard Error | T Value | P Value | Variance Inflation Factor |
|---|---|---|---|---|---|
| A: Beef | 2.9421 | 0.1236 | * | * | 1.5989 |
| B: Pork | 1.7346 | 0.1236 | * | * | 1.5989 |
| C: Lamb | 1.6596 | 0.1236 | * | * | 1.5989 |
| A • B | -4.4170 | 0.5680 | -7.7766 | 0.0015 | 1.5695 |
| A • C | -0.9170 | 0.5680 | -1.6146 | 0.1817 | 1.5695 |
| B • C | 2.4480 | 0.5680 | 4.3099 | 0.0125 | 1.5695 |
| Z1 • A | 0.5324 | 0.1236 | 4.3084 | 0.0126 | 1.5989 |
| Z1 • B | 0.1399 | 0.1236 | 1.1319 | 0.3209 | 1.5989 |
| Z1 • C | 0.1799 | 0.1236 | 1.4557 | 0.2192 | 1.5989 |
| Z1 • A • B | -0.4123 | 0.5680 | -0.7260 | 0.5081 | 1.5695 |
| Z1 • A • C | -1.0423 | 0.5680 | -1.8352 | 0.1404 | 1.5695 |
| Z1 • B • C | 0.3727 | 0.5680 | 0.6561 | 0.5476 | 1.5695 |
| Z2 • A | 0.6193 | 0.1236 | 5.0117 | 0.0074 | 1.5989 |
| Z2 • B | 0.3518 | 0.1236 | 2.8468 | 0.0465 | 1.5989 |
| Z2 • C | 0.3568 | 0.1236 | 2.8873 | 0.0447 | 1.5989 |
| Z2 • A • B | -0.9802 | 0.5680 | -1.7258 | 0.1595 | 1.5695 |
| Z2 • A • C | -0.3202 | 0.5680 | -0.5638 | 0.6030 | 1.5695 |
| Z2 • B • C | 0.9248 | 0.5680 | 1.6282 | 0.1788 | 1.5695 |
| Z1 • Z2 • A | 0.0177 | 0.1236 | 0.1433 | 0.8930 | 1.5989 |
| Z1 • Z2 • B | 0.0152 | 0.1236 | 0.1231 | 0.9080 | 1.5989 |
| Z1 • Z2 • C | 0.0052 | 0.1236 | 0.0422 | 0.9684 | 1.5989 |
| Z1 • Z2 • A • B | 0.0808 | 0.5680 | 0.1423 | 0.8937 | 1.5695 |
| Z1 • Z2 • A • C | 0.2308 | 0.5680 | 0.4064 | 0.7052 | 1.5695 |
| Z1 • Z2 • B • C | 0.2658 | 0.5680 | 0.4680 | 0.6641 | 1.5695 |

The above table shows that all the terms with ${\displaystyle {{z}_{1}}\times {{z}_{2}}\,\!}$ have very large P values; therefore, we can remove these terms from the model. We can also remove the other terms with P values larger than 0.5.
After recalculating with only the desired terms, the final results are:

| Term | Coefficient | Standard Error | T Value | P Value | Variance Inflation Factor |
|---|---|---|---|---|---|
| A: Beef | 2.9421 | 0.0875 | * | * | 1.5989 |
| B: Pork | 1.7346 | 0.0875 | * | * | 1.5989 |
| C: Lamb | 1.6596 | 0.0875 | * | * | 1.5989 |
| A • B | -4.4170 | 0.4023 | -10.9782 | 6.0305E-08 | 1.5695 |
| A • C | -0.9170 | 0.4023 | -2.2792 | 0.0402 | 1.5695 |
| B • C | 2.4480 | 0.4023 | 6.0842 | 3.8782E-05 | 1.5695 |
| Z1 • A | 0.4916 | 0.0799 | 6.1531 | 3.4705E-05 | 1.3321 |
| Z1 • B | 0.1365 | 0.0725 | 1.8830 | 0.0823 | 1.0971 |
| Z1 • C | 0.2176 | 0.0799 | 2.7235 | 0.0174 | 1.3321 |
| Z1 • A • C | -1.0406 | 0.4015 | -2.5916 | 0.0224 | 1.5631 |
| Z2 • A | 0.5910 | 0.0800 | 7.3859 | 5.3010E-06 | 1.3364 |
| Z2 • B | 0.3541 | 0.0875 | 4.0475 | 0.0014 | 1.5971 |
| Z2 • C | 0.3285 | 0.0800 | 4.1056 | 0.0012 | 1.3364 |
| Z2 • A • B | -0.9654 | 0.4019 | -2.4020 | 0.0320 | 1.5661 |
| Z2 • B • C | 0.9396 | 0.4019 | 2.3378 | 0.0360 | 1.5661 |

The regression model is

{\displaystyle {\begin{aligned}&y=2.9421{{x}_{1}}+1.7346{{x}_{2}}+1.6596{{x}_{3}}-4.4170{{x}_{1}}{{x}_{2}}-0.9170{{x}_{1}}{{x}_{3}}+2.4480{{x}_{2}}{{x}_{3}}\\&+0.4916{{x}_{1}}{{z}_{1}}+0.1365{{x}_{2}}{{z}_{1}}+0.2176{{x}_{3}}{{z}_{1}}-1.0406{{x}_{1}}{{x}_{3}}{{z}_{1}}+0.5910{{x}_{1}}{{z}_{2}}\\&+0.3541{{x}_{2}}{{z}_{2}}+0.3285{{x}_{3}}{{z}_{2}}-0.9654{{x}_{1}}{{x}_{2}}{{z}_{2}}+0.9396{{x}_{2}}{{x}_{3}}{{z}_{2}}\end{aligned}}\,\!}

The ANOVA table for this model is:

ANOVA Table

| Source of Variation | Degrees of Freedom | Sum of Squares [Partial] | Mean Squares [Partial] | F Ratio | P Value |
|---|---|---|---|---|---|
| Model | 14 | 14.5066 | 1.0362 | 33.5558 | 6.8938E-08 |
| Component Only | | | | | |
| Linear | 2 | 4.1446 | 2.0723 | 67.1102 | 1.4088E-07 |
| A • B | 1 | 3.7216 | 3.7216 | 120.5208 | 6.0305E-08 |
| A • C | 1 | 0.1604 | 0.1604 | 5.1949 | 0.0402 |
| B • C | 1 | 1.1431 | 1.1431 | 37.0173 | 3.8782E-05 |
| Component • Z1 | | | | | |
| Z1 • A | 1 | 1.1691 | 1.1691 | 37.8604 | 3.4705E-05 |
| Z1 • B | 1 | 0.1095 | 0.1095 | 3.5456 | 0.0823 |
| Z1 • C | 1 | 0.2290 | 0.2290 | 7.4172 | 0.0174 |
| Z1 • A • C | 1 | 0.2074 | 0.2074 | 6.7165 | 0.0224 |
| Component • Z2 | | | | | |
| Z2 • A | 1 | 1.6845 | 1.6845 | 54.5517 | 5.3010E-06 |
| Z2 • B | 1 | 0.5059 | 0.5059 | 16.3819 | 0.0014 |
| Z2 • C | 1 | 0.5205 | 0.5205 | 16.8556 | 0.0012 |
| Z2 • A • B | 1 | 0.1782 | 0.1782 | 5.7698 | 0.0320 |
| Z2 • B • C | 1 | 0.1688 | 0.1688 | 5.4651 | 0.0360 |
| Residual | 13 | 0.4014 | 0.0309 | | |
| Lack of Fit | 13 | 0.4014 | 0.0309 | | |
| Total | 27 | 14.9080 | | | |

The above table shows that both process factors have significant effects on the texture of the patties. Since the model is fairly complicated, the best settings for the process variables and the components cannot be identified by inspection. The optimization tool in the DOE folio is therefore used with the above model. The target texture value is ${\displaystyle 3\times {{10}^{3}}\,\!}$ grams, with an acceptable range of ${\displaystyle 2.5-3.5\times {{10}^{3}}\,\!}$ grams. The optimal solution is Beef = 98.5%, Pork = 0.7%, Lamb = 0.7%, Temperature = 375.7 and Time = 40.

## References

1. Cornell, John (2002), Experiments with Mixtures: Designs, Models, and the Analysis of Mixture Data, John Wiley & Sons, Inc., New York.
2. Piepel, G. F. (1983), "Defining Consistent Constraint Regions in Mixture Experiments," Technometrics, Vol. 25, pp. 97-101.
3. Snee, R. D. (1979), "Experimental Designs for Mixture Systems with Multiple Component Constraints," Communications in Statistics - Theory and Methods, Vol. A8, pp. 303-326.
# Monotonicity

Monotonicity - Application of Derivatives, Class 12, Maths

A. Definitions

The function f(x) is called strictly increasing on the open interval (a, b) if for any two points x1 and x2 belonging to the interval and satisfying x1 < x2, the inequality f(x1) < f(x2) holds. The function f(x) is called strictly decreasing on (a, b) if for any points x1 < x2 in the interval, the inequality f(x1) > f(x2) holds.

A function f is said to be non-decreasing in an interval I contained in the domain of f if f(x1) ≤ f(x2) whenever x1 < x2, for all numbers x1, x2 in I. If f(x1) < f(x2) whenever x1 < x2, for all numbers x1, x2 in I, then f is said to be strictly increasing in I. Non-increasing and strictly decreasing functions are defined in a similar way. If f is strictly increasing in I, then the graph of f rises as we traverse it from left to right; if f is strictly decreasing in I, the graph of f falls in I. Some examples are shown in the Figure.

If a function f is either non-decreasing in an interval I or non-increasing in I, then f is said to be monotonic in I. Similarly, f is said to be strictly monotonic in I if f is either strictly increasing or strictly decreasing in I.

Basic definition test: The function f(x) is said to be strictly increasing at a point x0 if for a sufficiently small h > 0 the condition (Fig. 1)

f(x0 - h) < f(x0) < f(x0 + h)

is fulfilled. The function f(x) is said to be strictly decreasing at a point x0 if for a sufficiently small h > 0 the condition (Fig. 2)

f(x0 - h) > f(x0) > f(x0 + h)

is fulfilled.
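The definitions above can be probed numerically: sample a function on a grid and check whether consecutive values strictly rise or strictly fall. This is only a heuristic sketch (it suggests, but does not prove, monotonicity on the interval), and the function name is ours.

```python
# Heuristic check of strict monotonicity on an open interval (a, b):
# sample n-1 interior points and compare consecutive values.

def monotonic_on_grid(f, a, b, n=1000):
    """Classify f on (a, b) from sample points:
    'increasing', 'decreasing', or 'neither'."""
    xs = [a + (b - a) * k / n for k in range(1, n)]  # stay inside the open interval
    ys = [f(x) for x in xs]
    if all(y1 < y2 for y1, y2 in zip(ys, ys[1:])):
        return "increasing"
    if all(y1 > y2 for y1, y2 in zip(ys, ys[1:])):
        return "decreasing"
    return "neither"

print(monotonic_on_grid(lambda x: x**3, 0, 2))   # strictly increasing on (0, 2)
print(monotonic_on_grid(lambda x: x * x, -1, 1)) # falls, then rises: 'neither'
```

Such a grid test is a useful sanity check on the sign-scheme analysis done by hand in the examples that follow.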
A differentiable function is called increasing in an interval (a, b) if it is increasing at every point within the interval (though not necessarily at the end points). A function decreasing in an interval (a, b) is defined similarly.

Sufficiency Test: If the derivative f'(x) is everywhere positive in an interval (a, b), then the function f(x) is increasing in this interval; if f'(x) is everywhere negative, then f(x) is decreasing.

Note: The test (criterion) also holds true when the derivative takes on zero values in the interval (a, b), so long as f'(x) is not identically zero throughout (a, b) or on some subinterval (a', b') of (a, b). (The function f(x) would be constant on such an interval.) If f'(a) = 0, examine the signs of f'(a+) and f'(a-):

(a) If f'(a+) > 0 and f'(a-) > 0, then f is strictly increasing at a.
(b) If f'(a+) < 0 and f'(a-) < 0, then f is strictly decreasing at a.

Note: If a function is invertible, it has to be either increasing or decreasing. For a continuous function, the intervals in which it rises and falls may be separated by points at which its derivative is zero or fails to exist.

B. Critical Point

A critical point of a function f is a number c in the domain of f such that either f'(c) = 0 or f'(c) does not exist.

Ex.1 Find the critical points of f(x) = x^(3/5) (4 - x).

Sol. f'(x) = (3/5) x^(-2/5) (4 - x) - x^(3/5) = [3(4 - x) - 5x] / (5 x^(2/5)) = (12 - 8x) / (5 x^(2/5)). Therefore, f'(x) = 0 if 12 - 8x = 0, that is, x = 3/2, and f'(x) does not exist when x = 0. Thus, the critical points are 3/2 and 0.

Ex.2 Find the critical numbers for the function f(x) = e^x / (x - 2).

Sol. f'(x) = [e^x (x - 2) - e^x] / (x - 2)^2 = e^x (x - 3) / (x - 2)^2. The derivative is not defined at x = 2, but f is not defined at 2 either, so x = 2 is not a critical number. The actual critical numbers are found by solving f'(x) = 0, which gives x = 3. This is the only critical number, since e^x > 0.

Ex.3 Find all possible values of the parameter b for which the function f(x) = sin 2x - 8(b + 2) cos x - (4b^2 + 16b + 6) x is monotonic decreasing throughout the number line and has no critical points.
Sol. f'(x) = 2 cos 2x + 8(b + 2) sin x - (4b^2 + 16b + 6)
= 2(1 - 2 sin^2 x) + 8(b + 2) sin x - (4b^2 + 16b + 6)
= -4 [sin^2 x - 2(b + 2) sin x + (b^2 + 4b + 1)]

For f to be monotonic decreasing with no critical points, we need f'(x) < 0 for all x. The discriminant is D = 4(b + 2)^2 - 4(b^2 + 4b + 1) = 4[3] = 12, which is always positive. Now let sin x = y, y ∈ [-1, 1], and g(y) = y^2 - 2(b + 2) y + (b^2 + 4b + 1). We have to find those values of b for which g(y) > 0 for all y ∈ [-1, 1]. Since D > 0, g has two distinct real roots, so both roots must lie on the same side of [-1, 1]: either g(-1) > 0 with vertex b + 2 < -1, or g(1) > 0 with vertex b + 2 > 1.

The first condition gives 1 + 2(b + 2) + b^2 + 4b + 1 > 0, i.e.
b^2 + 6b + 6 > 0 .....(1)
together with b < -3 .....(2),
which combine to b < -3 - √3. The second condition gives 1 - 2(b + 2) + b^2 + 4b + 1 > 0, i.e. b^2 + 2b - 2 > 0, together with b > -1, which combine to b > √3 - 1. Hence b ∈ (-∞, -3 - √3) ∪ (√3 - 1, ∞).

Ex.5 Find all possible values of a such that f(x) = e^(2x) - (a + 1) e^x + 2x is strictly increasing for x ∈ R.

Sol. f'(x) = 2e^(2x) - (a + 1) e^x + 2. We need f'(x) ≥ 0 for all x ∈ R. Putting e^x = t, t ∈ (0, ∞), this becomes 2t^2 - (a + 1) t + 2 ≥ 0 for all t ∈ (0, ∞). Hence either
(i) D ≤ 0 ⇒ (a + 1)^2 - 16 ≤ 0 ⇒ (a + 5)(a - 3) ≤ 0 ⇒ a ∈ [-5, 3], or
(ii) both roots are negative: D ≥ 0, sum of roots (a + 1)/2 < 0, and the value at t = 0 (namely 2) is positive, which gives a ≤ -5.
Taking the union of (i) and (ii), we get a ∈ (-∞, 3].

Ex.6 Prove that the function f(x) = (ln x)/x is strictly decreasing in (e, ∞). Hence, prove that 303^202 < 202^303.

Sol. We have f(x) = (ln x)/x, x > 0, so f'(x) = (1 - ln x)/x^2 < 0 for x > e ⇒ f(x) strictly decreases in (e, ∞). Since 303 > 202 > e, we have f(303) < f(202), i.e. (ln 303)/303 < (ln 202)/202, i.e. 202 ln 303 < 303 ln 202 ⇒ 303^202 < 202^303, which is the desired result.

Ex.7 Let f(x) = x^3 + 2x^2 + x + 5. Show that f(x) has only one real root α, and that [α] = -3.

Sol.
We have f(x) = x^3 + 2x^2 + x + 5, x ∈ R, and f'(x) = 3x^2 + 4x + 1 = (x + 1)(3x + 1), x ∈ R. Drawing the number line for f'(x), we find that f(x) strictly increases in (-∞, -1), strictly decreases in (-1, -1/3), and strictly increases in (-1/3, ∞). Also, we have f(-1) = -1 + 2 - 1 + 5 = 5 and f(-1/3) = -1/27 + 2/9 - 1/3 + 5 = 131/27. Since both the local maximum value f(-1) = 5 and the local minimum value f(-1/3) = 131/27 are positive, the graph of f(x) (see fig.) cuts the X-axis only once. Now, we have f(-3) = -27 + 18 - 3 + 5 = -7 and f(-2) = -8 + 8 - 2 + 5 = 3, which are of opposite signs. This proves that the curve cuts the X-axis somewhere between -3 and -2, so f(x) = 0 has a root α lying between -3 and -2. Hence [α] = -3.

Ex.9 If f : R → R is a polynomial such that f(x) = 0 has real and distinct roots, show that the equation [f'(x)]^2 - f(x) · f''(x) = 0 cannot have real roots.

Sol. Let f(x) = c (x - x1)(x - x2) ...... (x - xn), where x1, ..., xn are the distinct real roots. Let

h(x) = f'(x)/f(x) = Σ 1/(x - xi)

⇒ h'(x) = -Σ 1/(x - xi)^2 < 0 wherever f(x) ≠ 0. But h'(x) = [f(x) f''(x) - (f'(x))^2] / f(x)^2, so f(x) · f''(x) - [f'(x)]^2 < 0, i.e. [f'(x)]^2 - f(x) f''(x) > 0 whenever f(x) ≠ 0. At a root xi of f we have f(xi) = 0 and, the roots being distinct, f'(xi) ≠ 0, so [f'(xi)]^2 - f(xi) f''(xi) = [f'(xi)]^2 > 0. Hence the expression is never zero.

Alternatively: a function f(x) satisfying [f'(x)]^2 - f(x) · f''(x) = 0 satisfies (f'/f)' = 0, i.e. f(x) = c e^(ax), which can't have any root.

C. Intervals of Monotonicity

Ex.10 Find the intervals of monotonicity of the following functions:
(a) f(x) =
(b) f(x) = 2x^2 - ln |x|
(c) f(x) =

Sol. (a) From the sign scheme for f'(x), we have: f(x) strictly increases in (-∞, 0), strictly decreases in (0, 1), strictly increases in (1, 2), strictly decreases in (2, ∞).
Ans.: Increases in (-∞, 0), (1, 2); Decreases in (0, 1), (2, ∞).

(b) We have f(x) = 2x^2 - ln |x| and f'(x) = 4x - 1/x = (2x - 1)(2x + 1)/x. From the sign scheme for f'(x), we have: f(x) strictly decreases in (-∞, -1/2), strictly increases in (-1/2, 0), strictly decreases in (0, 1/2), strictly increases in (1/2, ∞).
Ans.: Increases in (-1/2, 0), (1/2, ∞); Decreases in (-∞, -1/2), (0, 1/2).

(c) f(x) strictly decreases in (-∞, -3), strictly increases in (-3, 3), strictly decreases in (3, ∞).
Ans.: Increases in (-3, 3); Decreases in (-∞, -3), (3, ∞).

Ex.11 A function f(x) satisfies the equation x^2 f'(x) + 2x f(x) - x + 1 = 0 (x ≠ 0). If f(1) = 0, find the intervals of monotonicity of f.

Sol. Writing y = f(x), the equation is x^2 y' + 2xy = x - 1, i.e. (x^2 y)' = x - 1. Integrating, x^2 y = x^2/2 - x + C, and f(1) = 0 gives C = 1/2, so y = (x - 1)^2 / (2x^2). Hence y' = (x - 1)/x^3, which is positive for x < 0 or x > 1 and negative for 0 < x < 1.
Ans.: Increasing in (-∞, 0) ∪ (1, ∞); Decreasing in (0, 1).

D. Operations on Monotonic Functions

I. (a) Negative: If f is an increasing function, then its negative h = -f is a decreasing function. By derivatives: h'(x) = -f'(x), and f'(x) > 0, so h'(x) < 0 ⇒ h is a decreasing function.
In short, -(an increasing function) = a decreasing function, i.e. -I = D. Similarly -D = I.

(b) Reciprocal: The reciprocal of an increasing function (of constant sign) is a decreasing function.

II. (a) Sum: If f is an increasing function and g is also an increasing function, then h = f + g is an increasing function. By derivatives: h'(x) = f'(x) + g'(x); since f and g are increasing, f'(x) and g'(x) are positive, so f'(x) + g'(x) is positive ⇒ f(x) + g(x) increases.
In short, an increasing function + an increasing function = an increasing function, i.e.
(i) I + I = I (ii) I + D = can't say (iii) D + D = D

(b) Difference: Monotonicity of the difference of two functions can be predicted using I(a) and II(a):
I - I = I + (-I) = I + D = can't say
I - D = I + (-D) = I + I = increasing
D - I = D + (-I) = D + D = decreasing
D - D = D + (-D) = D + I = can't say

III. (a) Product: Consider h = f × g.
Case I: Both functions f and g in the product are positive. If f and g are both increasing, then h = f × g is also increasing. In short: I × I = I, I × D = can't say, D × D = D.
Case II: If either function takes negative values, predict the monotonicity using I(a) and Case I of III(a). For example, if f is increasing and negative while g is decreasing and positive, then -f is decreasing and positive, so (-f) × g is decreasing, and h(x) = f(x) × g(x) = -[(-f(x)) × g(x)] = -(a decreasing function) = increasing.

(b) Division: Monotonicity of the division of two functions can be predicted using I(b) and III(a).
For example, I/D = I × (1/D) = I × I = I (assuming that both functions involved take positive values).

IV. Composition: (I) I(I) = I (II) I(D) = D (III) D(I) = D (IV) D(D) = I
For example, let h(x) = D(D(x)): as x increases, D(x) decreases, so D(D(x)) increases.

E. Inequalities

General approach to proving inequalities: to prove f(x) ≥ g(x) for x ≥ a, we
assume h(x) = f(x) - g(x) and find h'(x) = f'(x) - g'(x).
If h'(x) ≥ 0: h is increasing, so for x ≥ a we get h(x) ≥ h(a). If h(a) ≥ 0, then h(x) ≥ 0 for x ≥ a, i.e. the given inequality is true.
If h'(x) ≤ 0: h is decreasing, so for x ≥ a we get h(x) ≤ h(a). If h(a) ≤ 0, then h(x) ≤ 0 for x ≥ a, i.e. the given inequality is false.
Note: If the sign of h'(x) is not obvious, then to determine its sign assume g(x) = h'(x) and apply the above procedure to g(x).

Ex.12 Prove that 2x sec x + x > 3 tan x for 0 < x < π/2.

Sol. Let f(x) = 2x sec x + x - 3 tan x. Then
f'(x) = 2 sec x + 2x sec x tan x + 1 - 3 sec^2 x = sec^2 x [2 cos x + 2x sin x + cos^2 x - 3]
Consider g(x) = 2 cos x + 2x sin x + cos^2 x - 3. Then
g'(x) = -2 sin x + 2 sin x + 2x cos x - 2 sin x cos x = 2 cos x (x - sin x) > 0 for x ∈ (0, π/2),
since x > sin x there. Thus g is increasing and g(0) = 2 + 0 + 1 - 3 = 0, so g(x) > 0 on (0, π/2). Hence f'(x) > 0, f is increasing, and since f(0) = 0 we get f(x) > 0, i.e. 2x sec x + x > 3 tan x.

Ex.13 Prove that tan x > x + x^3/3 for all x ∈ (0, π/2).

Sol. Let f(x) = tan x - x - x^3/3 ...(1)
Clearly, f(x) is defined at all x ∈ (0, π/2). Now
f'(x) = sec^2 x - 1 - x^2 ...(2)
f''(x) = 2 sec^2 x tan x - 2x ...(3)
f'''(x) = 2 sec^4 x + 4 sec^2 x tan^2 x - 2 = 2(1 + tan^2 x)^2 + 4 sec^2 x tan^2 x - 2 = 2 tan^4 x + 4 tan^2 x + 4 sec^2 x tan^2 x > 0 for all x ∈ (0, π/2)
⇒ f''(x) is monotonic increasing in (0, π/2), so f''(x) > f''(0) when x ∈ (0, π/2). But from (3), f''(0) = 0. Thus f''(x) > 0 for all x ∈ (0, π/2).
∴ f'(x) is monotonic increasing in (0, π/2), so f'(x) > f'(0). But from (2), f'(0) = 1 - 1 - 0 = 0. Thus f'(x) > 0 for all x ∈ (0, π/2).
∴ f(x) is monotonic increasing in (0, π/2), so f(x) > f(0). But from (1), f(0) = 0. Thus f(x) > 0 for all x ∈ (0, π/2), which proves the inequality.
Ex.15 Examine which is greater: sin x tan x or x^2, for x ∈ (0, π/2).

Sol. Let f(x) = sin x tan x - x^2. Then
f'(x) = cos x tan x + sin x sec^2 x - 2x = sin x + sin x sec^2 x - 2x
⇒ f''(x) = cos x + cos x sec^2 x + 2 sec^2 x sin x tan x - 2 = (cos x + sec x - 2) + 2 sec^2 x sin x tan x
Now cos x + sec x - 2 = (cos x - 1)^2 / cos x ≥ 0, and 2 sec^2 x tan x sin x > 0 because x ∈ (0, π/2).
⇒ f''(x) > 0 ⇒ f'(x) is monotonic increasing. Hence f'(x) > f'(0) = 0 ⇒ f(x) is monotonic increasing ⇒ f(x) > f(0) = 0 ⇒ sin x tan x - x^2 > 0.
Hence sin x tan x > x^2 on (0, π/2).

Ex.16 Prove that cot(x/2) - cot x ≥ 1 for all x ∈ (0, π).

Sol. Consider the function f(x) = cot(x/2) - 1 - cot x, x ∈ (0, π).
⇒ f(x) strictly decreases in (0, π/2) and strictly increases in (π/2, π)
⇒ f(x) has its least value at x = π/2
⇒ f(x) ≥ f(π/2) = cot(π/4) - 1 - cot(π/2) = 0,
which proves the desired result.

Ex.17 Prove that ln(1 + x) > x/(1 + x) for x > 0. Hence, show that the function f(x) = (1 + 1/x)^x strictly increases in (0, ∞).

Sol. Consider the function g(x) = x/(1 + x) - ln(1 + x), x > 0. Then g'(x) = 1/(1 + x)^2 - 1/(1 + x) = -x/(1 + x)^2 < 0
⇒ g(x) strictly decreases in (0, ∞) ⇒ g(x) < g(0) = 0, i.e. ln(1 + x) > x/(1 + x) ...(1),
which gives the desired result. Now, for f(x) = (1 + 1/x)^x we have ln f(x) = x ln(1 + 1/x), so
f'(x)/f(x) = ln(1 + 1/x) - 1/(1 + x).
Using result (1) with x replaced by 1/x, ln(1 + 1/x) > (1/x)/(1 + 1/x) = 1/(1 + x), so f'(x) > 0 ⇒ f(x) strictly increases in (0, ∞).

Ex.18 Prove that sin x tan x > x^2 for x ∈ (0, π/2).

Sol. Let f(x) = sin x tan x - x^2
⇒ f'(x) = sin x sec^2 x + sin x - 2x
⇒ f''(x) = 2 sin x sec^2 x tan x + cos x - 2 + sec x = 2 sin x tan x sec^2 x + (cos x + sec x - 2) > 0, as in Ex.15.
⇒ f'(x) is an increasing function ⇒ f'(x) > f'(0) = 0 ⇒ sin x sec^2 x + sin x - 2x > 0
⇒ f(x) is an increasing function ⇒ f(x) > f(0) = 0 ⇒ sin x tan x - x^2 > 0 ⇒ sin x tan x > x^2.

Ex.19 Prove that sin 1 > cos (sin 1). Also show that the equation sin (cos (sin x)) = cos (sin (cos x)) has only one solution in [0, π/2].

Sol. sin 1 > cos (sin 1) iff sin 1 > sin(π/2 - sin 1) iff 1 > π/2 - sin 1 (since sin is increasing on [0, π/2]) iff 1 + sin 1 > π/2 ...(1)
Since sin 1 > sin(π/4) = 1/√2 ≈ 0.707, we have 1 + sin 1 > 1.707 > π/2 ≈ 1.571. Hence (1) is true ⇒ sin 1 > cos (sin 1).
Now let f(x) = sin (cos (sin x)) - cos (sin (cos x)). Then f(0) = sin (cos 0 composed appropriately) = sin 1 - cos (sin 1) > 0, while f(π/2) = sin (cos 1) - cos (sin 0) = sin (cos 1) - 1 < 0. Since f(0) is positive, f(π/2) is negative, and f is continuous and monotonic decreasing on [0, π/2], f(x) = 0 has exactly one solution in [0, π/2].

Ex.20 Using calculus, establish the inequality (x^b + y^b)^(1/b) < (x^a + y^a)^(1/a), where x > 0, y > 0 and b > a > 0.

Sol. Dividing both sides by y and writing t = x/y > 0, it suffices to prove that (t^b + 1)^(a/b) < t^a + 1.
Let f(t) = (t^b + 1)^(a/b) - t^a - 1. Then
f'(t) = a t^(b-1) (t^b + 1)^(a/b - 1) - a t^(a-1) = a t^(a-1) [ (t^b / (t^b + 1))^((b-a)/b) - 1 ] < 0,
since t^b / (t^b + 1) < 1 and b > a. Hence
f(t) is a decreasing function, so f(t) < f(0) for t > 0; but f(0) = 0, so (t^b + 1)^(a/b) < t^a + 1. Hence proved.

Ex.21 Prove that the function f(x) = -2x^3 + 21x^2 - 60x + 41 is strictly positive in the interval (-∞, 1).

Sol. f(x) = -2x^3 + 21x^2 - 60x + 41
f'(x) = -6x^2 + 42x - 60 = -6(x^2 - 7x + 10) = -6(x - 5)(x - 2)
x ∈ (2, 5) ⇒ f'(x) > 0, i.e., f(x) is monotonic increasing, and x ∉ (2, 5) ⇒ f'(x) < 0, i.e., f(x) is monotonic decreasing.
∴ x ∈ (-∞, 1) ⇒ f(x) is monotonic decreasing. When x ∈ (-∞, 1), x < 1; so f(x) > f(1). But f(1) = -2 + 21 - 60 + 41 = 0.
∴ x ∈ (-∞, 1) ⇒ f(x) > f(1) = 0
∴ f(x) is strictly positive in the interval (-∞, 1).

F. Rolle's Theorem

Let f be a function that satisfies the following three hypotheses:
1. f is continuous on the closed interval [a, b].
2. f is differentiable on the open interval (a, b).
3. f(a) = f(b)
Then there is a number c in (a, b) such that f'(c) = 0.

Before giving the proof, let's take a look at the graphs of some typical functions that satisfy the three hypotheses. Figure 1 shows the graphs of four such functions. In each case it appears that there is at least one point (c, f(c)) on the graph where the tangent is horizontal and therefore f'(c) = 0. Thus Rolle's Theorem is plausible.

Proof: There are three cases.
Case I: f(x) = k, a constant. Then f'(x) = 0, so the number c can be taken to be any number in (a, b).
Case II: f(x) > f(a) for some x in (a, b) [as in Figure 1(b) or (c)]. By the Extreme Value Theorem (which we can apply by hypothesis 1), f has a maximum value somewhere in [a, b]. Since f(a) = f(b), it must attain this maximum value at a number c in the open interval (a, b). Then f has a local maximum at c and, by hypothesis 2, f is differentiable at c. Therefore f'(c) = 0 by Fermat's Theorem.
Case III: f(x) < f(a) for some x in (a, b) [as in Figure 1(c) or (d)]. By the Extreme Value Theorem, f has a minimum value in [a, b] and, since f(a) = f(b), it attains this minimum value at a number c in (a, b). Again f'(c) = 0 by Fermat's Theorem.
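The conclusion of Rolle's theorem can be illustrated numerically (a sketch, not part of the proof): f(x) = x^3 - x satisfies f(0) = f(1) = 0, so some c in (0, 1) must have f'(c) = 0. Since f'(x) = 3x^2 - 1 changes sign on (0, 1), we can locate that c by bisection; the helper name below is ours.

```python
# Locate the c guaranteed by Rolle's theorem for f(x) = x^3 - x on [0, 1]
# by bisecting the derivative f'(x) = 3x^2 - 1.

def bisect(g, lo, hi, tol=1e-12):
    """Find a root of g in [lo, hi], assuming g(lo) and g(hi) have opposite signs."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

fprime = lambda x: 3 * x * x - 1   # derivative of f(x) = x^3 - x
c = bisect(fprime, 0.0, 1.0)       # f'(0) = -1 < 0 and f'(1) = 2 > 0
print(c)                           # about 0.5774, i.e. 1/sqrt(3)
```

The exact value here is c = 1/√3, which indeed lies in the open interval (0, 1) as the theorem requires.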
Ex.22 Prove that the equation x^3 + x - 1 = 0 has exactly one real root.

Sol. First we use the Intermediate Value Theorem to show that a root exists. Let f(x) = x^3 + x - 1. Then f(0) = -1 < 0 and f(1) = 1 > 0. Since f is a polynomial, it is continuous, so the Intermediate Value Theorem states that there is a number c between 0 and 1 such that f(c) = 0. Thus, the given equation has a root.
To show that the equation has no other real root, we use Rolle's Theorem and argue by contradiction. Suppose that it had two roots a and b. Then f(a) = 0 = f(b) and, since f is a polynomial, it is differentiable on (a, b) and continuous on [a, b]. Thus, by Rolle's Theorem, there is a number c between a and b such that f'(c) = 0. But f'(x) = 3x^2 + 1 ≥ 1 for all x (since x^2 ≥ 0), so f'(x) can never be 0. This gives a contradiction. Therefore, the equation can't have two real roots.

Ex.23 Let f(x) and g(x) be differentiable for 0 ≤ x ≤ 1, with f(0) = 2, g(0) = 0, f(1) = 6. Let there exist a real number c in [0, 1] such that f'(c) = 2g'(c). Find the value of g(1) for which this is guaranteed by Rolle's theorem.

Sol. Consider φ(x) = f(x) - 2g(x) defined on [0, 1]. Since f(x) and g(x) are differentiable for 0 ≤ x ≤ 1, φ(x) is differentiable on (0, 1) and continuous on [0, 1].
φ(0) = f(0) - 2g(0) = 2 - 0 = 2
φ(1) = f(1) - 2g(1) = 6 - 2g(1)
Now φ'(x) = f'(x) - 2g'(x), so φ'(c) = f'(c) - 2g'(c) = 0 is exactly the conclusion of Rolle's theorem for φ on [0, 1], which requires
φ(0) = φ(1) ⇒ 2 = 6 - 2g(1) ⇒ g(1) = 2

Our main use of Rolle's Theorem is in proving the following important theorem, which was first stated by another French mathematician, Joseph-Louis Lagrange.

Ex.24 If f(x) is continuous in [a, b] and differentiable in (a, b), prove that there is at least one c ∈ (a, b) such that

[f(b) - f(a)] / (b^3 - a^3) = f'(c) / (3c^2)

Sol. Let us consider a function h(x) = f(x) - f(a) + A(x^3 - a^3), where A is obtained from the relation h(b) = 0.
So that 0 = h(b) = f(b) - f(a) + A(b^3 - a^3)    ...(i)
Also, h(a) = 0.
Since
(1) h(x) is continuous in [a, b],
(2) h(x) is differentiable in (a, b), and
(3) h(a) = 0 = h(b),
all three conditions of Rolle's theorem are satisfied. Hence there must exist a c ∈ (a, b) such that h'(c) = 0.
⇒ f'(c) + A(3c^2) = 0, and since (i) gives A = -(f(b) - f(a))/(b^3 - a^3), we get (f(b) - f(a))/(b^3 - a^3) = f'(c)/(3c^2).

G. The Mean Value Theorem

Let f be a function that satisfies the following hypotheses:
1. f is continuous on the closed interval [a, b].
2. f is differentiable on the open interval (a, b).
Then there is a number c in (a, b) such that

f'(c) = (f(b) - f(a))/(b - a)    ...(1)

Before proving this theorem, we can see that it is reasonable by interpreting it geometrically. Figures (a) and (b) show the points A(a, f(a)) and B(b, f(b)) on the graphs of two differentiable functions. The slope of the secant line AB is

(f(b) - f(a))/(b - a)

which is the same expression as on the right side of Equation 1. Since f'(c) is the slope of the tangent line at the point (c, f(c)), the Mean Value Theorem, in the form given by Equation 1, says that there is at least one point P(c, f(c)) on the graph where the slope of the tangent line is the same as the slope of the secant line AB. In other words, there is a point P where the tangent line is parallel to the secant line AB.

Proof We apply Rolle's Theorem to a new function h defined as the difference between f and the function whose graph is the secant line AB. The equation of the line AB can be written as y = f(a) + [(f(b) - f(a))/(b - a)](x - a), so h is

h(x) = f(x) - f(a) - [(f(b) - f(a))/(b - a)](x - a)

First we must verify that h satisfies the three hypotheses of Rolle's Theorem.
1. The function h is continuous on [a, b] because it is the sum of f and a first-degree polynomial, both of which are continuous.
2. The function h is differentiable on (a, b) because both f and the first-degree polynomial are differentiable. In fact, we can compute h' directly:

h'(x) = f'(x) - (f(b) - f(a))/(b - a)

(Note that f(a) and [f(b) - f(a)]/(b - a) are constants.)
3.
h(a) = f(a) - f(a) - [(f(b) - f(a))/(b - a)](a - a) = 0
h(b) = f(b) - f(a) - [(f(b) - f(a))/(b - a)](b - a) = f(b) - f(a) - [f(b) - f(a)] = 0
Therefore h(a) = h(b).

Since h satisfies the hypotheses of Rolle's Theorem, that theorem says there is a number c in (a, b) such that h'(c) = 0. Therefore 0 = h'(c) = f'(c) - (f(b) - f(a))/(b - a), and so f'(c) = (f(b) - f(a))/(b - a).

Ex.25 To illustrate the Mean Value Theorem with a specific function, let's consider f(x) = x^3 - x, a = 0, b = 2. Since f is a polynomial, it is continuous and differentiable for all x, so it is certainly continuous on [0, 2] and differentiable on (0, 2). Find c such that f(2) - f(0) = f'(c)(2 - 0).

Sol. f(2) - f(0) = 6 - 0 = 6 and f'(x) = 3x^2 - 1, so the equation becomes 6 = (3c^2 - 1)·2, which gives c^2 = 4/3, that is, c = ±2/√3. But c must lie in (0, 2), so c = 2/√3.

Ex.26 If f'(x) = 1/(1 + x^2) for all x and f(0) = 0, show that 0.4 < f(2) < 2.

Sol. f'(x) = 1/(1 + x^2)    ...(1)
f'(x) > 0 for all x [∵ 1 + x^2 > 0]
Also given f(0) = 0    ...(2)
From (1), it follows that f(x) is differentiable at all x; therefore f(x) is also continuous at all x.
∴ By Lagrange's mean value theorem in [0, 2], there is a c in (0, 2) with f(2) - f(0) = f'(c)(2 - 0), i.e., f(2) = 2/(1 + c^2)    ...(3)
Now 0 < c < 2    ...(4)
⇒ 1 < 1 + c^2 < 5 ⇒ 2/5 < 2/(1 + c^2) < 2    ...(5)
From (3), (4) and (5) it follows that 0.4 < f(2) < 2.

H. Curve Sketching

The following checklist is intended as a guide to sketching a curve y = f(x). Not every item is relevant to every function. (For instance, a given curve might not have an asymptote or possess symmetry.) But the guidelines provide all the information you need to make a sketch that displays the most important aspects of the function.

I. Domain It's often useful to start by determining the domain D of f, that is, the set of values of x for which f(x) is defined.

II. Intercepts The y-intercept is f(0) and this tells us where the curve intersects the y-axis. To find the x-intercepts, we set y = 0 and solve for x. (You can omit this step if the equation is difficult to solve.)

III. Symmetry
(a) If f(-x) = f(x) for all x in D, that is, the equation of the curve is unchanged when x is replaced by -x, then f is an even function and the curve is symmetric about the y-axis. This means that our work is cut in half. If we know what the curve looks like for x ≥ 0, then we need only reflect about the y-axis to obtain the complete curve [see Figure (a)]. Here are some examples: y = x^2, y = x^4, y = |x|, and y = cos x.
(b) If f(-x) = -f(x) for all x in D, then f is an odd function and the curve is symmetric about the origin. Again we can obtain the complete curve if we know what it looks like for x ≥ 0. [Rotate 180° about the origin; see Figure (b).] Some simple examples of odd functions are y = x, y = x^3, y = x^5, and y = sin x.
(c) If f(x + p) = f(x) for all x in D, where p is a positive constant, then f is called a periodic function and the smallest such number p is called the period. For instance, y = sin x has period 2π and y = tan x has period π. If we know what the graph looks like in an interval of length p, then we can use translation to sketch the entire graph (see Figure).

IV. Asymptotes
(a) Horizontal Asymptotes. If either lim(x→∞) f(x) = L or lim(x→-∞) f(x) = L, then the line y = L is a horizontal asymptote of the curve y = f(x). If it turns out that lim(x→∞) f(x) = ∞ (or -∞), then we do not have an asymptote to the right, but that is still useful information for sketching the curve.
(b) Vertical Asymptotes. The line x = a is a vertical asymptote if at least one of the one-sided limits of f(x) as x → a is ∞ or -∞. (For rational functions you can locate the vertical asymptotes by equating the denominator to 0 after canceling any common factors. But for other functions this method does not apply.) Furthermore, in sketching the curve it is very useful to know exactly which of these one-sided limits is infinite and with which sign. If f(a) is not defined but a is an endpoint of the domain of f, then you should compute the one-sided limit of f(x) as x approaches a, whether or not this limit is infinite.

V. Intervals of Increase / Decrease Use the I/D Test. Compute f'(x) and find the intervals on which f'(x) is positive (f is increasing) and the intervals on which f'(x) is negative (f is decreasing).

VI. Local Maximum and Minimum Values Find the critical numbers of f [the numbers c where f'(c) = 0 or f'(c) does not exist]. Then use the First Derivative Test. If f' changes from positive to negative at a critical number c, then f(c) is a local maximum.
If f' changes from negative to positive at c, then f(c) is a local minimum. Although it is usually preferable to use the First Derivative Test, you can use the Second Derivative Test if c is a critical number such that f''(c) ≠ 0. Then f''(c) > 0 implies that f(c) is a local minimum, whereas f''(c) < 0 implies that f(c) is a local maximum.

VII. Concavity and Points of Inflection Compute f''(x) and use the Concavity Test. The curve is concave upward where f''(x) > 0 and concave downward where f''(x) < 0. Inflection points occur where the direction of concavity changes.

VIII. Sketch the Curve Using the information in items I-VII, draw the graph. Sketch the asymptotes as dashed lines. Plot the intercepts, maximum and minimum points, and inflection points. Then make the curve pass through these points, rising and falling according to V, with concavity according to VII, and approaching the asymptotes. If additional accuracy is desired near any point, you can compute the value of the derivative there. The tangent indicates the direction in which the curve proceeds.

Ex.27 Use the guidelines to sketch the curve y = 2x^2/(x^2 - 1).

Sol.
I. The domain is {x | x^2 - 1 ≠ 0} = {x | x ≠ ±1} = (-∞, -1) ∪ (-1, 1) ∪ (1, ∞).
II. The x- and y-intercepts are both 0.
III. Since f(-x) = f(x), the function f is even. The curve is symmetric about the y-axis.
IV. lim(x→±∞) 2x^2/(x^2 - 1) = 2, so the line y = 2 is a horizontal asymptote. Since the denominator is 0 when x = ±1, we compute the one-sided limits there and find that the lines x = 1 and x = -1 are vertical asymptotes. This information about limits and asymptotes enables us to draw the preliminary sketch in Figure, showing the parts of the curve near the asymptotes.
V. f'(x) = -4x/(x^2 - 1)^2. Since f'(x) > 0 when x < 0 (x ≠ -1) and f'(x) < 0 when x > 0 (x ≠ 1), f is increasing on (-∞, -1) and (-1, 0) and decreasing on (0, 1) and (1, ∞).
VI. The only critical number is x = 0. Since f' changes from positive to negative at 0, f(0) = 0 is a local maximum by the First Derivative Test.
VII. f''(x) = (12x^2 + 4)/(x^2 - 1)^3. Since 12x^2 + 4 > 0 for all x, we have f''(x) > 0 ⇔ x^2 - 1 > 0 ⇔ |x| > 1, so the curve is concave upward on (-∞, -1) and (1, ∞) and concave downward on (-1, 1).
VIII. Using the information in V-VII, we finish the sketch in Figure.

Ex.28 Sketch the graph of f(x) = x^2/√(x + 1).

Sol.
I. Domain = {x | x + 1 > 0} = {x | x > -1} = (-1, ∞)
II. The x- and y-intercepts are both 0.
III. Symmetry: None
IV. Since lim(x→∞) x^2/√(x + 1) = ∞, there is no horizontal asymptote. Since √(x + 1) → 0 as x → -1⁺ and f(x) is always positive, we have lim(x→-1⁺) f(x) = ∞, and so the line x = -1 is a vertical asymptote.
V. f'(x) = x(3x + 4)/[2(x + 1)^(3/2)]. We see that f'(x) = 0 when x = 0 (notice that -4/3 is not in the domain of f), so the only critical number is 0. Since f'(x) < 0 when -1 < x < 0 and f'(x) > 0 when x > 0, f is decreasing on (-1, 0) and increasing on (0, ∞).
VI. Since f'(0) = 0 and f' changes from negative to positive at 0, f(0) = 0 is a local (and absolute) minimum by the First Derivative Test.
VII. f''(x) = (3x^2 + 8x + 8)/[4(x + 1)^(5/2)]. Note that the denominator is always positive. The numerator is the quadratic 3x^2 + 8x + 8, which is always positive because its discriminant is b^2 - 4ac = -32, which is negative, and the coefficient of x^2 is positive. Thus f''(x) > 0 for all x in the domain of f, which means that f is concave upward on (-1, ∞) and there is no point of inflection.
VIII. The curve is sketched in Figure.

Ex.29 Sketch the graph of f(x) = xe^x.

Sol.
I. The domain is R.
II. The x- and y-intercepts are both 0.
III. Symmetry: None
IV. Because both x and e^x become large as x → ∞, we have lim(x→∞) xe^x = ∞. As x → -∞, however, e^x → 0 and so we have an indeterminate product that requires the use of L'Hospital's Rule: lim(x→-∞) xe^x = lim(x→-∞) x/e^(-x) = lim(x→-∞) 1/(-e^(-x)) = 0. Thus, the x-axis is a horizontal asymptote.
V. f'(x) = xe^x + e^x = (x + 1)e^x. Since e^x > 0 for all x, f is decreasing on (-∞, -1) and increasing on (-1, ∞).
VI. Because f'(-1) = 0 and f' changes from negative to positive at x = -1, f(-1) = -e^(-1) is a local (and absolute) minimum.
VII. f''(x) = (x + 1)e^x + e^x = (x + 2)e^x. Since f''(x) > 0 if x > -2 and f''(x) < 0 if x < -2, f is concave upward on (-2, ∞) and concave downward on (-∞, -2). The inflection point is (-2, -2e^(-2)).
VIII. We use this information to sketch the curve in Figure.

Ex.30 Sketch the graph of the function f(x) = x^(2/3)(6 - x)^(1/3).

Sol.
You can use the differentiation rules to check that the first two derivatives are

f'(x) = (4 - x)/[x^(1/3)(6 - x)^(2/3)]    and    f''(x) = -8/[x^(4/3)(6 - x)^(5/3)]

Since f'(x) = 0 when x = 4 and f'(x) does not exist when x = 0 or x = 6, the critical numbers are 0, 4 and 6. To find the local extreme values we use the First Derivative Test. Since f' changes from negative to positive at 0, f(0) = 0 is a local minimum. Since f' changes from positive to negative at 4, f(4) = 2^(5/3) is a local maximum. The sign of f' does not change at 6, so there is no minimum or maximum there.

Looking at the expression for f''(x) and noting that x^(4/3) > 0 for all x ≠ 0, we have f''(x) < 0 for x < 0 and for 0 < x < 6, and f''(x) > 0 for x > 6. So f is concave downward on (-∞, 0) and (0, 6) and concave upward on (6, ∞), and the only inflection point is (6, 0). The graph is sketched in Figure. Note that the curve has vertical tangents at (0, 0) and (6, 0) because |f'(x)| → ∞ as x → 0 and as x → 6.

Ex.31 Plot the following curves:

Sol. (a) We have y = x^3/3 - 3x^2/2 + 2x + 6, whose domain is x ∈ R, and
y' = x^2 - 3x + 2 = (x - 1)(x - 2) > 0 for x ∈ (-∞, 1) ∪ (2, ∞), and < 0 for x ∈ (1, 2)
⇒ y strictly increases in (-∞, 1); strictly decreases in (1, 2); strictly increases in (2, ∞).
Now we have y(-∞) = -∞, y(∞) = ∞. The curve cuts the Y-axis at (0, 6). The curve cuts the negative X-axis somewhere between -1 and -2, since y(-1) > 0 and y(-2) < 0. The plot of the curve is shown alongside.

(b) We have y = x/ln x, whose domain is x ∈ (0, ∞), x ≠ 1, and y' = (ln x - 1)/(ln x)^2
⇒ y strictly decreases in (0, 1) ∪ (1, e); strictly increases in (e, ∞). The plot of the curve is shown alongside.

(c) We have y = x ln x, whose domain is x ∈ (0, ∞), and y' = 1 + ln x
⇒ y strictly decreases in (0, e^(-1)); strictly increases in (e^(-1), ∞).
Now, we have lim(x→0⁺) x ln x = 0. The curve cuts the X-axis at (1, 0). The plot of the curve is shown above.

(d) We have y = (ln x)/x, whose domain is x ∈ (0, ∞), and y' = (1 - ln x)/x^2
⇒ y strictly increases in (0, e); strictly decreases in (e, ∞). The curve cuts the X-axis at (1, 0). The plot of the curve is shown above.
(e) We have y = (x + 1)/[(x - 1)(x - 7)], whose domain is x ∈ R \ {1, 7}, with vertical asymptotes at x = 1 and x = 7.
The curve cuts the Y-axis at (0, 1/7). The curve cuts the X-axis at (-1, 0). The plot of the curve is shown below.

(f) We have a curve that is symmetrical about the X-axis as well as the Y-axis. In the first quadrant the equation of the curve reduces to y = 2^(-x) - 1/2, whose plot is shown above. The complete curve is drawn by taking the mirror image of the above shown curve in the X-axis and the Y-axis, as shown alongside.

Ex.32 Sketch a possible graph of a function f that satisfies the following conditions:
(i) f'(x) > 0 on (-∞, 1), f'(x) < 0 on (1, ∞)
(ii) f''(x) > 0 on (-∞, -2) and (2, ∞), f''(x) < 0 on (-2, 2)
(iii) lim(x→-∞) f(x) = -2, lim(x→∞) f(x) = 0

Sol. Condition (i) tells us that f is increasing on (-∞, 1) and decreasing on (1, ∞). Condition (ii) says that f is concave upward on (-∞, -2) and (2, ∞), and concave downward on (-2, 2). From condition (iii) we know that the graph of f has two horizontal asymptotes: y = -2 and y = 0.

We first draw the horizontal asymptote y = -2 as a dashed line (see Figure). We then draw the graph of f approaching this asymptote at the far left, increasing to its maximum point at x = 1 and decreasing toward the x-axis at the far right. We also make sure that the graph has inflection points when x = -2 and 2. Notice that we made the curve bend upward for x < -2 and x > 2, and bend downward when x is between -2 and 2.
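The Mean Value Theorem computation in Ex.25 above (f(x) = x^3 - x on [0, 2]) can be confirmed numerically. This is an aside, not part of the worked examples; it simply checks that the c solving f'(c) = [f(b) - f(a)]/(b - a) lies in (0, 2) and makes the tangent slope equal the secant slope:

```python
def f(x):
    return x**3 - x

def fprime(x):
    return 3 * x**2 - 1

a, b = 0.0, 2.0
secant_slope = (f(b) - f(a)) / (b - a)   # (6 - 0)/2 = 3

# Solve f'(c) = secant_slope, i.e. 3c^2 - 1 = 3, taking the root in (0, 2):
c = ((secant_slope + 1) / 3) ** 0.5
print(c)  # c = 2/sqrt(3) ≈ 1.1547

# The tangent slope at c matches the secant slope, as the theorem promises.
assert abs(fprime(c) - secant_slope) < 1e-9
assert a < c < b
```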
Archive for August, 2011

Perceived stretch from partial view of twist
Wednesday, August 31st, 2011

Here's a twist applied to a bar. But when the movie is cropped the viewer gets a sense that the bar is also being stretched, to the right. Putting a texture (here just the edges of the 3D model's mesh) helps perceive the true deformation: twist. It also helps reduce the perceived stretch effect on the original full-view movie.

svn get last commit message
Wednesday, August 31st, 2011

Here's a quick bash one-liner to get the last svn commit message:

svn log -q -v --xml --with-all-revprops -r committed | grep msg | sed -e "s/<msg>\([^<]*\)<\/msg>/\1/g"

Update: Not sure what the difference between "committed" and "head" is, but now "head" is giving me the correct message:

svn log -q -v --xml --with-all-revprops -r head | grep msg | sed -e "s/<msg>\([^<]*\)<\/msg>/\1/g"

Death Instinct
Sunday, August 28th, 2011

marcus rothkowitz died mark rothko
Friday, August 26th, 2011

Change matlab figure's grey background to white
Friday, August 26th, 2011

To change the default gray background of the matlab figure window to white issue:

set(gcf, 'Color', [1,1,1]);

Extract all images from a pdf as png files (at full resolution)
Friday, August 26th, 2011

Here's a two-liner to extract all the embedded color images in a pdf and convert them to png files. pdfimages extracts the images as ppm files. But I couldn't open these immediately on my mac with my favorite image editing tools, so I convert them with mogrify from the ImageMagick suite to png files.

pdfimages original.pdf ./extracted-images
mogrify -format png ./extracted-images*.ppm

and to get rid of the ppm files

rm ./extracted-images*.ppm

Pseudo-color a mesh using random colors for multiple (weight) functions
Tuesday, August 23rd, 2011

Often it's useful to visualize many functions on a mesh at the same time.
Here I assign each function a random pseudocolor; the color at each vertex of the mesh is then just a blend of the function colors weighted by their respective function values. This is especially useful if your functions are already weight functions themselves (defined between 0 and 1).

So, if your mesh is defined by V and F and you have a set of function values W (#V by #functions) defined on the vertices, then you can use a previously posted function for randomly generating a list of #functions colors. Putting it all together with a matrix multiplication you have:

trisurf([F],V(:,1),V(:,2),V(:,3),'FaceVertexCData',W*random_color(size(W,2),'Pastel'));

To make the rendering a little nicer you can use:

light('Position',[-1.0,-1.0,100.0],'Style','infinite');
lighting gouraud
axis equal
view(2)

Here's a visualization of bounded biharmonic weights on the Armadillo. And here's an easy way to visualize segmenting the mesh based on the maximum function value:

% get indices of max value for each row
[~,I] = max(W,[],2);
trisurf([F],V(:,1),V(:,2),V(:,3),'FaceVertexCData',sparse(1:size(W,1),I,1)*random_color(size(W,2),'Pastel'));

which produces something like:

Generate list of random colors, matlab
Tuesday, August 23rd, 2011

Here's a little matlab function to generate a list of random RGB colors.
Save it in random_color.m

function R = random_color(n,preset)
  % RANDOM_COLOR generate a list of random colors
  %
  % R = random_color(n)
  % or
  % R = random_color(n,preset)
  %
  % Inputs:
  %   n       size of matrix of random colors to generate
  %   preset  string containing colormap name or 'Pastel'
  % Outputs:
  %   R       n by 3 list of RGB colors
  %
  if(~exist('n','var'))
    n = 1;
  end
  if(~exist('preset','var'))
    preset = '';
  end
  if(strcmp(preset,''))
    R = rand([n,3]);
  else
    if(strcmp(preset,'Pastel'))
      attempt = 0;
      % probably should be between O(n) and O(log n)
      max_attempts = 10*prod(n);
      retry = true([prod(n),1]);
      R = zeros([prod(n),3]);
      while(attempt < max_attempts)
        num_retry = sum(retry);
        R(retry,:) = rand(num_retry,3);
        % normalize brightness
        R(retry,:) = (291/255) * R(retry,:)./repmat(sum(R(retry,:),2),1,3);
        next_retry = retry;
        % too saturated
        next_retry(retry) = ...
          (max(R(retry,:),[],2) - min(R(retry,:),[],2)) > (221/255);
        % too grey
        next_retry(retry) = ...
          next_retry(retry) | (std(R(retry,:),0,2) < (68/255));
        if(~any(next_retry))
          break;
        end
        retry = next_retry;
        attempt = attempt+1;
      end
      R = reshape(R,[n 3]) + 34/255;
    else
      error('Preset not supported');
    end
  end

Use the 'Pastel' preset to force it to produce colors that are a little easier to differentiate between, especially on a white background with black foreground (e.g. text, mesh lines, etc.)

Bounded biharmonic weights for real-time deformation SIGGRAPH Fast Forward
Friday, August 19th, 2011

I've uploaded our SIGGRAPH 2011 Fast Forward video to youtube. For those not familiar with the SIGGRAPH Technical Papers Fast Forward, it's a 2 hour event in which each paper has 50 seconds to present a teaser for its paper talk.

Getting Safari to play a short audio clip with HTML5's audio tag
Friday, August 19th, 2011

I had a really annoying time trying to get Safari to load and play a small audio clip (mp3) I'd posted. The clip is only 2 seconds.
Here's the HTML I was using:

<audio src="audio.mp3" autoplay preload="auto" controls loop>

But this resulted in nothing. Upon closer inspection I found out that the "onstalled" event was being fired, so I added an "onstalled" event handler to try to load the clip again. But this was to no avail; the "onstalled" event just fired each time, recursively.

In the end I gave up on Safari's ability to play/load small mp3 files. I'm not sure what the problem is, since QuickTime played the file fine. Also, if my html and audio.mp3 files lived locally, Safari played it correctly. I instead made use of HTML5's ability to specify fallback sources. For this I converted my mp3 file to an m4a.

First convert to wav with mplayer:

mplayer -quiet -vo null -vc dummy -ao pcm:waveheader:file="audio.wav" "audio.mp3"

Then convert to m4a with faac:

faac -o audio.m4a audio.wav

Finally use the .m4a file as a fallback source in the audio tag:
• System.Data.SqlClient — Contains classes for connecting to Microsoft SQL Server version 7.0 or higher
• System.Data.OleDb — Contains classes for connecting to a data source that has an OLE DB provider
• System.Data.Odbc — Contains classes for connecting to a data source that has an ODBC driver
• System.Data.OracleClient — Contains classes for connecting to an Oracle database server

NOTE It is expected that additional data provider-specific namespaces will be released over time. Microsoft has already released a separate set of classes for working with Microsoft SQL Server in the Compact Framework, and a separate set of classes for working with XML generated from SQL Server 2000. For more information on these additional namespaces, see the msdn.microsoft.com Web site. Oracle has also released their own namespace for working with Oracle databases. You can download the Oracle provider for .NET (ODP.NET) from the Oracle Web site.

Why did Microsoft duplicate these classes, creating different versions for different types of databases? By creating separate sets of classes, Microsoft was able to optimize the classes. For example, the OleDb classes use OLE DB providers to connect to a database. The SQL classes, on the other hand, communicate with Microsoft SQL Server directly on the level of the Tabular Data Stream (TDS) protocol. TDS is the low-level proprietary protocol used by SQL Server to handle client and server communication. By bypassing OLE DB and ODBC and working directly with TDS, you get dramatic performance benefits.

NOTE You can use the classes from the System.Data.OleDb namespace with Microsoft SQL Server. You might want to do so if you want your ASP.NET page to be compatible with any database. For example, you might want your page to work with both Microsoft SQL Server and Oracle. However, you lose all the speed advantages of the SQL- and Oracle-specific classes if you use the System.Data.OleDb namespace.
There are several ways you can create a new parameter and associate it with a Command. For example, the following two statements create and add a new parameter to the SqlCommand object:

cmdSelect.Parameters.Add( "@firstname", "Fred" )
cmdSelect.Parameters.Add( New SqlParameter( "@firstname", "Fred" ) )

These two statements are completely equivalent. Both statements create a new SqlParameter with the name @firstname and the value Fred and add the new parameter to the Parameters collection of the SqlCommand object.

Notice that we do not specify the SQL data type of the parameter in either statement. If you don't specify the data type, it is automatically inferred from the value assigned to the parameter. For example, since the value Fred is a String, the SQL data type NVarChar is inferred. In the case of an OleDbParameter, the data type VarWChar would be automatically inferred.

In some cases, you'll want to explicitly specify the data type of a parameter. For example, you might want to explicitly create a VarChar parameter instead of an NVarChar parameter. To do this, you can use the following statement:

cmdSelect.Parameters.Add( "@lname", SqlDbType.VarChar ).Value = "Johnson"

This statement specifies the SQL data type of the parameter by using a value from the SqlDbType enumeration. The SqlDbType enumeration is located in the System.Data namespace. Each of its values corresponds to a SQL data type. In the case of an OleDbParameter, you would use a value from the OleDbType enumeration like this:

cmdSelect.Parameters.Add( "@lname", OleDbType.VarChar ).Value = "Johnson"

The OleDbType enumeration can be found in the System.Data.OleDb namespace. Finally, you can specify the maximum size of a database parameter by using the following statement:

cmdSelect.Parameters.Add( "@lname", SqlDbType.VarChar, 15 ).Value = "Johnson"

This statement creates a parameter named @lname with a column size of 15 characters.
If you don't explicitly specify the maximum size of a parameter, the size is automatically inferred from the value assigned to the parameter.
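The same pattern (named parameters whose types are inferred from the values you bind) appears in most database APIs, not only ADO.NET. As an aside, not part of the original article, here is the equivalent idea in Python's built-in sqlite3 module:

```python
import sqlite3

# In-memory database purely for demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (firstname TEXT, lastname TEXT)")
conn.execute("INSERT INTO users VALUES ('Fred', 'Johnson')")

# Named parameter binding: the bound Python value determines the stored
# type, much as an unspecified SqlDbType is inferred from the value.
cur = conn.execute(
    "SELECT lastname FROM users WHERE firstname = :firstname",
    {"firstname": "Fred"},
)
row = cur.fetchone()
print(row[0])  # prints "Johnson"
```

As with ADO.NET parameters, binding values this way also avoids SQL injection, since the value is never spliced into the query text.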
# Fraction (mathematics)

File:Cake-quarters.jpg A cake divided into four equal quarters. Each fraction of the cake is represented numerically as 1/4. It can be seen that two quarters (2 × 1/4 = 2/4) is equivalent to half (1/2) the cake.

In mathematics, a fraction is a way of expressing a quantity based on an amount that is divided into a number of equal-sized parts. For example, each part of a cake split into four equal parts is called a quarter (and represented numerically as 1/4); two quarters is half the cake, and eight quarters would make two cakes.

Mathematically, a fraction is a quotient of numbers, like 3/4, or more generally, an element of a quotient field.

In our cake example above, where a quarter is represented numerically as 1/4, the bottom number, called the denominator, is the total number of equal parts making up the cake as a whole, and the top number, called the numerator, is the number of these parts we have. For example, the fraction 3/4 represents three quarters. The numerator and denominator may be separated by a slanting line, or may be written above and below a horizontal line.

The numerator and denominator are the "terms" of the fraction. The word "numerator" is related to the word "enumerate," meaning to "tell how many"; thus the numerator tells us how many parts we have in the indicated fraction. To denominate means to "give a name" or "tell what kind"; thus the denominator tells us what kind of parts we have (halves, thirds, fourths, etc.). Note that because it is impossible to divide something into zero equal parts, zero can never be the denominator of a fraction.

The word is also used in related expressions, like continued fraction and algebraic fraction; see Special cases below.

## Forms of fractions

### Proper and improper fractions

If the numerator and denominator of a fraction are both positive, then the fraction is a proper fraction if the numerator is less than the denominator, but an improper fraction otherwise.
If either the numerator or denominator (or both) are negative, their absolute values should be compared to determine whether the fraction is proper or improper.

### Mixed numbers

A mixed number is the sum of a whole number and a proper fraction. For instance, you could have two entire cakes and three quarters of another cake. The whole and fractional parts of the number are written right next to each other: 2 + 3/4 = 2 3/4.

An improper fraction can be thought of as another way to write a mixed number; in the "2 3/4" example above, imagine that the two entire cakes are each divided into quarters. Each entire cake contributes 4/4 to the total, so 4/4 + 4/4 + 3/4 = 11/4 is another way of writing 2 3/4.

A mixed number can be converted to an improper fraction in three steps:
1. Multiply the whole part by the denominator of the fractional part.
2. Add the numerator of the fractional part to that product.
3. The resulting sum is the numerator of the new (improper) fraction, and the new denominator is the same as that of the mixed number.

Similarly, an improper fraction can be converted to a mixed number:
1. Divide the numerator by the denominator.
2. The quotient (without remainder) becomes the whole part and the remainder becomes the numerator of the fractional part.
3. The new denominator is the same as that of the original improper fraction.

### Equivalent fractions

Multiplying the numerator and denominator of a fraction by the same (non-zero) number results in a new fraction that is said to be equivalent to the original fraction. The word equivalent means that the two fractions have the same value. For instance, consider the fraction 1/2. When the numerator and denominator are both multiplied by 2, the result is 2/4, which has the same value as 1/2. To see this, imagine cutting the example cake into four pieces; two of the pieces together (2/4) make up half the cake (1/2). We can say, for example, that 1/3, 2/6, 3/9, and 100/300 are all equivalent fractions.
Dividing the numerator and denominator of a fraction by the same non-zero number will also yield an equivalent fraction. We call this reducing the fraction. A fraction in which the numerator and denominator have no factors in common (other than 1) is said to be irreducible or in lowest terms. For instance, 3/9 is not in lowest terms because both 3 and 9 can be evenly divided by 3. In contrast, 3/8 is in lowest terms — the only number that's a factor of both 3 and 8 is 1.

### Reciprocals and the "invisible denominator"

The reciprocal of a fraction is another fraction with the numerator and denominator swapped. The reciprocal of 3/7, for instance, is 7/3. Because any number divided by 1 results in the same number, it is possible to write any whole number as a fraction by using 1 as the denominator: 17 = 17/1. (We sometimes call the number 1 the "invisible denominator.") Therefore, we can say that, except for zero, every fraction or whole number has a reciprocal. The reciprocal of 17 would be 1/17.

## Arithmetic with fractions

Fractions, like whole numbers, obey the commutative, associative, and distributive laws, and the rule against division by zero.

Adding fractions can be a little tricky, since you cannot simply add the numerators and denominators. For example, if we had a cake divided into three pieces, each piece would be 1/3. Then, if we try to add one piece, 1/4, from the cake divided into four pieces, and one piece, 1/3, from the cake divided into three pieces, what would we have? Would we have 1/4 + 1/3 = ??? You can see this is NOT equal to 1/7 or 2/7!

To add fractions together, they must be changed to equivalent values having the same fractional unit -- the same denominator -- in this case 1/12. How do we do this? By multiplying each fraction by 1. By one? Yes: 1 = 3/3 and 1 = 4/4. Now watch: 1/4 = 1/4 × 1 = 1/4 × 3/3 = 3/12, and 1/3 = 1/3 × 1 = 1/3 × 4/4 = 4/12. So now 1/4 + 1/3 = 3/12 + 4/12 = 7/12, and we have the correct result.
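The common-denominator computation just worked through can be checked with Python's standard-library fractions module, which finds common denominators and reduces results automatically (an aside, not part of the article):

```python
from fractions import Fraction

# 1/4 + 1/3: the module finds the common denominator (12) for us.
total = Fraction(1, 4) + Fraction(1, 3)
print(total)  # 7/12

# The "multiply by 1" trick written out: 3/12 + 4/12 gives the same value,
# because 3/12 and 4/12 are equivalent to 1/4 and 1/3.
assert Fraction(3, 12) == Fraction(1, 4)
assert Fraction(4, 12) == Fraction(1, 3)
assert Fraction(3, 12) + Fraction(4, 12) == total
```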
Notice that we only add the numerators together. The denominator does not change, since we are working with the same fractional unit. Another way to see this is: 1/4 + 1/3 = 3/12 + 4/12 = 1/12 × (3 + 4) = 1/12 × 7 = 7/12.

Let's take another example. If you add a half dollar to a quarter, what will you get? You know it's 75 cents, right? When we say 75 cents we have automatically, in our mind, converted each coin into cents (pennies): one half dollar = 50 cents; one quarter = 25 cents; so 1/2 + 1/4 = 50/100 + 25/100 = 75/100, or 75 cents. Of course, we could use a smaller denominator since we know one half dollar equals two quarters. That is, 1/2 + 1/4 = 2/4 + 1/4 = 3/4. In words, one half plus one quarter equals two quarters plus one quarter equals three quarters, or 75 cents.

So the trick is to find a common fractional unit -- a common denominator -- that will let us simply add the numerators together. Let's take one more example. Find 2/3 + 1/2. We see that the denominators are 3 and 2. We need to find a value that each denominator can be multiplied by to give a common value. It's easy to see that we can multiply 3 by 2, and 2 by 3, to give a common denominator of 6. But remember, you cannot change the value of each fraction, so we must multiply both numerator and denominator by the same number. We now have: 2/3 + 1/2 = 2/2 × 2/3 + 3/3 × 1/2 = 4/6 + 3/6 = 7/6, or 1 + 1/6.

When doing arithmetic with fractions, results should usually be expressed in lowest terms. For instance, 1/6 + 1/3 = 1/6 + 2/6 = 3/6 = 1/2. Note that 3/6 is not an incorrect answer, because 3/6 and 1/2 are equivalent, but the reduced form is preferred, and classroom exercises will nearly always require that final fractional answers to problems be reduced.

#### Subtracting fractions

The process for subtracting fractions is, in essence, the same as that of adding them: find a common denominator, and change each fraction to an equivalent fraction with the chosen common denominator.
The resulting fraction will have that denominator, and its numerator will be the result of subtracting the numerators of the original fractions. For instance, 2/3 - 1/2 = 2/2 × 2/3 - 3/3 × 1/2 = 4/6 - 3/6 = 1/6.

### Multiplication and division

#### Multiplication

##### By whole numbers

If you consider the cake example above, if you have a quarter of the cake and you multiply the amount by three, then you end up with three quarters. We can write this numerically as follows:

$3\times {1 \over 4}={3 \over 4}$

As another example, suppose that five people work for three hours out of a seven-hour day (i.e., for three sevenths of the work day). In total, they will have worked for 15 hours (5 × 3 hours each), or 15 sevenths of a day. Since 7 sevenths of a day is a whole day and 14 sevenths is two days, in total they will have worked for 2 days and a seventh of a day. Numerically:

$5\times {3 \over 7}={15 \over 7}=2{1 \over 7}$

##### By fractions

If you consider the cake example above, if you have a quarter of the cake and you multiply the amount by a third, then you end up with a twelfth of the cake. In other words, a third of a quarter (or a third times a quarter) is a twelfth. Why? Because we are splitting each quarter into three pieces, and four quarters times three makes 12 parts (or twelfths). We can write this numerically as follows:

${1 \over 3}\times {1 \over 4}={1 \over 12}$

As another example, suppose that five people do an equal amount of work that totals three hours out of a seven-hour day. Each person will have done a fifth of the work, so they will have worked for a fifth of three sevenths of a day. Numerically:

${1 \over 5}\times {3 \over 7}={3 \over 35}$

##### General rule

You may have noticed that when we multiply fractions, we simply multiply the two numerators (the top numbers) and multiply the two denominators (the bottom numbers).
For example: ${5 \over 6}\times {7 \over 8}={5\times 7 \over 6\times 8}={35 \over 48}$

##### By mixed numbers

When multiplying mixed numbers, it's best to convert the whole part of the mixed number into a fraction. For example: $3\times 2{3 \over 4}=3\times \left({{8 \over 4}+{3 \over 4}}\right)=3\times {11 \over 4}={33 \over 4}=8{1 \over 4}$ In other words, $2{3 \over 4}$ is the same as $\left({{8 \over 4}+{3 \over 4}}\right)$, making 11 quarters in total (because 2 cakes, each split into quarters, make 8 quarters in total). And 33 quarters is $8{1 \over 4}$, since 8 cakes, each made of quarters, is 32 quarters in total.

#### Division

To divide by a fraction, simply multiply by the reciprocal of that fraction. $5\div {1 \over 2}=5\times {2 \over 1}=5\times 2=10$ ${2 \over 3}\div {2 \over 5}={2 \over 3}\times {5 \over 2}={10 \over 6}={5 \over 3}$ About 4,000 years ago Egyptians divided with fractions using slightly different methods. Egyptians used the least common multiple technique to divide unit fractions. Examples can be found at

## Special cases

A vulgar fraction (or common fraction) is a rational number written as one integer (the numerator) divided by a non-zero integer (the denominator), for example 4/3 as opposed to the mixed number 1 1/3. The line that separates the numerator and the denominator is called the vinculum if it is horizontal, and a solidus if it is slanting. A unit fraction is a vulgar fraction with a numerator of 1 (e.g. 1/7). An Egyptian fraction is a sum of distinct unit fractions (e.g. 1/3 + 1/5). A decimal fraction is a vulgar fraction whose denominator is a power of 10 (e.g. 4/100). A dyadic fraction is a vulgar fraction whose denominator is a power of two (e.g. 1/8). A compound fraction is a fraction whose numerator or denominator (or both) contains fractions, such as ${\frac {2}{3}}{\Bigg /}{\frac {1}{5}}$; these can be simplified to give vulgar fractions.
An expression that has the form of a fraction, but actually represents division by or into an irrational number, might be called an "irrational fraction" (an oxymoron). A common example is π/2, the radian measure of a right angle. Rational numbers are the quotient field of the integers. Rational functions are functions evaluated in the form of a fraction, where the numerator and denominator are polynomials. These rational expressions are the quotient field of the polynomials (over some integral domain). A continued fraction is an expression such as $a_{0}+{\frac {1}{a_{1}+{\frac {1}{a_{2}+...}}}}$, where the $a_i$ are integers. This is not an element of a quotient field. The term partial fraction is used in algebra when decomposing rational functions. The goal of the method of partial fractions is to write rational functions as sums of other rational functions with denominators of lesser degree.

## Pedagogical tools

In primary schools, fractions have been demonstrated using Cuisenaire rods.
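The arithmetic rules developed above (common denominators for addition and subtraction, numerator-times-numerator and denominator-times-denominator for multiplication, multiplying by the reciprocal for division) can be checked with a short sketch using Python's standard-library `fractions.Fraction` type, which stores exact vulgar fractions and automatically reduces results to lowest terms:

```python
from fractions import Fraction

# addition: 2/3 + 1/2 = 4/6 + 3/6 = 7/6
assert Fraction(2, 3) + Fraction(1, 2) == Fraction(7, 6)

# results come out reduced to lowest terms: 1/6 + 1/3 = 3/6 = 1/2
assert Fraction(1, 6) + Fraction(1, 3) == Fraction(1, 2)

# subtraction with a common denominator: 2/3 - 1/2 = 4/6 - 3/6 = 1/6
assert Fraction(2, 3) - Fraction(1, 2) == Fraction(1, 6)

# multiplication: multiply numerators and denominators (5/6 x 7/8 = 35/48)
assert Fraction(5, 6) * Fraction(7, 8) == Fraction(35, 48)

# division: multiply by the reciprocal (2/3 divided by 2/5 = 2/3 x 5/2 = 5/3)
assert Fraction(2, 3) / Fraction(2, 5) == Fraction(5, 3)
```

Because `Fraction` keeps numerator and denominator as exact integers, it avoids the rounding that decimal (floating-point) arithmetic would introduce for values like 1/3.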
# How to uniquely associate a directed graph with a feedforward neural network? I want to write an algorithm that returns a unique directed graph (an adjacency matrix) that represents the structure of a given feedforward neural network (FNN). My idea is to deconstruct the FNN into the input vector and some nodes (see definition below), and then draw those as vertices, but I do not know how to do so in a unique way. Question: Is it possible to construct such an algorithm, and if so, how would you formalize it? Example [Shallow Feedforward Neural Network (SNN)] To illustrate the problem, consider an SNN, defined as a mapping $$f=\left(f_1(\mathbf{x}), \ldots, f_m(\mathbf{x})\right): \mathbb{R}^n\rightarrow\mathbb{R}^m$$ where for $$k=1,\ldots,m$$ \begin{align} f_k(\mathbf{x}) &= \sum_{j=1}^{\ell} w_{j,k}^{(2)} \rho \left( \sum_{i=1}^n w_{i,j}^{(1)} x_i + w_{0,j}^{(1)} \right) + w_{0,k}^{(2)}, \quad \mathbf{x}=(x_1,\ldots,x_n)\in\mathbb{R}^n \end{align} and $$w_{i,j}^{(k)}\in\mathbb{R}$$ is fixed for all $$i,j,k \in \mathbb{N}$$ and $$\rho:\mathbb{R}\rightarrow\mathbb{R}$$ is a continuous mapping. I want to determine the nodes that make up the FNN, where a node $$N^{\rho}: \mathbb{R}^n\rightarrow\mathbb{R}$$ is defined as a mapping \begin{align} \label{eq:node} && \quad && N^{\rho}(\mathbf{x}) &= \rho\left(\sum_{i=1}^n w_i x_i + w_0 \right), & \mathbf{x}=(x_1,\ldots,x_n)\in\mathbb{R}^n \end{align} where $$\mathbf{w}=(w_0, \ldots,w_n)\in\mathbb{R}^{n+1}$$ is fixed. Clearly (to me) I can write each $$f_k$$ as \begin{align} f_k(\mathbf{x}) &= \sum_{j=1}^{\ell} w_{j,k}^{(2)} N^{\rho}_j(\mathbf{x}) + w_{0,k}^{(2)}, \end{align} where $$N^{\rho}_{j}$$ is a node for $$j=1,\ldots,\ell$$. Now I see that $$f_k$$ is a node which takes as input the output of other nodes. But how can I formalise this in an algorithm? And does it generalize to Deep Feedforward Neural Networks? • I'm not sure what you're asking here. 
You say the output of the algorithm should be a graph, while the inputs are FNNs. But isn't a neural network already a graph? Do you want the algorithm to produce, for example, an adjacency matrix? – nbro Dec 15 '21 at 13:19 • The input should be a function (specifically an FNN) and the output should be a set of vertices and edges. My problem is that I informally consider neural networks as graphs, but in my project, I have defined neural networks as a class of functions. Now I want to formally associate a graph with a given neural network. Dec 15 '21 at 13:34 • What should the edges be? Should they just represent the connectivity? So, are you effectively looking for an adjacency matrix that represents the neural network's connectivity between the neurons? Or maybe the edges should have weights? If yes, which ones? – nbro Dec 15 '21 at 13:38 • Exactly, the edges should just represent connectivity, so an adjacency matrix is sufficient. Dec 15 '21 at 13:39 I think you can do this in multiple ways. The easiest algorithm that comes to my mind right now produces a sparse (which is also some kind of block matrix) $$N \times N$$ adjacency matrix for a typical MLP/FFN with a total of $$N$$ neurons (including input and output neurons), where each neuron $$n_l^k$$ at layer $$l$$ has a directed edge that goes into all neurons at layer $$l+1$$. This is the algorithm. 1. Create an $$N \times N$$ matrix $$G \in \{0, 1\}^{N \times N}$$ with zeros • Comment 1: $$G_{ij}$$ is the element of the matrix at row $$i$$ and column $$j$$. • Comment 2: Indices $$i$$ and $$j$$ start at $$1$$ and end at $$N$$ • Comment 3: if we set $$G_{ij} = 1$$, then there's a directed edge from neuron $$i$$ to neuron $$j$$ (but not necessarily vice-versa: for that to be true, we would also need $$G_{ji} = 1$$) • Comment 4: we need to create a mapping between the indices $$i$$ and $$j$$ and the neurons in the neural network; this is done below! 2. Let $$c(l)$$ be the number of neurons at layer $$l$$ 3. 
For each layer $$l = 0, \dots, L - 1$$ • Comment 5: $$l = 0$$ is the input layer and $$L$$ is the output layer 1. For $$k=1, \dots, c(l)$$ • Comment 6: for example, $$n_l^k = n_2^3$$ is the third neuron at the first hidden layer 1. Let $$M = \sum_{h=0}^{l-1} c(h)$$ • Comment 7: $$M$$ is the number of neurons processed so far in the previous layers (excluding the neurons in the current layer) 2. $$i = k + M$$ 3. For $$t = 1, \dots, c(l+1)$$ 1. $$j = t + c(l) + M$$ • $$j$$ is basically the index of the graph $$G$$ that corresponds to the neuron $$t$$ in the next layer $$l+1$$ 2. Set $$G_{ij} = 1$$ 4. Return the matrix $$G$$ The time complexity of this algorithm should roughly be $$\mathcal{O}(L* {\max_l c(l)}^2)$$. So, for example, for a neural network with 3 layers, 2 inputs, 5 hidden neurons, and 2 outputs, what would be the number of operations? • Ah, I see the idea! To directly associate the FFN $f$ with the graph, I would probably not count the neurons (since they do not appear in the formulation of $f$) but instead, count the non-zero weights (since I only want to draw edges if the weights are non-zero). Perhaps my problem formulation was unclear. To answer your question, I guess the number of operations would be 20 (not counting the calculation of $M$, $i$, and $j$). Dec 15 '21 at 16:42 • The algorithm above can also be applied to the case of non-zero weights (although note that maybe you will not have many exactly zero weights, with real-valued weights). Rather than setting $G_{ij}=1$ in all cases, you also check the weight of the connection. Of course, as you said, the number of neurons may not be in the formulation of the neural network, but this information should be retrievable from the implementation of the layers. – nbro Dec 15 '21 at 16:48 • Exactly, it is just minor adjustments. Regarding the note on non-zero weights: To clarify, I am writing a theoretical project on neural networks, so it does not matter if they are exactly zero. 
Thanks for the algorithm! Dec 15 '21 at 16:55
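For concreteness, the layer-by-layer algorithm from the answer can be sketched in plain Python. The function name `ffn_adjacency` and the list-of-layer-sizes input are illustrative choices, not from the thread:

```python
def ffn_adjacency(layer_sizes):
    """Adjacency matrix of a fully connected feedforward network.

    layer_sizes[l] is c(l), the number of neurons at layer l (input layer
    included).  G[i][j] = 1 means a directed edge from neuron i to neuron j,
    with neurons numbered layer by layer as in the answer's index mapping.
    """
    n = sum(layer_sizes)               # N: total number of neurons
    G = [[0] * n for _ in range(n)]    # step 1: N x N matrix of zeros
    M = 0                              # neurons processed in previous layers
    for l in range(len(layer_sizes) - 1):
        for k in range(layer_sizes[l]):          # neuron k in layer l
            i = M + k
            for t in range(layer_sizes[l + 1]):  # neuron t in layer l+1
                j = M + layer_sizes[l] + t
                G[i][j] = 1
        M += layer_sizes[l]
    return G
```

For the network mentioned at the end of the answer (2 inputs, 5 hidden neurons, 2 outputs), `ffn_adjacency([2, 5, 2])` returns a 9×9 matrix containing 2·5 + 5·2 = 20 ones, matching the edge count of 20 discussed in the comments. Skipping zero-weight connections, as suggested, would just add a weight check before setting `G[i][j] = 1`.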
# Is Helmholtz decomposition inherently a non-local operation? Helmholtz decomposition, the process for splitting a vector field into parts which have vanishing divergence and curl, plays a central role in our ability to quantize the electromagnetic field because it permits us to separate the gauge invariant solenoidal part of the vector potential, $\mathbf{A}_{\mathrm{sol}}$, from its gauge dependent irrotational part, $\mathbf{A}_{\mathrm{irrot}}$. The formulae for these components of the field can be defined from the Wikipedia article as \begin{align} \mathbf{A}_{\mathrm{irrot}}(\mathbf{x}) &= -\frac{\nabla}{4\pi} \int_V \frac{\nabla' \cdot \mathbf{A}(\mathbf{x}')}{\left|\mathbf{x}-\mathbf{x}'\right|} \operatorname{d}V' + \frac{\nabla}{4\pi} \oint_{\partial V} \hat{n}'\cdot \frac{\mathbf{A}(\mathbf{x}')}{\left|\mathbf{x}-\mathbf{x}'\right|} \operatorname{d}S' \\ &= -\frac{\nabla}{4\pi} \int_V \mathbf{A}(\mathbf{x}') \cdot \nabla \frac{1}{\left|\mathbf{x}-\mathbf{x}'\right|} \operatorname{d}V',\ \mathrm{and}\\ \mathbf{A}_{\mathrm{sol}}(\mathbf{x}) &= \mathbf{A}(\mathbf{x}) - \mathbf{A}_{\mathrm{irrot}}(\mathbf{x}). \end{align} Now, for a field that already satisfies $\nabla\times\mathbf{A}=0$ the first operator is the identity and the second is $0$, which both act locally, and the same is true for the swapped versions. Equivalently, if a field is either solenoidal or irrotational then that can be determined locally by application of the divergence and curl and seeing which one is zero. Given a mixed field, though, the operator that constructs its irrotational component looks highly non-local, especially if $V$ is expanded to cover all of space. Can this determination be made locally? That is, does going the other direction with $V$, towards zero size by, say, partitioning $V$ into compact sub-volumes, work? My instinct is to say that the partition is inherently non-local because of the existence of magnetic scalar potential techniques. 
• I don't understand what you mean by "inherently" nonlocal, or what sort of evidence would be needed to show it. – Emilio Pisanty Dec 24 '17 at 20:01 • @EmilioPisanty No way to do it with an integration kernel that has support on a set of measure $0$ (i.e. $\delta(\mathbf{x}-\mathbf{x}')$ and a finite number of derivatives thereon). – Sean E. Lake Dec 24 '17 at 20:04 • More on Helmholtz decomposition. – Qmechanic Dec 25 '17 at 13:01 Yes, it is non-local. A useful way to think about the decomposition is that $\mathbf{A}_{\text{irrot}}$ is the gradient vector field closest to $\mathbf{A}$, with the closeness of two fields measured by the average norm-squared difference of the vectors in the fields. This averaging operation makes the decomposition non-local. For example, if we have a vector field $$\mathbf{A}(x,y) = \begin{bmatrix} 0 \\ x \end{bmatrix},$$ then the closest gradient is the constant field equal to $\mathbf{A}$ at the center of the domain, and thus is not locally calculable from $\mathbf{A}$.
was a minimal-working-example of the principle. – RLH Jan 2 '18 at 7:14 Yes, Helmholtz decomposition is inherently a non-local process because it is only unambiguous when the integrals are performed over all of a simply connected space that contains all relevant sources of the field (where a "source" is defined as any region in the field where the divergence or curl is non-zero). As the usage of the magnetic scalar potential demonstrates, it fails on sub-spaces because of the ambiguity inherent in the classification. In standard Helmholtz decomposition, divergenceless and solenoidal are taken as synonymous, likewise with divergent and irrotational (curl free). Fields that have both zero divergence and zero curl are assumed to be excluded by the boundary condition at infinity that the fields vanish there. When you limit your considerations to finite sized regions, though, the assumption breaks down because a source that is outside of your bounded region can produce a field that is both divergenceless and irrotational everywhere in the region of interest. For that reason, local categorization is only unambiguous if it is done via positive properties. In other words, a vector field can be locally described unambiguously by breaking it down into three components: \begin{align} \mathbf{F}(\mathbf{x}) & = \mathbf{F}_{\mathrm{div}}(\mathbf{x}) + \mathbf{F}_{\mathrm{curl}}(\mathbf{x}) + \mathbf{F}_{\mathrm{har}}(\mathbf{x}),\ \mathrm{where} \\ \nabla\cdot\mathbf{F}_{\mathrm{div}}(\mathbf{x}) &\neq 0\ \quad\mathrm{somewhere}, \\ \nabla\times\mathbf{F}_{\mathrm{curl}}(\mathbf{x}) &\neq 0\ \quad\mathrm{somewhere}, \\ \nabla\cdot \mathbf{F}_{\mathrm{har}}(\mathbf{x}) & = 0\ \quad\mathrm{everywhere},\ \mathrm{and} \\ \nabla\times \mathbf{F}_{\mathrm{har}}(\mathbf{x}) & = 0\ \quad\mathrm{everywhere}. 
\end{align} The subscript of $\mathbf{F}_{\mathrm{har}}(\mathbf{x})$ is meant to be short for "harmonic" in analogy to harmonic functions, because it can always be expressed in the region of interest as the negative gradient of a harmonic function. You could also call the harmonic term $\mathbf{F}_{\mathrm{ext}}(\mathbf{x})$, since it can also be modeled as being produced by sources external to the region of interest (when that region has finite size). In this categorization, the reconstruction would work like this: \begin{align} \mathbf{F}_{\mathrm{div}}(\mathbf{x}) & = \frac{1}{4\pi} \int_V \frac{\mathbf{x}-\mathbf{x}'}{\left|\mathbf{x}-\mathbf{x}'\right|^3} \nabla'\cdot \mathbf{F}(\mathbf{x}') \operatorname{d}V' \\ \mathbf{F}_{\mathrm{curl}}(\mathbf{x}) & = \frac{1}{4\pi} \int_V \nabla\times\frac{\nabla'\times \mathbf{F}(\mathbf{x}')}{|\mathbf{x}-\mathbf{x}'|} \operatorname{d}V' \\ \mathbf{F}_{\mathrm{har}}(\mathbf{x}) & = \mathbf{F}(\mathbf{x}) - \mathbf{F}_{\mathrm{div}}(\mathbf{x}) - \mathbf{F}_{\mathrm{curl}}(\mathbf{x}). \end{align} What this exercise makes clear is that if you try to take the limit as $V\rightarrow 0$, the region will either have some kind of delta function source that leaves the field ill defined there (e.g. point charge, line charge, or sheet charge), or the integrals that define the divergent and solenoidal components vanish, leaving only a locally harmonic field. Thus, the classification is only useful when done in regions of finite size, and it is only unambiguous when the region of interest is the entire possible space. That is why the Wikipedia formulae quoted in the question do not have an ambiguity in their derivation: they have an unstated assumption that no sources beyond the boundary contribute to the local field. As an example, consider a uniform field in the region of interest. Was it produced by one (or more) infinite charged sheet(s) (divergent source), or infinite current sheet(s) (solenoidal source)?
Assuming the region of interest is a cube with faces aligned parallel and perpendicular to the field, the Wikipedia sources would have a combination of charged sheets on the faces the field pierces, and a current sheet circulating around the other four. Point being, the process is only unambiguous when sources are only allowed in a closed region of interest. • Sorry, but I really don't see how this answer meets the extremely high bar you've set for proof in the question - or indeed how it answers the OP's core concern at all. – Emilio Pisanty Dec 29 '17 at 19:25 • @EmilioPisanty Admittedly, showing a counter-example, if it exists, would be trivial, and part of why I set the bar high was because I thought a counter-example could exist. The rest comes down to the fact that I presented an argument that could serve as the outline to a more rigorous proof, and not an actual proof. The argument covers the inherent ambiguity of when the influence of outside sources impinges on the region of interest, and what happens to the classification scheme when one attempts to make a local classification ($V\rightarrow 0$). – Sean E. Lake Dec 29 '17 at 19:38 • Sorry but I just don't see it - I don't see anything here that begins to address the enormous set of possible local decompositions, let alone show that they are unsuitable for even one example. $-1$ for the misleading use of the accept mark (though nothing here is explicitly wrong - it's just not an answer to the question as you posed it). – Emilio Pisanty Dec 29 '17 at 19:43 • @EmilioPisanty Provide a more rigorous answer, and I'll accept it. As it is, the main thing I was looking for was showing that the $V\rightarrow 0$ attempt with the equations given falls into ambiguity taken with a real situation where outside sources are allowed. – Sean E.
Lake Dec 29 '17 at 19:48 • I can't provide a more rigorous answer, but I am indeed interested in one if it exists; the acceptance makes the posting of such an answer less likely - hence the downvote. Otherwise, I have little more to add here. – Emilio Pisanty Dec 29 '17 at 19:52
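As a numerical illustration of the decomposition (a sketch, not a method from the thread): on a periodic 2D box the split can be computed spectrally, by projecting each Fourier mode of $\mathbf{A}$ onto its wavevector. The FFT couples every grid point, so the operation is global, consistent with the non-locality argued above. The function name and the square, $2\pi$-periodic grid are assumptions of this sketch:

```python
import numpy as np

def helmholtz_split(Ax, Ay):
    """Split a periodic 2D vector field (on a square, 2*pi-periodic grid)
    into irrotational + solenoidal parts via a spectral projection.

    In Fourier space the irrotational part of each mode is k (k . A_hat)/|k|^2;
    the FFT makes this a global (non-local) operation on the grid.
    """
    n = Ax.shape[0]
    k = np.fft.fftfreq(n) * n          # integer angular wavenumbers on [0, 2*pi)
    KX, KY = np.meshgrid(k, k)
    K2 = KX**2 + KY**2
    K2[0, 0] = 1.0                     # k = 0 (mean) mode: no curl, no divergence
    Axh, Ayh = np.fft.fft2(Ax), np.fft.fft2(Ay)
    proj = (KX * Axh + KY * Ayh) / K2  # (k . A_hat) / |k|^2 per mode
    irr_x = np.real(np.fft.ifft2(KX * proj))
    irr_y = np.real(np.fft.ifft2(KY * proj))
    return (irr_x, irr_y), (Ax - irr_x, Ay - irr_y)
```

For a pure gradient field such as $\nabla(\cos x\cos y)$ the solenoidal output vanishes to machine precision. A constant field (both curl- and divergence-free, the harmonic case discussed above) sits entirely in the $k=0$ mode, which this projection assigns to the solenoidal output — one concrete face of the ambiguity the answers describe.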
# Give a basis to get the specific matrix M #### mathmari ##### Well-known member MHB Site Helper Hey!! We have the following linear maps \begin{align*}\phi_1:\mathbb{R}^2\rightarrow \mathbb{R}^2, \ \begin{pmatrix}x\\ y\end{pmatrix} \mapsto \begin{pmatrix}x+y\\ x-y\end{pmatrix} \\ \phi_2:\mathbb{R}^2\rightarrow \mathbb{R}^2, \ \begin{pmatrix}x\\ y\end{pmatrix} \mapsto \begin{pmatrix}-y\\ x\end{pmatrix} \\ \phi_3:\mathbb{R}^2\rightarrow \mathbb{R}^2, \ \begin{pmatrix}x\\ y\end{pmatrix} \mapsto \begin{pmatrix}y\\ 0\end{pmatrix} \end{align*} 1. Give (if possible) for each $i\in \{1,2,3\}$ a basis $B_i$ of $\mathbb{R}^2$ such that $M_{B_i}(\phi_i)$ is an upper triangular matrix. 2. Give (if possible) for each $i\in \{1,2,3\}$ a basis $B_i$ of $\mathbb{R}^2$ such that $M_{B_i}(\phi_i)$ is a diagonal matrix. I have done the following: Let $\mathcal{B}_i=\{b_1, b_2\}$, with $b_1=\begin{pmatrix}x_1\\ y_1 \end{pmatrix}$ and $b_2=\begin{pmatrix}x_2\\ y_2 \end{pmatrix}$. For question 1 : - It holds that \begin{equation*}\mathcal{M}_{\mathcal{B}_1}(\phi_1)=\left (\phi_1(b_1)\mid \phi_1(b_2)\right )=\left (\phi_1\begin{pmatrix}x_1\\ y_1 \end{pmatrix}\mid \phi_1\begin{pmatrix}x_2\\ y_2 \end{pmatrix}\right )=\begin{pmatrix}x_1+y_1 & x_2+y_2 \\ x_1-y_1 & x_2-y_2\end{pmatrix}\end{equation*} For this to be an upper triangular matrix, it must hold that $x_1-y_1=0$. Then we have that $x_1=y_1$. Then we have for example such a basis $\mathcal{B}_1=\{b_1, b_2\}$, with $b_1=\begin{pmatrix}1\\ 1 \end{pmatrix}$ and $b_2=\begin{pmatrix}1\\ 0 \end{pmatrix}$. These vectors are linearly independent and the matrix \begin{equation*}\mathcal{M}_{\mathcal{B}_1}(\phi_1)=\begin{pmatrix}2 & 1 \\ 0 & 1\end{pmatrix}\end{equation*} is an upper triangular matrix.
- It holds that \begin{equation*}\mathcal{M}_{\mathcal{B}_2}(\phi_2)=\left (\phi_2(b_1)\mid \phi_2(b_2)\right )=\left (\phi_2\begin{pmatrix}x_1\\ y_1 \end{pmatrix}\mid \phi_2\begin{pmatrix}x_2\\ y_2 \end{pmatrix}\right )=\begin{pmatrix}-y_1 & -y_2 \\ x_1 & x_2\end{pmatrix}\end{equation*} So that it is an upper triangular matrix, it must be $x_1=0$. Then we have for example such a basis $\mathcal{B}_2=\{b_1, b_2\}$, with $b_1=\begin{pmatrix}0\\ 1 \end{pmatrix}$ and $b_2=\begin{pmatrix}1\\ 1 \end{pmatrix}$. These vectors are linearly independent and the matrix \begin{equation*}\mathcal{M}_{\mathcal{B}_2}(\phi_2)=\begin{pmatrix}-1 & 1 \\ 0 & 1\end{pmatrix}\end{equation*} is an upper triangular matrix. - It holds that \begin{equation*}\mathcal{M}_{\mathcal{B}_3}(\phi_3)=\left (\phi_3(b_1)\mid \phi_3(b_2)\right )=\left (\phi_3\begin{pmatrix}x_1\\ y_1 \end{pmatrix}\mid \phi_3\begin{pmatrix}x_2\\ y_2 \end{pmatrix}\right )=\begin{pmatrix}y_1 & y_2 \\ 0 & 0\end{pmatrix}\end{equation*} This is already an upper triangular matrix, so we can take an arbitrary basis, e.g. $\mathcal{B}_3=\{b_1, b_2\}$, with $b_1=\begin{pmatrix}0\\ 1 \end{pmatrix}$ and $b_2=\begin{pmatrix}1\\ 1 \end{pmatrix}$. These vectors are linearly independent and the matrix \begin{equation*}\mathcal{M}_{\mathcal{B}_3}(\phi_3)=\begin{pmatrix}-1 & 1 \\ 0 & 1\end{pmatrix}\end{equation*} is an upper triangular matrix. For question 2 : - It holds that \begin{equation*}\mathcal{M}_{\mathcal{B}_1}(\phi_1)=\left (\phi_1(b_1)\mid \phi_1(b_2)\right )=\left (\phi_1\begin{pmatrix}x_1\\ y_1 \end{pmatrix}\mid \phi_1\begin{pmatrix}x_2\\ y_2 \end{pmatrix}\right )=\begin{pmatrix}x_1+y_1 & x_2+y_2 \\ x_1-y_1 & x_2-y_2\end{pmatrix}\end{equation*} So that it is a diagonal matrix, it must be $x_1-y_1=x_2+y_2=0$, then $x_1=y_1$ and $x_2=-y_2$. Then we have for example such a basis $\mathcal{B}_1=\{b_1, b_2\}$, with $b_1=\begin{pmatrix}1\\ 1 \end{pmatrix}$ and $b_2=\begin{pmatrix}1\\ -1 \end{pmatrix}$. 
These vectors are linearly independent and the matrix \begin{equation*}\mathcal{M}_{\mathcal{B}_1}(\phi_1)=\begin{pmatrix}2 & 0 \\ 0 & 2\end{pmatrix}\end{equation*} is a diagonal matrix. - It holds that \begin{equation*}\mathcal{M}_{\mathcal{B}_2}(\phi_2)=\left (\phi_2(b_1)\mid \phi_2(b_2)\right )=\left (\phi_2\begin{pmatrix}x_1\\ y_1 \end{pmatrix}\mid \phi_2\begin{pmatrix}x_2\\ y_2 \end{pmatrix}\right )=\begin{pmatrix}-y_1 & -y_2 \\ x_1 & x_2\end{pmatrix}\end{equation*} So that it is a diagonal matrix, it must be $x_1=y_2=0$. Then we have for example such a basis $\mathcal{B}_2=\{b_1, b_2\}$, with $b_1=\begin{pmatrix}0\\ 1 \end{pmatrix}$ and $b_2=\begin{pmatrix}1\\ 0 \end{pmatrix}$. These vectors are linearly independent and the matrix \begin{equation*}\mathcal{M}_{\mathcal{B}_2}(\phi_2)=\begin{pmatrix}-1 & 0 \\ 0 & 1\end{pmatrix}\end{equation*} is a diagonal matrix. It holds that \begin{equation*}\mathcal{M}_{\mathcal{B}_3}(\phi_3)=\left (\phi_3(b_1)\mid \phi_3(b_2)\right )=\left (\phi_3\begin{pmatrix}x_1\\ y_1 \end{pmatrix}\mid \phi_3\begin{pmatrix}x_2\\ y_2 \end{pmatrix}\right )=\begin{pmatrix}y_1 & y_2 \\ 0 & 0\end{pmatrix}\end{equation*} So that it is a diagonal matrix, it must be $y_2=0$. Then we have for example such a basis $\mathcal{B}_3=\{b_1, b_2\}$, with $b_1=\begin{pmatrix}0\\ 1 \end{pmatrix}$ and $b_2=\begin{pmatrix}1\\ 0 \end{pmatrix}$. These vectors are linearly independent and the matrix \begin{equation*}\mathcal{M}_{\mathcal{B}_3}(\phi_3)=\begin{pmatrix}1 & 0 \\ 0 & 0\end{pmatrix}\end{equation*} is a diagonal matrix. Is everything correct? #### Klaas van Aarsen ##### MHB Seeker Staff member Hi mathmari !! What is $M_B(\phi)$? I would expect it to be the matrix of the transformation $\phi$ with respect to the basis $B$. But if so, then we would have $M_B(\phi) = (b_1\mid b_2)^{-1} (\phi(b_1)\mid \phi(b_2))$. Consider for instance the identity transformation $\text{id}$. 
With respect to a basis $B$ it should be $M_B(\text{id})=\begin{pmatrix}1&0\\0&1\end{pmatrix}$ shouldn't it? And not $(b_1\mid b_2)$? #### mathmari ##### Well-known member MHB Site Helper What is $M_B(\phi)$? I would expect it to be the matrix of the transformation $\phi$ with respect to the basis $B$. But if so, then we would have $M_B(\phi) = (b_1\mid b_2)^{-1} (\phi(b_1)\mid \phi(b_2))$. Consider for instance the identity transformation $\text{id}$. With respect to a basis $B$ it should be $M_B(\text{id})=\begin{pmatrix}1&0\\0&1\end{pmatrix}$ shouldn't it? And not $(b_1\mid b_2)$? Ahh ok! Yes, it is the matrix of the transformation. So, what do we have to do? #### Klaas van Aarsen ##### MHB Seeker Staff member We can find a diagonal $M_B(\phi)$ by calculating the eigenvalues and corresponding eigenvectors. If $\phi$ is diagonalizable, then the eigenvectors form a basis that satisfies the condition. In that case we have also found an upper triangle matrix, since a diagonal matrix is upper triangular. #### mathmari ##### Well-known member MHB Site Helper In general how is $M_B^B(\phi_a)$ for a matrix $a$, or $M_B^E(\text{id})$ or $M_E^B(\text{id})$ defined? For example let $$b_1=\begin{pmatrix}1 \\ 1\\ 1\end{pmatrix}, b_2=\begin{pmatrix}1 \\ 0\\ -1\end{pmatrix}, b_3=\begin{pmatrix}-1 \\ 1\\ 0\end{pmatrix}$$ Then is the following correct? 
\begin{equation*}\mathcal{M}_{\mathcal{E}}^{\mathcal{B}}(\text{id})=\left (\gamma_{\mathcal{E}}(b_1)\mid \gamma_{\mathcal{E}}(b_2)\mid \gamma_{\mathcal{E}}(b_3)\right )=\left (b_1\mid b_2\mid b_3\right )=\begin{pmatrix}1 & 1 & -1 \\ 1 & 0 & 1 \\ 1 & -1 & 0\end{pmatrix}\end{equation*} \begin{equation*}\mathcal{M}_{\mathcal{B}}^{\mathcal{E}}(\text{id})=\left (\gamma_{\mathcal{B}}(e_1)\mid \gamma_{\mathcal{B}}(e_2)\mid \gamma_{\mathcal{B}}(e_3)\right )\end{equation*} For each $e_i$ we apply Gauss algorithm: \begin{align*}\begin{pmatrix} \left.\begin{matrix} \begin{matrix}1 & 1 & -1 \\ 1 & 0 & 1 \\ 1 & -1 & 0\end{matrix} \end{matrix}\right|\begin{matrix}1 \\ 0 \\ 0 \end{matrix}\end{pmatrix} \ & \overset{Z_2:Z_2-Z_1}{\longrightarrow } \ \begin{pmatrix}\left.\begin{matrix} \begin{matrix}1 & 1 & -1 \\ 0 & -1 & 2 \\ 1 & -1 & 0\end{matrix} \end{matrix}\right|\begin{matrix}1 \\ -1 \\ 0 \end{matrix}\end{pmatrix} \ \overset{Z_3:Z_3-Z_1}{\longrightarrow } \ \begin{pmatrix}\left.\begin{matrix} \begin{matrix}1 & 1 & -1 \\ 0 & -1 & 2 \\ 0 & -2 & 1\end{matrix} \end{matrix}\right|\begin{matrix}1 \\ -1 \\ -1 \end{matrix}\end{pmatrix} \ \\ & \overset{Z_3:Z_3-2\cdot Z_2}{\longrightarrow } \ \begin{pmatrix}\left.\begin{matrix} \begin{matrix}1 & 1 & -1 \\ 0 & -1 & 2 \\ 0 & 0 & -3\end{matrix} \end{matrix}\right|\begin{matrix}1 \\ -1 \\ 1 \end{matrix}\end{pmatrix} \end{align*} So we get \begin{equation*}\gamma_B(e_1)=\begin{pmatrix}-\frac{1}{3}\\ \frac{5}{3} \\ \frac{1}{3}\end{pmatrix}\end{equation*} \begin{align*}\begin{pmatrix} \left.\begin{matrix} \begin{matrix}1 & 1 & -1 \\ 1 & 0 & 1 \\ 1 & -1 & 0\end{matrix} \end{matrix}\right|\begin{matrix}0 \\ 1 \\ 0 \end{matrix}\end{pmatrix} \ & \overset{Z_2:Z_2-Z_1}{\longrightarrow } \ \begin{pmatrix}\left.\begin{matrix} \begin{matrix}1 & 1 & -1 \\ 0 & -1 & 2 \\ 1 & -1 & 0\end{matrix} \end{matrix}\right|\begin{matrix}0 \\ 1 \\ 0 \end{matrix}\end{pmatrix} \ \overset{Z_3:Z_3-Z_1}{\longrightarrow } \ \begin{pmatrix}\left.\begin{matrix} 
\begin{matrix}1 & 1 & -1 \\ 0 & -1 & 2 \\ 0 & -2 & 1\end{matrix} \end{matrix}\right|\begin{matrix}0 \\ 1 \\ 0 \end{matrix}\end{pmatrix} \ \\ & \overset{Z_3:Z_3-2\cdot Z_2}{\longrightarrow } \ \begin{pmatrix}\left.\begin{matrix} \begin{matrix}1 & 1 & -1 \\ 0 & -1 & 2 \\ 0 & 0 & -3\end{matrix} \end{matrix}\right|\begin{matrix}0 \\ 1 \\ -2 \end{matrix}\end{pmatrix} \end{align*} So we get \begin{equation*}\gamma_B(e_2)=\begin{pmatrix}\frac{4}{3}\\ \frac{1}{3} \\ \frac{2}{3}\end{pmatrix}\end{equation*} \begin{align*}\begin{pmatrix} \left.\begin{matrix} \begin{matrix}1 & 1 & -1 \\ 1 & 0 & 1 \\ 1 & -1 & 0\end{matrix} \end{matrix}\right|\begin{matrix}0 \\ 0 \\ 1 \end{matrix}\end{pmatrix} \ & \overset{Z_2:Z_2-Z_1}{\longrightarrow } \ \begin{pmatrix}\left.\begin{matrix} \begin{matrix}1 & 1 & -1 \\ 0 & -1 & 2 \\ 1 & -1 & 0\end{matrix} \end{matrix}\right|\begin{matrix}0 \\ 0 \\ 1 \end{matrix}\end{pmatrix} \ \overset{Z_3:Z_3-Z_1}{\longrightarrow } \ \begin{pmatrix}\left.\begin{matrix} \begin{matrix}1 & 1 & -1 \\ 0 & -1 & 2 \\ 0 & -2 & 1\end{matrix} \end{matrix}\right|\begin{matrix}0 \\ 0 \\ 1 \end{matrix}\end{pmatrix} \ \\ & \overset{Z_3:Z_3-2\cdot Z_2}{\longrightarrow } \ \begin{pmatrix}\left.\begin{matrix} \begin{matrix}1 & 1 & -1 \\ 0 & -1 & 2 \\ 0 & 0 & -3\end{matrix} \end{matrix}\right|\begin{matrix}0 \\ 0 \\ 1 \end{matrix}\end{pmatrix} \end{align*} So we get \begin{equation*}\gamma_B(e_3)=\begin{pmatrix}-1\\ \frac{2}{3} \\ -\frac{1}{3}\end{pmatrix}\end{equation*} That means that \begin{equation*}\mathcal{M}_{\mathcal{B}}^{\mathcal{E}}(\phi_a)=\begin{pmatrix}-\frac{1}{3} & \frac{4}{3} & -1 \\ \frac{5}{3} & \frac{1}{3} & \frac{2}{3} \\ \frac{1}{3} & \frac{2}{3} & -\frac{1}{3}\end{pmatrix}\end{equation*} And : \begin{equation*}\mathcal{M}_{\mathcal{B}}^{\mathcal{B}}(\text{id})=\left (\phi_a(b_1)\mid \phi_a(b_2)\mid \phi_a(b_3)\right )=a=\begin{pmatrix}0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0\end{pmatrix}\end{equation*} #### Klaas van Aarsen ##### MHB Seeker Staff member In 
general how is $M_B^B(\phi_a)$ for a matrix $a$, or $M_B^E(\text{id})$ or $M_E^B(\text{id})$ defined? I usually get confused with the upper and lower indices, and I think some texts swap their meaning. Anyway, let me give one definition, which matches what you write afterwards. $M_E^B(\phi)$ is the matrix such that when we multiply it with a vector with respect to the basis $B$, the result is the vector with respect to the basis $E$ mapped according to the transformation $\phi$. So suppose that $\gamma_E(b_1)$ is the vector with respect to $E$ of the first basis vector $b_1$ of $B$. Then $M_E^B(\phi)\begin{pmatrix}1\\0\end{pmatrix}=\gamma_E(\phi(b_1))$. If $\phi$ is the identity $\text{id}$, we get $M_E^B(\text{id})\begin{pmatrix}1\\0\end{pmatrix}=\gamma_E(\text{id}(b_1))=\gamma_E(b_1)$. And : \begin{equation*}\mathcal{M}_{\mathcal{B}}^{\mathcal{B}}(\text{id})=\left (\phi_a(b_1)\mid \phi_a(b_2)\mid \phi_a(b_3)\right )=a=\begin{pmatrix}0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0\end{pmatrix}\end{equation*} I think something went wrong, because the result should be the identity matrix. Also, it seems that $\phi_a$ and $\text{id}$ have been mixed up in a number of places. #### mathmari ##### Well-known member MHB Site Helper I usually get confused with the upper and lower indices, and I think some texts swap their meaning. Anyway, let me give one definition, which matches what you write afterwards. $M_E^B(\phi)$ is the matrix such that when we multiply it with a vector with respect to the basis $B$, the result is the vector with respect to the basis $E$ mapped according to the transformation $\phi$. So suppose that $\gamma_E(b_1)$ is the vector with respect to $E$ of the first basis vector $b_1$ of $B$. Then $M_E^B(\phi)\begin{pmatrix}1\\0\end{pmatrix}=\gamma_E(\phi(b_1))$. If $\phi$ is the identity $\text{id}$, we get $M_E^B(\text{id})\begin{pmatrix}1\\0\end{pmatrix}=\gamma_E(\text{id}(b_1))=\gamma_E(b_1)$. So, what I have done in my previous post is not correct, is it? 
I think something went wrong, because the result should be the identity matrix. Also, it seems that $\phi_a$ and $\text{id}$ have been mixed up in a number of places. Ahh... The matrix $a$ is $\begin{pmatrix}0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0\end{pmatrix}$ and I thought that the result is equal to teh matrix $a$, since $\phi_a(b_i)$ is the $i$-th column of $a$ ? Or isn't $M_B^B(\text{id})$ defined like that? #### Klaas van Aarsen ##### MHB Seeker Staff member So, what I have done in my previous post is not correct, is it? Ahh... The matrix $a$ is $\begin{pmatrix}0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0\end{pmatrix}$ and I thought that the result is equal to teh matrix $a$, since $\phi_a(b_i)$ is the $i$-th column of $a$ ? Or isn't $M_B^B(\text{id})$ defined like that? Now that you mention what $a$ is, it makes a bit more sense. Either way, $M_B^B(\text{id})$ makes no reference to $a$ nor $\phi_a$ does it? So it can not be equal to anything that does refer to $\phi_a$. For each $e_i$ we apply Gauss algorithm: \begin{align*}\begin{pmatrix} \left.\begin{matrix} \begin{matrix}1 & 1 & -1 \\ 1 & 0 & 1 \\ 1 & -1 & 0\end{matrix} \end{matrix}\right|\begin{matrix}1 \\ 0 \\ 0 \end{matrix}\end{pmatrix} \ & \overset{Z_2:Z_2-Z_1}{\longrightarrow } \ \begin{pmatrix}\left.\begin{matrix} \begin{matrix}1 & 1 & -1 \\ 0 & -1 & 2 \\ 1 & -1 & 0\end{matrix} \end{matrix}\right|\begin{matrix}1 \\ -1 \\ 0 \end{matrix}\end{pmatrix} \ \overset{Z_3:Z_3-Z_1}{\longrightarrow } \ \begin{pmatrix}\left.\begin{matrix} \begin{matrix}1 & 1 & -1 \\ 0 & -1 & 2 \\ 0 & -2 & 1\end{matrix} \end{matrix}\right|\begin{matrix}1 \\ -1 \\ -1 \end{matrix}\end{pmatrix} \ \\ & \overset{Z_3:Z_3-2\cdot Z_2}{\longrightarrow } \ \begin{pmatrix}\left.\begin{matrix} \begin{matrix}1 & 1 & -1 \\ 0 & -1 & 2 \\ 0 & 0 & -3\end{matrix} \end{matrix}\right|\begin{matrix}1 \\ -1 \\ 1 \end{matrix}\end{pmatrix} \end{align*} So we get \begin{equation*}\gamma_B(e_1)=\begin{pmatrix}-\frac{1}{3}\\ \frac{5}{3} \\ 
\frac{1}{3}\end{pmatrix}\end{equation*} \begin{align*}\begin{pmatrix} \left.\begin{matrix} \begin{matrix}1 & 1 & -1 \\ 1 & 0 & 1 \\ 1 & -1 & 0\end{matrix} \end{matrix}\right|\begin{matrix}0 \\ 1 \\ 0 \end{matrix}\end{pmatrix} \ & \overset{Z_2:Z_2-Z_1}{\longrightarrow } \ \begin{pmatrix}\left.\begin{matrix} \begin{matrix}1 & 1 & -1 \\ 0 & -1 & 2 \\ 1 & -1 & 0\end{matrix} \end{matrix}\right|\begin{matrix}0 \\ 1 \\ 0 \end{matrix}\end{pmatrix} \ \overset{Z_3:Z_3-Z_1}{\longrightarrow } \ \begin{pmatrix}\left.\begin{matrix} \begin{matrix}1 & 1 & -1 \\ 0 & -1 & 2 \\ 0 & -2 & 1\end{matrix} \end{matrix}\right|\begin{matrix}0 \\ 1 \\ 0 \end{matrix}\end{pmatrix} \ \\ & \overset{Z_3:Z_3-2\cdot Z_2}{\longrightarrow } \ \begin{pmatrix}\left.\begin{matrix} \begin{matrix}1 & 1 & -1 \\ 0 & -1 & 2 \\ 0 & 0 & -3\end{matrix} \end{matrix}\right|\begin{matrix}0 \\ 1 \\ -2 \end{matrix}\end{pmatrix} \end{align*} So we get \begin{equation*}\gamma_B(e_2)=\begin{pmatrix}\frac{4}{3}\\ \frac{1}{3} \\ \frac{2}{3}\end{pmatrix}\end{equation*} \begin{align*}\begin{pmatrix} \left.\begin{matrix} \begin{matrix}1 & 1 & -1 \\ 1 & 0 & 1 \\ 1 & -1 & 0\end{matrix} \end{matrix}\right|\begin{matrix}0 \\ 0 \\ 1 \end{matrix}\end{pmatrix} \ & \overset{Z_2:Z_2-Z_1}{\longrightarrow } \ \begin{pmatrix}\left.\begin{matrix} \begin{matrix}1 & 1 & -1 \\ 0 & -1 & 2 \\ 1 & -1 & 0\end{matrix} \end{matrix}\right|\begin{matrix}0 \\ 0 \\ 1 \end{matrix}\end{pmatrix} \ \overset{Z_3:Z_3-Z_1}{\longrightarrow } \ \begin{pmatrix}\left.\begin{matrix} \begin{matrix}1 & 1 & -1 \\ 0 & -1 & 2 \\ 0 & -2 & 1\end{matrix} \end{matrix}\right|\begin{matrix}0 \\ 0 \\ 1 \end{matrix}\end{pmatrix} \ \\ & \overset{Z_3:Z_3-2\cdot Z_2}{\longrightarrow } \ \begin{pmatrix}\left.\begin{matrix} \begin{matrix}1 & 1 & -1 \\ 0 & -1 & 2 \\ 0 & 0 & -3\end{matrix} \end{matrix}\right|\begin{matrix}0 \\ 0 \\ 1 \end{matrix}\end{pmatrix} \end{align*} So we get \begin{equation*}\gamma_B(e_3)=\begin{pmatrix}-1\\ \frac{2}{3} \\ 
-\frac{1}{3}\end{pmatrix}\end{equation*} For the record, we can do the Gaussian elimination in one go. That is, we can apply Gauss to $\begin{pmatrix}B\mid I\end{pmatrix}$ instead of $\begin{pmatrix}B\mid e_i\end{pmatrix}$ for each $i$ separately. That means that \begin{equation*}\mathcal{M}_{\mathcal{B}}^{\mathcal{E}}(\phi_a)=\begin{pmatrix}-\frac{1}{3} & \frac{4}{3} & -1 \\ \frac{5}{3} & \frac{1}{3} & \frac{2}{3} \\ \frac{1}{3} & \frac{2}{3} & -\frac{1}{3}\end{pmatrix}\end{equation*} Shouldn't it be $\mathcal{M}_{\mathcal{B}}^{\mathcal{E}}(\text{id})$ instead? And: \begin{equation*}\mathcal{M}_{\mathcal{B}}^{\mathcal{B}}(\text{id})=\left (\phi_a(b_1)\mid \phi_a(b_2)\mid \phi_a(b_3)\right )=a=\begin{pmatrix}0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0\end{pmatrix}\end{equation*} This looks wrong. I believe we have $\mathcal{M}_{\mathcal{B}}^{\mathcal{B}}(\text{id})\ne\left (\phi_a(b_1)\mid \phi_a(b_2)\mid \phi_a(b_3)\right )$ and $\left (\phi_a(b_1)\mid \phi_a(b_2)\mid \phi_a(b_3)\right )\ne a$. #### mathmari ##### Well-known member MHB Site Helper Now that you mention what $a$ is, it makes a bit more sense. Either way, $M_B^B(\text{id})$ makes no reference to $a$ nor to $\phi_a$, does it? So it cannot be equal to anything that does refer to $\phi_a$. Oh, there is a typo... There it should be $\mathcal{M}_{\mathcal{B}}^{\mathcal{B}}(\phi_a)$ #### mathmari ##### Well-known member MHB Site Helper So to clarify: We have $B=\{b_1, b_2, b_3\}$ with $$b_1=\begin{pmatrix}1 \\ 1\\ 1\end{pmatrix}, b_2=\begin{pmatrix}1 \\ 0\\ -1\end{pmatrix}, b_3=\begin{pmatrix}-1 \\ 1\\ 0\end{pmatrix}$$ which is a basis of $\mathbb{R}^3$. Then to calculate $M_E^B(\text{id})$ and $M_B^E(\text{id})$ do we do the following?
\begin{equation*}\mathcal{M}_{\mathcal{E}}^{\mathcal{B}}(\text{id})=\left (\gamma_{\mathcal{E}}(b_1)\mid \gamma_{\mathcal{E}}(b_2)\mid \gamma_{\mathcal{E}}(b_3)\right )=\left (b_1\mid b_2\mid b_3\right )=\begin{pmatrix}1 & 1 & -1 \\ 1 & 0 & 1 \\ 1 & -1 & 0\end{pmatrix}\end{equation*} \begin{equation*}\mathcal{M}_{\mathcal{B}}^{\mathcal{E}}(\text{id})=\left (\gamma_{\mathcal{B}}(e_1)\mid \gamma_{\mathcal{B}}(e_2)\mid \gamma_{\mathcal{B}}(e_3)\right )\end{equation*} Then suppose $a=\begin{pmatrix}0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0\end{pmatrix}$ then to calculate $M_B^B(\phi_a)$ do we do the following? \begin{equation*}\mathcal{M}_{\mathcal{B}}^{\mathcal{B}}(\phi_a)=\left (\phi_a(b_1)\mid \phi_a(b_2)\mid \phi_a(b_3)\right )=a=\begin{pmatrix}0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0\end{pmatrix}\end{equation*} #### Klaas van Aarsen ##### MHB Seeker Staff member So to clarify: We have $B=\{b_1, b_2, b_3\}$ with $$b_1=\begin{pmatrix}1 \\ 1\\ 1\end{pmatrix}, b_2=\begin{pmatrix}1 \\ 0\\ -1\end{pmatrix}, b_3=\begin{pmatrix}-1 \\ 1\\ 0\end{pmatrix}$$ which is a basis of $\mathbb{R}^3$. Then to calculate $M_E^B(\text{id})$ and $M_B^E(\text{id})$ do we do the following? \begin{equation*}\mathcal{M}_{\mathcal{E}}^{\mathcal{B}}(\text{id})=\left (\gamma_{\mathcal{E}}(b_1)\mid \gamma_{\mathcal{E}}(b_2)\mid \gamma_{\mathcal{E}}(b_3)\right )=\left (b_1\mid b_2\mid b_3\right )=\begin{pmatrix}1 & 1 & -1 \\ 1 & 0 & 1 \\ 1 & -1 & 0\end{pmatrix}\end{equation*} \begin{equation*}\mathcal{M}_{\mathcal{B}}^{\mathcal{E}}(\text{id})=\left (\gamma_{\mathcal{B}}(e_1)\mid \gamma_{\mathcal{B}}(e_2)\mid \gamma_{\mathcal{B}}(e_3)\right )\end{equation*} What are $\gamma_{\mathcal{E}}$ and $\gamma_{\mathcal{B}}$? Then suppose $a=\begin{pmatrix}0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0\end{pmatrix}$ then to calculate $M_B^B(\phi_a)$ do we do the following?
\begin{equation*}\mathcal{M}_{\mathcal{B}}^{\mathcal{B}}(\phi_a)=\left (\phi_a(b_1)\mid \phi_a(b_2)\mid \phi_a(b_3)\right )=a=\begin{pmatrix}0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0\end{pmatrix}\end{equation*} Can it be that we have $\mathcal{M}_{\mathcal{E}}^{\mathcal{E}}(\phi_a)=a$ instead? And $\mathcal{M}_{\mathcal{E}}^{\mathcal{E}}(\phi_a)=\left (\gamma_{\mathcal{E}}(\phi_a(e_1))\mid \gamma_{\mathcal{E}}(\phi_a(e_2))\mid \gamma_{\mathcal{E}}(\phi_a(e_3))\right )$? Perhaps we can come back to this after we have clarified what $\gamma_{\mathcal{E}}$ and $\gamma_{\mathcal{B}}$ are. #### mathmari ##### Well-known member MHB Site Helper $\gamma_B(v)$ is the vector of coefficients when we write the vector $v$ as a linear combination of the elements of the basis $B$. For example $\gamma_B(e_i)=\begin{pmatrix}c_1\\ c_2 \\ c_3\end{pmatrix}$ with \begin{equation*}e_i=c_1b_1+c_2b_2+c_3b_3=c_1\begin{pmatrix}1 \\ 1 \\ 1\end{pmatrix}+c_2\begin{pmatrix}1 \\ 0 \\ -1\end{pmatrix}+c_3\begin{pmatrix}-1 \\ 1 \\ 0\end{pmatrix}\end{equation*} #### Klaas van Aarsen ##### MHB Seeker Staff member $\gamma_B(v)$ is the vector of coefficients when we write the vector $v$ as a linear combination of the elements of the basis $B$. For example $\gamma_B(e_i)=\begin{pmatrix}c_1\\ c_2 \\ c_3\end{pmatrix}$ with \begin{equation*}e_i=c_1b_1+c_2b_2+c_3b_3=c_1\begin{pmatrix}1 \\ 1 \\ 1\end{pmatrix}+c_2\begin{pmatrix}1 \\ 0 \\ -1\end{pmatrix}+c_3\begin{pmatrix}-1 \\ 1 \\ 0\end{pmatrix}\end{equation*} Okay! Let $V$ be our abstract vector space. Let $\mathbb{R}^3_E$ be the space of column vectors with respect to basis $E$. Let $\mathbb{R}^3_B$ be the space of column vectors with respect to basis $B$. And let's assume that $\phi_a$ is the map $V\to V$ such that it corresponds to matrix multiplication with $a$ with respect to the standard basis $E$. Then here's a diagram that shows the relevant relationships.
In particular we can deduce from it that: $$M^B_B(\phi_a)=M^E_B(\text{id}) \cdot a \cdot M^E_B(\text{id})^{-1}$$ Last edited: #### mathmari ##### Well-known member MHB Site Helper Okay! Let $V$ be our abstract vector space. Let $\mathbb{R}^3_E$ be the space of column vectors with respect to basis $E$. Let $\mathbb{R}^3_B$ be the space of column vectors with respect to basis $B$. And let's assume that $\phi_a$ is the map $V\to V$ such that it corresponds to matrix multiplication with $a$ with respect to the standard basis $E$. Then here's a diagram that shows the relevant relationships. View attachment 10951 In particular we can deduce from it that: $$M^B_B(\phi_a)=M^E_B(\text{id}) \cdot a \cdot M^E_B(\text{id})^{-1}$$ Ahh ok! And are the matrices $M^E_B(\text{id})$ and $M^E_B(\text{id})^{-1}$ that I calculated above correct? #### Klaas van Aarsen ##### MHB Seeker Staff member Ahh ok! And are the matrices $M^E_B(\text{id})$ and $M^E_B(\text{id})^{-1}$ that I calculated above correct? They look correct to me. #### mathmari ##### Well-known member MHB Site Helper They look correct to me. Great!! As for my initial post... what is then $M_{B_i}$ ? Is it like $M_{B_i}^{B_i}$ ? #### Klaas van Aarsen ##### MHB Seeker Staff member As for my initial post... what is then $M_{B_i}$ ? Is it like $M_{B_i}^{B_i}$ ? That is what I'd expect yes. #### mathmari ##### Well-known member MHB Site Helper That is what I'd expect yes. Ok.. So with $M_B(\phi) = (b_1\mid b_2)^{-1} (\phi(b_1)\mid \phi(b_2))$ we have the following : Let $b_1=\begin{pmatrix}x_1\\ y_1 \end{pmatrix}$ and $b_2=\begin{pmatrix}x_2\\ y_2 \end{pmatrix}$. 
With $\phi_1$ : \begin{align*}M_B(\phi_1)&=\begin{pmatrix}x_1 & x_2 \\ y_1 & y_2\end{pmatrix}^{-1}\cdot \begin{pmatrix}x_1+y_1 & x_2+y_2 \\ x_1-y_1 & x_2-y_2\end{pmatrix}=\frac{1}{x_1y_2-x_2y_1}\begin{pmatrix}y_2 & -x_2 \\ -y_1 & x_1\end{pmatrix}\cdot \begin{pmatrix}x_1+y_1 & x_2+y_2 \\ x_1-y_1 & x_2-y_2\end{pmatrix} \\ & =\frac{1}{x_1y_2-x_2y_1}\begin{pmatrix}x_1y_2+y_1y_2 -x_1x_2+x_2y_1&x_2y_2+y_2^2-x_2^2+x_2y_2 \\ -x_1y_1-y_1^2+x_1^2-x_1y_1 & -x_2y_1-y_1y_2+x_1x_2-x_1y_2\end{pmatrix} \\ & =\frac{1}{x_1y_2-x_2y_1}\begin{pmatrix}x_1y_2+y_1y_2 -x_1x_2+x_2y_1& 2x_2y_2+y_2^2-x_2^2 \\ -2x_1y_1-y_1^2+x_1^2 & -x_2y_1-y_1y_2+x_1x_2-x_1y_2\end{pmatrix}\end{align*} Now we have to solve a system such that this matrix is upper triangular. Is that way correct? Or is there another approach? #### Klaas van Aarsen ##### MHB Seeker Staff member Yes, that looks correct. However, we do not have to solve the entire system. It suffices if the bottom left matrix entry is $0$. That is, we only need to find $-2x_1 y_1 -y_1^2+x_1^2$, and then find $x_1$ and $y_1$ such that it is $0$. Another approach. Let $U=(u_{ij})=M_B(\phi)$ be the desired upper triangular matrix. Let $b_1$ be the first vector in the desired basis. Then $U\begin{pmatrix}1\\0\end{pmatrix} = \begin{pmatrix}u_{11}\\0\end{pmatrix} = u_{11}\begin{pmatrix}1\\0\end{pmatrix}$. In other words, $u_{11}$ must be an eigenvalue of $\phi$ and $b_1$ must be the corresponding eigenvector. #### mathmari ##### Well-known member MHB Site Helper Another approach. Let $U=(u_{ij})=M_B(\phi)$ be the desired upper triangular matrix. Let $b_1$ be the first vector in the desired basis. Then $U\begin{pmatrix}1\\0\end{pmatrix} = \begin{pmatrix}u_{11}\\0\end{pmatrix} = u_{11}\begin{pmatrix}1\\0\end{pmatrix}$. In other words, $u_{11}$ must be an eigenvalue of $\phi$ and $b_1$ must be the corresponding eigenvector. So you take as $b_1$ the vector $\begin{pmatrix}1\\0\end{pmatrix}$ ?
#### Klaas van Aarsen ##### MHB Seeker Staff member So you take as $b_1$ the vector $\begin{pmatrix}1\\0\end{pmatrix}$ ? Not exactly. We consider $b_1$ an as yet unknown vector. The representation of that vector with respect to the basis $B$ is $\gamma_B(b_1)=\begin{pmatrix}1\\0\end{pmatrix}$. That is, $1\cdot b_1 + 0\cdot b_2$. #### mathmari ##### Well-known member MHB Site Helper They look correct to me. Using the equality $M^B_B(\phi_a)=M^E_B(\text{id}) \cdot a \cdot M^E_B(\text{id})^{-1}$ we get \begin{align*}M^B_B(\phi_a)&=M^E_B(\text{id}) \cdot a \cdot M^E_B(\text{id})^{-1} \\ & =\begin{pmatrix}-\frac{1}{3} & \frac{4}{3} & -1 \\ \frac{5}{3} & \frac{1}{3} & \frac{2}{3} \\ \frac{1}{3} & \frac{2}{3} & -\frac{1}{3}\end{pmatrix}\cdot \begin{pmatrix}0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0\end{pmatrix}\cdot \begin{pmatrix}1 & 1 & -1 \\ 1 & 0 & 1 \\ 1 & -1 & 0\end{pmatrix}^{-1}\\ & =\begin{pmatrix}-\frac{1}{3} & \frac{4}{3} & -1 \\ \frac{5}{3} & \frac{1}{3} & \frac{2}{3} \\ \frac{1}{3} & \frac{2}{3} & -\frac{1}{3}\end{pmatrix}\cdot \begin{pmatrix}0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0\end{pmatrix}\cdot \begin{pmatrix}\frac{1}{3} & \frac{1}{3} & \frac{1}{3} \\ \frac{1}{3} & \frac{1}{3} & -\frac{2}{3} \\ -\frac{1}{3} & \frac{2}{3} & -\frac{1}{3}\end{pmatrix} \\ & = \begin{pmatrix}\frac{2}{9} & -\frac{1}{9} & \frac{11}{9} \\- \frac{2}{9} & \frac{13}{9} & -\frac{8}{9} \\ 0 & \frac{1}{3} & \frac{1}{3}\end{pmatrix} \end{align*} This must be equal to \begin{equation*}\mathcal{M}_{\mathcal{B}}^{\mathcal{B}}(\phi_a)=\left (\gamma_{\mathcal{B}}\left (\phi_a(b_1)\right )\mid \gamma_{\mathcal{B}}\left (\phi_a(b_2)\right )\mid \gamma_{\mathcal{B}}\left (\phi_a(b_3)\right )\right )\end{equation*} or not? 
I got another result: We have that \begin{align*}&\phi_a(b_1)=\begin{pmatrix}0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0\end{pmatrix}\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}=\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} \\ & \phi_a(b_2)=\begin{pmatrix}0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0\end{pmatrix}\begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}=\begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix} \\ & \phi_a(b_3)=\begin{pmatrix}0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0\end{pmatrix}\begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix}=\begin{pmatrix} 0 \\ -1 \\ 1 \end{pmatrix} \end{align*} \begin{align*}\begin{pmatrix} \left.\begin{matrix} \begin{matrix}1 & 1 & -1 \\ 1 & 0 & 1 \\ 1 & -1 & 0\end{matrix} \end{matrix}\right|\begin{matrix}1 & -1 & 0 \\ 1 & 1 & -1 \\ 1 & 0 & 1\end{matrix}\end{pmatrix} \ & \overset{Z_2:Z_2-Z_1}{\longrightarrow } \ \begin{pmatrix}\left.\begin{matrix} \begin{matrix}1 & 1 & -1 \\ 0 & -1 & 2 \\ 1 & -1 & 0\end{matrix} \end{matrix}\right|\begin{matrix}1 & -1 & 0 \\ 0 & 2 & -1 \\ 1 & 0 & 1\end{matrix}\end{pmatrix} \ \\ & \overset{Z_3:Z_3-Z_1}{\longrightarrow } \ \begin{pmatrix}\left.\begin{matrix} \begin{matrix}1 & 1 & -1 \\ 0 & -1 & 2 \\ 0 & -2 & 1\end{matrix} \end{matrix}\right|\begin{matrix}1 & -1 & 0 \\ 0 & 2 & -1 \\ 0 & 1 & 1\end{matrix}\end{pmatrix} \ \\ & \overset{Z_3:Z_3-2\cdot Z_2}{\longrightarrow } \ \begin{pmatrix}\left.\begin{matrix} \begin{matrix}1 & 1 & -1 \\ 0 & -1 & 2 \\ 0 & 0 & -3\end{matrix} \end{matrix}\right|\begin{matrix}1 & -1 & 0 \\ 0 & 2 & -1 \\ 0 & -3 & 3\end{matrix}\end{pmatrix} \end{align*} For $\gamma_B(\phi_a(b_1))$ we get the equations \begin{align*}c_1+c_2-c_3&= 1 \\ -c_2+2c_3&=0 \\ -3c_3&= 0\end{align*} So we get $\gamma_B(\phi_a(b_1))=\begin{pmatrix}1\\ 0 \\ 0\end{pmatrix}$. For $\gamma_B(\phi_a(b_2))$ we get the equations \begin{align*}c_1+c_2-c_3&=-1 \\ -c_2+2c_3&=\ 2 \\ -3c_3&=-3\end{align*} So we get $\gamma_B(\phi_a(b_2))=\begin{pmatrix}0\\ 1 \\ 1\end{pmatrix}$.
For $\gamma_B(\phi_a(b_3))$ we get the equations \begin{align*}c_1+c_2-c_3&= \ 0 \\ -c_2+2c_3&=-1 \\ -3c_3&= \ 3\end{align*} So we get $\gamma_B(\phi_a(b_3))=\begin{pmatrix}4\\ -4 \\ -1\end{pmatrix}$. Have I done something wrong or have I understood that wrong? #### mathmari ##### Well-known member MHB Site Helper Not exactly. We consider $b_1$ an as yet unknown vector. The representation of that vector with respect to the basis $B$ is $\gamma_B(b_1)=\begin{pmatrix}1\\0\end{pmatrix}$. That is, $1\cdot b_1 + 0\cdot b_2$. So $\mathcal{M}_{\mathcal{B}_1}(\phi_1)$ is a diagonal matrix, and hence also an upper triangular matrix, if it is of the form $\begin{pmatrix}u_{11} & 0 \\ 0 & u_{22}\end{pmatrix}$. Then we get \begin{equation*}\mathcal{M}_{\mathcal{B}_1}(\phi_1)\gamma_{\mathcal{B}_1}(b_1)=\begin{pmatrix}u_{11} \\ 0\end{pmatrix}=u_{11}\begin{pmatrix}1 \\ 0\end{pmatrix}=u_{11}\gamma_{\mathcal{B}_1}(b_1)\end{equation*} So $u_{11}$ is an eigenvalue of $\phi$ and $b_1$ the corresponding eigenvector. We have that \begin{equation*}\phi \begin{pmatrix}x \\ y\end{pmatrix}=\begin{pmatrix}1 & 1 \\ 1 & -1\end{pmatrix}\begin{pmatrix}x \\ y\end{pmatrix}\end{equation*} So $u_{11}=\sqrt{2}$ and $b_1=\begin{pmatrix}1+\sqrt{2} \\ 1\end{pmatrix}$. We also have that \begin{equation*}\mathcal{M}_{\mathcal{B}_1}(\phi_1)\gamma_{\mathcal{B}_1}(b_2)=\begin{pmatrix}0 \\ u_{22}\end{pmatrix}=u_{22}\begin{pmatrix}0 \\ 1\end{pmatrix}=u_{22}\gamma_{\mathcal{B}_1}(b_2)\end{equation*} So $u_{22}$ is an eigenvalue of $\phi$ and $b_2$ the corresponding eigenvector. Then $u_{22}=-\sqrt{2}$ and $b_2=\begin{pmatrix}1-\sqrt{2} \\ 1\end{pmatrix}$. Is that correct? Or can we not consider both cases (upper triangular and diagonal) together?
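As an editorial aside (not part of the original thread): the two ways of computing $M^B_B(\phi_a)$ discussed above — via the change-of-basis formula and column by column as $\gamma_B(\phi_a(b_i))$ — can be cross-checked in exact rational arithmetic. The helper names below are my own:

```python
from fractions import Fraction as F

def mat_vec(M, v):
    # Multiply a 3x3 matrix (given as a list of rows) by a vector.
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def solve(M, rhs):
    # Solve M c = rhs by Gauss-Jordan elimination with exact fractions.
    A = [row[:] + [rhs[i]] for i, row in enumerate(M)]
    for col in range(3):
        piv = next(r for r in range(col, 3) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        for r in range(3):
            if r != col and A[r][col] != 0:
                f = A[r][col] / A[col][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
    return [A[i][3] / A[i][i] for i in range(3)]

B = [[F(1), F(1), F(-1)],   # columns are b1, b2, b3
     [F(1), F(0), F(1)],
     [F(1), F(-1), F(0)]]
a = [[F(0), F(0), F(1)],
     [F(1), F(0), F(0)],
     [F(0), F(1), F(0)]]

# Column i of M^B_B(phi_a) is gamma_B(phi_a(b_i)).
cols = []
for i in range(3):
    b_i = [B[r][i] for r in range(3)]
    cols.append(solve(B, mat_vec(a, b_i)))
M_BB = [[cols[j][i] for j in range(3)] for i in range(3)]
```

With these numbers the check yields $M^B_B(\phi_a)=\begin{pmatrix}1&0&0\\0&0&-1\\0&1&-1\end{pmatrix}$, a useful target to compare any hand computation against.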
#### mathmari ##### Well-known member MHB Site Helper For the map $\phi_2$ we cannot do it like that: \begin{equation*}\phi_2 \begin{pmatrix}x \\ y\end{pmatrix}=\begin{pmatrix}0 & -1 \\ 1 & 0\end{pmatrix}\begin{pmatrix}x \\ y\end{pmatrix}\end{equation*} There are only complex eigenvalues. #### mathmari ##### Well-known member MHB Site Helper For the last map we have: \begin{equation*}\phi_3 \begin{pmatrix}x \\ y\end{pmatrix}=\begin{pmatrix}0 & 1 \\ 0 & 0\end{pmatrix}\begin{pmatrix}x \\ y\end{pmatrix}\end{equation*} So there is only one eigenvalue $u_{11}=0$ and $b_1=\begin{pmatrix}1 \\ 0\end{pmatrix}$.
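The eigenvalue argument in the last few posts is easy to confirm numerically; a minimal sketch (variable names are mine):

```python
import math

def apply2(M, v):
    # Multiply a 2x2 matrix by a vector.
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

phi1 = [[1, 1], [1, -1]]
s = math.sqrt(2)
b1 = [1 + s, 1]  # claimed eigenvector for eigenvalue +sqrt(2)
b2 = [1 - s, 1]  # claimed eigenvector for eigenvalue -sqrt(2)

w1, w2 = apply2(phi1, b1), apply2(phi1, b2)
ok1 = all(abs(w1[i] - s * b1[i]) < 1e-12 for i in range(2))
ok2 = all(abs(w2[i] + s * b2[i]) < 1e-12 for i in range(2))

# phi2 = [[0, -1], [1, 0]] is a rotation: its characteristic polynomial
# has discriminant trace^2 - 4*det = -4 < 0, so there is no real
# eigenvector to use as b1, matching the remark in the thread.
phi2_discriminant = 0 ** 2 - 4 * 1 * 1
```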
# Logarithms

## What are Logarithms?

Logarithms are an alternative way of expressing exponentials. Let's understand logarithms with an example. We know 2 to the power 3 is equal to 8; this is called an exponential equation. Now suppose you are asked, “4 raised to which power is 64?” Your answer will be 3. This situation can be expressed as the logarithm equation log$_{4}$(64) = 3. From this, we can say that log base four of sixty-four is three. Comparing the exponential equation with the logarithm equation, we can clearly see that there is a relationship between them.

### Define Logarithm

A logarithm is the power to which a given base must be raised to produce a certain number. For example, log$_{2}$ 8 = 3, because 2 raised to the power 3 is 8.

## Logarithm Properties

Here we will learn some properties which will help us solve logarithm equations. Since the logarithm equation is so closely related to the exponential equation, the properties of logarithms mirror the properties of exponents. The following are the properties:

### Property of Product in Logarithm

The logarithm of a product is the sum of the logarithms of its factors. For example, solve the logarithm log$_{7}$(3x): inside the bracket there are two factors, 3 and x. Using the product rule, log$_{7}$(3x) = log$_{7}$(3) + log$_{7}$(x). We can also combine two logarithms into a single logarithm by using the property of the product. Keep in mind that the bases of the two logarithms must be the same in order to combine them into one. For example, we cannot use the property of the product to simplify log$_{4}$(8) + log$_{6}$(x).

### Property of Quotient

The logarithm of a quotient is the difference of the logarithms of the numerator and the denominator.
Now, let us see an example for better understanding, by solving the logarithm log$_{5}$ $\frac{r}{4}$: we write this logarithm in a different form by using the property of quotients. log$_{5}$$\frac{r}{4}$ = log$_{5}$(r) - log$_{5}$(4). Conversely, we can condense a difference of logarithms into dividend and divisor form: log$_{3}$(4) - log$_{3}$(h) = log$_{3}$$\frac{4}{h}$. When we condense a difference of logarithms into dividend and divisor form, the bases of the logarithms should be the same; we cannot use the property of quotients when the bases are not the same.

### Property of Power

This property tells us that the logarithm of a power is the exponent times the logarithm of the base of that power. Let's see an example, by solving log$_{3}$(x$^{2}$): here we convert the single logarithm into a multiple of a logarithm by using the property of power. log$_{3}$(x$^{2}$) = 2 · log$_{3}$(x) = 2log$_{3}$(x). Conversely, we can convert a multiple of a logarithm into a single logarithm with the help of the property of power: 5 · log$_{3}$(9) = log$_{3}$(9$^{5}$) = log$_{3}$(59049).

### The Inverse Property of Logarithms

From the definition and the examples above, we have understood that the exponential and the logarithm are related as functions: the logarithm is the inverse of the exponential function. When the two inverses are composed, they return y. So, with f(y) = b$^{y}$ and g(y) = log$_{b}$y, we have f(g(y)) = b$^{log_{b}y}$ = y and g(f(y)) = log$_{b}$ b$^{y}$ = y. These are called the inverse properties of the logarithm.

### Application of Logarithms

In this world of modern technology, people are always finding ways to do things more simply and easily. That is why people invented calculators and logarithms, to make mathematical equations easier to solve.
So, let us look at some more advantages of learning logarithms:

• Logarithms are used in various fields of science and many other industries.
• Logarithms help to find the pH value in chemistry, because the concentrations involved can be very small, and the logarithm gives a convenient range for such small numbers.
• Logarithms are widely used in banking.
• Logarithms are used to find the half-life of radioactive material.
• They allow us to find out an earthquake's intensity.
• Even in the fields of medicine and engineering, we can see some usage of the logarithm and its properties.

Q1. What are the Different Types of Logarithms?

Ans: There are two common types of logarithm:

• Natural Logarithm. The natural logarithm is the logarithm with base e, represented as ln. For example, the natural logarithm of 56 can be written as ln 56. The natural logarithm tells us how many times we must multiply by e to get the desired number.

• Common Logarithm. The common logarithm is the logarithm with base 10, represented as log or log$_{10}$. For example, the common logarithm of 100 is written log(100). It tells us how many times we need to multiply by 10 to get the required number.

Q2. Why is it Necessary to Know the Properties of the Logarithm?

Ans: While studying, you will encounter many properties, such as the commutative, associative, and distributive properties. Such properties are introduced to help us solve equations in easier ways. Similarly, the logarithm properties were introduced to help us solve difficult and time-consuming equations more simply. To solve a difficult equation, we use the three properties of the logarithm: the property of the product, the property of the quotient, and the property of the power. Hence, we need to know and learn the properties of the logarithm to solve difficult equations in a simplified manner.
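The product, quotient, power, and inverse properties described above can all be checked numerically in a few lines (a sketch using Python's `math.log`; the sample values 3, 7 and base 5 are arbitrary):

```python
import math

x, y, base = 3.0, 7.0, 5.0
log_b = lambda v: math.log(v, base)

assert math.isclose(log_b(x * y), log_b(x) + log_b(y))   # product property
assert math.isclose(log_b(x / y), log_b(x) - log_b(y))   # quotient property
assert math.isclose(log_b(x ** 2), 2 * log_b(x))         # power property
assert math.isclose(base ** log_b(y), y)                 # inverse: b^(log_b y) = y
assert math.isclose(log_b(base ** y), y)                 # inverse: log_b(b^y) = y
```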
# Proof: Intersection, Inverse Function • March 20th 2011, 09:08 PM lfroehli Proof: Intersection, Inverse Function For this proof, we are to show that each containment is a subset of the other. Part 1: $f^{-1}(\bigcap_{\lambda\in\Lambda}B_{\lambda}) = \bigcap_{\lambda\in\Lambda}f^{-1}(B_{\lambda})$. Part 2: $f^{-1}(\bigcup_{\lambda\in\Lambda}B_{\lambda}) = \bigcup_{\lambda\in\Lambda}f^{-1}(B_{\lambda})$. I honestly have no idea how to get started on this...any insight is appreciated! • March 20th 2011, 11:14 PM FernandoRevilla It is almost routine once you know the definition of $f^{-1}(B)$. For example: $x\in f^{-1}(\bigcap_{\lambda\in\Lambda}B_{\lambda})\Rightarrow f(x)\in \bigcap_{\lambda\in\Lambda}B_{\lambda}\Rightarrow f(x)\in B_{\lambda}\;\forall \lambda\in\Lambda\Rightarrow$ $x\in f^{-1}(B_{\lambda})\;\forall \lambda\in\Lambda \Rightarrow x\in \bigcap_{\lambda\in\Lambda}f^{-1}(B_{\lambda})$ etc. • March 21st 2011, 12:48 AM emakarov This question was also discussed in this thread.
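Although the proof is about arbitrary index sets, both identities are easy to sanity-check on finite examples (the helper function and the sample sets below are my own illustration, not part of the thread):

```python
def preimage(f, B, domain):
    # f^{-1}(B) restricted to a finite domain.
    return {x for x in domain if f(x) in B}

domain = range(-5, 6)
f = lambda x: x * x
family = [{0, 1, 4, 9}, {1, 4, 16}, {4, 9, 25}]

# Part 1: f^{-1}(intersection of B_lambda) = intersection of f^{-1}(B_lambda)
lhs_cap = preimage(f, set.intersection(*family), domain)
rhs_cap = set.intersection(*(preimage(f, B, domain) for B in family))

# Part 2: f^{-1}(union of B_lambda) = union of f^{-1}(B_lambda)
lhs_cup = preimage(f, set.union(*family), domain)
rhs_cup = set.union(*(preimage(f, B, domain) for B in family))
```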
### Proof of the formula $1+x+x^2+x^3+ \cdots +x^n =\frac{x^{n+1}-1}{x-1}$ [duplicate] Possible Duplicate: Value of $\sum x^n$ Proof to the formula $$1+x+x^2+x^3+\cdots+x^n = \frac{x^{n+1}-1}{x-1}.$$ ### Easy summation question: $S= 1-\frac{1}{2}+\frac{1}{4}-\frac{1}{8}+\frac{1}{16}\cdots$ [duplicate] While during physics I encountered a sum I couldn't evaluate: $$S= 1-\frac{1}{2}+\frac{1}{4}-\frac{1}{8}+\frac{1}{16}\cdots$$ Is there a particular formula for this sum and does it converge? ### Proof of the power series 1 + $x^2$ + $x^3$ + $\ldots$ + $x^n$ = $\frac{1}{1-x}$ [duplicate] Can anyone show me the proof of this equation: $$\lim_{n \to \infty} 1 + x + x^2 + x^3 + \ldots + x^n = \frac{1}{1-x},$$ where $|x|<1$. Edit: I have then additionally written $x$ on the left ... ### How to convert a series to an equation? [duplicate] Possible Duplicate: Value of $\sum\limits_n x^n$ I don't know the technical language for what I'm asking, so the title might be a little misleading, but hopefully I can convey my purpose to you ... ### Why $\sum_{k=0}^{\infty} q^k$ sum is $\frac{1}{1-q}$ when $|q| < 1$ [duplicate] Why is the infinite sum of $\sum_{k=0}^{\infty} q^k = \frac{1}{1-q}$ when $|q| < 1$ I don't understand how the $\frac{1}{1-q}$ got calculated. I am not a math expert so I am looking for an easy ...
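All of these questions reduce to the finite geometric-sum identity and its $|q|<1$ limit; a quick numerical check (sample values are arbitrary):

```python
x, n = 0.5, 20

# Finite identity: 1 + x + ... + x^n = (x^(n+1) - 1)/(x - 1), valid for x != 1.
finite = sum(x ** k for k in range(n + 1))
assert abs(finite - (x ** (n + 1) - 1) / (x - 1)) < 1e-12

# For |x| < 1 the partial sums approach 1/(1 - x).
assert abs(sum(x ** k for k in range(60)) - 1 / (1 - x)) < 1e-12

# The alternating series S = 1 - 1/2 + 1/4 - ... is geometric with ratio -1/2,
# so it converges to 1/(1 - (-1/2)) = 2/3.
S = sum((-0.5) ** k for k in range(60))
assert abs(S - 2 / 3) < 1e-12
```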
# Simple Formula Manipulation 1. Dec 8, 2007 ### Sucks@Physics I'm trying to find the intensity of something. I know the equation is B = 10log(I/Io) But how do I solve for I? What would the equation be? For some reason I can't make it work. 2. Dec 8, 2007 ### Staff: Mentor logarithms Does this help? If $$a = log(b)$$ then $$b = 10^a$$ 3. Dec 8, 2007 ### Sucks@Physics No, I still can't get the correct equation, I come up with something like (10x10^B)/Io 4. Dec 8, 2007 ### l46kok Rewrite the equation as I/I0 = 10^(B/10) Last edited: Dec 8, 2007 5. Dec 8, 2007 ### Staff: Mentor Not quite: B = 10log(I/Io) B/10 = log(I/Io) 10^(B/10) = I/Io I = Io*10^(B/10) 6. Dec 8, 2007 ### Sucks@Physics Thanks, that is kinda tricky, but now I understand it 7. Dec 8, 2007 ### l46kok From the provided information, Doc Al's equation is correct. You may be using wrong values for your Io and B, or your intensity equation could be incorrect.
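The rearrangement walked through in post 5 can be wrapped up as a small function (a sketch; the reference intensity $I_0 = 10^{-12}\,\mathrm{W/m^2}$ is the usual threshold-of-hearing value, assumed here rather than taken from the thread):

```python
import math

I0 = 1e-12  # reference intensity in W/m^2 (assumed threshold of hearing)

def intensity_from_level(B):
    # Invert B = 10*log10(I/I0) step by step:
    #   B/10 = log10(I/I0)  ->  I/I0 = 10**(B/10)  ->  I = I0 * 10**(B/10)
    return I0 * 10 ** (B / 10)

# Round trip: converting the intensity back to decibels recovers the level.
B = 73.0
I = intensity_from_level(B)
assert math.isclose(10 * math.log10(I / I0), B)
```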
Table 1. Determinations of parameter values in the application of $f_m$. Each entry lists the parameter, its description, how it can be estimated from soil properties, the value recommended if soil data are not available, and the sources/notes.

- $a$ — SOC–microorganism collocation factor. Can be estimated from clay content $c_{\rm c}$:
  $$a = \begin{cases} 0, & c_{\rm c} \le 0.016 \\ 2.8c_{\rm c} - 0.046, & 0.016 < c_{\rm c} \le 0.37 \\ 1, & c_{\rm c} > 0.37 \end{cases}$$
  (Source: Fig. 6.)
- $b$ — O$_2$ supply restriction factor. Depends on O$_2$ supply, with $0 \le b \le 1.7$; recommended value 0.75 if not available. (Source: Supplementary Data 1.)
- $\theta_{\rm op}$ — optimum water content. Can be calculated implicitly from soil properties via
  $$\nu_{DO}\frac{\theta_{\rm op}}{K_\theta + \theta_{\rm op}}\alpha m_{\rm SOC}\phi^{a\left(m_{\rm s} - n_{\rm s}\right)}\theta_{\rm op}^{an_{\rm s}} = k_{\rm GO}\phi^{m_{\rm g} - n_{\rm g}}\left(\phi - \theta_{\rm op}\right)^b D_{\rm GO,0};$$
  recommended value $0.65\phi$ if not available. (Sources: refs. 4, 18.)
- $\phi$ — soil porosity. Can be estimated from soil bulk density $\rho_b$ and mineral density $\rho_s$ as $\phi = 1 - \frac{\rho_b}{\rho_s}$. (Source: ref. 67.)
- $n_s$ — saturation exponent. Depends on soil structure and texture; recommended value 2. (Source: ref. 45.)
- $K_\theta$ — moisture constant. Depends on organo-mineral associations; recommended value 0.1. (Source: ref. 24.)
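Two of the entries in Table 1 are simple closed-form estimates and can be transcribed directly (a sketch; the default mineral density of 2.65 g cm⁻³ is my own assumption, not a value from the table):

```python
def collocation_factor(c_c):
    # Piecewise estimate of the SOC-microorganism collocation factor a
    # from clay content c_c (Table 1, first row).
    if c_c <= 0.016:
        return 0.0
    if c_c <= 0.37:
        return 2.8 * c_c - 0.046
    return 1.0

def porosity(rho_b, rho_s=2.65):
    # phi = 1 - rho_b / rho_s (bulk density over mineral density).
    return 1.0 - rho_b / rho_s
```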
Computer Laboratory Dominic's Part II Project Proposals 2012/13 The following are sketches of project ideas and are not complete (or static) proposals. If you are interested, please do get in touch. • OCamlLabs: Types for cross-compilation Increasing numbers of applications are targeting heterogeneous execution environments such as server-side, client-side, GPU, cloud, embedded and web. This is supported by a number of OCaml compiler backends (JavaScript, Java, ARM, LLVM) and libraries allowing interoperability. In such a scenario, certain parts of code may be compiled only for certain platform-specific environments, but some can be safely shared. This project should extend the OCaml type system in order to track execution environments for which an expression can be compiled, and simplify writing code that cross-compiles for multiple platforms and diverse execution environments. This project idea comes from the OCamlLabs web site (which includes some links to related material) and is also included in Tomas Petricek's project suggestions. Either of us would be happy to supervise/be involved. As the name suggests, this should be done as an extension to the OCaml compiler, but if you're interested in other tools and languages, do talk to either of us. • TAKEN - EDSL (Embedded Domain Specific Language) for Array Programming on GPUs Stencil computations are a common programming pattern in scientific computing, games, and image processing, involving operations over the elements of an array defined by a local operation, possibly depending on some neighbourhood of elements. For example, the discrete Gaussian blur operation can be defined by the operation: (A[i-1][j] + A[i+1][j] + A[i][j-1] + A[i][j+1] + 4*A[i][j]) / 6.0 which can then be applied to an array by iterating over its index space. Stencil computations are highly regular and amenable to optimisation and parallelisation.
However, general-purpose languages obscure this regular pattern from the compiler, and even the programmer, preventing optimisation and obfuscating (in)correctness. The Ypnos domain-specific language, developed in the CPRG, provides support for stencil computations embedded in Haskell. Ypnos allows declarative, abstract specification of stencil computations, exposing the structure of a problem to the compiler and to the programmer via specialised syntax. The discrete Gaussian operator can be written in Ypnos as:

gauss | _  l _ |
      | r @c b |
      | _  t _ | = (l + r + b + t + 4*c)/6.0

where the figure, spanning three lines following gauss and preceding =, is a special pattern matching syntax called a grid pattern, defining array access. Since pattern matching is static, inferring the data access pattern for a stencil computation is simple (essentially parsing). As Ypnos is embedded in Haskell, grid patterns are provided by a small syntactic extension using Haskell's macroing system. A stencil computation is applied to an array, or grid in the Ypnos terminology, using a higher order function, such as run, e.g., x' = run gauss x. Various language invariants and program properties are encoded as Haskell types and enforced by the type system of the GHC compiler. For example, the gauss function has type: gauss :: (Safe (-1, 0) b, Safe (+1, 0) b, Safe (0, +1) b, Safe (0, -1) b, Num a) => Grid (Dim X :* Dim Y) b dyn a -> a The type class constraint Safe is used to ensure that the parameter grid has adequate boundary elements defined such that the gauss operator never causes an out-of-bounds error when applied using run. Currently there is an Ypnos implementation for two-dimensional arrays, but many of the core combinators are missing. Ypnos programs are highly data parallel, and in principle could be computed on a GPU.
The aim of this project is to extend the current implementation of Ypnos with GPU support (perhaps using GHC's ACCELERATE library), and extend the combinators currently provided to make programming in Ypnos more practical. Prior familiarity with Haskell is highly recommended. Other extensions include extending Ypnos to other data types (trees, meshes, graphs). There is scope for a project considering more theoretical aspects, such as working out a scheme for data-type generic grid pattern syntax, and exploring the mathematical structure behind Ypnos (based in category theory). Resources The current implementation can be found on GitHub. The original Ypnos paper describes the general motivation and philosophy of Ypnos as well as explaining its general features. A followup paper describes in more depth the type system of core Ypnos features, using the advanced type system features of GHC Haskell. • Inductive programming with ellipses and subscripts Ellipsis notation (...) is commonly used in mathematics and in pseudo-code as syntactic sugar for iteration, repetition, sequences, extension of a pattern, ... . For example, summation of numbers 0 to n might be written: 0 + 1 + ... + n or the summation of elements in a set might be written as the following, using subscripts to index the elements: \sum x = x0 + ... + x{n-1} (some more examples can be found here: Ellipsis/In mathematical notation (Wikipedia)). Haskell has some support for defining (inclusive) ranges using ellipses and the list notation e.g. [0..9] == [0,1,2,3,4,5,6,7,8,9] ['a'..'h'] == ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h'] = "abcdefgh" [0..] == [0,1,2,3,4,5,6,7,8,9,10,11,12,... -- infinite list [False .. True] = [False, True] [True .. False] = [] [0,2..9] == [0,2,4,6,8] -- note different form [a,b..c] where interval is specified [10..1] == [] [10,9..1] == [] [0,2..] = [0,2,4,6,...
-- infinite list [0.1..5] == [0.1, 1.1, 2.1, 3.1, 4.1, 5.1] -- slightly oddly Other languages use ellipses in similar ways, such as ranges in Perl, Ruby and variable arguments in C. In Haskell, any type which is an instance of the Enum type class has range operations using ellipses (see Enum documentation), which is defined for many orderable types. Ellipses and subscripting notations are compact and concise and used pervasively. Why can't we use them in programming more widely and form more operation than just defining ranges? The aim of this project is to extended support for programming with ellipses and subscripts. 1. Compare: map f [] = [] map f (x:xs) = (f x):(map f xs) with: map f [x_0, ..., x_n] = [f x_0, ..., f x_n] 2. Compare: g [] = 0 g (x:xs) = x + (g xs) with: g [x_0, ..., x_n] = x_0 + ... + x_n Can this be extended to inductively-defined types? For example: data BTree a = Leaf a | Node a (BTree a) (BTree a) treeMap f (Node x (... Leaf y) (... Leaf z)) = Node (f x) (... (Leaf (f y)) (... (Leaf (f z)))) The project might take the form of extending the GHC compiler for Haskell with an extension for programming with ellipsis/subscripts, or providing macro support using Haskell's macro/quotation system.
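As an illustrative aside (not part of the project description), Python captures some of the same finite ranges with its built-in range, while the infinite ranges need generators rather than lazy lists:

```python
import itertools

# Finite analogues of Haskell's ellipsis ranges, using Python's range.
assert list(range(0, 10)) == [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]        # [0..9]
# ['a'..'h'] has no direct analogue; build it from character codes:
assert [chr(c) for c in range(ord('a'), ord('h') + 1)] == list("abcdefgh")
assert list(range(0, 9, 2)) == [0, 2, 4, 6, 8]                     # [0,2..9]
assert list(range(10, 1)) == []                                    # [10..1]
assert list(range(10, 0, -1)) == [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]   # [10,9..1]

# Haskell's [0..] is a lazy infinite list; Python's nearest analogue
# is a generator such as itertools.count:
assert list(itertools.islice(itertools.count(0), 5)) == [0, 1, 2, 3, 4]
```

The contrast motivates the project: Python's ranges, like Haskell's, stop at enumeration, and neither gives the general ellipsis-pattern definitions (e.g. map over [x_0, ..., x_n]) proposed above.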
# Activation Energy Formula

In order to start a reaction, molecules require energy. This can be understood by a simple example: molecules need some kinetic energy, or velocity, to collide with other molecules and start a reaction. No reaction will take place if collisions don't happen, or if molecules don't have enough kinetic energy. The energy needed to initiate the reaction is known as activation energy.

"Activation energy is the minimum amount of energy that is needed to start a chemical reaction."

Activation Energy Formula

$$\ln k = \ln A - \frac{E_a}{RT}$$

$$\log k = \log A - \frac{E_a}{2.303\,RT}$$

Where

k = rate constant
A = frequency factor
Ea = activation energy
R = gas constant
T = absolute temperature

If we know the rate constants k1 and k2 at temperatures T1 and T2, the activation energy formula is

$$\log\frac{k_2}{k_1} = \frac{E_a}{2.303\,R}\left(\frac{T_2 - T_1}{T_1 T_2}\right)$$

Solved Examples

Question 1: The rate constant for the reaction 2N2O5(g) → 4NO2(g) + O2(g) is 5.0 × 10^-4 s^-1. The frequency factor is 2.812 × 10^13 s^-1. Find the activation energy of the reaction at 45°C.

Solution: T = 273 + 45 = 318 K

$$\log k = \log A - \frac{E_a}{2.303\,RT}$$

Therefore, substitute all the values and rearrange the equation to get the Ea value:

$$\log(5.0\times 10^{-4}) = \log(2.812\times 10^{13}) - \frac{E_a}{2.303 \times 8.314 \times 318}$$

Ea = 102000 J
Ea = 102 kJ

Question 2: The rate of a reaction quadruples when the temperature changes from 293 K to 313 K. Find the activation energy of the reaction, assuming that it does not change with temperature.

Solution: T1 and T2 are 293 K and 313 K. Rearrange the following equation to get Ea:

$$\log\frac{k_2}{k_1} = \frac{E_a}{2.303\,R}\left(\frac{T_2 - T_1}{T_1 T_2}\right)$$

so that

$$E_a = 2.303\,R \times \frac{T_1 T_2}{T_2 - T_1} \times \log\frac{k_2}{k_1}$$

$$\frac{T_1 T_2}{T_2 - T_1} = \frac{293 \times 313}{20} = 4585.45\ \mathrm{K}$$

$$\log\frac{k_2}{k_1} = \log 4 = 0.6021$$

Ea = 2.303 × 8.314 J K^-1 mol^-1 × 4585.45 K × 0.6021
Ea = 52848 J mol^-1
Ea = 52.8 kJ mol^-1
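Both worked examples can be checked numerically. The following Python script is illustrative (the helper names are ours, not from the original source); it assumes, as the worked solution's use of log 4 indicates, that "the rate increases" in Question 2 means the rate constant quadruples.

```python
import math

R = 8.314  # gas constant, J K^-1 mol^-1

def activation_energy(k, A, T):
    """Ea from one rate constant via ln k = ln A - Ea/(R T)."""
    return R * T * math.log(A / k)

def activation_energy_two_temps(k2_over_k1, T1, T2):
    """Ea from the ratio of rate constants at two temperatures,
    via log(k2/k1) = (Ea / 2.303 R) * (T2 - T1)/(T1 T2)."""
    return R * math.log(k2_over_k1) * (T1 * T2) / (T2 - T1)

# Question 1: k = 5.0e-4 s^-1, A = 2.812e13 s^-1, T = 318 K
Ea1 = activation_energy(5.0e-4, 2.812e13, 318)   # ~1.02e5 J, i.e. ~102 kJ

# Question 2: rate constant quadruples between 293 K and 313 K
Ea2 = activation_energy_two_temps(4, 293, 313)   # ~5.29e4 J, i.e. ~52.8 kJ
```

Note that the two-temperature helper uses the natural log directly; dividing ln by 2.303 to get log10 and multiplying back by 2.303, as the hand calculation does, cancels out.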
• If pepsin and trypsin had to function together, what pH would produce the highest activity?

• A fair coin is flipped 10 times and lands on heads 8 times. Why is there a difference between the experimental and theoretical probabilities? Observing 8 heads in 10 coin flips...

• A(n) _ is a well-developed set of ideas that proposes an explanation for observed phenomena.

• Some elementary particles are positively or negatively: a. adverse particles b. charged particles c. fueled particles d. progressive particles.

• What is the difference in the structure of the enzyme RNA polymerase between eubacteria and archaebacteria? A. Archaebacteria's RNA polymerase has 4-5 subunits. B. Eubacterial RNA polymerase has 10-20 subunits. C. Archaebacteria's RNA polymerase has 8-12 subunits. D. Eubacterial RNA polymerase has ...

• Describe 4 new pieces of technology that were introduced during World War One. What kind of impact did they have?

• The City of South River budget for the fiscal year ended June 30, 2020, included an appropriation for the police department in the amount of $8,706,000. During the month of July 2020, the following transactions occurred (in summary): purchase orders were issued in the amount of $533,000. Of the $533...

• How does the author use her personal experiences in the text to introduce and develop her main idea? Cite evidence (at least two pieces) from the text to support your response.

• ACE inhibitor / beta-blocker / antiplatelet agent / CCB: To gain some relief from angina pain, Mark is taking medicines that prevent calcium movement into the heart cells. Doctors have diagnosed Denis with thrombosis. He is taking medicines that prevent the formation of blood clots by preventing...

• I need to complete the chart below with the correct information.

• One hot day at the carnival you decide to buy yourself a snow cone. The height of the cone-shaped container is 5 in and its radius is 2 in. The shaved ice is perfectly rounded on top, forming a hemisphere. A. What is the volume of the ice in your frozen treat? B. Describe the cross sections crea...

• A golfer hit a golf ball from a tee box that is 6 yards above the ground. The graph shows the height in yards of the golf ball above the ground as a quadratic function of x, the horizontal distance in yards of the golf ball from the tee box. What is the domain of the function for this situation?

• What is "A Unique Take on Beauty" about?

• Driver ed: motorcyclists are easily hidden in large blind spots and the exposed rider is in constant danger. a. true b. false

• Which of the following statements is FALSE? a. The transcontinental railroad was a combined effort between the United States government, cities, counties, and businesses. b. By the 1830s railroad construction had begun all across America, but the Civil War halted further construction. c. The transconti...

• The coordinates of the points A and B are (0, 6) and (8, 0) respectively. (i) Find the equation of the line passing through A and B. Given that the line y = x + 1 cuts the line AB at the point M, find (ii) the coordinates of M, (iii) the equation of the line which passes through M and is parallel to the -axi...

• Please help me. Quickly!!

• Phagocytosis is a process that most likely results in: a. absorption of nutrients like amino acids and glucose b. the release of chemicals that "pokes" holes in cells c. the death of a cell that is virus-infected d. the death of bacteria as it becomes engulfed and digested.

• Identify the type of symbiotic relationship described in each scenario. Some wasps lay their eggs on caterpillars called tomato hornworms. When the eggs hatch, the young wasps burrow into the caterpillar's body and eat it alive. The adult wasps then fly away. This is an example of (a. mutualism, ...
CodePSU 2018 - Intermediate
Start: 2018-03-18 18:00 UTC
End: 2018-03-18 22:00 UTC

Problem G: Accelerated Compulsive Meteors

You have just started playing a new space-based role-playing game, Galaxy War Online. The evil empire has been reigning over the universe for too long, and a new coalition has been forming to try to knock the empire out of power. You are a commander in the new coalition, in charge of the Accelerated Compulsive Meteors Brigade (or ACM for short). As part of your role as commander, you must keep track of your sub-brigades while they are in battle. You must be able to produce a snapshot of your brigade to show your superiors its state.

1. At the top of the brigade is you.
2. You have two lieutenants who are under you.
3. Each lieutenant is in command of two sub-brigades, each with one captain.
4. Each captain is in control of two areas, each with one sub-captain.
5. Each sub-captain has two floats, each float has two pilots, each pilot has two sub-pilots, and so on.

Your brigade is divided into a number of levels, the first level being you and the last level being your front line of defense. As seen in the structure above, each level has twice the number of people as the level before it. As the war happens, you must update the state of your brigade. As certain people in your brigade get eliminated by the enemy forces, everyone under them is eliminated with them. You must keep track of your brigade as these eliminations happen and be ready to show it to your bosses.

Input

You are given an integer n ($0 \leq n \leq 10^4$) which corresponds to the number of elements in your brigade. Each element has a unique id from 1 to n. As commander, you stand at the top of the hierarchy, so you are number 1. Your two lieutenants are numbers 2 (to your left) and 3 (to your right).
Under number 2 are 4 (to its left) and 5 (to its right), and under number 3 are 6 (to its left) and 7 (to its right). This tree-like pattern continues until you reach n elements.

You are then given an integer $x$ ($0 \leq x \leq 101$) which corresponds to the number of commands to execute. Each of the $x$ lines after that has one of two commands: Eliminate or Status. An Eliminate command is followed by an integer corresponding to the id of who is being eliminated. When a commander is eliminated, all of the troops under their command are also eliminated. Status asks you to print the status of your brigade, printing each member left in your brigade on their own line in increasing order (see the test cases below).

Output

For each Status command, print your brigade by levels, starting with you as the first element and continuing down the structure, ignoring those that have been eliminated.

Sample Input 1:
7
2
Eliminate 2
Status

Sample Output 1:
1
3
6
7

Sample Input 2:
4
2
Eliminate 2
Status

Sample Output 2:
1
3
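One possible solution sketch (unofficial, for illustration) stores the hierarchy implicitly: since the subordinates of id i are ids 2i and 2i + 1, an Eliminate command is just a subtree traversal over an array of alive flags.

```python
def simulate(n, commands):
    """Track eliminations in a complete binary hierarchy with ids 1..n.

    Children of id i are 2*i and 2*i + 1 (when they are <= n).
    Eliminating an id removes its whole subtree; each Status records
    the survivors in increasing order.
    """
    alive = [True] * (n + 1)
    snapshots = []
    for cmd in commands:
        if cmd.startswith("Eliminate"):
            stack = [int(cmd.split()[1])]
            while stack:
                i = stack.pop()
                if i <= n and alive[i]:
                    alive[i] = False
                    stack.extend((2 * i, 2 * i + 1))
        else:  # Status
            snapshots.append([i for i in range(1, n + 1) if alive[i]])
    return snapshots

# Sample 1: n = 7, eliminating 2 also removes 4 and 5.
assert simulate(7, ["Eliminate 2", "Status"]) == [[1, 3, 6, 7]]
# Sample 2: n = 4, eliminating 2 also removes 4.
assert simulate(4, ["Eliminate 2", "Status"]) == [[1, 3]]
```

Each Eliminate touches every node at most once, so the total work is O(n) per command, comfortably within the stated bounds.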
# Variability

The table below gives the number of defective goods produced in two shifts:

morning shift: 2; 0; 6; 10; 2; 2; 4; 2; 5; 2;
afternoon shift: 4; 4; 0; 2; 10; 2; 6; 2; 3; 10;

Compare the variability in both shifts, compare the average number of defective goods in both shifts, as well as other variability and position measures.

a1 = 3.5
a2 = 4.3
s1 = 2.7295
s2 = 3.2265

### Step-by-step explanation:

$${a}_{1}=\left(2+0+6+10+2+2+4+2+5+2\right)/10=\frac{35}{10}=3.5$$

$${a}_{2}=\left(4+4+0+2+10+2+6+2+3+10\right)/10=\frac{43}{10}=4.3$$

$${s}_{1}=\sqrt{\frac{(2-3.5)^2+(0-3.5)^2+\cdots+(2-3.5)^2}{10}}=\sqrt{7.45}\approx 2.7295$$

$${s}_{2}=\sqrt{\frac{(4-4.3)^2+(4-4.3)^2+\cdots+(10-4.3)^2}{10}}=\sqrt{10.41}\approx 3.2265$$

The afternoon shift has both a higher average number of defective goods (4.3 vs. 3.5) and greater variability (standard deviation 3.2265 vs. 2.7295).
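The reported statistics can be verified with a few lines of Python; note that, as in the worked answer, the population standard deviation (dividing by n, not n - 1) is used.

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def pop_std(xs):
    """Population standard deviation, dividing by n as in the worked answer."""
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

morning = [2, 0, 6, 10, 2, 2, 4, 2, 5, 2]
afternoon = [4, 4, 0, 2, 10, 2, 6, 2, 3, 10]

assert mean(morning) == 3.5 and mean(afternoon) == 4.3
assert abs(pop_std(morning) - 2.7295) < 1e-4
assert abs(pop_std(afternoon) - 3.2265) < 1e-4
```

Switching to the sample standard deviation (dividing by n - 1) would give slightly larger values, so the choice of divisor matters when comparing against the published answers.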
# A question on Taylor series and polynomials

Suppose $f(x)$ is infinitely differentiable in $[a,b]$, and for every $c\in[a,b]$ the series $\sum\limits_{n=0}^\infty \cfrac{f^{(n)}(c)}{n!}(x-c)^n$ is a polynomial. Is it true that $f(x)$ is a polynomial?

I can show it is true if for every $c\in [a,b]$, there exists a neighborhood $U_c$ of $c$ such that $$f(x)=\sum\limits_{n=0}^\infty \cfrac{f^{(n)}(c)}{n!}(x-c)^n\quad\text{for every }x\in U_c,$$ but this equality is not always true. What can I do when $f(x)\not=\sum\limits_{n=0}^\infty \cfrac{f^{(n)}(c)}{n!}(x-c)^n$?

-

Two solutions starting from weaker assumptions are given in this MO thread – t.b. Dec 22 '11 at 13:39

Put $F_n:=\bigcap_{k\geq n}\{x\in [a,b], f^{(k)}(x)=0\}$ and apply Baire's category theorem. – Davide Giraudo Dec 22 '11 at 13:39

I'm left wondering if the stronger assumptions here permit some more elementary proof. – leonbloy Dec 22 '11 at 14:44

@t.b. Would you (or @Davide) mind typing up a correct answer (possibly just taken from MO), perhaps as community wiki? (Or, I can do it if no one else wants to.) There are currently 10 incorrect answers (some deleted), and no correct answers. – Jason DeVito Oct 22 '12 at 19:44

As I confirmed here, if for every $c\in[a,b]$ the series $\sum\limits_{n=0}^\infty \cfrac{f^{(n)}(c)}{n!}(x-c)^n$ is a polynomial, then for every $c\in[a,b]$ there exists a $k_c$ such that $f^{(n)}(c)=0$ for $n>k_c$. If $\max(k_c)$ is finite, we're done: $f(x)$ is a polynomial of degree $\le\max(k_c)$. If $\max(k_c)=\infty$ it means there is an infinite number of unbounded $k_c$'s, but $f$ is infinitely differentiable, so (hand waving) the $c$'s can't have a limit point; i.e., although $\max(k_c)=\infty$, it can't be that $\lim_{c\to c_\infty}k_c=\infty$ for some $c_\infty\in[a,b]$, because that would mean $k_{c_\infty}=\infty$, i.e. not a polynomial. So the infinite number of unbounded $k_c$'s needs to be spread apart, e.g. like a Cantor set.
Does this suggest a counterexample, or can a Cantor-like distribution of $k_c$'s never be infinitely differentiable?

-

1. All polynomials are power series, but not all power series are polynomials. For a power series $\displaystyle f(x) = \sum_{k=0}^\infty a_k \left( x-c \right)^k = a_0 + a_1 (x-c)^1 + a_2 (x-c)^2 + a_3 (x-c)^3 + \cdots$ to be a polynomial of degree $n$, we need $a_k = 0$ for all $k>n$.

2. If $f(x)$ is infinitely differentiable in the interval $[a,b]$, then for every $k \in \mathbb{N}$, $f^{(k)}(x) \in \mathbb{R}$, i.e. it exists as a finite number. The Taylor series of $f(x)$ in the neighbourhood of $c$ is $\sum\limits_{k=0}^\infty \cfrac{f^{(k)}(c)}{k!}(x-c)^k$, and

3. If the remainder $R_N(x) = f(x) - \sum\limits_{k=0}^N \cfrac{f^{(k)}(c)}{k!}(x-c)^k$ converges to $0$ as $N\to\infty$, then $f(x) = \sum\limits_{k=0}^\infty \cfrac{f^{(k)}(c)}{k!}(x-c)^k$.

4. Taylor's inequality: If $|f^{(N+1)}(x)|\le B$ for all $x$ in the interval $[a, b]$, then the remainder $R_N(x)$ (for the Taylor polynomial of $f(x)$ at $x = c$) satisfies the inequality $$|R_N(x)|\le \cfrac{B}{(N+1)!}|x-c|^{N+1}$$ for all $x$ in $[c-d, c+d]$, and if the right-hand side of this inequality converges to $0$ then $R_N(x)$ also converges to $0$.
According to your question, supposing that $\sum\limits_{k=0}^\infty \cfrac{f^{(k)}(c)}{k!}(x-c)^k$ is a polynomial for every $c \in [a,b]$, this translates to $$\text{given } c\in[a,b],\ \exists n_c\in \mathbb N\ (n_c \text{ depends on } c) \quad\text{such that}\quad \sum\limits_{k=0}^\infty \cfrac{f^{(k)}(c)}{k!}(x-c)^k=P_{n_c}(x)$$ $$\text{and}\quad \forall k>n_c,\ k\in \mathbb N,\ f^{(k)}(c)=0.$$ This is true because if one looks at the finite sum with $N\ge n_c$, $$\sum^N_{k=0} a_k(x-c)^k=\sum^N_{k=0}\sum^k_{i=0}a_k\binom ki(-1)^{k-i} c^{k-i}x^{i}=\sum^N_{i=0}x^{i}\sum^N_{k=i}a_k\binom ki(-1)^{k-i} c^{k-i}$$ and if this is a polynomial $P_{n_c}(x)$ of degree $n_c$, then $$\forall i>n_c,\ \ \sum^N_{k=i}a_k\binom ki(-1)^{k-i} c^{k-i}=0.$$ Solving this system of equations gives that $a_k=0$ for all $n_c<k\le N$, and $$a_k=\cfrac{f^{(k)}(c)}{k!}=0\implies f^{(k)}(c)=0, \ \ \forall k>n_c.$$ This holds as $N\rightarrow \infty$.

Since $n_c$ depends on each $c\in[a,b]$, it is sufficient to take $\displaystyle n=\max_{c\in[a,b]} (n_c)$ such that for any $c\in [a,b]$ and for any $k>n,\ k\in \mathbb N$, we have $f^{(k)}(c)=0$. Thus, the Taylor series of $f$ is a polynomial of degree $\displaystyle n=\max_{c\in[a,b]} (n_c)$, because $\displaystyle f(x) = \sum_{k=0}^\infty a_k \left( x-c \right)^k=P_n(x)$.

At this point it is sufficient to prove that $\displaystyle f(x) = \sum_{k=0}^\infty a_k \left( x-c \right)^k=P_n(x)$ using the Taylor remainder theorem (#4). We've already found that $f^{(k)}(c) = 0,\ \forall k>n$, thus $f^{(n+1)}(x) = 0$, i.e. $|f^{(n+1)}(x)| \le 0$ (to work with inequalities), which implies that $B = 0$. At this point it is clear that $|R_N(x)|\le \cfrac{B}{(N+1)!}|x-c|^{N+1} = 0$, and we can conclude that $R_N(x)$ converges to $0$ and that $f(x) = \sum\limits_{k=0}^\infty \cfrac{f^{(k)}(c)}{k!}(x-c)^k = P_n(x)$.
$f$ is a polynomial.

-

The point is that $k$ is allowed to depend on $c$, whereas our $k$ is independent of $c$. – Jason DeVito Oct 22 '12 at 19:42

why the down-vote? – user31280 Nov 11 '12 at 16:33

I didn't downvote, but as I said in my comment, this doesn't answer the OP's question. The OP's question is this: "We know that for each point $c$, the Taylor series at $c$ is a polynomial. Why is the original function a polynomial?" We do not know that at each point $c$, the Taylor series is a polynomial of degree $n$ for some $n$, because $n$ can vary as $c$ varies. In particular, your use of point $4$ is invalid because it's possible that at some point $c$, there is no universal $n$ that works on any interval containing $c$. – Jason DeVito Nov 11 '12 at 18:21

@JasonDeVito I didn't get your point the first time. I'll fix the answer after working on it. Thanks. – user31280 Nov 11 '12 at 18:32

Well, I didn't make my point very well the first time - sorry about that! – Jason DeVito Nov 11 '12 at 19:01

At first, the Taylor series is an approximation of any function for a value between $a$ and $b$, given that the function is differentiable in the closed interval between $a$ and $b$. If the equality is not always true, there exists some neighborhood of $c$ inside $[a,b]$ where the function is not differentiable. In this sense, the assumption that the function is differentiable in the closed interval between $a$ and $b$ fails. And hence, the coefficients of the terms in the Taylor series cannot be found. Simply not using this formula in that case is the solution.

-

Taylor's theorem states that every function which is differentiable over an interval $[a,b]$ can be rewritten as a polynomial. It doesn't mean the function was a polynomial in the first place. Consider $\sin(x)$, which is uniformly differentiable. The Taylor polynomial (for any interval $(a,b)$) is actually the function itself, but $\sin(x)$ is not a polynomial.

-

I think the OP means that the Taylor series is finite.
– Javier Aug 23 '12 at 14:33

You're using the word polynomial with two different meanings, saying that $\sin x$ can be written as a polynomial, but saying it is not a polynomial. If something can be written as a polynomial, then it is a polynomial. If you want to distinguish, you could say $\sin x$ can be written as an infinite polynomial, but not a finite polynomial. But, without the adjective in front, polynomial will almost always be taken to mean finite polynomial. So, I don't think your answer adds anything. – Graphth Sep 28 '12 at 20:20

Unless I'm missing something, why doesn't the following work? Pick a $c\in[a,b]$. By assumption $g(x)=\sum\limits_{n=0}^\infty \cfrac{f^{(n)}(c)}{n!} (x-c)^n$ is a polynomial (I'm assuming this is supposed to mean that the series converges to a polynomial function on some nonzero-size interval around $c$). This says that $g^{(k)}(c) = 0$ for $k>d$, where $d$ is the degree of that polynomial. The Taylor series $g(x)=\sum\limits_{n=0}^d \cfrac{g^{(n)}(c)}{n!} (x-c)^n$, which is just a polynomial, has the same value as $\sum\limits_{n=0}^\infty \cfrac{f^{(n)}(c)}{n!} (x-c)^n$ on some interval, and therefore the coefficients are equal. We conclude that $f^{(k)}(c)=g^{(k)}(c) = 0$ for $k>d$.

To show that $f(x)$ agrees with its expansion around $c$, consider the Lagrange form of the remainder: $$f(x) = \sum_{n=0}^k \frac{f^{(n)}(c)}{n!} (x-c)^n + f^{(k+1)}(h)\frac{(x-c)^{k+1}}{(k+1)!}$$ where $h$ lies between $c$ and $x$, and $x\in [a,b]$. That the equality holds for $x\in[a,b]$ is guaranteed since $f$ is $k+1$ times differentiable on $[a,b]$. We choose $k$ so that $k+1>d$; this guarantees $f^{(k+1)}=0$ and simplifies the above to $$f(x) = \sum_{n=0}^k \frac{f^{(n)}(c)}{n!} (x-c)^n$$ where $x\in [a,b]$. In other words, $f$ is a polynomial.

-

The problem is the assumption that $f^{(k+1)}(h)=0$, when you actually only know that $f^{(k+1)}(c)=0$. Note that $d$ depends on $c$.
(Or rather, you have to prove that $d$ can be chosen independently of $c$.) (By the way: the series being a polynomial just means what you concluded, that for all $c$ there exists $d$ such that $f^{(k)}(c)=0$ for all $k>d$. Note that polynomials converge everywhere, so reference to "nonzero size interval" is unnecessary.) – Jonas Meyer Dec 24 '11 at 3:56
# Lesson 5

Two Equations for Each Relationship

The practice problem answers are available at one of our IM Certified Partners.

### Problem 1

The table represents the relationship between a length measured in meters and the same length measured in kilometers.

1. Complete the table.
2. Write an equation for converting the number of meters to kilometers. Use $$x$$ for the number of meters and $$y$$ for the number of kilometers.

| meters | kilometers |
|--------|------------|
| 1,000  | 1          |
| 3,500  |            |
|        | 500        |
| 75     |            |
|        | 1          |
| $$x$$  |            |

### Problem 2

Concrete building blocks weigh 28 pounds each. Using $$b$$ for the number of concrete blocks and $$w$$ for the weight, write two equations that relate the two variables. One equation should begin with $$w =$$ and the other should begin with $$b =$$.

### Problem 3

A store sells rope by the meter. The equation $$p = 0.8L$$ represents the price $$p$$ (in dollars) of a piece of nylon rope that is $$L$$ meters long.

1. How much does the nylon rope cost per meter?
2. How long is a piece of nylon rope that costs \$1.00?

### Problem 4

The table represents a proportional relationship. Find the constant of proportionality and write an equation to represent the relationship.

| $$a$$ | $$y$$            |
|-------|------------------|
| 2     | $$\frac23$$      |
| 3     | 1                |
| 10    | $$\frac{10}{3}$$ |
| 12    | 4                |

Constant of proportionality: __________

Equation: $$y =$$

(From Grade 7, Unit 2, Lesson 4.)

### Problem 5

On a map of Chicago, 1 cm represents 100 m. Select all statements that express the same scale.

A: 5 cm on the map represents 50 m in Chicago.
B: 1 mm on the map represents 10 m in Chicago.
C: 1 km in Chicago is represented by 10 cm on the map.
D: 100 cm in Chicago is represented by 1 m on the map.

(From Grade 7, Unit 1, Lesson 8.)
# What is the complexity of chordalization?

A graph $G=(V,E)$ is a chordal graph if it does not contain an induced cycle of length at least four. We say a graph $H$ is a chordalization of a graph $G$ if $H$ contains $G$ as a subgraph and $H$ is chordal.

$Q_1$: Find the minimum number of edges whose addition to a given graph makes the graph chordal.

According to this, $Q_1$ is NP-hard.

$Q_2$: Find a chordalization that does not introduce a new $K_4$.

What is the complexity of $Q_2$? Is $Q_2$ harder than $Q_1$?

{ Remark: After Florent's comment, I changed $Q_1$ from the following:

$Q_1$ in the first version of my post: What is the complexity of giving an arbitrary chordalization of the input graph? }

• Since a complete graph is chordal, if you're looking for an arbitrary chordalization, I guess that adding all possible edges will suffice for your first question! – Florent Foucaud Jun 19 '12 at 13:24

To get a chordal graph, fix any vertex ordering of $G$ and, vertex by vertex, make the neighborhood a clique and delete the vertex. The set of introduced edges will make $G$ chordal (this is also called a fill-in). Thus, your first question is efficiently solvable.

For the second one, I'm not so sure. First of all, it should be easy to construct graphs $G$ for which no chordalization without a new $K_4$ is possible. In fact, if you have a biclique (two independent sets with all possible edges between them) then you have to make one side a clique. Furthermore, the treewidth of $G$ is the minimum clique size minus one over all chordalizations $H$ of $G$. The Treewidth problem is NP-complete. Finding graphs $H$ that have essentially the same cliques as $G$ therefore seems hard. Hence, my guess(!) is that it is NP-hard to decide whether such a chordalization exists.
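The elimination-ordering fill-in described in the answer can be sketched in a few lines. The following Python rendering is illustrative (not from the thread); it processes vertices in a fixed order, turns each vertex's not-yet-eliminated neighbourhood into a clique, and returns the added edges.

```python
def fill_in(n, edges, order=None):
    """Greedy elimination fill-in on vertices 0..n-1.

    Process vertices in `order` (default 0..n-1); for each vertex, make
    its set of not-yet-eliminated neighbours a clique, then eliminate it.
    Returns the set of added edges (as sorted pairs); the input graph
    plus these edges is chordal. Note this yields *a* chordalization,
    not a minimum one: minimising the number of added edges is NP-hard.
    """
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    order = list(range(n)) if order is None else order
    eliminated, added = set(), set()
    for v in order:
        nbrs = [u for u in adj[v] if u not in eliminated]
        for i in range(len(nbrs)):
            for j in range(i + 1, len(nbrs)):
                a, b = nbrs[i], nbrs[j]
                if b not in adj[a]:
                    adj[a].add(b)
                    adj[b].add(a)
                    added.add((min(a, b), max(a, b)))
        eliminated.add(v)
    return added

# The 4-cycle 0-1-2-3-0 is the smallest non-chordal graph; eliminating
# vertex 0 first forces the chord {1, 3}.
assert fill_in(4, [(0, 1), (1, 2), (2, 3), (3, 0)]) == {(1, 3)}
```

The choice of elimination order changes which (and how many) fill edges appear, which is exactly why the minimisation version, $Q_1$, is hard.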
# The Unapologetic Mathematician ## Characteristic Functions as Idempotents I just talked about characteristic functions as masks on other functions. Given a function $f:X\rightarrow\mathbb{R}$ and a subset $S\subseteq X$, we can mask the function $f$ to the subset $S$ by multiplying it by the characteristic function $\chi_S$. I want to talk a little more about these functions and how they relate to set theory. First of all, it’s easy to recognize a characteristic function when we see one: they’re exactly the idempotent functions. That is, ${\chi_S}^2=\chi_S$, and if $f^2=f$ then $f$ must be $\chi_S$ for some set $S$. Indeed, given a real number $a$, we can only have $a^2=a$ if $a=0$ or $a=1$. That is, $f(x)=0$ or $f(x)=1$ for every $x$. So we can define $S$ to be the set of $x\in X$ for which $f(x)=1$, and then $f(x)=\chi_S(x)$ for every $x\in X$. Thus the idempotents in the algebra of real-valued functions on $X$ correspond exactly to the subsets of $X$. We can define two operations on such idempotent functions to make them into a lattice. The easier to define is the meet. Given idempotents $\chi_S$ and $\chi_T$ we define the meet to be their product: $\displaystyle\left[\chi_S\wedge\chi_T\right](x)=\chi_S(x)\chi_T(x)$ This function will take the value ${1}$ at a point $x$ if and only if both $\chi_S$ and $\chi_T$ do, so this is the characteristic function of the intersection $\displaystyle\chi_S\wedge\chi_T=\chi_{S\cap T}$ We might hope that the join would be the sum of two idempotents, but in general this will not be another idempotent. Indeed, we can check: $\displaystyle(\chi_S+\chi_T)^2={\chi_S}^2+{\chi_T}^2+2\chi_S\chi_T=(\chi_S+\chi_T)+2\chi_{S\cap T}$ We have a problem exactly when the corresponding sets have a nonempty intersection, which leads us to think that maybe this has something to do with the inclusion-exclusion principle. 
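These first facts, idempotence and the meet-as-product, are easy to confirm concretely; here is a quick, illustrative Python check over a finite $X$ (the helper names are ours):

```python
X = range(10)

def chi(S):
    """Characteristic function of S, as a 0/1-valued function on X."""
    return lambda x: 1 if x in S else 0

def meet(f, g):
    """Pointwise product of idempotents: the meet."""
    return lambda x: f(x) * g(x)

S, T = {1, 2, 3, 4}, {3, 4, 5, 6}
for x in X:
    assert chi(S)(x) ** 2 == chi(S)(x)               # idempotence
    assert meet(chi(S), chi(T))(x) == chi(S & T)(x)  # chi_{S ∩ T}
```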
We’re “overcounting” the intersection by just adding, so let’s subtract it off to define $\displaystyle\left[\chi_S\vee\chi_T\right](x)=\chi_S(x)+\chi_T(x)-\chi_S(x)\chi_T(x)$ We can multiply this out to check its idempotence, or we could consider its values. If $x$ is not in $T$, then $\chi_T(x)=0$, and we find $\chi_S\vee\chi_T=\chi_S$ — it takes the value ${1}$ if $x\in S$ and ${0}$ otherwise. A similar calculation holds if $x\notin S$, which leaves only the case when $x\in S\cap T$. But now $\chi_S(x)$ and $\chi_T(x)$ both take the value ${1}$, and a quick calculation shows that $\chi_S\vee\chi_T$ does as well. This establishes that $\displaystyle\chi_S\vee\chi_T=\chi_{S\cup T}$ We can push further and make this into an orthocomplemented lattice. We define the orthocomplement of an idempotent by $\displaystyle\left[\neg\chi_S\right](x)=1-\chi_S(x)$ This function is ${1}$ wherever $\chi_S$ is ${0}$, and vice-versa. That is, it’s the characteristic function of the complement $\displaystyle\neg\chi_S=\chi_{X\setminus S}$ So we can take the lattice of subsets of $X$ and realize it in the nice, concrete algebra of real-valued functions on $X$. The objects of the lattice are exactly the idempotents of this algebra, and we can build the meet and join from the algebraic operations of addition and multiplication. In fact, we could turn around and do this for any commutative algebra to create a lattice, which would mimic the “lattice of subsets” of some “set”, which emerges from the algebra. This sort of trick is a key insight to quite a lot of modern geometry. December 23, 2009 Posted by | Algebra, Fundamentals, Lattices | 2 Comments
Under the auspices of the Computational Complexity Foundation (CCF)

### Paper: TR20-124 | 3rd August 2020 21:36

#### A Strong XOR Lemma for Randomized Query Complexity

Authors: Joshua Brody, JaeTak Kim, Peem Lerdputtipongporn, Hariharan Srinivasulu
Publication: 17th August 2020 16:19

We give a strong direct sum theorem for computing $XOR \circ g$. Specifically, we show that the randomized query complexity of computing the XOR of $k$ instances of $g$ satisfies $\bar{R}_\varepsilon(XOR \circ g)=\Theta(k\bar{R}_{\varepsilon/k}(g))$. This matches the naive success amplification bound and answers a question of Blais and Brody. As a consequence of our strong direct sum theorem, we give a total function $g$ for which $R(XOR \circ g) = \Theta(k\log(k)R(g))$, answering an open question from Ben-David et al.
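For concreteness, the composed function $XOR \circ g$ studied here evaluates $g$ on each of the $k$ input blocks and XORs the $k$ one-bit answers. A minimal Python sketch, where $g$ is chosen arbitrarily as 3-bit majority for illustration (it is not a function from the paper):

```python
from functools import reduce

# XOR ∘ g on k independent instances: evaluate g on each block of inputs,
# then XOR the k one-bit answers together. g is 3-bit majority, an
# arbitrary example choice.
def g(bits):
    return 1 if sum(bits) >= 2 else 0

def xor_of_g(instances):
    return reduce(lambda a, b: a ^ b, (g(x) for x in instances))

assert xor_of_g([(1, 1, 0), (0, 0, 1)]) == 1  # g=1, g=0 -> 1 XOR 0 = 1
assert xor_of_g([(1, 1, 0), (0, 1, 1)]) == 0  # g=1, g=1 -> 0
```

The direct sum question is how the query cost of `xor_of_g` must scale with $k$ relative to the cost of a single call to `g`.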
All solutions here are SUGGESTED. Mr. Teng will hold no liability for any errors. Comments are entirely personal opinions.

(i) Number a list of all 5000 households from 1 to 5000. Since $\frac{5000}{100} = 50$, randomly select a number from 1 to 50, take that household as the starting point, and then sample at a constant interval of 50, e.g. households 10, 60, 110, …

(ii) Use simple random sampling within each stratum to obtain the sample.

(iii) Stratified sampling is more appropriate since it is more representative of users across the different ages and shopping methods.

### KS Comments

Some students overlook that for systematic sampling, we need to RANDOMLY select the starting point. For (ii), students can draw a nice table to present their answers.
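The systematic sampling procedure in (i) is easy to sketch in code; a minimal Python illustration (the function name is mine):

```python
import random

# Systematic sampling sketch for part (i): 5000 households, sample size 100,
# so the sampling interval is 5000 // 100 = 50. Pick a RANDOM start in 1..50,
# then take every 50th household from that start.
def systematic_sample(population_size=5000, sample_size=100, seed=None):
    rng = random.Random(seed)
    k = population_size // sample_size      # interval, here 50
    start = rng.randint(1, k)               # the random starting point
    return [start + i * k for i in range(sample_size)]

sample = systematic_sample(seed=1)
assert len(sample) == 100
assert all(1 <= h <= 5000 for h in sample)
# consecutive sampled households are exactly 50 apart
assert all(b - a == 50 for a, b in zip(sample, sample[1:]))
```

The random start is exactly the point the KS comment stresses: fixing the start (e.g. always beginning at household 1) would not be valid systematic sampling.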
# Optimization Resources

A collection of software and resources for nonlinear optimization. Maintained by Coralia Cartis (University of Oxford), Jaroslav Fowkes (University of Oxford) and Lindon Roberts (Australian National University).

There are many software packages for solving optimization problems. Here we provide a list of some available solvers that we have developed, and a number of other packages that are available for solving a broad variety of problems in several programming languages. On this page we focus on nonlinear optimization (not other important categories such as linear programming or integer programming problems).

#### Nonlinear Optimization Problems

These problems take the form $$\min_{x\in\mathbb{R}^n} f(x),$$ where $f$ is an 'objective' function (implemented as a piece of code) which takes the length-$n$ array $x$ of continuous variables and computes some quantity of interest depending on $x$. If the problem is to maximize $f$, then you can instead minimize $(-1)\times f(x)$ to put it in the form required by most solvers.

##### Constraints

It is often the case that the variables $x$ can only take some values, and we call such problems constrained: $$\min_{x\in\Omega} f(x),$$ where $\Omega$ is some region of $n$-dimensional space defining the valid choices of $x$. This 'feasible region' is usually represented by a collection of constraint functions of the form $c(x)\geq 0$ or $c(x)=0$. Some simple constraints are sometimes considered explicitly, such as box constraints $a_i \leq x_i \leq b_i$ (i.e. the $i$-th entry of $x$ must be between $a_i$ and $b_i$). In general it is better to represent constraints in the simplest possible way (e.g. don't use $e^x \leq 1$ when you could use $x\leq 0$ instead).

#### Data Fitting/Inverse Problems

One common special type of optimization problem is the nonlinear least-squares problem, which arises frequently in data fitting and inverse problems.
In this case, the objective function has the special form $$f(x) = \sum_{i=1}^{m} r_i(x)^2,$$ where each function $r_i(x)$ usually represents a fitting error corresponding to the $i$-th data point (out of $m$ total data points).

#### Problem Information to Supply

What information you have access to can determine what type of solver you should use. In general, most solvers will require you to provide a piece of code which computes $f(x)$ for any $x$ (and $c(x)$ for any constraints). Usually you will also need to be able to compute the gradient of the objective (and all constraints), that is, provide code which returns the $n$-dimensional vector $\nabla f(x)$ for any $x$ (where the $i$-th entry of $\nabla f(x)$ is the derivative of $f$ with respect to the $i$-th entry of $x$). You can implement a gradient calculation by explicitly calculating the derivatives (if you know the explicit mathematical form of $f$), or use finite differencing techniques or algorithmic differentiation software packages.

For nonlinear least-squares problems and problems with multiple constraints, you will often need to write code to return a vector of results (e.g. given $x$, return the length-$m$ vector $r(x)=[r_1(x) \: \cdots \: r_m(x)]$ for nonlinear least-squares problems). In this case, instead of the gradient $\nabla f(x)$ you will need to provide the Jacobian matrix of first derivatives. For instance, if you have a function $r(x)=[r_1(x) \: \cdots \: r_m(x)]$ where $x$ has $n$ entries, the Jacobian matrix has $m$ rows and $n$ columns, and the $i$-th row is $\nabla r_i(x)$.

Some solvers also allow you to provide code to calculate the second derivatives of $f$ (the Hessian of $f$), but this is usually not necessary. In general, the more information you can provide, the better the optimization solver will perform, so it is sometimes worth the effort of writing Hessian code.

If you are writing code which returns very large matrices (e.g.
Jacobians or Hessians), it is often useful to think about the sparsity of these matrices: which entries are always zero for every $x$? Many solvers allow you to provide Jacobians or Hessians as sparse matrices, which can give substantial savings on memory and computation time.

If your calculation of $f$ has some randomness (e.g. it requires a Monte Carlo simulation) or is very computationally expensive, then evaluating $\nabla f(x)$ may be very expensive or inaccurate. In this case, you may want to consider derivative-free solvers, which only require code to evaluate $f$.

• DFO-LS (open source, Python) - a derivative-free nonlinear least-squares solver (i.e. no first derivatives of the objective are required or estimated), with optional bound constraints. This solver is suitable when objective evaluations are expensive and/or have stochastic noise, and has many input parameters which allow it to adapt to these problem types. It is based on our previous solver DFO-GN.
• oBB (open source, Python) - a parallelized solver for global optimization problems with general constraints. It requires first derivatives of the objective and bounds on its second derivatives. It is part of the COIN-OR collection (see below).
• P-IPM-LP (open source, MATLAB) - a perturbed Interior Point Method for linear programming problems (i.e. minimize a linear function subject to linear constraints).
• Py-BOBYQA (open source, Python) - a derivative-free general objective solver, with optional bound constraints. The algorithm is based on Mike Powell's original BOBYQA (Fortran), but includes a number of the features from DFO-LS, which improve its performance for noisy problems.
• trustregion (open source, Python) - a wrapper to Fortran routines for solving the trust-region subproblem. This subproblem is an important component of some optimization solvers.

These are some software packages which we have had some involvement with, directly or indirectly:
• COIN-OR (open source, many languages) - an organization which maintains a collection of many open-source optimization packages for a wide variety of problems. A list of packages by problem type is here and the source code is on Github.
• FitBenchmarking (open source, Python) - a tool for comparing different optimization software for data fitting problems. The documentation is here and the source code is on Github.
• GALAHAD (open source, Fortran with MATLAB interface) - a collection of routines for constrained and unconstrained nonlinear optimization, including useful subroutines such as trust-region and cubic regularization subproblem solvers. The source code is on Github.
• NAG Library (proprietary, many languages) - a large collection of mathematical software, including optimization routines for many problem types (nonlinear, LPs, QPs, global, mixed integer, etc.). Note: some of our research has been partially supported by NAG.

Software packages:

• Ipopt (open source, C++ with interfaces to many languages including Python, MATLAB, C and Julia) - a package for solving inequality-constrained nonlinear optimization problems.
• Julia Smooth Optimizers and JuliaOpt (open source, Julia) - a collection of packages for optimization, including solvers, profiling tools (such as creating performance profiles) and a CUTEst interface.
• KNitro (proprietary, free trial available, MATLAB) - a high quality solver for general constrained and unconstrained nonlinear optimization problems.
• NLOpt (open source, many languages including Python, MATLAB, Fortran and C) - a library which wraps many different nonlinear optimization solvers in a unified interface. Solvers include SQP, augmented Lagrangian and L-BFGS (local), DIRECT (global) and BOBYQA (local, derivative-free).
• RALFit (open source, Fortran with interfaces to C and Python) - a package for solving nonlinear least-squares problems (such as some data fitting and inverse problems).
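Returning to the gradient and Jacobian information discussed earlier: if you cannot derive them by hand, the finite differencing technique mentioned above is straightforward to sketch. A minimal forward-difference Python example (all names are illustrative, not from any of the packages listed; a real solver or algorithmic differentiation tool would do this more carefully):

```python
# Forward-difference approximations of a gradient and a Jacobian.
def grad_fd(f, x, h=1e-7):
    # i-th entry: derivative of f with respect to x[i]
    fx = f(x)
    g = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += h
        g.append((f(xp) - fx) / h)
    return g

def jacobian_fd(r, x, h=1e-7):
    # m-by-n matrix whose i-th row approximates the gradient of r_i
    rx = r(x)
    J = [[0.0] * len(x) for _ in rx]
    for j in range(len(x)):
        xp = list(x)
        xp[j] += h
        rxp = r(xp)
        for i in range(len(rx)):
            J[i][j] = (rxp[i] - rx[i]) / h
    return J

# quick check on f(x) = x0^2 + 3*x1, whose gradient is (2*x0, 3)
f = lambda x: x[0] ** 2 + 3 * x[1]
g = grad_fd(f, [1.0, 2.0])
assert abs(g[0] - 2.0) < 1e-4 and abs(g[1] - 3.0) < 1e-4
```

This costs one extra objective (or residual vector) evaluation per variable, which is one reason derivative-free methods can be preferable when evaluations are expensive or noisy.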
Other resources:

• arXiv:math.OC - a repository of optimization-related preprints.
• Decision Tree for Optimization Software - an extensive database of software for many different types of optimization problem, carefully sorted by problem type.
• Global Optimization test problems - a collection of resources maintained by Arnold Neumaier (University of Vienna).
• NEOS server - an online server where you can submit jobs to be solved by specific solvers.
• NEOS guide - a reference page with explanations of the different types of optimization problems and summaries of many key algorithms (sorted by problem type). It also has its own page of optimization resources, such as key journals and conferences.
• Optimization Online - a repository of optimization-related preprints.

This section gives details of how to install the extensive CUTEst library of optimization test problems (paper) - earlier versions were called CUTE (1995) and CUTEr (2003) - as well as providing some standard collections of optimization test problems. CUTEst is a Fortran library, but we give instructions for installing wrappers for Matlab and Python. The other problem sets are implemented in Python.

To use CUTEst on Linux you will need to install four packages: archdefs, SIFDecode, CUTEst and MASTSIF. To keep things simple, install all four packages in the same directory:

mkdir cutest
cd cutest
git clone https://github.com/ralna/ARCHDefs ./archdefs
git clone https://github.com/ralna/SIFDecode ./sifdecode
git clone https://github.com/ralna/CUTEst ./cutest
git clone https://bitbucket.org/optrove/sif ./mastsif

Note that mastsif contains all the test problem definitions and is therefore quite large. If you're short on space you may want to copy only the *.SIF files for the problems you wish to test on.
Next set the following environment variables in your ~/.bashrc to point to the installation directories used above:

# CUTEst
export ARCHDEFS=/path/to/cutest/archdefs/
export SIFDECODE=/path/to/cutest/sifdecode/
export MASTSIF=/path/to/cutest/mastsif/
export CUTEST=/path/to/cutest/cutest/
export MYARCH="pc64.lnx.gfo"
export MYMATLAB=/path/to/matlab/ # if using Matlab interface

Now you are ready to compile CUTEst using the interactive install script:

cd ./cutest
$ARCHDEFS/install_optrove

Answer the questions as appropriate, in particular answer the following as shown:

Do you wish to install CUTEst (Y/n)? y
Do you require the CUTEst-Matlab interface (y/N)? y # if using Matlab interface (see below)
Select platform: 6 # PC with generic 64-bit processor
Select operating system: 2 # Linux
Select Matlab version: 4 # R2016b or later
Would you like to compile SIFDecode ... (Y/n)? y
Would you like to compile CUTEst ... (Y/n)? y
CUTEst may be compiled in (S)ingle or (D)ouble precision or (B)oth.
Which precision do you require for the installed subset (D/s/b) ? D

#### Installing the Python Interface

To obtain a Python interface to CUTEst, please install PyCUTEst: https://github.com/jfowkes/pycutest

To use CUTEst on Mac you will first need to install the Homebrew package manager:

ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

Then you can easily install CUTEst:

brew tap optimizers/cutest
brew install cutest --without-single --with-matlab # if using Matlab interface
brew install mastsif # if you want all the test problems
for f in "archdefs" "mastsif" "sifdecode" "cutest"; do \
  echo ". $(brew --prefix $f)/$f.bashrc" >> ~/.bashrc; \
done

If you installed CUTEst with Matlab support, you also need to define an environment variable pointing to your local Matlab installation in your ~/.bashrc, e.g.,

export MYMATLAB=/Applications/MATLAB_R2017a.app

#### Installing the Python Interface

To obtain a Python interface to CUTEst, please install PyCUTEst: https://github.com/jfowkes/pycutest

#### Installing the Matlab Interface

This is rather involved unfortunately. First download the following xml file and place it in ~/Library/Application\ Support/Mathworks/MATLAB/R2018a/ (substitute your Matlab version). Then download the shell script below and place it somewhere on your PATH. I like to create ~/bin and add it to my PATH. Make the script executable:

mkdir ~/bin # if not already done
cp Downloads/cutest2matlab_osx.sh ~/bin/
chmod +x ~/bin/cutest2matlab_osx.sh
export PATH=~/bin:$PATH # if not already done; add this to your ~/.bashrc

Now you have to compile the CUTEst problems you wish to use. For example, if you wish to use the ROSENBR problem (Rosenbrock function), create a folder for it and run cutest2matlab_osx.sh:

mkdir rosenbrock
cd rosenbrock
cutest2matlab_osx.sh ROSENBR

This should produce two shared libraries:

libROSENBR.dylib
mcutest.mexmaci64

Then in Matlab, change to that directory and set up the problem:

cd /the/directory/where/you/ran/cutest2matlab_osx.sh
prob = cutest_setup()

If that works, you should see a Matlab struct and you should be able to write, e.g.,

f = cutest_obj(prob.x)

The easiest way to set up CUTEst and PyCUTEst on Windows is to install the Windows Subsystem for Linux (on Windows 10). This gives you access to a Linux filesystem and a bash terminal, from which you can install regular programs such as Python and gfortran (using apt-get), and install CUTEst/PyCUTEst using the Linux instructions above.

#### Running Linux GUIs

It is possible to run Linux GUI programs too, but this is not enabled by default. For this, you need to:

1. Install an X-server program on Windows (e.g.
VcXsrv or see here for others).
2. Ensure the X-server is running (you may want to enable it to start automatically on Windows startup).
3. Modify your .bashrc file in your Linux file system to include the lines

export DISPLAY=localhost:0.0
export LIBGL_ALWAYS_INDIRECT=1

4. Install the WSL utilities program in Linux (e.g. apt-get install wslu).
5. Use the installed Linux program wslusc to create a shortcut in Windows to start your desired GUI application (run man wslusc in bash for instructions).

Alternatively, you can do this manually by creating a .bat Windows script file (if you have Ubuntu 18.04), making sure to include your C:\\Users\\username directory and edit the /path/to/gui/executable. You should double-check the path to your Linux executable to make sure this runs. Double-click on the .bat file to execute it.

To obtain a Python interface to CUTEst, please install PyCUTEst: https://github.com/jfowkes/pycutest

The Moré-Garbow-Hillstrom (MGH) test collection has a number of common unconstrained nonlinear least-squares test problems. To install the MGH test set for Python, please download the following file: and place it in the same directory as your Python code requiring the test problems.

#### Using the test set

To create an instance of the extended Rosenbrock function and evaluate the residual and Jacobian at the initial point:

from MGH import ExtendedRosenbrock
rb = ExtendedRosenbrock(n=4, m=4)
x0 = rb.initial    # initial point
rb.r(x0)           # residual vector at x0
rb.jacobian(x0)    # Jacobian at x0

where n is the problem dimension and m the number of residuals in the objective.

The Moré & Wild (MW) test collection is a collection of common unconstrained nonlinear least-squares problems, designed for testing derivative-free optimization solvers. To install the MW test set for Python, please download the following two files: and place them in the same directory as your Python code requiring the test problems.
The code can provide either the vector of residuals or the full least-squares objective. More details of the problems, including an estimate of f_min, are given in the associated more_wild_info.csv file.

#### Using the test set

Choose a problem number from 1 to 53 and then either get the scalar objective function:

from more_wild import *
f, x0, n, m = get_problem_as_scalar_objective(probnum)

or get the vector function of residuals:

from more_wild import *
r, x0, n, m = get_problem_as_residual_vector(probnum)

Outputs:

• f is the scalar objective $f(\mathbf{x})=r_1(\mathbf{x})^2 + \cdots + r_m(\mathbf{x})^2$
• r is the vector of residuals $r(\mathbf{x}) = [r_1(\mathbf{x}) \: \cdots \: r_m(\mathbf{x})]$
• x0 is the initial starting point for the solver
• n is the dimension of the problem
• m is the number of residuals in the objective

#### Creating stochastic problems

The code also allows you to create stochastic problems by adding artificial random noise to each evaluation of $f(\mathbf{x})$ or $r(\mathbf{x})$. You do this by calling one of:

from more_wild import *
r, x0, n, m = get_problem_as_residual_vector(probnum, noise_type='multiplicative_gaussian', noise_level=1e-2)
f, x0, n, m = get_problem_as_scalar_objective(probnum, noise_type='multiplicative_gaussian', noise_level=1e-2)

where the different options for noise_type are:

• smooth (default): no noise added
• multiplicative_deterministic: each $r_i(\mathbf{x})$ is replaced by $\sqrt{1+\sigma \phi(\mathbf{x})} r_i(\mathbf{x})$, where $\sigma$ is equal to noise_level and $\phi(\mathbf{x})$ is a deterministic high frequency function taking values in $[-1,1]$ (see equation (4.2) of the Moré & Wild paper). The resulting function is still deterministic, but it is no longer smooth.
• multiplicative_uniform: each $r_i(\mathbf{x})$ is replaced by $(1+\sigma) r_i(\mathbf{x})$, where $\sigma$ is uniformly distributed between -noise_level and noise_level (sampled independently for each $r_i(\mathbf{x})$).
• multiplicative_gaussian: each $r_i(\mathbf{x})$ is replaced by $(1+\sigma) r_i(\mathbf{x})$, where $\sigma$ is normally distributed with mean 0 and standard deviation noise_level (sampled independently for each $r_i(\mathbf{x})$).
• additive_gaussian: each $r_i(\mathbf{x})$ is replaced by $r_i(\mathbf{x}) + \sigma$, where $\sigma$ is normally distributed with mean 0 and standard deviation noise_level (sampled independently for each $r_i(\mathbf{x})$).
• additive_chi_square: each $r_i(\mathbf{x})$ is replaced by $\sqrt{r_i(\mathbf{x})^2 + \sigma^2}$, where $\sigma$ is normally distributed with mean 0 and standard deviation noise_level (sampled independently for each $r_i(\mathbf{x})$).

This section contains details about data and performance profiles, two common approaches for comparing different optimization solvers' performance on a collection of test problems. We describe the methodology, and provide Python software for generating these profiles.

#### Data & Performance Profiles

To compare solvers, we use data and performance profiles as defined by Moré & Wild (2009). First, for each solver $\mathcal{S}$, each problem $p$ and for an accuracy level $\tau\in(0,1)$, we determine the number of function evaluations $N_p(\mathcal{S};\tau)$ required for a problem to be 'solved': $$N_p(\mathcal{S}; \tau) := \text{number of objective evals required to get } f(\mathbf{x}_k) \leq f^* + \tau(f(\mathbf{x}_0) - f^*),$$ where $f^*$ is an estimate of the true minimum $f(\mathbf{x}^*)$. (Note: sometimes $f^*$ is taken to be the best value achieved by any solver.) We define $N_p(\mathcal{S}; \tau)=\infty$ if this was not achieved in the maximum computational budget allowed. This measure is suitable for derivative-free solvers (where no gradients are available), but alternative measures of success can be used if gradients are available (see next section). We can then compare solvers by looking at the proportion of test problems solved for a given computational budget.
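As a concrete reading of this definition, here is a minimal Python sketch (names are mine, not from the plotting script) that scans one solver run's history of objective values and returns $N_p(\mathcal{S};\tau)$:

```python
import math

# Moré & Wild 'solved' criterion: given the history of objective values
# f(x_0), f(x_1), ... from one solver run, return the first evaluation count
# at which f(x_k) <= f* + tau*(f(x_0) - f*), or infinity if never achieved.
def evals_to_solve(f_history, f_star, tau):
    target = f_star + tau * (f_history[0] - f_star)
    for k, fk in enumerate(f_history, start=1):
        if fk <= target:
            return k
    return math.inf

history = [100.0, 40.0, 12.0, 10.5, 10.01]   # f(x_0), f(x_1), ...
assert evals_to_solve(history, f_star=10.0, tau=0.1) == 3    # needs f <= 19
assert evals_to_solve(history, f_star=10.0, tau=1e-4) == math.inf  # needs f <= 10.009
```

Smaller $\tau$ demands higher accuracy, so $N_p$ is non-decreasing as $\tau$ shrinks, matching the intuition behind the accuracy levels used below.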
For data profiles, we normalise the computational effort by problem dimension, and plot (for solver $\mathcal{S}$, accuracy level $\tau\in(0,1)$ and problem suite $\mathcal{P}$) $$d_{\mathcal{S}, \tau}(\alpha) := \frac{|\{p\in\mathcal{P} : N_p(\mathcal{S};\tau) \leq \alpha(n_p+1)\}|}{|\mathcal{P}|}, \qquad \text{for } \alpha\in[0,N_g],$$ where $N_g$ is the maximum computational budget, measured in simplex gradients (i.e. $N_g(n_p+1)$ objective evaluations are allowed for problem $p$).

For performance profiles (originally proposed by Dolan & Moré (2002)), we normalise the computational effort by the minimum effort needed by any solver (i.e. by problem difficulty). That is, we plot $$\pi_{\mathcal{S},\tau}(\alpha) := \frac{|\{p\in\mathcal{P} : N_p(\mathcal{S};\tau) \leq \alpha N_p^*(\tau)\}|}{|\mathcal{P}|}, \qquad \text{for } \alpha\geq 1,$$ where $N_p^*(\tau) := \min_{\mathcal{S}} N_p(\mathcal{S};\tau)$ is the minimum budget required by any solver.

When multiple test runs are used, we take average data and performance profiles over multiple runs of each solver; that is, for each $\alpha$ we take an average of $d_{\mathcal{S},\tau}(\alpha)$ and $\pi_{\mathcal{S},\tau}(\alpha)$. When plotting performance profiles, we take $N_p^*(\tau)$ to be the minimum budget required by any solver in any run.

#### Measuring Performance for Gradient-Based Solvers

The above was designed for comparing derivative-free solvers, where gradients are not available.
If, however, derivative information is available, it may be better to replace the definition above with $$N_p(\mathcal{S}; \tau) := \text{number of objective evals required to get } \|\nabla f(\mathbf{x}_k)\| \leq \tau \|\nabla f(\mathbf{x}_0)\|.$$ The above script can handle this definition by a modification of the input files:

• In the problem information, replace columns 'f0' and 'fmin (approx)' with $\|\nabla f(\mathbf{x}_0)\|$ and 0 respectively;
• The last column of the raw solver results should be $\|\nabla f(\mathbf{x}_k)\|$ instead of $f(\mathbf{x}_k)$ for each evaluation.

#### Plotting Code

We provide methods for generating data and performance profiles, plus a script with example usage: These methods assume you have results for a single solver for a collection of test problems in a text file, with format

Problem number, evaluation number, objective value

As an example, the below files have the results for running the derivative-free solver BOBYQA (paper and code) on the Moré & Wild test set. The three files use three different settings (varying the number of interpolation points used to construct models of the objective: $n+2$, $2n+1$ and $(n+1)(n+2)/2$ respectively), and a budget of $10(n+1)$ objective evaluations. For instance, the 'np2' file looks like:

1,1,71.999999999999986
1,2,72.410000006258514
...
53,89,1965397.5762139191
53,90,1953998.3397798422

showing that the first evaluation for the first problem had objective value 72, and the final (90th) evaluation for the last problem (53) had objective value 1953998.
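Given the $N_p(\mathcal{S};\tau)$ values, the data profile $d_{\mathcal{S},\tau}(\alpha)$ defined earlier reduces to a counting exercise; a minimal Python sketch (names are mine, not from the plotting script):

```python
import math

# Data profile d_{S,tau}(alpha): the fraction of problems a solver 'solved'
# within alpha simplex gradients, i.e. within alpha*(n_p + 1) objective
# evaluations. n_evals[p] is N_p (math.inf if unsolved within the budget)
# and dims[p] is the problem dimension n_p.
def data_profile(n_evals, dims, alpha):
    solved = sum(1 for N, n in zip(n_evals, dims) if N <= alpha * (n + 1))
    return solved / len(n_evals)

n_evals = [10, 44, math.inf, 30]   # N_p for four problems
dims    = [2, 10, 5, 2]            # n_p for the same problems
assert data_profile(n_evals, dims, alpha=4) == 0.5    # problems 1 and 2 solved
assert data_profile(n_evals, dims, alpha=10) == 0.75  # the inf problem never counts
```

A performance profile is the same computation with $\alpha(n_p+1)$ replaced by $\alpha N_p^*(\tau)$, the best budget achieved by any solver on that problem.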
#### Processing Raw Results

A raw file in the format described above can be converted to the correct format using the following:

from plotting import *
problem_info_file = '../mw/more_wild_info.csv'  # problem suite information
infile = 'raw_bobyqa_budget10_np2.csv'
outfile = 'clean_bobyqa_budget10_np2.csv'
solved_times = get_solved_times_for_file(problem_info_file, infile)
solved_times.to_csv(outfile)

The output file has, for each test problem: dimension, minimum value found by the solver, objective evaluations used, and the number of evaluations required to achieve accuracy $\tau$ for $\tau\in\{0.1, 0.01, \ldots, 10^{-10}\}$, where a value of -1 indicates the solver did not achieve the required accuracy within the allowed budget.

Setting,n,fmin,nf,tau1,tau2,tau3,tau4,tau5,tau6,tau7,tau8,tau9,tau10
1,9,36.0000000021001,100,18,29,41,49,54,64,71,85,93,98
...
53,8,1953998.3397798422,90,12,12,21,23,-1,-1,-1,-1,-1,-1

#### Generating Plots

To generate data and performance profiles, we first build a list of tuples containing plotting information. The first entry is the stem of the processed results file - if we provide stem, all files of the form stem*.csv are assumed to contain runs for that solver, and we use an average data/performance profile over all these runs. This is needed if the algorithm and/or objective evaluation has noise. We also need plot formatting information: legend label, line style, color and (optional) marker and marker size. These can be taken from the standard matplotlib options.
# each entry is tuple = (filename_stem, label, colour, linestyle, [marker], [markersize])
solver_info = []
solver_info.append(('clean_bobyqa_budget10_np2', r'$n+2$ points', 'b', '-', '.', 12))
solver_info.append(('clean_bobyqa_budget10_2np1', r'$2n+1$ points', 'r', '--'))
solver_info.append(('clean_bobyqa_budget10_nsq', r'$(n+1)(n+2)/2$ points', 'k', '-.'))

In its simplest form, we need an output file stem, the solver plotting information, problem suite information, the set of $\tau$ levels to plot, and the maximum budget (in simplex gradients, $N_g$):

from plotting import *
tau_levels = [1, 3]
budget = 10
outfile_stem = 'demo'
# solver_info defined as above
create_plots(outfile_stem, solver_info, tau_levels, budget)

This produces files demo_data1, demo_data3, demo_perf1 and demo_perf3 representing data and performance profiles for $\tau=10^{-1}$ and $\tau=10^{-3}$. In each case, we have image files and a csv containing the raw plot data. For the $\tau=10^{-1}$ case, the plots look like:

The create_plots function has several optional parameters:

• max_ratio - A float for the largest x-axis value in performance profiles (default is 32.0);
• data_profiles - A boolean flag for whether to generate data profiles (default is True);
• perf_profiles - A boolean flag for whether to generate performance profiles (default is True);
• save_to_file - A boolean flag for whether to save plots to file; if False, plots are displayed on screen (default is True);
• fmt - A string for image output type, which needs to be accepted by matplotlib's savefig command (default is "eps");
• dp_with_logscale - A boolean flag for whether the x-axis for data profiles should be in a log scale (default is False);
• expected_nprobs - An integer with the number of problems in the test set (if provided, check all input files to ensure they have exactly this many problems).

The above demo script shows these options being used.
# 2 NIMBLE tutorial

## 2.1 Introduction

In this second chapter, you will get familiar with NIMBLE, an R package that implements up-to-date MCMC algorithms for fitting complex models. NIMBLE spares you from coding the MCMC algorithms by hand, and requires only the specification of a likelihood and priors for model parameters. We will illustrate NIMBLE's main features with a simple example, but the ideas hold for other problems.

## 2.2 What is NIMBLE?

NIMBLE stands for Numerical Inference for statistical Models using Bayesian and Likelihood Estimation. Briefly speaking, NIMBLE is an R package that implements MCMC algorithms for you, to generate samples from the posterior distribution of model parameters. Freed from the burden of coding your own MCMC algorithms, you only have to specify a likelihood and priors to apply the Bayes theorem. To do so, NIMBLE uses a syntax very similar to the R syntax, which should make your life easier. This so-called BUGS language is also used by other programs like WinBUGS, OpenBUGS, and JAGS.

So why use NIMBLE, you may ask? The short answer is that NIMBLE is capable of so much more than just running MCMC algorithms! First, you will work from within R, but in the background NIMBLE will translate your code into C++ for (in general) faster computation. Second, NIMBLE extends the BUGS language, so that you can write new functions and distributions of your own, or borrow those written by others. Third, NIMBLE gives you full control of the MCMC samplers, and you may pick algorithms other than the defaults. Fourth, NIMBLE comes with a library of numerical methods other than MCMC algorithms, including sequential Monte Carlo (for particle filtering) and Monte Carlo Expectation Maximization (for maximum likelihood). Last but not least, the development team is friendly and helpful, and based on users' feedback, NIMBLE folks work constantly at improving the package capabilities.

## 2.3 Getting started

To run NIMBLE, you will need to:

1.
Build a model consisting of a likelihood and priors. 2. Read in some data. 3. Specify parameters you want to make inference about. 4. Pick initial values for parameters to be estimated (for each chain). 5. Provide MCMC details, namely the number of chains, the length of the burn-in period and the number of iterations following burn-in.

First things first, let's not forget to load the nimble package:

library(nimble)

Note that before you can install nimble like any other R package, Windows users will need to install Rtools, and Mac users will need to install Xcode. More at https://r-nimble.org/download.

Now let's go back to our example on animal survival from the previous chapter. The first step is to build our model by specifying the binomial likelihood and a uniform prior on survival probability theta. We use the nimbleCode() function and wrap the code within curly brackets:

model <- nimbleCode({
  # likelihood
  survived ~ dbinom(theta, released)
  # prior
  theta ~ dunif(0, 1)
  # derived quantity
  lifespan <- -1/log(theta)
})

You can check that the model R object contains your code:

model
## {
##     survived ~ dbinom(theta, released)
##     theta ~ dunif(0, 1)
##     lifespan <- -1/log(theta)
## }

In the code above, survived and released are known; only theta needs to be estimated. The line survived ~ dbinom(theta, released) states that the number of successes, i.e. the number of animals that have survived over winter, is distributed (that's the ~) as a binomial with released trials and probability of success (survival) theta. Then the line theta ~ dunif(0, 1) assigns a uniform distribution between 0 and 1 as a prior to the survival probability. This is all you need, a likelihood and priors for model parameters; NIMBLE knows the Bayes theorem. The last line lifespan <- -1/log(theta) calculates a quantity derived from theta, which is the expected lifespan assuming constant survival.

• The most common distributions are available in NIMBLE. Among others, we will use dbeta, dmultinom and dnorm later in the book.
If you cannot find what you need in NIMBLE, you can write your own distribution as illustrated in Section 2.4.

• It does not matter in what order you write each line of code; NIMBLE uses what is called a declarative language for building models. In brief, you write code that tells NIMBLE what you want to achieve, not how to get there. In contrast, an imperative language requires that you write what you want your program to do step by step.

• You can think of models in NIMBLE as graphs, as in Figure 2.2. A graph is made of relations (or edges) that can be of two types. A stochastic relation is signaled by a ~ sign and defines a random variable in the model, such as survived or theta. A deterministic relation is signaled by a <- sign, like lifespan. Relations define nodes on the left - the children - in terms of other nodes on the right - the parents, and relations are directed edges from parents to children. Such graphs are called directed acyclic graphs, or DAGs.

The second step in our workflow is to read in some data. We use a list in which each component corresponds to a known quantity in the model:

my.data <- list(released = 57, survived = 19)

You can proceed with data passed this way, but you should know a little more about how NIMBLE sees data. NIMBLE distinguishes data and constants. Constants are values that do not change, e.g. vectors of known index values or the indices used to define for loops. Data are values that you might want to change, basically anything that only appears on the left of a ~. Declaring relevant values as constants is better for computational efficiency, but it is easy to forget, and fortunately NIMBLE will by itself distinguish data and constants. I will not use the distinction between data and constants in this chapter, but it will become important in the next chapters.

The third step is to tell NIMBLE which nodes in your model you would like to keep track of, in other words the quantities you'd like to do inference about.
In our model we want survival theta and lifespan: parameters.to.save <- c("theta", "lifespan") In general you have many quantities in your model, including some of little interest that are not worth monitoring, and having full control over what is monitored will prove handy. Fourth step is to specify initial values for all model parameters. To make sure that the MCMC algorithm explores the posterior distribution, we start different chains with different parameter values. You can specify initial values for each chain in a list and put them in yet another list: init1 <- list(theta = 0.1) init2 <- list(theta = 0.5) init3 <- list(theta = 0.9) initial.values <- list(init1, init2, init3) initial.values ## [[1]] ## [[1]]$theta ## [1] 0.1 ## ## ## [[2]] ## [[2]]$theta ## [1] 0.5 ## ## ## [[3]] ## [[3]]$theta ## [1] 0.9 Alternatively, you can write a simple R function that generates random initial values: initial.values <- function() list(theta = runif(1,0,1)) initial.values() ## $theta ## [1] 0.8356 Fifth and last step, you need to tell NIMBLE the number of chains to run, say n.chains, how long the burn-in period should be, say n.burnin, and the number of iterations following the burn-in period to be used for posterior inference. In NIMBLE, you specify the total number of iterations, say n.iter, so that the number of posterior samples per chain is n.iter - n.burnin. NIMBLE also allows keeping only every k-th sample after burn-in, a procedure known as thinning, which I will not use in this book18. n.iter <- 5000 n.burnin <- 1000 n.chains <- 3 We now have all the ingredients to run the model, that is to sample from the posterior distribution of model parameters using MCMC simulations.
This is accomplished using function nimbleMCMC(): mcmc.output <- nimbleMCMC(code = model, data = my.data, inits = initial.values, monitors = parameters.to.save, niter = n.iter, nburnin = n.burnin, nchains = n.chains) ## |-------------|-------------|-------------|-------------| ## |-------------------------------------------------------| ## |-------------|-------------|-------------|-------------| ## |-------------------------------------------------------| ## |-------------|-------------|-------------|-------------| ## |-------------------------------------------------------| NIMBLE goes through several steps that we will explain in Section 2.5. Function nimbleMCMC() takes other arguments that you might find useful. For example, you can suppress the progress bar with progressBar = FALSE if you find it too depressing when running long simulations. You can also get a summary of the outputs by specifying summary = TRUE. Check ?nimbleMCMC for more details. Now let's inspect what we have in mcmc.output: str(mcmc.output) ## List of 3 ## $ chain1: num [1:4000, 1:2] 0.907 0.907 0.907 0.907 0.853 ... ## ..- attr(*, "dimnames")=List of 2 ## .. ..$ : NULL ## .. ..$ : chr [1:2] "lifespan" "theta" ## $ chain2: num [1:4000, 1:2] 0.787 0.894 1.291 1.388 1.388 ... ## ..- attr(*, "dimnames")=List of 2 ## .. ..$ : NULL ## .. ..$ : chr [1:2] "lifespan" "theta" ## $ chain3: num [1:4000, 1:2] 0.745 0.745 0.745 0.886 1.136 ... ## ..- attr(*, "dimnames")=List of 2 ## .. ..$ : NULL ## .. ..$ : chr [1:2] "lifespan" "theta" The R object mcmc.output is a list with three components, one for each MCMC chain. Let's have a look at chain1, for example: dim(mcmc.output$chain1) ## [1] 4000 2 head(mcmc.output$chain1) ## lifespan theta ## [1,] 0.9069 0.3320 ## [2,] 0.9069 0.3320 ## [3,] 0.9069 0.3320 ## [4,] 0.9069 0.3320 ## [5,] 0.8526 0.3095 ## [6,] 0.7987 0.2859 Each component of the list is a matrix.
In rows, you have 4000 samples from the posterior distribution of theta, which corresponds to n.iter - n.burnin iterations. In columns, you have the quantities we monitor, theta and lifespan. From there, you can compute the posterior mean of theta: mean(mcmc.output$chain1[,'theta']) ## [1] 0.3407 You can also obtain the 95% credible interval for theta: quantile(mcmc.output$chain1[,'theta'], probs = c(2.5, 97.5)/100) ## 2.5% 97.5% ## 0.2219 0.4620 Let's visualise the posterior distribution of theta with a histogram: mcmc.output %>% as_tibble() %>% ggplot() + geom_histogram(aes(x = chain1[,"theta"]), color = "white") + labs(x = "survival probability") There are less painful ways of doing posterior inference. In this book, I will use the R package MCMCvis19 to summarise and visualize MCMC outputs, but there are other perfectly valid options out there like ggmcmc20 and basicMCMCplots21. Let's load the package MCMCvis: library(MCMCvis) To get the most common numerical summaries, the function MCMCsummary() does the job: MCMCsummary(object = mcmc.output, round = 2) ## mean sd 2.5% 50% 97.5% Rhat n.eff ## lifespan 0.94 0.17 0.66 0.92 1.32 1 2513 ## theta 0.34 0.06 0.22 0.34 0.47 1 2533 You can use a caterpillar plot to visualise the posterior distribution of theta with MCMCplot(): MCMCplot(object = mcmc.output, params = 'theta') The point represents the posterior median, the thick line is the 50% credible interval and the thin line the 95% credible interval.
The trace and posterior density of theta can be obtained with MCMCtrace(): MCMCtrace(object = mcmc.output, pdf = FALSE, # no export to PDF ind = TRUE, # separate density lines per chain params = "theta") You can also add the diagnostics of convergence we discussed in the previous chapter: MCMCtrace(object = mcmc.output, pdf = FALSE, ind = TRUE, Rhat = TRUE, # add Rhat n.eff = TRUE, # add eff sample size params = "theta") We calculated lifespan directly in our model with lifespan <- -1/log(theta). But you can also calculate this quantity from outside NIMBLE. This is a nice by-product of using MCMC simulations: you can obtain the posterior distribution of any quantity that is a function of your model parameters by applying this function to samples from the posterior distribution of these parameters. In our example, all you need is samples from the posterior distribution of theta, which we pool across the three chains with: theta_samples <- c(mcmc.output$chain1[,'theta'], mcmc.output$chain2[,'theta'], mcmc.output$chain3[,'theta']) To get samples from the posterior distribution of lifespan, we apply the function that calculates lifespan to the samples from the posterior distribution of survival: lifespan <- -1/log(theta_samples) As usual, you can then calculate the posterior mean and 95% credible interval: mean(lifespan) ## [1] 0.9398 quantile(lifespan, probs = c(2.5, 97.5)/100) ## 2.5% 97.5% ## 0.6629 1.3194 You can also visualise the posterior distribution of lifespan: lifespan %>% as_tibble() %>% ggplot() + geom_histogram(aes(x = value), color = "white") + labs(x = "lifespan") Now you're good to go. For convenience I have summarized the steps above in the box below. The NIMBLE workflow provided with nimbleMCMC() allows you to build models and make inference. This is what you can achieve with other software like WinBUGS or JAGS.
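The same trick works for any transformation of the parameters. Here is a self-contained sketch in base R; to keep it runnable on its own, the draws are simulated rather than taken from mcmc.output, using the fact that Beta(20, 39) is the exact posterior of theta for 19 survivors out of 57 released under a uniform prior:

```r
# stand-in for the pooled MCMC samples: Beta(20, 39) is the exact posterior
# of theta under a uniform prior with 19 successes out of 57 trials
set.seed(42)
theta_draws <- rbeta(10000, shape1 = 20, shape2 = 39)

# posterior of any function of theta: apply the function draw by draw
odds <- theta_draws / (1 - theta_draws)  # survival odds
lifespan <- -1 / log(theta_draws)        # expected lifespan

# posterior mean and 95% credible interval for lifespan
round(mean(lifespan), 2)
round(quantile(lifespan, probs = c(2.5, 97.5) / 100), 2)
```

The numbers should be close to those obtained from the MCMC samples above (posterior mean of lifespan around 0.94), since the simulated draws come from the exact posterior.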
NIMBLE workflow: # model building model <- nimbleCode({ # likelihood survived ~ dbinom(theta, released) # prior theta ~ dunif(0, 1) # derived quantity lifespan <- -1/log(theta) }) my.data <- list(released = 57, survived = 19) # specify parameters to monitor parameters.to.save <- c("theta", "lifespan") # pick initial values initial.values <- function() list(theta = runif(1,0,1)) # specify MCMC details n.iter <- 5000 n.burnin <- 1000 n.chains <- 3 # run NIMBLE mcmc.output <- nimbleMCMC(code = model, data = my.data, inits = initial.values, monitors = parameters.to.save, niter = n.iter, nburnin = n.burnin, nchains = n.chains) # calculate numerical summaries MCMCsummary(object = mcmc.output, round = 2) # visualize parameter posterior distribution MCMCplot(object = mcmc.output, params = 'theta') # check convergence MCMCtrace(object = mcmc.output, pdf = FALSE, # no export to PDF ind = TRUE, # separate density lines per chain params = "theta") But NIMBLE is more than just another MCMC engine. It provides a programming environment so that you have full control when building models and estimating parameters. NIMBLE allows you to write your own functions and distributions to build models, and to choose alternative MCMC samplers or code new ones. This flexibility often comes with faster convergence. I have to be honest, learning these improvements over other software takes some reading and experimentation, and it might well be that you do not need to use any of these features. And it’s fine. In the next sections, I cover some of this advanced material. You may skip these sections and go back to this material later if you need it. ## 2.4 Programming In NIMBLE you can write and use your own functions, or use existing R or C/C++ functions. This allows you to customize models the way you want. ### 2.4.1 NIMBLE functions NIMBLE provides nimbleFunctions for programming. A nimbleFunction is like an R function, plus it can be compiled for faster computation. 
Going back to our animal survival example, we can write a nimbleFunction to compute lifespan: computeLifespan <- nimbleFunction( run = function(theta = double(0)) { # type declarations ans <- -1/log(theta) return(ans) returnType(double(0)) # return type declaration } ) Within the nimbleFunction, the run section gives the function to be executed. It is written in the NIMBLE language. The theta = double(0) and returnType(double(0)) arguments tell NIMBLE that the input and output are single numeric values (scalars). Alternatively, double(1) and double(2) are for vectors and matrices, while logical(), integer() and character() are for logical, integer and character values. You can use your nimbleFunction in R: computeLifespan(0.8) ## [1] 4.481 You can compile it and use the C++ code for faster computation: CcomputeLifespan <- compileNimble(computeLifespan) CcomputeLifespan(0.8) ## [1] 4.481 You can also use your nimbleFunction in a model: model <- nimbleCode({ # likelihood survived ~ dbinom(theta, released) # prior theta ~ dunif(0, 1) # derived quantity lifespan <- computeLifespan(theta) }) The rest of the workflow remains the same: my.data <- list(survived = 19, released = 57) parameters.to.save <- c("theta", "lifespan") initial.values <- function() list(theta = runif(1,0,1)) n.iter <- 5000 n.burnin <- 1000 n.chains <- 3 mcmc.output <- nimbleMCMC(code = model, data = my.data, inits = initial.values, monitors = parameters.to.save, niter = n.iter, nburnin = n.burnin, nchains = n.chains) ## |-------------|-------------|-------------|-------------| ## |-------------------------------------------------------| ## |-------------|-------------|-------------|-------------| ## |-------------------------------------------------------| ## |-------------|-------------|-------------|-------------| ## |-------------------------------------------------------| MCMCsummary(object = mcmc.output, round = 2) ## mean sd 2.5% 50% 97.5% Rhat n.eff ## lifespan 0.94 0.16 0.66 0.92 1.31 1 2593 
## theta 0.34 0.06 0.22 0.34 0.47 1 2652 With nimbleFunctions, you can mimic basic R syntax, do linear algebra (e.g. compute eigenvalues), operate on vectors and matrices (e.g. invert a matrix), use logical operators (e.g. and/or) and flow control (e.g. if-else). There is also a long list of common and less common distributions that can be used with nimbleFunctions. To learn everything you need to know about writing nimbleFunctions, make sure to read chapter 11 of the NIMBLE manual at https://r-nimble.org/html_manual/cha-RCfunctions.html#cha-RCfunctions. ### 2.4.2 Calling R/C++ functions If you're like me, and too lazy to write your own functions, you can rely on the scientific community and use existing C, C++ or R code. The trick is to write a nimbleFunction that wraps access to that code, which can then be used by NIMBLE. As an example, imagine you'd like to use an R function myfunction(), either a function you wrote yourself, or a function available in your favorite R package: myfunction <- function(x) { -1/log(x) } Now wrap this function using nimbleRcall() (or nimbleExternalCall() for a C or C++ function): Rmyfunction <- nimbleRcall(prototype = function(x = double(0)){}, Rfun = 'myfunction', returnType = double(0)) In the call to nimbleRcall() above, the argument prototype specifies the inputs (a single numeric value double(0)) of the R function Rfun, which generates outputs returnType (a single numeric value double(0)).
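Before wiring it into NIMBLE, it is worth sanity-checking myfunction() in plain R; for a survival probability of 0.8 it should match the value computed earlier with computeLifespan():

```r
# plain R check of the function we are about to wrap with nimbleRcall()
myfunction <- function(x) {
  -1/log(x)
}
myfunction(0.8)  # about 4.48, same as computeLifespan(0.8)
```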
Now you can call your R function from a model (or any nimbleFunction): model <- nimbleCode({ # likelihood survived ~ dbinom(theta, released) # prior theta ~ dunif(0, 1) lifespan <- Rmyfunction(theta) }) The rest of the workflow remains the same: my.data <- list(survived = 19, released = 57) parameters.to.save <- c("theta", "lifespan") initial.values <- function() list(theta = runif(1,0,1)) n.iter <- 5000 n.burnin <- 1000 n.chains <- 3 mcmc.output <- nimbleMCMC(code = model, data = my.data, inits = initial.values, monitors = parameters.to.save, niter = n.iter, nburnin = n.burnin, nchains = n.chains) ## |-------------|-------------|-------------|-------------| ## |-------------------------------------------------------| ## |-------------|-------------|-------------|-------------| ## |-------------------------------------------------------| ## |-------------|-------------|-------------|-------------| ## |-------------------------------------------------------| MCMCsummary(object = mcmc.output, round = 2) ## mean sd 2.5% 50% 97.5% Rhat n.eff ## lifespan 0.94 0.16 0.68 0.92 1.29 1 2597 ## theta 0.34 0.06 0.23 0.34 0.46 1 2643 Evaluating an R function from within NIMBLE slows MCMC sampling down, but if you can live with it, the cost is easily offset by the convenience of being able to use existing R functions. Another advantage of using nimbleRcall() (or nimbleExternalCall()) is that you can keep large objects out of your model, so that NIMBLE does not have to handle them in MCMC sampling. These objects should be constants and not change when you run NIMBLE. Letting R manipulate these objects will save you time, usually more than the time you lose by calling R from within NIMBLE.
As an example, we write our own binomial distribution: # density dmybinom <- nimbleFunction( run = function(x = double(0), size = double(0), prob = double(0), log = integer(0, default = 1)) { returnType(double(0)) # compute binomial coefficient lchoose <- lfactorial(size) - lfactorial(x) - lfactorial(size - x) # binomial density function logProb <- lchoose + x * log(prob) + (size - x) * log(1 - prob) if(log) return(logProb) else return(exp(logProb)) }) # simulation using the coin flip method (p. 524 in Devroye 1986) rmybinom <- nimbleFunction( run = function(n = integer(0, default = 1), size = double(0), prob = double(0)) { returnType(double(0)) x <- 0 y <- runif(n = size, min = 0, max = 1) for (j in 1:size){ if (y[j] < prob){ x <- x + 1 }else{ x <- x } } return(x) }) You need to define the nimbleFunctions in R’s global environment for them to be accessed: assign('dmybinom', dmybinom, .GlobalEnv) assign('rmybinom', rmybinom, .GlobalEnv) You can try out your function and simulate a random value from a binomial distribution with size 5 and probability 0.1: rmybinom(n = 1, size = 5, prob = 0.1) ## [1] 0 All set. You can run your workflow: model <- nimbleCode({ # likelihood survived ~ dmybinom(prob = theta, size = released) # prior theta ~ dunif(0, 1) }) my.data <- list(released = 57, survived = 19) initial.values <- function() list(theta = runif(1,0,1)) n.iter <- 5000 n.burnin <- 1000 n.chains <- 3 mcmc.output <- nimbleMCMC(code = model, data = my.data, inits = initial.values, niter = n.iter, nburnin = n.burnin, nchains = n.chains) ## Registering the following user-provided distributions: dmybinom ## NIMBLE has registered dmybinom as a distribution based on its use in BUGS code. Note that if you make changes to the nimbleFunctions for the distribution, you must call 'deregisterDistributions' before using the distribution in BUGS code for those changes to take effect. 
## |-------------|-------------|-------------|-------------| ## |-------------------------------------------------------| ## |-------------|-------------|-------------|-------------| ## |-------------------------------------------------------| ## |-------------|-------------|-------------|-------------| ## |-------------------------------------------------------| MCMCsummary(mcmc.output) ## mean sd 2.5% 50% 97.5% Rhat n.eff ## theta 0.34 0.05976 0.2286 0.3378 0.4598 1 2970 Having nimbleFunctions offers infinite possibilities to customize your models and algorithms. Besides what we covered already, you can write your own samplers. We will see an example in a minute, but I first need to tell you more about the NIMBLE workflow. ## 2.5 Under the hood So far, you have used nimbleMCMC(), which runs the default MCMC workflow. This is perfectly fine for most applications. However, in some situations you need to customize the MCMC samplers to improve or speed up convergence. NIMBLE allows you to look under the hood by using a detailed workflow in several steps: nimbleModel(), configureMCMC(), buildMCMC(), compileNimble() and runMCMC(). Note that nimbleMCMC() does all of this at once.
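To fix ideas before walking through them, here is a rough sketch (not the actual nimble source code) of how nimbleMCMC() chains these five steps together; the object names anticipate the walk-through that follows:

```r
# what nimbleMCMC() roughly does under the hood (simplified sketch):
# survival      <- nimbleModel(code, data, inits)            # 1. build model (R object)
# Csurvival     <- compileNimble(survival)                   # 2. compile model to C++
# survivalConf  <- configureMCMC(survival)                   # 3. assign samplers and monitors
# survivalMCMC  <- buildMCMC(survivalConf)                   # 4. build the MCMC function...
# CsurvivalMCMC <- compileNimble(survivalMCMC,
#                                project = survival)         #    ...and compile it
# samples       <- runMCMC(CsurvivalMCMC, niter, nburnin)    # 5. run the chains
```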
We write the model code, read in data and pick initial values as before: model <- nimbleCode({ # likelihood survived ~ dbinom(theta, released) # prior theta ~ dunif(0, 1) # derived quantity lifespan <- -1/log(theta) }) my.data <- list(survived = 19, released = 57) initial.values <- list(theta = 0.5) First step is to create the model as an R object (uncompiled model) with nimbleModel(): survival <- nimbleModel(code = model, data = my.data, inits = initial.values) You can look at its nodes: survival$getNodeNames() ## [1] "theta" "lifespan" "survived" You can look at the values stored at each node: survival$theta ## [1] 0.5 survival$survived ## [1] 19 survival$lifespan ## [1] 1.443 # this is -1/log(0.5) We can also calculate the log-likelihood at the initial value for theta: survival$calculate() ## [1] -5.422 # this is dbinom(x = 19, size = 57, prob = 0.5, log = TRUE) The ability in NIMBLE to access the nodes of your model and to evaluate the model likelihood can help you in identifying bugs in your code. For example, if you provide a negative initial value for theta, or data in which released is smaller than survived, survival$calculate() will return NaN or -Inf, flagging the problem. You can obtain the graph of the model as in Figure 2.2 with: survival$plotGraph() Second step is to compile the model with compileNimble(): Csurvival <- compileNimble(survival) With compileNimble(), the C++ code is generated, compiled and loaded back into R so that it can be used in R (compiled model): Csurvival$theta ## [1] 0.5 Now you have two versions of the model: survival is in R and Csurvival in C++. Being able to separate the steps of model building and parameter estimation is a strength of NIMBLE. This gives you a lot of flexibility at both steps. For example, if you would like to fit your model with maximum likelihood, you can do it by wrapping your model in an R function that computes the likelihood, and maximising this function.
Using the C version of the model, you can write: # function for negative log-likelihood to minimize f <- function(par) { Csurvival[['theta']] <- par # assign par to theta ll <- Csurvival$calculate() # update log-likelihood with par value return(-ll) # return negative log-likelihood } # evaluate function at 0.5 and 0.9 f(0.5) ## [1] 5.422 f(0.9) ## [1] 55.41 # minimize function out <- optimize(f, interval = c(0,1)) round(out$minimum, 2) ## [1] 0.33 By maximising the likelihood (or minimising the negative log-likelihood), you obtain the maximum likelihood estimate of animal survival, which is exactly 19 surviving animals over 57 released animals, or 0.33. Third step is to create an MCMC configuration for our model with configureMCMC(): survivalConf <- configureMCMC(survival) ## ===== Monitors ===== ## thin = 1: theta ## ===== Samplers ===== ## RW sampler (1) ## - theta This step tells you the nodes that are monitored by default, and the MCMC samplers that have been assigned to them. Here theta is monitored, and samples from its posterior distribution are simulated with a random walk sampler similar to the Metropolis sampler we coded in the previous chapter in Section 1.5.3. To monitor lifespan in addition to theta, you write: survivalConf$addMonitors(c("lifespan")) ## thin = 1: theta, lifespan survivalConf ## ===== Monitors ===== ## thin = 1: theta, lifespan ## ===== Samplers ===== ## RW sampler (1) ## - theta Next, we create an MCMC function with buildMCMC() and compile it with compileNimble(): survivalMCMC <- buildMCMC(survivalConf) CsurvivalMCMC <- compileNimble(survivalMCMC, project = survival) Note that models and nimbleFunctions need to be compiled before they can be used to specify a project.
Finally, we run NIMBLE with runMCMC(): n.iter <- 5000 n.burnin <- 1000 samples <- runMCMC(mcmc = CsurvivalMCMC, niter = n.iter, nburnin = n.burnin) ## |-------------|-------------|-------------|-------------| ## |-------------------------------------------------------| We run a single chain here, but runMCMC() allows you to use multiple chains, as with nimbleMCMC(). You can look into samples, which contains values simulated from the posterior distribution of the parameters we monitor: head(samples) ## lifespan theta ## [1,] 0.9093 0.3330 ## [2,] 0.9093 0.3330 ## [3,] 0.9093 0.3330 ## [4,] 1.2095 0.4374 ## [5,] 1.2095 0.4374 ## [6,] 1.1835 0.4296 From here, you can obtain numerical summaries with samplesSummary(): samplesSummary(samples) ## Mean Median St.Dev. 95%CI_low 95%CI_upp ## lifespan 0.9357 0.9194 0.16117 0.6831 1.2969 ## theta 0.3386 0.3370 0.06128 0.2313 0.4625 I have summarized the steps above in the box below. Detailed NIMBLE workflow: # model building model <- nimbleCode({ # likelihood survived ~ dbinom(theta, released) # prior theta ~ dunif(0, 1) # derived quantity lifespan <- -1/log(theta) }) my.data <- list(released = 57, survived = 19) # pick initial values initial.values <- function() list(theta = runif(1,0,1)) # create model as an R object (uncompiled model) survival <- nimbleModel(code = model, data = my.data, inits = initial.values()) # compile model Csurvival <- compileNimble(survival) # create a MCMC configuration survivalConf <- configureMCMC(survival) # add lifespan to list of parameters to monitor survivalConf$addMonitors(c("lifespan")) # create a MCMC function and compile it survivalMCMC <- buildMCMC(survivalConf) CsurvivalMCMC <- compileNimble(survivalMCMC, project = survival) # specify MCMC details n.iter <- 5000 n.burnin <- 1000 n.chains <- 2 # run NIMBLE samples <- runMCMC(mcmc = CsurvivalMCMC, niter = n.iter, nburnin = n.burnin, nchains = n.chains) # calculate numerical summaries MCMCsummary(object = samples, round = 2) # visualize parameter
posterior distribution MCMCplot(object = samples, params = 'theta') # check convergence MCMCtrace(object = samples, pdf = FALSE, # no export to PDF ind = TRUE, # separate density lines per chain params = "theta") At first glance, using several steps instead of doing all these at once with nimbleMCMC() seems odd. Why is it useful? Mastering the whole sequence of steps allows you to play around with samplers, by changing the samplers NIMBLE picks by default, or even writing your own samplers. ## 2.6 MCMC samplers ### 2.6.1 Default samplers What is the default sampler used by NIMBLE in our example? You can answer this question by inspecting the MCMC configuration obtained with configureMCMC(): #survivalConf <- configureMCMC(survival) survivalConf$printSamplers() ## [1] RW sampler: theta Now that we have control over the MCMC configuration, let's mess with it. We start by removing the default sampler: survivalConf$removeSamplers(c('theta')) survivalConf$printSamplers() And we replace it with a slice sampler: survivalConf$addSampler(target = c('theta'), type = 'slice') survivalConf$printSamplers() ## [1] slice sampler: theta Now you can resume the workflow: # create a new MCMC function and compile it: survivalMCMC2 <- buildMCMC(survivalConf) CsurvivalMCMC2 <- compileNimble(survivalMCMC2, project = survival, resetFunctions = TRUE) # to compile new functions # into existing project, # need to reset nimbleFunctions # run NIMBLE: samples2 <- runMCMC(mcmc = CsurvivalMCMC2, niter = n.iter, nburnin = n.burnin) ## |-------------|-------------|-------------|-------------| ## |-------------------------------------------------------| # obtain numerical summaries: samplesSummary(samples2) ## Mean Median St.Dev. 95%CI_low 95%CI_upp ## lifespan 0.9357 0.9231 0.16002 0.6645 1.2826 ## theta 0.3387 0.3385 0.06098 0.2221 0.4586 NIMBLE implements many samplers, and a list is available with ?samplers.
For example, high correlation in (regression) parameters can make independent samplers inefficient. In that situation, block sampling might help: it consists of proposing candidate values from a multivariate distribution that acknowledges the correlation between parameters. By default, NIMBLE chooses samplers based on the structure of the model: conjugate samplers are assigned where conjugate relationships are detected among the declared distributions, and adaptive random walk samplers are used otherwise, as for theta in our example. ### 2.6.2 User-defined samplers Allowing you to code your own sampler is another area where NIMBLE shines. As an example, we focus on the Metropolis algorithm of Section 1.5.3, which we coded in R. In this section, we make it a nimbleFunction so that we can use it within our model: my_metropolis <- nimbleFunction( name = 'my_metropolis', # fancy name for our MCMC sampler contains = sampler_BASE, setup = function(model, mvSaved, target, control) { # i) get dependencies for 'target' in 'model' calcNodes <- model$getDependencies(target) # ii) get sd of proposal distribution scale <- control$scale }, run = function() { # (1) log-lik at current value initialLP <- model$getLogProb(calcNodes) # (2) current parameter value current <- model[[target]] # (3) logit transform lcurrent <- log(current / (1 - current)) # (4) propose candidate value lproposal <- lcurrent + rnorm(1, mean = 0, scale) # (5) back-transform proposal <- plogis(lproposal) # (6) plug candidate value in model model[[target]] <<- proposal # (7) log-lik at candidate value proposalLP <- model$calculate(calcNodes) # (8) compute lik ratio on log scale lMHR <- proposalLP - initialLP # (9) spin continuous spinner and compare to ratio if(runif(1,0,1) < exp(lMHR)) { # (10) if candidate value is accepted, update current value copy(from = model, to = mvSaved, nodes = calcNodes, logProb = TRUE, row = 1) } else { # (11) if candidate value is rejected, keep current value copy(from = mvSaved, to = model, nodes = calcNodes, logProb = TRUE, row = 1) } }, methods = list( reset = function() {} ) ) Compared to nimbleFunctions we wrote earlier, my_metropolis() contains a setup function which i)
gets the dependencies of the target node, the parameter that the run function updates with Metropolis (theta in our example), and ii) extracts control parameters (here scale, the standard deviation of the proposal distribution). Then the run function implements the steps of the Metropolis algorithm: (1) get the log-likelihood function evaluated at the current value, (2) get the current value, (3) apply the logit transform to it, (4) propose a candidate value by perturbing the current value with some normal noise controlled by the standard deviation scale, (5) back-transform the candidate value and (6) plug it in the model, (7) calculate the log-likelihood function at the candidate value, (8) compute the Metropolis ratio on the log scale, (9) compare the output of a spinner and the Metropolis ratio to decide whether to (10) accept the candidate value and copy from the model to mvSaved, or (11) reject it and keep the current value by copying from mvSaved to the model. Because this nimbleFunction is to be used as an MCMC sampler, several constraints must be respected, such as having a contains = sampler_BASE statement and using the four arguments model, mvSaved, target and control in the setup function. Of course, NIMBLE implements a more advanced and efficient version of the Metropolis algorithm; you can look into it at https://github.com/cran/nimble/blob/master/R/MCMC_samplers.R#L184. Now that we have our user-defined MCMC algorithm, we can replace the default sampler with our new sampler as in Section 2.6.1.
We start from scratch: model <- nimbleCode({ # likelihood survived ~ dbinom(theta, released) # prior theta ~ dunif(0, 1) }) my.data <- list(survived = 19, released = 57) initial.values <- function() list(theta = runif(1,0,1)) survival <- nimbleModel(code = model, data = my.data, inits = initial.values()) Csurvival <- compileNimble(survival) survivalConf <- configureMCMC(survival) ## ===== Monitors ===== ## thin = 1: theta ## ===== Samplers ===== ## RW sampler (1) ## - theta We print the samplers used by default, remove the default sampler for theta, replace it with our my_metropolis() sampler with the standard deviation of the proposal distribution set to 0.1, and print again to make sure NIMBLE now uses our new sampler: survivalConf$printSamplers() ## [1] RW sampler: theta survivalConf$removeSamplers(c('theta')) survivalConf$addSampler(target = 'theta', type = 'my_metropolis', control = list(scale = 0.1)) # standard deviation # of proposal distribution survivalConf$printSamplers() ## [1] my_metropolis sampler: theta, scale: 0.10000000000000001 The rest of the workflow is unchanged: survivalMCMC <- buildMCMC(survivalConf) CsurvivalMCMC <- compileNimble(survivalMCMC, project = survival) samples <- runMCMC(mcmc = CsurvivalMCMC, niter = 5000, nburnin = 1000) ## |-------------|-------------|-------------|-------------| ## |-------------------------------------------------------| samplesSummary(samples) ## Mean Median St.Dev. 95%CI_low 95%CI_upp ## theta 0.339 0.3377 0.05592 0.2374 0.4528 You can re-run the analysis by setting the standard deviation of the proposal to different values, say 1 and 10, and compare Figure 2.3 to traceplots we obtained with our R implementation of the Metropolis algorithm in the previous chapter at Figure 1.14: ## 2.7 Tips and tricks Before closing this chapter on NIMBLE, I thought it’d be useful to have a section gathering a few tips and tricks that would make your life easier. 
These are my tips and tricks; NIMBLE users, I'd be happy to hear yours: email me, edit the chapter or file an issue on GitHub. ### 2.7.1 Precision vs standard deviation In other software like JAGS, the normal distribution is parameterized with mean mu and a parameter called precision, often denoted tau, which is the inverse of the variance you are used to. Say we use a normal prior on some parameter epsilon with epsilon ~ dnorm(mu, tau). We'd like this prior to be vague, therefore tau should be small, say 0.01, so that the variance of the normal distribution is large, 1/0.01 = 100 here. This subtlety is the source of problems (and frustration) when you forget that the second parameter is precision and use epsilon ~ dnorm(mu, 100), because then the variance is actually 1/100 = 0.01 and the prior is very informative and peaked around mu. In NIMBLE you can use this parameterisation as well as the more natural parameterisation epsilon ~ dnorm(mu, sd = 100), which avoids confusion. ### 2.7.2 Indexing NIMBLE does not guess the dimensions of objects. In other software like JAGS you can write sum.x <- sum(x[]) to calculate the sum over all components of x. In NIMBLE you need to write sum.x <- sum(x[1:n]) to sum the components of x from 1 up to n. Specifying dimensions can be annoying, but I find it useful as it forces me to think of what I am doing and to keep my code self-explanatory. ### 2.7.3 Faster compilation You might have noticed that compilation in NIMBLE takes time. When you have large models (with lots of nodes), compilation can take forever. You can set calculate = FALSE in nimbleModel() to disable the calculation of all deterministic nodes and of the log-likelihood. You can also use useConjugacy = FALSE in configureMCMC() to disable the search for conjugate samplers.
With the animal survival example, you would do:

```r
model <- nimbleCode({
  # likelihood
  survived ~ dbinom(theta, released)
  # prior
  theta ~ dunif(0, 1)
})
my.data <- list(survived = 19, released = 57)
initial.values <- function() list(theta = runif(1, 0, 1))
survival <- nimbleModel(code = model,
                        data = my.data,
                        inits = initial.values(),
                        calculate = FALSE) # first tip
Csurvival <- compileNimble(survival)
survivalConf <- configureMCMC(survival)
```

```
## ===== Monitors =====
## thin = 1: theta
## ===== Samplers =====
## RW sampler (1)
##   - theta
```

```r
survivalMCMC <- buildMCMC(survivalConf, useConjugacy = FALSE) # second tip
CsurvivalMCMC <- compileNimble(survivalMCMC, project = survival)
samples <- runMCMC(mcmc = CsurvivalMCMC, niter = 5000, nburnin = 1000)
samplesSummary(samples)
```

```
##         Mean Median St.Dev. 95%CI_low 95%CI_upp
## theta 0.3402 0.3391 0.06029    0.2258    0.4616
```

### 2.7.4 Updating MCMC chains

Sometimes it is useful to run your MCMC chains a little bit longer to improve convergence. Re-starting from the run in the previous section, you can use:

```r
niter_ad <- 6000
CsurvivalMCMC$run(niter_ad, reset = FALSE)
```

```
## NULL
```

Then you can extract the matrix of previous MCMC samples augmented with the new ones and obtain numerical summaries:

```r
more_samples <- as.matrix(CsurvivalMCMC$mvSamples)
samplesSummary(more_samples)
```

```
##         Mean Median St.Dev. 95%CI_low 95%CI_upp
## theta 0.3402 0.3382 0.05975    0.2281    0.4632
```

You can check that more_samples contains 10000 samples: 4000 from the call to runMCMC() plus 6000 additional samples.

### 2.7.5 Reproducibility

If you want your results to be reproducible, you can control the state of R's random number generator with the setSeed argument of the functions nimbleMCMC() and runMCMC().
Going back to the animal survival example, you can check that two calls to nimbleMCMC() give the same results when setSeed is set to the same value:

```r
# first call to nimbleMCMC()
mcmc.output1 <- nimbleMCMC(code = model,
                           data = my.data,
                           inits = initial.values,
                           niter = 5000,
                           nburnin = 1000,
                           nchains = 3,
                           summary = TRUE,
                           setSeed = 123)
# second call to nimbleMCMC()
mcmc.output2 <- nimbleMCMC(code = model,
                           data = my.data,
                           inits = initial.values,
                           niter = 5000,
                           nburnin = 1000,
                           nchains = 3,
                           summary = TRUE,
                           setSeed = 123)
# outputs from both calls are the same
mcmc.output1$summary$all.chains
```

```
##         Mean Median St.Dev. 95%CI_low 95%CI_upp
## theta 0.3387  0.336 0.05968    0.2282    0.4608
```

```r
mcmc.output2$summary$all.chains
```

```
##         Mean Median St.Dev. 95%CI_low 95%CI_upp
## theta 0.3387  0.336 0.05968    0.2282    0.4608
```

### 2.7.6 Parallelization

To speed up your analyses, you can run MCMC chains in parallel. This is what the package jagsUI accomplishes for JAGS users.
Here, we use the parallel package for parallel computation:

```r
library(parallel)
```

First you create a cluster using all the cores you have but one, to make sure your computer can go on working:

```r
nbcores <- detectCores() - 1
my_cluster <- makeCluster(nbcores)
```

Then you wrap your workflow in a function to be run in parallel:

```r
workflow <- function(seed, data) {
  library(nimble)
  model <- nimbleCode({
    # likelihood
    survived ~ dbinom(theta, released)
    # prior
    theta ~ dunif(0, 1)
  })
  set.seed(123) # for reproducibility
  initial.values <- function() list(theta = runif(1, 0, 1))
  survival <- nimbleModel(code = model,
                          data = data,
                          inits = initial.values())
  Csurvival <- compileNimble(survival)
  survivalMCMC <- buildMCMC(Csurvival)
  CsurvivalMCMC <- compileNimble(survivalMCMC)
  samples <- runMCMC(mcmc = CsurvivalMCMC,
                     niter = 5000,
                     nburnin = 1000,
                     setSeed = seed)
  return(samples)
}
```

Now we run the code using parLapply(), which uses the cluster nodes to execute our workflow:

```r
output <- parLapply(cl = my_cluster,
                    X = c(2022, 666),
                    fun = workflow,
                    data = list(survived = 19, released = 57))
```

In the call to parLapply(), we specify X = c(2022, 666) to ensure reproducibility. The two values 2022 and 666 set the seed in workflow(), which means we run two instances of our workflow, or two MCMC chains. Note that we also have a line set.seed(123) in the workflow() function to ensure reproducibility while randomly drawing initial values. It's good practice to close the cluster with stopCluster() so that processes do not continue to run in the background and slow down other work:

```r
stopCluster(my_cluster)
```

By inspecting the results, you can see that the object output is a list with two components, one for each MCMC chain:

```r
str(output)
```

```
## List of 2
##  $ : num [1:4000, 1] 0.393 0.369 0.346 0.346 0.346 ...
##   ..- attr(*, "dimnames")=List of 2
##   .. ..$ : NULL
##   .. ..$ : chr "theta"
##  $ : num [1:4000, 1] 0.435 0.435 0.435 0.435 0.243 ...
##   ..- attr(*, "dimnames")=List of 2
##   .. ..$ : NULL
##   .. ..$ : chr "theta"
```

Eventually, you can obtain numerical summaries:

```r
MCMCsummary(output)
```

```
##         mean      sd   2.5%    50%  97.5% Rhat n.eff
## theta 0.3361 0.06148 0.2215 0.3335 0.4594    1  1779
```

### 2.7.7 Incomplete initialization

When you run nimbleMCMC() or nimbleModel(), you may get warnings from NIMBLE like 'This model is not fully initialized' or 'value is NA or NaN even after trying to calculate.' This is not necessarily an error; it 'reflects missing values in model variables' (incomplete initialization). In this situation, NIMBLE will initialize the nodes that have NAs by drawing from their priors, which may or may not work. When possible, I try to initialize all nodes myself (full initialization). The process can be a bit of a headache, but it helps me understand the model structure better.

Going back to our animal survival example, let's purposely forget to provide an initial value for theta:

```r
model <- nimbleCode({
  # likelihood
  survived ~ dbinom(theta, released)
  # prior
  theta ~ dunif(0, 1)
})
# initial.values <- list(theta = runif(1, 0, 1))
survival <- nimbleModel(code = model,
                        data = list(survived = 19, released = 57))
```

To see which variables are not initialized, we use initializeInfo():

```r
# survival$calculate() # gives NA
survival$initializeInfo()
```

Now that we know theta was not initialized, we can fix the issue and resume our workflow:

```r
survival$theta <- 0.5 # assign initial value to theta
survival$calculate()
```

```
## [1] -5.422
```

```r
Csurvival <- compileNimble(survival)
survivalMCMC <- buildMCMC(Csurvival)
```

```
## ===== Monitors =====
## thin = 1: theta
## ===== Samplers =====
## RW sampler (1)
##   - theta
```

```r
CsurvivalMCMC <- compileNimble(survivalMCMC)
samples <- runMCMC(mcmc = CsurvivalMCMC, niter = 5000, nburnin = 1000)
samplesSummary(samples)
```

```
##         Mean Median St.Dev. 95%CI_low 95%CI_upp
## theta 0.3359 0.3335 0.06088    0.2191    0.4602
```

### 2.7.8 Vectorization

Vectorization is the process of replacing a loop by a vector, so that instead of processing a single value at a time, you process a set of values at once. As an example, instead of writing:

```r
for(i in 1:n){
  x[i] <- mu + epsilon[i]
}
```

you would write:

```r
x[1:n] <- mu + epsilon[1:n]
```

Vectorization can make your code more efficient by manipulating a single vector node x[1:n] instead of n scalar nodes x[1], ..., x[n].

## 2.8 Summary

• NIMBLE is an R package that implements MCMC algorithms for you, to generate samples from the posterior distribution of model parameters. You only have to specify a likelihood and priors using the BUGS language to apply Bayes' theorem.

• NIMBLE is more than just another MCMC engine. It provides a programming environment so that you have full control when building models and estimating parameters.

• At the core of NIMBLE are nimbleFunctions, which you can write and compile for faster computation. With nimbleFunctions you can mimic basic R syntax, work with vectors and matrices, use logical operators and flow control, and specify many distributions.

• There are two workflows to run NIMBLE. In most situations, nimbleMCMC() will serve you well. When you need more control, you can adopt a detailed workflow with nimbleModel(), configureMCMC(), buildMCMC(), compileNimble() and runMCMC().

• By having full control of the workflow, you can change the default MCMC samplers and even write your own.
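Returning to the vectorization tip of section 2.7.8, here is one possible sketch in the spirit of the animal survival model. Everything in it is illustrative and untested: the three release groups and their counts are made up, and the names dbinom_vec and nb_groups are my own; the pattern follows the general recipe for user-defined vectorized distributions in NIMBLE, not code from this chapter.

```r
library(nimble)

# A user-defined vectorized binomial (hypothetical name dbinom_vec):
# the likelihood becomes one vector node instead of nb_groups scalar nodes
dbinom_vec <- nimbleFunction(
  run = function(x = double(1), size = double(1), prob = double(0),
                 log = integer(0, default = 0)) {
    returnType(double(0))
    logProb <- sum(dbinom(x, size = size, prob = prob, log = TRUE))
    if (log) return(logProb) else return(exp(logProb))
  })

model <- nimbleCode({
  # vectorized likelihood: survived[1:nb_groups] is a single vector node
  survived[1:nb_groups] ~ dbinom_vec(size = released[1:nb_groups],
                                     prob = theta)
  # prior
  theta ~ dunif(0, 1)
})

my.data <- list(survived = c(19, 25, 11),  # made-up counts for illustration
                released = c(57, 60, 30))
my.constants <- list(nb_groups = 3)
initial.values <- function() list(theta = runif(1, 0, 1))

survival <- nimbleModel(code = model,
                        data = my.data,
                        constants = my.constants,
                        inits = initial.values())
```

Compared with a for loop of scalar dbinom declarations, the model graph now contains one vector node survived[1:3], which can speed up model building and calculation when nb_groups is large.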
# A kind of Stein factorization for non-proper morphisms

Let $S$ and $T$ be noetherian connected normal schemes. Let $K$ (resp. $L$) be the function field of $S$ (resp. $T$). Assume that $\mathrm{char}(L)=0$. Let $f: S\to T$ be a smooth surjective morphism.

Is it true that $f$ factors as $$S\to T'\to T$$ where $S\to T'$ is smooth and surjective with geometrically connected generic fibre and $T'\to T$ is a finite morphism?

(A candidate for $T'$ could be the following scheme: Let $F$ be the algebraic closure of $L$ in $K$. Let $T'$ be the normalization of $T$ in $F$. Then the morphism $f$ factors through $T'$, but I do not know how to prove that the resulting morphism $S\to T'$ is smooth and surjective.)

My main interest is the case where $L$ is a number field and $T$ is a dense open subscheme of $Spec(O_L)$.

• I think you should try some examples on your own. For instance, what happens if $T$ is $\text{Spec}(\mathbb{Z})$ and $S$ is the maximal open subscheme of $\text{Spec}(\mathbb{Z}[x]/\langle x^3 + x + 1 \rangle)$ that is smooth over $T$? – Jason Starr Apr 29 '14 at 19:53
• Then I can take $T'=S$? – Sebastian Petersen Apr 30 '14 at 5:53
• "Then I can take $T'=S$?" No, you cannot. The induced morphism from $S$ to $T$ is not a finite morphism, and it does not have geometrically connected generic fiber. So you cannot take $T'$ equal to $T$, and you also cannot take $T'$ equal to $S$. – Jason Starr Apr 30 '14 at 12:01
• OK, now I get it. Maybe I should require $S\to T'$ dominant instead of surjective? I am thinking for a moment... – Sebastian Petersen Apr 30 '14 at 13:29
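To see concretely why Jason Starr's example rules out surjectivity of $S\to T'$, one can work out the details (this computation is mine, not from the thread):

```latex
% Discriminant of the cubic in Starr's example:
\[
\operatorname{disc}(x^3 + x + 1) \;=\; -4\cdot 1^3 - 27\cdot 1^2 \;=\; -31,
\]
% so with A = Z[x]/(x^3+x+1), the map Spec A -> Spec Z is etale (hence smooth)
% away from 31. Since -31 is squarefree, A is the maximal order of
% K = Q[x]/(x^3+x+1), and the algebraic closure of Q in K is K itself.
% The candidate is therefore T' = Spec A, and S -> T' is the open immersion
% omitting the non-smooth point(s) above 31: dominant, but not surjective.
```

This matches the last comment in the thread: with the candidate $T'$, one can only hope for $S\to T'$ dominant, not surjective.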
:: The Properties of Product of Relational Structures
:: by Artur Korni{\l}owicz
::
:: Copyright (c) 1998-2018 Association of Mizar Users

registration let S, T be non empty upper-bounded RelStr ; cluster [:S,T:] -> upper-bounded ; coherence [:S,T:] is upper-bounded proof end; end;

registration let S, T be non empty lower-bounded RelStr ; cluster [:S,T:] -> lower-bounded ; coherence [:S,T:] is lower-bounded proof end; end;

theorem :: YELLOW10:1 for S, T being non empty RelStr st [:S,T:] is upper-bounded holds ( S is upper-bounded & T is upper-bounded ) proof end;

theorem :: YELLOW10:2 for S, T being non empty RelStr st [:S,T:] is lower-bounded holds ( S is lower-bounded & T is lower-bounded ) proof end;

theorem Th3: :: YELLOW10:3 for S, T being non empty antisymmetric upper-bounded RelStr holds Top [:S,T:] = [(Top S),(Top T)] proof end;

theorem Th4: :: YELLOW10:4 for S, T being non empty antisymmetric lower-bounded RelStr holds Bottom [:S,T:] = [(Bottom S),(Bottom T)] proof end;

theorem Th5: :: YELLOW10:5 for S, T being non empty antisymmetric lower-bounded RelStr for D being Subset of [:S,T:] st ( [:S,T:] is complete or ex_sup_of D,[:S,T:] ) holds sup D = [(sup (proj1 D)),(sup (proj2 D))] proof end;

theorem :: YELLOW10:6 for S, T being non empty antisymmetric upper-bounded RelStr for D being Subset of [:S,T:] st ( [:S,T:] is complete or ex_inf_of D,[:S,T:] ) holds inf D = [(inf (proj1 D)),(inf (proj2 D))] proof end;

theorem :: YELLOW10:7 for S, T being non empty RelStr for x, y being Element of [:S,T:] holds ( x is_<=_than {y} iff ( x `1 is_<=_than {(y `1)} & x `2 is_<=_than {(y `2)} ) ) proof end;

theorem :: YELLOW10:8 for S, T being non empty RelStr for x, y, z being Element of [:S,T:] holds ( x is_<=_than {y,z} iff ( x `1 is_<=_than {(y `1),(z `1)} & x `2 is_<=_than {(y `2),(z `2)} ) ) proof end;

theorem :: YELLOW10:9 for S, T being non empty RelStr for x, y being Element of [:S,T:] holds ( x is_>=_than {y} iff ( x `1 is_>=_than {(y `1)} & x `2 is_>=_than {(y `2)} ) ) proof end;

theorem :: YELLOW10:10 for S, T being
non empty RelStr for x, y, z being Element of [:S,T:] holds ( x is_>=_than {y,z} iff ( x `1 is_>=_than {(y `1),(z `1)} & x `2 is_>=_than {(y `2),(z `2)} ) ) proof end;

theorem :: YELLOW10:11 for S, T being non empty antisymmetric RelStr for x, y being Element of [:S,T:] holds ( ex_inf_of {x,y},[:S,T:] iff ( ex_inf_of {(x `1),(y `1)},S & ex_inf_of {(x `2),(y `2)},T ) ) proof end;

theorem :: YELLOW10:12 for S, T being non empty antisymmetric RelStr for x, y being Element of [:S,T:] holds ( ex_sup_of {x,y},[:S,T:] iff ( ex_sup_of {(x `1),(y `1)},S & ex_sup_of {(x `2),(y `2)},T ) ) proof end;

theorem Th13: :: YELLOW10:13 for S, T being antisymmetric with_infima RelStr for x, y being Element of [:S,T:] holds ( (x "/\" y) `1 = (x `1) "/\" (y `1) & (x "/\" y) `2 = (x `2) "/\" (y `2) ) proof end;

theorem Th14: :: YELLOW10:14 for S, T being antisymmetric with_suprema RelStr for x, y being Element of [:S,T:] holds ( (x "\/" y) `1 = (x `1) "\/" (y `1) & (x "\/" y) `2 = (x `2) "\/" (y `2) ) proof end;

theorem Th15: :: YELLOW10:15 for S, T being antisymmetric with_infima RelStr for x1, y1 being Element of S for x2, y2 being Element of T holds [(x1 "/\" y1),(x2 "/\" y2)] = [x1,x2] "/\" [y1,y2] proof end;

theorem Th16: :: YELLOW10:16 for S, T being antisymmetric with_suprema RelStr for x1, y1 being Element of S for x2, y2 being Element of T holds [(x1 "\/" y1),(x2 "\/" y2)] = [x1,x2] "\/" [y1,y2] proof end;

definition let S be antisymmetric with_suprema with_infima RelStr ; let x, y be Element of S; :: original: is_a_complement_of redefine pred y is_a_complement_of x; symmetry for y, x being Element of S st R55(S,b2,b1) holds R55(S,b1,b2) proof end; end;

theorem Th17: :: YELLOW10:17 for S, T being antisymmetric bounded with_suprema with_infima RelStr for x, y being Element of [:S,T:] holds ( x is_a_complement_of y iff ( x `1 is_a_complement_of y `1 & x `2 is_a_complement_of y `2 ) ) proof end;

theorem Th18: :: YELLOW10:18 for S, T being non empty reflexive antisymmetric up-complete RelStr for a, c being
Element of S for b, d being Element of T st [a,b] << [c,d] holds ( a << c & b << d ) proof end;

theorem Th19: :: YELLOW10:19 for S, T being non empty up-complete Poset for a, c being Element of S for b, d being Element of T holds ( [a,b] << [c,d] iff ( a << c & b << d ) ) proof end;

theorem Th20: :: YELLOW10:20 for S, T being non empty reflexive antisymmetric up-complete RelStr for x, y being Element of [:S,T:] st x << y holds ( x `1 << y `1 & x `2 << y `2 ) proof end;

theorem Th21: :: YELLOW10:21 for S, T being non empty up-complete Poset for x, y being Element of [:S,T:] holds ( x << y iff ( x `1 << y `1 & x `2 << y `2 ) ) proof end;

theorem Th22: :: YELLOW10:22 for S, T being non empty reflexive antisymmetric up-complete RelStr for x being Element of [:S,T:] st x is compact holds ( x `1 is compact & x `2 is compact ) proof end;

theorem Th23: :: YELLOW10:23 for S, T being non empty up-complete Poset for x being Element of [:S,T:] st x `1 is compact & x `2 is compact holds x is compact proof end;

theorem Th24: :: YELLOW10:24 for S, T being antisymmetric with_infima RelStr for X, Y being Subset of [:S,T:] holds ( proj1 (X "/\" Y) = (proj1 X) "/\" (proj1 Y) & proj2 (X "/\" Y) = (proj2 X) "/\" (proj2 Y) ) proof end;

theorem :: YELLOW10:25 for S, T being antisymmetric with_suprema RelStr for X, Y being Subset of [:S,T:] holds ( proj1 (X "\/" Y) = (proj1 X) "\/" (proj1 Y) & proj2 (X "\/" Y) = (proj2 X) "\/" (proj2 Y) ) proof end;

theorem :: YELLOW10:26 for S, T being RelStr for X being Subset of [:S,T:] holds downarrow X c= [:(downarrow (proj1 X)),(downarrow (proj2 X)):] proof end;

theorem :: YELLOW10:27 for S, T being RelStr for X being Subset of S for Y being Subset of T holds [:(downarrow X),(downarrow Y):] = downarrow [:X,Y:] proof end;

theorem Th28: :: YELLOW10:28 for S, T being RelStr for X being Subset of [:S,T:] holds ( proj1 (downarrow X) c= downarrow (proj1 X) & proj2 (downarrow X) c= downarrow (proj2 X) ) proof end;

theorem :: YELLOW10:29 for S being RelStr for T being reflexive RelStr for X being Subset of [:S,T:] holds proj1 (downarrow X) = downarrow (proj1 X) proof end;

theorem :: YELLOW10:30 for S being
reflexive RelStr for T being RelStr for X being Subset of [:S,T:] holds proj2 (downarrow X) = downarrow (proj2 X) proof end;

theorem :: YELLOW10:31 for S, T being RelStr for X being Subset of [:S,T:] holds uparrow X c= [:(uparrow (proj1 X)),(uparrow (proj2 X)):] proof end;

theorem :: YELLOW10:32 for S, T being RelStr for X being Subset of S for Y being Subset of T holds [:(uparrow X),(uparrow Y):] = uparrow [:X,Y:] proof end;

theorem Th33: :: YELLOW10:33 for S, T being RelStr for X being Subset of [:S,T:] holds ( proj1 (uparrow X) c= uparrow (proj1 X) & proj2 (uparrow X) c= uparrow (proj2 X) ) proof end;

theorem :: YELLOW10:34 for S being RelStr for T being reflexive RelStr for X being Subset of [:S,T:] holds proj1 (uparrow X) = uparrow (proj1 X) proof end;

theorem :: YELLOW10:35 for S being reflexive RelStr for T being RelStr for X being Subset of [:S,T:] holds proj2 (uparrow X) = uparrow (proj2 X) proof end;

theorem :: YELLOW10:36 for S, T being non empty RelStr for s being Element of S for t being Element of T holds [:(downarrow s),(downarrow t):] = downarrow [s,t] proof end;

theorem Th37: :: YELLOW10:37 for S, T being non empty RelStr for x being Element of [:S,T:] holds ( proj1 (downarrow x) c= downarrow (x `1) & proj2 (downarrow x) c= downarrow (x `2) ) proof end;

theorem :: YELLOW10:38 for S being non empty RelStr for T being non empty reflexive RelStr for x being Element of [:S,T:] holds proj1 (downarrow x) = downarrow (x `1) proof end;

theorem :: YELLOW10:39 for S being non empty reflexive RelStr for T being non empty RelStr for x being Element of [:S,T:] holds proj2 (downarrow x) = downarrow (x `2) proof end;

theorem :: YELLOW10:40 for S, T being non empty RelStr for s being Element of S for t being Element of T holds [:(uparrow s),(uparrow t):] = uparrow [s,t] proof end;

theorem Th41: :: YELLOW10:41 for S, T being non empty RelStr for x being Element of [:S,T:] holds ( proj1 (uparrow x) c= uparrow (x `1) & proj2 (uparrow x) c= uparrow (x `2) ) proof end;

theorem :: YELLOW10:42 for S being non empty RelStr for T being non empty reflexive RelStr for x being Element of [:S,T:] holds proj1 (uparrow x) = uparrow (x `1) proof end;

theorem :: YELLOW10:43 for S being non empty reflexive
RelStr for T being non empty RelStr for x being Element of [:S,T:] holds proj2 (uparrow x) = uparrow (x `2) proof end;

theorem Th44: :: YELLOW10:44 for S, T being non empty up-complete Poset for s being Element of S for t being Element of T holds [:(waybelow s),(waybelow t):] = waybelow [s,t] proof end;

theorem Th45: :: YELLOW10:45 for S, T being non empty reflexive antisymmetric up-complete RelStr for x being Element of [:S,T:] holds ( proj1 (waybelow x) c= waybelow (x `1) & proj2 (waybelow x) c= waybelow (x `2) ) proof end;

theorem Th46: :: YELLOW10:46 for S being non empty up-complete Poset for T being non empty lower-bounded up-complete Poset for x being Element of [:S,T:] holds proj1 (waybelow x) = waybelow (x `1) proof end;

theorem Th47: :: YELLOW10:47 for S being non empty lower-bounded up-complete Poset for T being non empty up-complete Poset for x being Element of [:S,T:] holds proj2 (waybelow x) = waybelow (x `2) proof end;

theorem :: YELLOW10:48 for S, T being non empty up-complete Poset for s being Element of S for t being Element of T holds [:(wayabove s),(wayabove t):] = wayabove [s,t] proof end;

theorem :: YELLOW10:49 for S, T being non empty reflexive antisymmetric up-complete RelStr for x being Element of [:S,T:] holds ( proj1 (wayabove x) c= wayabove (x `1) & proj2 (wayabove x) c= wayabove (x `2) ) proof end;

theorem Th50: :: YELLOW10:50 for S, T being non empty up-complete Poset for s being Element of S for t being Element of T holds [:(compactbelow s),(compactbelow t):] = compactbelow [s,t] proof end;

theorem Th51: :: YELLOW10:51 for S, T being non empty reflexive antisymmetric up-complete RelStr for x being Element of [:S,T:] holds ( proj1 (compactbelow x) c= compactbelow (x `1) & proj2 (compactbelow x) c= compactbelow (x `2) ) proof end;

theorem Th52: :: YELLOW10:52 for S being non empty up-complete Poset for T being non empty lower-bounded up-complete Poset for x being Element of [:S,T:] holds proj1 (compactbelow x) = compactbelow (x `1) proof end;

theorem Th53: :: YELLOW10:53 for S being non empty lower-bounded up-complete Poset for T being non empty up-complete Poset for x being Element of [:S,T:] holds proj2 (compactbelow x) =
compactbelow (x `2) proof end;

registration let S be non empty reflexive RelStr ; cluster empty -> Open for Subset of S; coherence for b1 being Subset of S st b1 is empty holds b1 is Open proof end; end;

theorem :: YELLOW10:54 for S, T being non empty reflexive antisymmetric up-complete RelStr for X being Subset of [:S,T:] st X is Open holds ( proj1 X is Open & proj2 X is Open ) proof end;

theorem :: YELLOW10:55 for S, T being non empty up-complete Poset for X being Subset of S for Y being Subset of T st X is Open & Y is Open holds [:X,Y:] is Open proof end;

theorem :: YELLOW10:56 for S, T being non empty reflexive antisymmetric up-complete RelStr for X being Subset of [:S,T:] st X is inaccessible holds ( proj1 X is inaccessible & proj2 X is inaccessible ) proof end;

theorem :: YELLOW10:57 for S, T being non empty reflexive antisymmetric up-complete RelStr for X being upper Subset of S for Y being upper Subset of T st X is inaccessible & Y is inaccessible holds [:X,Y:] is inaccessible proof end;

theorem :: YELLOW10:58 for S, T being non empty reflexive antisymmetric up-complete RelStr for X being Subset of S for Y being Subset of T st [:X,Y:] is directly_closed holds ( ( Y <> {} implies X is directly_closed ) & ( X <> {} implies Y is directly_closed ) ) proof end;

theorem :: YELLOW10:59 for S, T being non empty reflexive antisymmetric up-complete RelStr for X being Subset of S for Y being Subset of T st X is directly_closed & Y is directly_closed holds [:X,Y:] is directly_closed proof end;

theorem :: YELLOW10:60 for S, T being non empty reflexive antisymmetric up-complete RelStr for X being Subset of [:S,T:] st X is property(S) holds ( proj1 X is property(S) & proj2 X is property(S) ) proof end;

theorem :: YELLOW10:61 for S, T being non empty up-complete Poset for X being Subset of S for Y being Subset of T st X is property(S) & Y is property(S) holds [:X,Y:] is property(S) proof end;

theorem Th62: :: YELLOW10:62 for S, T being non empty reflexive RelStr st RelStr(#
the carrier of S, the InternalRel of S #) = RelStr(# the carrier of T, the InternalRel of T #) & S is /\-complete holds T is /\-complete proof end;

registration let S be non empty reflexive /\-complete RelStr ; cluster RelStr(# the carrier of S, the InternalRel of S #) -> /\-complete ; coherence RelStr(# the carrier of S, the InternalRel of S #) is /\-complete by Th62; end;

registration let S, T be non empty reflexive /\-complete RelStr ; cluster [:S,T:] -> /\-complete ; coherence [:S,T:] is /\-complete proof end; end;

theorem :: YELLOW10:63 for S, T being non empty reflexive RelStr st [:S,T:] is /\-complete holds ( S is /\-complete & T is /\-complete ) proof end;

registration let S, T be non empty antisymmetric bounded complemented with_suprema with_infima RelStr ; cluster [:S,T:] -> complemented ; coherence [:S,T:] is complemented proof end; end;

theorem :: YELLOW10:64 for S, T being antisymmetric bounded with_suprema with_infima RelStr st [:S,T:] is complemented holds ( S is complemented & T is complemented ) proof end;

registration let S, T be non empty antisymmetric distributive with_suprema with_infima RelStr ; cluster [:S,T:] -> distributive ; coherence [:S,T:] is distributive proof end; end;

theorem :: YELLOW10:65 for S being antisymmetric with_suprema with_infima RelStr for T being reflexive antisymmetric with_suprema with_infima RelStr st [:S,T:] is distributive holds S is distributive proof end;

theorem :: YELLOW10:66 for S being reflexive antisymmetric with_suprema with_infima RelStr for T being antisymmetric with_suprema with_infima RelStr st [:S,T:] is distributive holds T is distributive proof end;

registration let S, T be meet-continuous Semilattice; cluster [:S,T:] -> satisfying_MC ; coherence [:S,T:] is satisfying_MC proof end; end;

theorem :: YELLOW10:67 for S, T being Semilattice st [:S,T:] is meet-continuous holds ( S is meet-continuous & T is meet-continuous ) proof end;

registration let S, T be non empty up-complete /\-complete
satisfying_axiom_of_approximation Poset; cluster [:S,T:] -> satisfying_axiom_of_approximation ; coherence proof end; end;

registration let S, T be non empty /\-complete continuous Poset; cluster [:S,T:] -> continuous ; coherence [:S,T:] is continuous ; end;

theorem :: YELLOW10:68 for S, T being non empty lower-bounded up-complete Poset st [:S,T:] is continuous holds ( S is continuous & T is continuous ) proof end;

registration let S, T be lower-bounded up-complete satisfying_axiom_K sup-Semilattice; cluster [:S,T:] -> satisfying_axiom_K ; coherence proof end; end;

registration let S, T be lower-bounded complete algebraic sup-Semilattice; cluster [:S,T:] -> algebraic ; coherence [:S,T:] is algebraic ; end;

theorem Th69: :: YELLOW10:69 for S, T being non empty lower-bounded Poset st [:S,T:] is algebraic holds ( S is algebraic & T is algebraic ) proof end;

registration let S, T be lower-bounded arithmetic LATTICE; cluster [:S,T:] -> arithmetic ; coherence [:S,T:] is arithmetic proof end; end;

theorem :: YELLOW10:70 for S, T being lower-bounded LATTICE st [:S,T:] is arithmetic holds ( S is arithmetic & T is arithmetic ) proof end;
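A running theme of these theorems is that the order structure of the product $[:S,T:]$ is computed componentwise. Informally, this is the standard fact about product posets (stated here for orientation only; it is not part of the Mizar article):

```latex
\[
(x_1,x_2) \le (y_1,y_2) \iff x_1 \le y_1 \ \text{and}\ x_2 \le y_2,
\]
\[
(x_1,x_2) \wedge (y_1,y_2) = (x_1 \wedge y_1,\; x_2 \wedge y_2), \qquad
(x_1,x_2) \vee (y_1,y_2) = (x_1 \vee y_1,\; x_2 \vee y_2),
\]
\[
\top_{S\times T} = (\top_S,\top_T), \qquad \bot_{S\times T} = (\bot_S,\bot_T).
\]
```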
## Journal of the Mathematical Society of Japan

### Generalized Whittaker functions on GSp(2,R) associated with indefinite quadratic forms

Tomonori MORIYAMA

#### Abstract

We study the generalized Whittaker models for G = GSp(2,R) associated with indefinite binary quadratic forms when they arise from two standard representations of G: (i) a generalized principal series representation induced from the non-Siegel maximal parabolic subgroup and (ii) a (limit of) large discrete series representation. We prove the uniqueness of such models with the moderate growth property. Moreover, we express the values of the corresponding generalized Whittaker functions on a one-parameter subgroup of G in terms of the Meijer G-functions.

#### Article information

Source: J. Math. Soc. Japan, Volume 63, Number 4 (2011), 1203-1262.

Dates: First available in Project Euclid: 27 October 2011

Permanent link: https://projecteuclid.org/euclid.jmsj/1319721140

Digital Object Identifier: doi:10.2969/jmsj/06341203

Mathematical Reviews number (MathSciNet): MR2855812

Zentralblatt MATH identifier: 1268.22018

#### Citation

MORIYAMA, Tomonori. Generalized Whittaker functions on GSp(2,R) associated with indefinite quadratic forms. J. Math. Soc. Japan 63 (2011), no. 4, 1203--1262. doi:10.2969/jmsj/06341203. https://projecteuclid.org/euclid.jmsj/1319721140
{}
# User:Eas4200c.f08.aeris.guan/919lecture

## Torque, Shear Flow, and Shear Panels

Recall from problem 1.1 that, for a thin-walled rectangular cross section, the shear stress and the applied torque are related through the cross-sectional dimensions by ${\displaystyle \tau ={\frac {T}{2abt}}}$. This relationship holds only for the particular case in which the cross section of the thin-walled beam or member being analyzed is rectangular. To relate the applied torque to the geometry of a more general cross section, it is necessary to introduce the concept of shear flow. Shear flow is the product of the shear stress and the thickness of the cross section. Usually, this thickness is very small compared to the contour length. For the more general case relating torque with dimensions and shear flow, a formula is introduced in section 3.5 of "Mechanics of Aircraft Structures" by C.T. Sun:

${\displaystyle \displaystyle T=\oint {\rho qds}=\int \int _{A_{bar}}{2qdA}=2qA_{bar}}$

This formula describes the torque of a thin-walled bar with an arbitrary closed cross section of thickness t. Here, q is the shear flow, t is the thickness, ${\displaystyle A_{bar}}$ is the area enclosed by the midline of the wall, and T is the torque. In summary, for a nonuniform thin-walled cross section, ${\displaystyle T=2qA_{bar}}$, and for a thin-walled rectangular cross section, ${\displaystyle T=2\tau abt}$ (since ${\displaystyle q=\tau t}$ and ${\displaystyle A_{bar}=ab}$). Deriving the torque for the thin-walled rectangular cross section was an ad hoc method because of the assumptions used to determine the equation; however, it is logical. The derivation of the torque for the nonuniform cross section is a method based on elasticity.

Upon scrutiny of the stringers of an aircraft that are part of a monocoque structure, the open thin-walled box structure was previously determined to be more practical than the closed structure because of ease of riveting and installation.
However, as the stringers are examined more carefully, one notices that the vertical sections are slightly slanted. Why is this? Because slanting the vertical sections makes the structure practical for stockpiling and storage while losing very little moment of inertia.

Starting in chapter two, the structure of a shear panel is introduced. It is a thin sheet of material, usually aluminum, that can withstand shear load. There is a relationship between the shear stress applied to the structure, the shear modulus of the material, and the shear strain: ${\displaystyle \displaystyle \tau =G\gamma }$, where G is the shear modulus and gamma is the shear strain. Gamma describes the change in a right angle between two fictitious lines in the material due to deformation. It can be expressed by displacements in an (x,y) coordinate system relative to the structure:

${\displaystyle \displaystyle \gamma ={\frac {\partial u}{\partial y}}+{\frac {\partial v}{\partial x}}={\frac {\partial u_{x}}{\partial y}}+{\frac {\partial u_{y}}{\partial x}}}$

where ${\displaystyle u=u_{x}}$ is the displacement along the x direction, and similarly for v. This shear strain is called the engineering shear strain, not the tensorial shear strain. The two quantities differ by a factor of two: ${\displaystyle \displaystyle \epsilon _{xy}}$ (tensorial shear strain) ${\displaystyle \displaystyle =0.5\gamma _{xy}}$. The following picture describes the deformation of an infinitesimally small element (i.e., part of a shear panel) subject to shear forces; the blue shape is the deformed infinitesimally small rectangle.
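The chain of relations above (torque to shear flow via ${\displaystyle T=2qA_{bar}}$, shear flow to shear stress via ${\displaystyle q=\tau t}$, and shear stress to shear strain via ${\displaystyle \tau =G\gamma }$) can be sketched numerically. This is only an illustrative sketch; the section dimensions, torque, and shear modulus below are made-up values, not data from the lecture:

```python
# Illustrative sketch of the torque -> shear flow -> stress -> strain chain
# for a thin-walled rectangular box section.  All numeric values are made up.

def shear_flow(T, a, b):
    """Shear flow q from T = 2 q A_bar, with A_bar = a*b the midline area."""
    A_bar = a * b
    return T / (2.0 * A_bar)

def shear_stress(q, t):
    """Shear stress from the definition of shear flow, q = tau * t."""
    return q / t

def shear_strain(tau, G):
    """Engineering shear strain from Hooke's law in shear, tau = G * gamma."""
    return tau / G

T = 1000.0       # N*m, applied torque (assumed value)
a, b = 0.2, 0.1  # m, midline dimensions of the rectangular section (assumed)
t = 0.002        # m, wall thickness (assumed)
G = 26e9         # Pa, shear modulus, roughly that of aluminum

q = shear_flow(T, a, b)    # 25000 N/m
tau = shear_stress(q, t)   # 12.5e6 Pa = 12.5 MPa
gamma = shear_strain(tau, G)
```

Note how the rectangular-section result ${\displaystyle T=2\tau abt}$ falls out of the general formula once q is replaced by τt.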
{}
#### core, lib, modules: restructured source code tree

- new folder src/ to hold the source code for main project applications
- main.c is in src/
- all core files and subfolders are in src/core/
- modules are in src/modules/
- libs are in src/lib/
- application Makefiles are in src/
- application binary is built in src/ (src/kamailio)

Daniel-Constantin Mierla authored on 07/12/2016 11:03:51

Showing 1 changed file (deleted file mode 100644). The removed file:

```c
/*
 * Copyright (C) 2008 iptelorg GmbH
 *
 * Permission to use, copy, modify, and distribute this software for any
 * purpose with or without fee is hereby granted, provided that the above
 * copyright notice and this permission notice appear in all copies.
 *
 * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
 * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
 * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
 * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
 * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
 * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
 * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
 */

/*!
 * \file
 * \brief Kamailio core :: ser/kamailio/openser compatibility macros & vars.
 * \ingroup core
 * Module: \ref core
 */

/*
 * History:
 * --------
 *  2008-11-29  initial version (andrei)
 */


#include "sr_compat.h"

/**
 * compatibility modes:
 *  - SR_COMPAT_SER      - strict compatibility with ser ($xy is avp)
 *  - SR_COMPAT_KAMAILIO - strict compatibility with kamailio ($xy is pv)
 *  - SR_COMPAT_MAX      - max compatibility ($xy tried as pv, if not found, is avp)
 */
#ifdef SR_SER
#define SR_DEFAULT_COMPAT SR_COMPAT_SER
#elif defined SR_KAMAILIO || defined SR_OPENSER
#define SR_DEFAULT_COMPAT SR_COMPAT_MAX
#elif defined SR_ALL || defined SR_MAX_COMPAT
#define SR_DEFAULT_COMPAT SR_COMPAT_MAX
#else
/* default */
#define SR_DEFAULT_COMPAT SR_COMPAT_MAX
#endif

int sr_compat=SR_DEFAULT_COMPAT;
int sr_cfg_compat=SR_DEFAULT_COMPAT;
```

#### Core: removed history, svn $Id$ and doxygen updates on the .c files

Olle E. Johansson authored on 03/01/2015 09:53:17

Showing 1 changed file:

```diff
@@ -1,6 +1,4 @@
 /*
- * $Id$
- *
  * Copyright (C) 2008 iptelorg GmbH
  *
  * Permission to use, copy, modify, and distribute this software for any
@@ -15,9 +13,10 @@
  * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
  * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
  */
+
 /*!
  * \file
- * \brief SIP-router core :: ser/kamailio/openser compatibility macros & vars.
+ * \brief Kamailio core :: ser/kamailio/openser compatibility macros & vars.
  * \ingroup core
  * Module: \ref core
  */
```

#### core: default compatibility set to SR_COMPAT_MAX

- means that $xy is tried first as pv and, if not found, is considered avp (was the default compat mode for the default flavour in the past)
- you can still use:
  - #!KAMAILIO in config to force SR_COMPAT_KAMAILIO (i.e., $xy must be a pv, otherwise it is an error)
  - #!SER in config to force SR_COMPAT_SER (i.e., $xy is avp/attr)

Daniel-Constantin Mierla authored on 24/01/2013 10:45:31

Showing 1 changed file
{}
In set theory, a branch of mathematics, the Milner–Rado paradox, found by Eric Charles Milner and Richard Rado (1965), states that every ordinal number α less than the successor κ+ of some cardinal number κ can be written as the union of sets X1, X2, ..., where Xn is of order type at most κ^n for n a positive integer.

## Proof

The proof is by transfinite induction. Let ${\displaystyle \alpha }$ be a limit ordinal (the induction is trivial for successor ordinals), and for each ${\displaystyle \beta <\alpha }$, let ${\displaystyle \{X_{n}^{\beta }\}_{n}}$ be a partition of ${\displaystyle \beta }$ satisfying the requirements of the theorem. Fix an increasing sequence ${\displaystyle \{\beta _{\gamma }\}_{\gamma <\mathrm {cf} \,(\alpha )}}$ cofinal in ${\displaystyle \alpha }$ with ${\displaystyle \beta _{0}=0}$. Note ${\displaystyle \mathrm {cf} \,(\alpha )\leq \kappa }$. Define:

${\displaystyle X_{0}^{\alpha }=\{0\};\ \ X_{n+1}^{\alpha }=\bigcup _{\gamma }X_{n}^{\beta _{\gamma +1}}\setminus \beta _{\gamma }}$

Observe that:

${\displaystyle \bigcup _{n>0}X_{n}^{\alpha }=\bigcup _{n}\bigcup _{\gamma }X_{n}^{\beta _{\gamma +1}}\setminus \beta _{\gamma }=\bigcup _{\gamma }\bigcup _{n}X_{n}^{\beta _{\gamma +1}}\setminus \beta _{\gamma }=\bigcup _{\gamma }\beta _{\gamma +1}\setminus \beta _{\gamma }=\alpha \setminus \beta _{0}}$

and so ${\displaystyle \bigcup _{n}X_{n}^{\alpha }=\alpha }$. Let ${\displaystyle \mathrm {ot} \,(A)}$ be the order type of ${\displaystyle A}$. As for the order types, clearly ${\displaystyle \mathrm {ot} (X_{0}^{\alpha })=1=\kappa ^{0}}$.
Noting that the sets ${\displaystyle \beta _{\gamma +1}\setminus \beta _{\gamma }}$ form a consecutive sequence of ordinal intervals, and that each ${\displaystyle X_{n}^{\beta _{\gamma +1}}\setminus \beta _{\gamma }}$ is a tail segment of ${\displaystyle X_{n}^{\beta _{\gamma +1}}}$, we get that:

${\displaystyle \mathrm {ot} (X_{n+1}^{\alpha })=\sum _{\gamma }\mathrm {ot} (X_{n}^{\beta _{\gamma +1}}\setminus \beta _{\gamma })\leq \sum _{\gamma }\kappa ^{n}=\kappa ^{n}\cdot \mathrm {cf} (\alpha )\leq \kappa ^{n}\cdot \kappa =\kappa ^{n+1}}$
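A concrete instance (our illustration, not part of the original article) may help for ${\displaystyle \kappa =\omega }$: the ordinal ${\displaystyle \omega ^{\omega }}$ lies below ${\displaystyle \omega _{1}=\kappa ^{+}}$ and decomposes into pieces of order type at most ${\displaystyle \omega ^{n}}$.

```latex
% Worked instance for \kappa = \omega and \alpha = \omega^\omega < \omega_1.
% Take X_0 = \{0\} and X_n = \omega^{n} \setminus \omega^{n-1} for n \ge 1. Then
\[
  \bigcup_{n} X_n
  \;=\; \{0\} \cup \bigcup_{n \ge 1} \bigl(\omega^{n} \setminus \omega^{n-1}\bigr)
  \;=\; \sup_{n} \omega^{n}
  \;=\; \omega^{\omega},
\]
% and each piece has order type
\[
  \mathrm{ot}\bigl(\omega^{n} \setminus \omega^{n-1}\bigr) \;=\; \omega^{n},
  \qquad\text{since}\qquad \omega^{n-1} + \omega^{n} \;=\; \omega^{n},
\]
% so X_n has order type at most \kappa^{n} = \omega^{n}, as the theorem requires.
```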
{}
Dyadic rationals in the interval from 0 to 1.

In mathematics, a dyadic fraction or dyadic rational is a rational number whose denominator is a power of two, i.e., a number of the form $\frac{a}{2^b}$ where a is an integer and b is a natural number; for example, 1/2 or 3/8, but not 1/3. These are precisely the numbers whose binary expansion is finite.

## Use in measurement

The inch is customarily subdivided in dyadic rather than decimal fractions; similarly, the customary divisions of the gallon into half-gallons, quarts, and pints are dyadic. The ancient Egyptians also used dyadic fractions in measurement, with denominators up to 64.[1]

## Arithmetic

The sum, product, or difference of any two dyadic fractions is itself another dyadic fraction:

$\frac{a}{2^b}+\frac{c}{2^d}=\frac{2^{d-b}a+c}{2^d} \quad (d\ge b)$

$\frac{a}{2^b}-\frac{c}{2^d}=\frac{2^{d-b}a-c}{2^d} \quad (d\ge b)$

$\frac{a}{2^b}-\frac{c}{2^d}=\frac{a-2^{b-d}c}{2^b} \quad (d< b)$

$\frac{a}{2^b}\times \frac{c}{2^d} = \frac{ a \times c}{2^{b+d}}.$

However, the result of dividing one dyadic fraction by another is not necessarily a dyadic fraction. The dyadic rationals taken modulo 1 form a group under addition; this is the Prüfer 2-group. Because they are closed under addition, subtraction, and multiplication, but not division, the dyadic fractions form a subring of the rational numbers Q and an overring of the integers Z. Algebraically, this subring is the localization of the integers Z with respect to the set of powers of two.

The set of all dyadic fractions is dense in the real line: any real number x can be arbitrarily closely approximated by dyadic rationals of the form $\lfloor 2^i x \rfloor / 2^i$. Compared to other dense subsets of the real line, such as the rational numbers, the dyadic rationals are in some sense a relatively "small" dense set, which is why they sometimes occur in proofs. (See for instance Urysohn's lemma.)
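The closure properties above are easy to check with exact rational arithmetic. This sketch (the helper name `is_dyadic` and the sample values are ours, purely for illustration) uses Python's `fractions.Fraction`, which always keeps fractions in lowest terms, so a rational is dyadic exactly when its reduced denominator is a power of two:

```python
from fractions import Fraction

def is_dyadic(x: Fraction) -> bool:
    """A reduced rational is dyadic iff its denominator is a power of two."""
    d = x.denominator
    return d & (d - 1) == 0   # powers of two have a single set bit

a, b = Fraction(3, 8), Fraction(5, 16)   # sample dyadic fractions

assert is_dyadic(a + b)      # 11/16: closed under addition
assert is_dyadic(a - b)      # 1/16:  closed under subtraction
assert is_dyadic(a * b)      # 15/128: closed under multiplication
assert not is_dyadic(a / b)  # (3/8)/(5/16) = 6/5: NOT closed under division
```

The failed division example matches the localization picture: inverting anything outside the powers of two leaves the subring.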
## Dual group

Considering only the addition and subtraction operations of the dyadic rationals gives them the structure of an additive abelian group. The dual group of a group consists of its characters, the group homomorphisms to the multiplicative group of the complex numbers, and in the spirit of Pontryagin duality the dual group of the additive dyadic rationals can also be viewed as a topological group. It is called the dyadic solenoid and is an example of a solenoid group and of a protorus.

The dyadic rationals are the direct limit of infinite cyclic subgroups of the rational numbers, $\varinjlim \left\{2^{-i}\mathbb{Z}\mid i = 0, 1, 2, \dots \right\}$ and their dual group can be constructed as the inverse limit of the unit circle group under the repeated squaring map $\zeta\mapsto\zeta^2.$

An element of the dyadic solenoid can be represented as an infinite sequence of complex numbers $q_0, q_1, q_2, \dots$, with the properties that each $q_i$ lies on the unit circle and that, for all $i > 0$, $q_i^2 = q_{i-1}$. The group operation on these elements multiplies any two sequences componentwise. Each element of the dyadic solenoid corresponds to a character of the dyadic rationals that maps $a/2^b$ to the complex number $q_b^a$. Conversely, every character χ of the dyadic rationals corresponds to the element of the dyadic solenoid given by $q_i = \chi(1/2^i)$. As a topological space the dyadic solenoid is a solenoid, and an indecomposable continuum.[2]

## Related constructions

The surreal numbers are generated by an iterated construction principle which starts by generating all finite dyadic fractions, and then goes on to create new and strange kinds of infinite, infinitesimal and other numbers. The binary van der Corput sequence is an equidistributed permutation of the positive dyadic rational numbers.
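The repeated-squaring description of the dyadic solenoid in the Dual group section can be checked numerically. This sketch is our illustration (the function name and the sample angle θ are invented): starting from a real angle θ, the sequence $q_i = e^{2\pi i\theta/2^i}$ is a solenoid element, and the corresponding character is $a/2^b \mapsto e^{2\pi i\theta a/2^b}$:

```python
import cmath

def solenoid_element(theta, n):
    """First n coordinates of a dyadic-solenoid element for angle theta:
    q_i = exp(2*pi*1j*theta / 2**i), so that q_i**2 == q_{i-1}."""
    return [cmath.exp(2j * cmath.pi * theta / 2**i) for i in range(n)]

theta = 0.3                       # illustrative choice of angle
q = solenoid_element(theta, 8)

# defining relation of the solenoid: squaring shifts the sequence back
for i in range(1, 8):
    assert abs(q[i] ** 2 - q[i - 1]) < 1e-12

# the corresponding character maps a/2**b to q_b**a
a, b = 5, 3
assert abs(q[b] ** a - cmath.exp(2j * cmath.pi * theta * a / 2**b)) < 1e-12
```

Elements of this special form are the ones coming from characters of the whole real line; general solenoid elements need not arise from a single real θ.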
## In music

Time signatures in Western musical notation traditionally consist of dyadic fractions (for example: 2/2, 4/4, 6/8...), although non-dyadic time signatures have been introduced by composers in the twentieth century (for example: 2/.). Non-dyadic time signatures are called irrational in musical terminology, but this usage does not correspond to the irrational numbers of mathematics, because they still consist of ratios of integers.

## In computing

As a data type used by computers, floating point numbers are often defined as integers multiplied by positive or negative powers of two, and thus all numbers that can be represented for instance by IEEE floating point datatypes are dyadic rationals. The same is true for the majority of fixed point datatypes, which also use powers of two implicitly in most cases.
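The floating-point claim can be verified directly in Python, whose `float.as_integer_ratio` returns the exact reduced fraction of an IEEE double; the denominator is always a power of two (the helper name below is ours):

```python
def denominator_is_power_of_two(x: float) -> bool:
    _, den = x.as_integer_ratio()   # exact rational value of the float
    return den & (den - 1) == 0    # power-of-two test via bit trick

assert (0.375).as_integer_ratio() == (3, 8)   # 0.375 = 3/2^3, stored exactly
assert denominator_is_power_of_two(0.1)       # the stored value is n/2^55

num, den = (0.1).as_integer_ratio()
assert num / den == 0.1   # the dyadic rational IS the stored float, exactly
```

Note that 0.1 itself is not dyadic; what is dyadic is the nearest representable double, which is why `0.1 + 0.2 != 0.3` in binary floating point.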
{}
# Finishing Type Inference

Let's recap what we did in the last two sections. We started with this language:

```
e ::= n | i | b | if e1 then e2 else e3 | fun x -> e | e1 e2

n ::= x | bop

bop ::= ( + ) | ( * ) | ( <= )

t ::= int | bool | t1 -> t2
```

We then introduced an algorithm for inferring a type of an expression. That type came along with a set of constraints. The algorithm was expressed in the form of a relation env |- e : t -| C.

Next, we introduced the unification algorithm for solving constraint sets. That algorithm produces as output a sequence S of substitutions, or it fails. If it fails, then e is not typeable.

To finish type inference and reconstruct the type of e, we just compute t S. That is, we apply the solution to the constraints to the type t produced by constraint generation. Let p be that type. That is, p = t S.

It's possible to prove p is the principal type for the expression, meaning that if e also has some other type t', then there exists a substitution S' such that t' = p S'. For example, the principal type of the identity function fun x -> x would be 'a -> 'a. But you could also give that function the less helpful type int -> int. What we're saying is that HM will produce 'a -> 'a, not int -> int. So in a sense, HM actually infers the most "lenient" type that is possible for an expression.

## A worked example

Let's infer the type of the following expression:

```
fun f -> fun x -> f (( + ) x 1)
```

It's not much code, but this will get quite involved!

Constraint generation. We start in the initial environment I that, among other things, maps ( + ) to int -> int -> int.

```
I |- fun f -> fun x -> f (( + ) x 1)
```

For now we leave off the : t -| C, because that's the output of constraint generation. We haven't figured out the output yet!
Since we have a function, we use the function rule for inference to proceed by introducing a fresh type variable for the argument:

```
I |- fun f -> fun x -> f (( + ) x 1)
I, f : 'a |- fun x -> f (( + ) x 1)   <-- Here
```

Again we have a function, hence a fresh type variable:

```
I |- fun f -> fun x -> f (( + ) x 1)
I, f : 'a |- fun x -> f (( + ) x 1)
I, f : 'a, x : 'b |- f (( + ) x 1)   <-- Here
```

Now we have an application. Before dealing with it, we need to descend into its subexpressions. The first one is easy. It's just a variable. So we can finally finish a judgment with the variable's type from the environment, and an empty constraint set.

```
I |- fun f -> fun x -> f (( + ) x 1)
I, f : 'a |- fun x -> f (( + ) x 1)
I, f : 'a, x : 'b |- f (( + ) x 1)
I, f : 'a, x : 'b |- f : 'a -| {}   <-- Here
```

Next is the second subexpression.

```
I |- fun f -> fun x -> f (( + ) x 1)
I, f : 'a |- fun x -> f (( + ) x 1)
I, f : 'a, x : 'b |- f (( + ) x 1)
I, f : 'a, x : 'b |- f : 'a -| {}
I, f : 'a, x : 'b |- ( + ) x 1   <-- Here
```

That is another application, so we need to handle its subexpressions. Recall that ( + ) x 1 is parsed as (( + ) x) 1. So the first subexpression is the complicated one to handle.

```
I |- fun f -> fun x -> f (( + ) x 1)
I, f : 'a |- fun x -> f (( + ) x 1)
I, f : 'a, x : 'b |- f (( + ) x 1)
I, f : 'a, x : 'b |- f : 'a -| {}
I, f : 'a, x : 'b |- ( + ) x 1
I, f : 'a, x : 'b |- ( + ) x   <-- Here
```

Yet another application.

```
I |- fun f -> fun x -> f (( + ) x 1)
I, f : 'a |- fun x -> f (( + ) x 1)
I, f : 'a, x : 'b |- f (( + ) x 1)
I, f : 'a, x : 'b |- f : 'a -| {}
I, f : 'a, x : 'b |- ( + ) x 1
I, f : 'a, x : 'b |- ( + ) x
I, f : 'a, x : 'b |- ( + ) : int -> int -> int -| {}   <-- Here
```

That one was easy, because we just had to look up the name ( + ) in the environment. The next is also easy, because we just look up x.
```
I |- fun f -> fun x -> f (( + ) x 1)
I, f : 'a |- fun x -> f (( + ) x 1)
I, f : 'a, x : 'b |- f (( + ) x 1)
I, f : 'a, x : 'b |- f : 'a -| {}
I, f : 'a, x : 'b |- ( + ) x 1
I, f : 'a, x : 'b |- ( + ) x
I, f : 'a, x : 'b |- ( + ) : int -> int -> int -| {}
I, f : 'a, x : 'b |- x : 'b -| {}   <-- Here
```

At last, we're ready to resolve a function application! We introduce a fresh type variable and add a constraint. The constraint is that the inferred type int -> int -> int of the left-hand subexpression must equal the inferred type 'b of the right-hand subexpression arrow the fresh type variable 'c, that is, 'b -> 'c.

```
I |- fun f -> fun x -> f (( + ) x 1)
I, f : 'a |- fun x -> f (( + ) x 1)
I, f : 'a, x : 'b |- f (( + ) x 1)
I, f : 'a, x : 'b |- f : 'a -| {}
I, f : 'a, x : 'b |- ( + ) x 1
I, f : 'a, x : 'b |- ( + ) x : 'c -| int -> int -> int = 'b -> 'c   <-- Here
I, f : 'a, x : 'b |- ( + ) : int -> int -> int -| {}
I, f : 'a, x : 'b |- x : 'b -| {}
```

Now we're ready for the argument being passed to that function.

```
I |- fun f -> fun x -> f (( + ) x 1)
I, f : 'a |- fun x -> f (( + ) x 1)
I, f : 'a, x : 'b |- f (( + ) x 1)
I, f : 'a, x : 'b |- f : 'a -| {}
I, f : 'a, x : 'b |- ( + ) x 1
I, f : 'a, x : 'b |- ( + ) x : 'c -| int -> int -> int = 'b -> 'c
I, f : 'a, x : 'b |- ( + ) : int -> int -> int -| {}
I, f : 'a, x : 'b |- x : 'b -| {}
I, f : 'a, x : 'b |- 1 : int -| {}   <-- Here
```

Again we can resolve a function application with a new type variable and constraint.

```
I |- fun f -> fun x -> f (( + ) x 1)
I, f : 'a |- fun x -> f (( + ) x 1)
I, f : 'a, x : 'b |- f (( + ) x 1)
I, f : 'a, x : 'b |- f : 'a -| {}
I, f : 'a, x : 'b |- ( + ) x 1 : 'd -| 'c = int -> 'd, int -> int -> int = 'b -> 'c   <-- Here
I, f : 'a, x : 'b |- ( + ) x : 'c -| int -> int -> int = 'b -> 'c
I, f : 'a, x : 'b |- ( + ) : int -> int -> int -| {}
I, f : 'a, x : 'b |- x : 'b -| {}
I, f : 'a, x : 'b |- 1 : int -| {}
```

And once more, a function application, so a new type variable and a new constraint.
```
I |- fun f -> fun x -> f (( + ) x 1)
I, f : 'a |- fun x -> f (( + ) x 1)
I, f : 'a, x : 'b |- f (( + ) x 1) : 'e -| 'a = 'd -> 'e, 'c = int -> 'd, int -> int -> int = 'b -> 'c   <-- Here
I, f : 'a, x : 'b |- f : 'a -| {}
I, f : 'a, x : 'b |- ( + ) x 1 : 'd -| 'c = int -> 'd, int -> int -> int = 'b -> 'c
I, f : 'a, x : 'b |- ( + ) x : 'c -| int -> int -> int = 'b -> 'c
I, f : 'a, x : 'b |- ( + ) : int -> int -> int -| {}
I, f : 'a, x : 'b |- x : 'b -| {}
I, f : 'a, x : 'b |- 1 : int -| {}
```

Now we finally get to finish off an anonymous function. Its inferred type is the fresh type variable 'b of its parameter x, arrow the inferred type 'e of its body.

```
I |- fun f -> fun x -> f (( + ) x 1)
I, f : 'a |- fun x -> f (( + ) x 1) : 'b -> 'e -| 'a = 'd -> 'e, 'c = int -> 'd, int -> int -> int = 'b -> 'c   <-- Here
I, f : 'a, x : 'b |- f (( + ) x 1) : 'e -| 'a = 'd -> 'e, 'c = int -> 'd, int -> int -> int = 'b -> 'c
I, f : 'a, x : 'b |- f : 'a -| {}
I, f : 'a, x : 'b |- ( + ) x 1 : 'd -| 'c = int -> 'd, int -> int -> int = 'b -> 'c
I, f : 'a, x : 'b |- ( + ) x : 'c -| int -> int -> int = 'b -> 'c
I, f : 'a, x : 'b |- ( + ) : int -> int -> int -| {}
I, f : 'a, x : 'b |- x : 'b -| {}
I, f : 'a, x : 'b |- 1 : int -| {}
```

And the last anonymous function can now be completed in the same way:

```
I |- fun f -> fun x -> f (( + ) x 1) : 'a -> 'b -> 'e -| 'a = 'd -> 'e, 'c = int -> 'd, int -> int -> int = 'b -> 'c   <-- Here
I, f : 'a |- fun x -> f (( + ) x 1) : 'b -> 'e -| 'a = 'd -> 'e, 'c = int -> 'd, int -> int -> int = 'b -> 'c
I, f : 'a, x : 'b |- f (( + ) x 1) : 'e -| 'a = 'd -> 'e, 'c = int -> 'd, int -> int -> int = 'b -> 'c
I, f : 'a, x : 'b |- f : 'a -| {}
I, f : 'a, x : 'b |- ( + ) x 1 : 'd -| 'c = int -> 'd, int -> int -> int = 'b -> 'c
I, f : 'a, x : 'b |- ( + ) x : 'c -| int -> int -> int = 'b -> 'c
I, f : 'a, x : 'b |- ( + ) : int -> int -> int -| {}
I, f : 'a, x : 'b |- x : 'b -| {}
I, f : 'a, x : 'b |- 1 : int -| {}
```

As a result of constraint generation, we know that the type of the
expression is 'a -> 'b -> 'e, where

```
'a = 'd -> 'e
'c = int -> 'd
int -> int -> int = 'b -> 'c
```

Unification. To solve that system of equations, we use the unification algorithm:

```
unify('a = 'd -> 'e, 'c = int -> 'd, int -> int -> int = 'b -> 'c)
```

The first constraint yields a substitution {('d -> 'e) / 'a}, which we record as part of the solution and also apply to the remaining constraints:

```
...
= {('d -> 'e) / 'a}; unify(('c = int -> 'd, int -> int -> int = 'b -> 'c) {('d -> 'e) / 'a})
= {('d -> 'e) / 'a}; unify('c = int -> 'd, int -> int -> int = 'b -> 'c)
```

The second constraint behaves similarly to the first:

```
...
= {('d -> 'e) / 'a}; {(int -> 'd) / 'c}; unify((int -> int -> int = 'b -> 'c) {(int -> 'd) / 'c})
= {('d -> 'e) / 'a}; {(int -> 'd) / 'c}; unify(int -> int -> int = 'b -> int -> 'd)
```

The function constraint breaks down into two smaller constraints:

```
...
= {('d -> 'e) / 'a}; {(int -> 'd) / 'c}; unify(int = 'b, int -> int = int -> 'd)
```

We get another substitution:

```
...
= {('d -> 'e) / 'a}; {(int -> 'd) / 'c}; {int / 'b}; unify((int -> int = int -> 'd) {int / 'b})
= {('d -> 'e) / 'a}; {(int -> 'd) / 'c}; {int / 'b}; unify(int -> int = int -> 'd)
```

Then we get to break down another function constraint:

```
...
= {('d -> 'e) / 'a}; {(int -> 'd) / 'c}; {int / 'b}; unify(int = int, int = 'd)
```

The first of the resulting new constraints is trivial and just gets dropped:

```
...
= {('d -> 'e) / 'a}; {(int -> 'd) / 'c}; {int / 'b}; unify(int = 'd)
```

The very last constraint gives us one more substitution:

```
= {('d -> 'e) / 'a}; {(int -> 'd) / 'c}; {int / 'b}; {int / 'd}
```

Reconstructing the type.
To finish, we apply the substitution output by unification to the type inferred by constraint generation:

```
('a -> 'b -> 'e) {('d -> 'e) / 'a}; {(int -> 'd) / 'c}; {int / 'b}; {int / 'd}
= (('d -> 'e) -> 'b -> 'e) {(int -> 'd) / 'c}; {int / 'b}; {int / 'd}
= (('d -> 'e) -> 'b -> 'e) {int / 'b}; {int / 'd}
= (('d -> 'e) -> int -> 'e) {int / 'd}
= (int -> 'e) -> int -> 'e
```

And indeed that is the same type that OCaml would infer for the original expression:

```
# fun f -> fun x -> f (( + ) x 1);;
- : (int -> 'a) -> int -> 'a = <fun>
```

Except that OCaml uses different type variable identifiers: OCaml is nice to us and "lowers" the type variables down to smaller letters of the alphabet. We could do that too with a little extra work.

## Type errors

In reality there is yet another piece to type inference. If unification fails, the compiler or interpreter needs to produce a helpful error message. That's an important engineering challenge that we won't address here. It requires keeping track of more than just constraints: we need to know why a constraint was introduced, and the ramifications of its violation. We also need to track the constraint back to the lexical piece of code that produced it, so that programmers can see where the problem occurs. And since it's possible that constraints can be processed in many different orders, there are many possible error messages that could be produced. Figuring out which one will lead the programmer to the root cause of an error, instead of some downstream consequence of it, is an area of ongoing research.
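The unification steps in the worked example can be replayed in code. This is a toy unifier over a minimal type representation of our own invention (quoted strings like `'a` for type variables, `int` for the base type, tuples `('->', t1, t2)` for arrows); it is only a sketch of the algorithm, not an official implementation:

```python
def apply_subst(t, s):
    """Apply substitution s (dict: type variable -> type) to type t,
    resolving chains such as 'a -> ('d -> 'e) when 'd is later bound."""
    if isinstance(t, tuple):                       # arrow type ('->', t1, t2)
        return ('->', apply_subst(t[1], s), apply_subst(t[2], s))
    return apply_subst(s[t], s) if t in s else t   # variable or base type

def occurs(v, t):
    """True if type variable v occurs inside type t."""
    return v == t if isinstance(t, str) else occurs(v, t[1]) or occurs(v, t[2])

def unify(constraints):
    """Solve a list of (t1, t2) equality constraints; return a substitution."""
    s, todo = {}, list(constraints)
    while todo:
        t1, t2 = todo.pop()
        t1, t2 = apply_subst(t1, s), apply_subst(t2, s)
        if t1 == t2:
            continue                               # trivial constraint: drop it
        if isinstance(t1, str) and t1.startswith("'"):
            if occurs(t1, t2):
                raise TypeError("occurs check failed")
            s[t1] = t2                             # substitute t2 for t1
        elif isinstance(t2, str) and t2.startswith("'"):
            s[t2] = t1
        elif isinstance(t1, tuple) and isinstance(t2, tuple):
            todo += [(t1[1], t2[1]), (t1[2], t2[2])]   # break down arrows
        else:
            raise TypeError(f"cannot unify {t1} with {t2}")
    return s

arrow = lambda a, b: ('->', a, b)

# the three constraints generated for fun f -> fun x -> f (( + ) x 1)
constraints = [
    ("'a", arrow("'d", "'e")),
    ("'c", arrow("int", "'d")),
    (arrow("int", arrow("int", "int")), arrow("'b", "'c")),
]
subst = unify(constraints)
principal = apply_subst(arrow("'a", arrow("'b", "'e")), subst)
# principal == ('->', ('->', 'int', "'e"), ('->', 'int', "'e")),
# i.e. (int -> 'e) -> int -> 'e, matching the worked example.
```

Note that the constraints can be processed in any order; the intermediate substitutions differ, but the final reconstructed type is the same.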
{}
- Statistical Science, Volume 16, Issue 3 (2001)
- Communications in Mathematical Physics, Volume 113, Number 4 (1988)
- Bulletin of the American Mathematical Society, Volume 53, Number 6 (1947)
- Bulletin of the American Mathematical Society, Volume 10, Number 3 (1903)
- The Annals of Statistics, Volume 6, Number 2 (1978)

### Featured partner

#### The International Society for Bayesian Analysis

The International Society for Bayesian Analysis (ISBA) was founded in 1992 to promote the development and application of Bayesian analysis useful in the solution of theoretical and applied problems in science, industry and government. The ISBA publishes Bayesian Analysis, an electronic journal covering a wide range of articles that demonstrate or discuss Bayesian methods in some theoretical or applied context.

### New articles

- Examples of non-isolated blow-up for perturbations of the scalar curvature equation on non-locally conformally flat manifolds (Journal of Differential Geometry)
- Ricci curvature of integral submanifolds of an $f.p.k.$-space form (Bulletin of the Belgian Mathematical Society - Simon Stevin)
- Integrable Systems on $\mathbb{S}^{3}$ (Publicacions Matemàtiques)

### Project Euclid holdings

May 18, 2015:

- Total pages in Euclid: 1,883,953 (1,260,621 open access)
- Journal articles: 129,214 (87,748 open access)
- Books: 299 (4,045 book chapters)
- Conference proceedings volumes: 70 (1,379 proceedings)
{}
Nat. Hazards Earth Syst. Sci., 18, 807–812, 2018
https://doi.org/10.5194/nhess-18-807-2018

Brief communication | 13 Mar 2018

# Brief communication: Using averaged soil moisture estimates to improve the performances of a regional-scale landslide early warning system

Samuele Segoni (1), Ascanio Rosi (1), Daniela Lagomarsino (1,a), Riccardo Fanti (1), and Nicola Casagli (1)

- (1) Department of Earth Sciences, University of Florence, Firenze, 50121, Italy
- (a) now at: Eni S.p.A, S. Donato Milanese, Milano, Italy

Correspondence: Samuele Segoni (samuele.segoni@unifi.it)

Abstract. We communicate the results of a preliminary investigation aimed at improving a state-of-the-art RSLEWS (regional-scale landslide early warning system) based on rainfall thresholds by integrating mean soil moisture values averaged over the territorial units of the system. We tested two approaches. The simplest can be easily applied to improve other RSLEWS: it is based on a soil moisture threshold value under which rainfall thresholds are not used because landslides are not expected to occur. The other approach deeply modifies the original RSLEWS: thresholds based on antecedent rainfall accumulated over long periods are substituted with soil moisture thresholds. A back analysis demonstrated that both approaches consistently reduced false alarms, while the second approach reduced missed alarms as well.
1 Introduction

Regional-scale landslide early warning systems (RSLEWS henceforth) are usually based on empirical rainfall thresholds, which in turn are based on rainfall parameters that can be easily measured and monitored by rain gauges (Aleotti, 2004; Baum et al., 2010; Cannon et al., 2011; Segoni et al., 2015a; Leonarduzzi et al., 2017; Piciullo et al., 2017). However, it is widely recognized that soil moisture conditions before the triggering rainfall event can play a crucial role in the initiation of landslides, especially when deep-seated landslides and terrains with complex hydrological settings are involved (Wieczorek, 1996; Zezere et al., 2005; Jemec and Komac, 2013; Peres and Cancelliere, 2016; Bogaard and Greco, 2018). Unfortunately, the influence of soil moisture conditions is difficult to encompass in RSLEWS. One of the most widespread approaches is establishing rainfall thresholds based on the rainfall amount accumulated during a given period before landslide occurrence or before the triggering rainfall event (Kim et al., 1991; Chleborad, 2003). The length of these time spans varies widely in the international literature, e.g., from a few days (Kim et al., 1991; Calvello et al., 2015) to a few months (Zezere et al., 2005). More advanced models combine daily rainfall data to compute antecedent rainfall indexes that can be used to forecast landslide occurrence (Crozier, 1999; Glade et al., 2000). All these methodologies share the approach of considering antecedent rainfall as a proxy for soil moisture. A smaller number of studies takes advantage of remotely sensed soil moisture data (Brocca et al., 2016; Laiolo et al., 2015), but their integration in RSLEWS is not straightforward and is limited to a few case studies (Ponziani et al., 2012).
This work explores the possibility of exploiting the estimated mean soil moisture (MSM) value averaged over large (thousands of square kilometers) territorial units (TUs) to find an empirical correlation with the triggering of landslides. We tested this hypothesis in the regional warning system of the Emilia Romagna Region (Italy), which is based on the combination of short-term and long-term rainfall measurements to forecast the occurrence of landslides, as described in detail in Martelloni et al. (2012) and Lagomarsino et al. (2013). We developed an alternate version of the RSLEWS, substituting long-term measurements with soil moisture estimates obtained by TOPKAPI (TOPographic Kinematic APproximation and Integration), a physically based model (Ciarapica and Todini, 2002). The different versions of the RSLEWS were compared and, given the satisfactory results, we discuss a possible application of the proposed methodology to the regional warning system.

2 Materials and method

The test site is the Emilia Romagna Region (Northern Italy). This region is characterized by a morphology ranging from high mountains in the S–SW to wide plains towards the NE. The mountain chain of the region belongs to the northern Apennines, which is a complex fold-and-thrust arcuate orogenic belt that originated in response to the closure of the Ligurian Ocean and the subsequent continental collision, which started in the Oligocene (Agostini et al., 2013). The mountainous part of the region is affected by surficial and deep-seated landslides, which can be triggered by short and intense rainfalls or by prolonged rainy periods, respectively (Martelloni et al., 2012). One of the instruments used to manage landslide hazard is an RSLEWS called SIGMA, which is based on a complex decisional algorithm that considers whether the statistical rainfall thresholds are overcome (Martelloni et al., 2012).
The thresholds are defined in terms of standard deviation (SD; σ) from the mean rainfall amount accumulated during progressively increasing time steps. The methodology to develop a SIGMA model (fully described in Martelloni et al., 2012) is based on the hypothesis that anomalous or extreme values of rainfall are responsible for landslide triggering, and multiples of the SD are used as thresholds to discriminate between ordinary and extraordinary rainfall events. To obtain probability values of not exceeding a given rainfall threshold, rainfall time series longer than 50 years are taken into account for each rain gauge. Data of the original rainfall distributions are adapted to a target function chosen as a model (a standard Gaussian distribution in this case). After this conversion, it is possible to define any non-exceedance probability by using SD values, which in turn can be related to the corresponding rainfall value of the original data series.

The SIGMA algorithm considers two different periods of cumulative rainfall. Daily checks of 1-, 2- and 3-day cumulative rainfall (short period) are used to forecast shallow landslides. A series of daily checks over a longer and variable time window (ranging from 4 to 243 days, depending on the seasonality) is used to forecast deep-seated landslides in low-permeability terrains (Lagomarsino et al., 2013). To increase the effectiveness of the model, the mountainous part of the region is divided into 25 homogeneous TUs, each monitored by a reference rain gauge, as fully described in Lagomarsino et al. (2013) and depicted in Fig. 1.

Figure 1: Test site showing the partition in territorial units (TUs) and highlighting the TUs used as test sites.

For some of the hydrographic basins of the region, ARPAE-ER (Regional Agency for Prevention, Environment, and Energy of Emilia Romagna) provides the MSM value at an hourly time step.
These values are estimated by TOPKAPI (Ciarapica and Todini, 2002), a rainfall–runoff model providing high-resolution hydrological information. We used these data to estimate the daily MSM value for each TU. We used daily aggregation because SIGMA is normally run daily on daily aggregations of hourly rainfall measurements; a higher temporal resolution would therefore be unnecessary. Where the territory of a TU is occupied by more than one basin, a weighted mean was used to obtain an averaged value. Similarly, since the final objective of this work is to couple soil moisture data with rainfall data measured over discrete points (a network of rain gauges, one for each TU), we are not interested in distributed modeling of soil moisture; a single soil moisture value is needed for each TU. This approach is not completely new: in the same test site, Martelloni et al. (2013) used point measurements of temperature to incorporate in SIGMA a module accounting for snow accumulation–depletion processes.

3 Alternate approaches

## 3.1 A preliminary test: the MSM threshold

We compared all landslide occurrences in the years 2009–2014 with the MSM at each TU. We verified that for each TU a threshold MSM value can be identified below which landslides have never been reported, independently of the rainfall amount. In addition, we verified that, with a few exceptions, the TUs had similar threshold MSM values. The threshold MSM is 75 % in TU23 and TU22, 76 % in TU18, 78 % in TU17, and 79 % in TU19. In TU21, the threshold MSM is 88 %. This value is higher than in all other TUs and can be partially explained by the scarcity of data: only four landslide events are included in the testing data set of TU21. TU20 presents a landslide event with 54 % MSM. If we consider this event as an outlier and exclude it from the analysis, the value for TU20 is also 75 %.
Consequently, taking an MSM threshold into account could prevent SIGMA from issuing false alarms in the case of abundant rainfall outside the rainy season, when the soil is dry. Therefore, we modified the SIGMA algorithm by adding a cutoff threshold defined as MSM = 75 %, the arithmetic mean of the values of the TUs. Basically, the modified version of the algorithm checks the daily MSM value reported for a given TU and compares it with the MSM = 75 % threshold. Below this value, no landslide is expected and the SIGMA algorithm is not launched. When the daily MSM is higher than 75 %, landslides can be expected when particular rainfall conditions are verified, and therefore the SIGMA algorithm is launched. We set an MSM threshold equal for all TUs because in some TUs the landslide data set contains only a few events (e.g., only four landslide events in TU21), and a dedicated MSM threshold value would be characterized by a very weak empirical correlation that would prevent safe use in the RSLEWS. In addition, if we exclude the outliers, all TUs are characterized by small variations in MSM threshold values (from 75 % to 79 %). We therefore decided to forgo the "detail" of a personalized threshold in favor of a more robust MSM threshold generalized for the whole test area. A back analysis performed for the years 2009–2014 over the seven test TUs shows a marked reduction of false alarms (days in which the rainfall thresholds are exceeded but no landslides are reported). In more detail: false alarms at the first warning level decreased from 320 to 231 (28 %), false alarms at the second warning level decreased from 169 to 141 (17 %), and false alarms at the third warning level decreased from 13 to 5 (62 %). To correctly evaluate the effectiveness of an EWS, the improvement concerning false alarms should be weighed against the behavior concerning missed alarms (days in which the rainfall thresholds are not exceeded but landslides are reported).
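The cutoff logic can be sketched as follows; all names are illustrative, and `run_sigma` merely stands in for the full decisional algorithm, which is not reproduced here.

```python
MSM_THRESHOLD = 75.0  # percent; mean of the per-TU threshold values (75-79 %)

def daily_check(msm_today, run_sigma):
    """Cutoff filter described in the text: below the mean-soil-moisture
    threshold no landslide is expected, so the rainfall-threshold algorithm
    is skipped entirely. Returns a warning level (0 = no alarm)."""
    if msm_today < MSM_THRESHOLD:
        return 0  # dry soil: no alarm, SIGMA not launched
    return run_sigma()  # wet soil: rainfall thresholds decide the alarm level

# Toy stand-in for SIGMA that would issue warning level 2 on rainfall alone:
level_dry = daily_check(60.0, lambda: 2)  # dry day: potential false alarm filtered out
level_wet = daily_check(82.0, lambda: 2)  # wet day: SIGMA's level is kept
```

Because the filter can only suppress alarms, it reduces false alarms but, as noted in the text, cannot reduce the missed alarms of the underlying algorithm.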
We verified that the introduction of the MSM threshold increased the missed alarm count by only one: the already mentioned event of 1 June 2013, consisting of three landslides (lowest alarm level according to Lagomarsino et al., 2013). Since this was a minor event, and since lowering the MSM threshold to 54 % would result in an almost total loss of the benefits in terms of false alarm reduction, the 75 % threshold was considered successfully tested and the 1 June 2013 event was considered an acceptable trade-off for a general improvement of the warning system. It should be noted that the described use of the MSM threshold is not capable of reducing the missed alarms committed by SIGMA, as it acts like a cutoff filter. To obtain a reduction of both missed and false alarms, a more radical modification of SIGMA is presented in the next section.

## 3.2 SIGMA-U

After the preliminary but encouraging results described in the previous section, we decided to integrate soil moisture thresholds more deeply into the original SIGMA algorithm, substituting the rainfall thresholds based on long accumulation periods with statistical soil moisture thresholds. Following the same procedure used in Martelloni et al. (2012) to build σ curves from rainfall data, we calculated for every TU the time series of soil moisture (u), assessing the mean values and the SDs. After this procedure, for each TU every soil moisture value (U) could be expressed in terms of multiples of the SD from the mean. We then deeply modified the original decisional algorithm of SIGMA, discarding all the long-period rainfall σ curves in favor of soil moisture σ curves. While the former rainfall σ curves were checked for long periods of up to 243 days, the new soil moisture σ curves are checked for cumulative periods ranging from 1 to 15 days, at 1-day increasing time steps.
Rainfall thresholds based on rainfall σ curves are still present in the new version of the algorithm, but they are used only for short periods (1-, 2- and 3-day antecedent rainfall). The new version of the algorithm, called SIGMA-U, is shown in Fig. 2.

Figure 2: Scheme of the SIGMA-U algorithm. C is cumulative rainfall, U is soil moisture, and $\bar{u}$ is average soil moisture.

A back analysis was performed using landslide, soil moisture and rainfall data from the period 2011–2014 to compare the performances of SIGMA and SIGMA-U. The test was performed in all TUs where soil moisture values were available (14 out of 25, as shown in Fig. 1), and the results are summarized in Table 1.

Table 1: Quantitative evaluation of the performances of the models SIGMA (Lagomarsino et al., 2013) and SIGMA-U (this paper).

The results of the back analysis are encouraging, as the counts of both false alarms and missed alarms are lower for SIGMA-U than for SIGMA. Concerning false alarms, the more dangerous the alarm level, the higher the reduction: false alarms corresponding to the first warning level, which are the least important, decreased by 8 %, while the very important warning level 3 was erroneously issued 11 times instead of 21 (48 %). False alarms at the intermediate warning level 2 were reduced from 287 to 197 (31 %). Missed alarms were reduced as well: while SIGMA missed 88 alarms, SIGMA-U missed 69 alarms (22 %). This corresponds to a total of 134 missed landslides instead of 214 (37 %). Overall, SIGMA-U hits 789 landslides out of 923 (85.5 %), outperforming SIGMA, which hits 709 landslides (76.8 %).

4 Conclusions

We report the results of a preliminary investigation aimed at improving a state-of-the-art RSLEWS based on rainfall thresholds (SIGMA; Martelloni et al., 2012; Lagomarsino et al., 2013) by integrating mean soil moisture values averaged over the territorial units of the system. We tested two different approaches.
The first approach is the simplest: it is based on a soil moisture threshold value (75 % in this study) below which rainfall thresholds are not used, because landslides are not expected to occur. When tested with a back analysis, this approach consistently reduced false alarms but produced one additional missed alarm. This approach is very simple and can easily be replicated in other case studies after a straightforward calibration against the local soil moisture and landslide data sets. The second approach is more complex and relies on the idea that rainfall thresholds based on antecedent rainfall accumulated over very long periods can be substituted with soil moisture thresholds. A back analysis demonstrated that a new version of the model based on soil moisture and short-term rainfall could be more effective than the original version based on short-term and long-term rainfall, as both false alarms and missed alarms were consistently reduced. Some recent studies have criticized the traditional threshold approach based only on rainfall variables and have stressed the importance of considering additional factors such as soil moisture to better encompass the hydrologic conditions of landsliding slopes (Bogaard and Greco, 2018; Canli et al., 2017). The present work follows the direction expressed by these studies and presents a small advance towards a sounder (and more effective) hydrologic approach to identifying rainfall thresholds for landslide occurrence. The research is still ongoing, and further tests are needed before arriving at a full integration with the regional landslide warning system of Emilia Romagna.
These tests include (i) the use of soil moisture measurements coming from other sources (e.g., remotely sensed data or direct measurements at selected test sites); (ii) the refinement of the spatial resolution of the alerts by integrating soil moisture measurements, rainfall thresholds and susceptibility maps (Segoni et al., 2015b); (iii) the improvement of the model by taking into account different threshold values of sigma for each TU, after a thorough site-specific calibration; and (iv) a thorough validation of the model.

Data availability. Rainfall and soil moisture data are publicly available and are organized in DEXT3R, a public repository managed by ARPAE-Emilia Romagna. DEXT3R can be accessed upon registration at the URL http://www.smr.arpa.emr.it/dext3r/.

Special issue statement. This article is part of the special issue "Landslide early warning systems: monitoring systems, rainfall thresholds, warning models, performance evaluation and risk perception". It is not associated with a conference.

Reviewed by: two anonymous referees

References

Agostini, A., Tofani, V., Nolesini, T., Gigli, G., Tanteri, L., Rosi, A., Cardellini, S., and Casagli, N.: A new appraisal of the Ancona landslide based on geotechnical investigations and stability modelling, Q. J. Eng. Geol. Hydrogeol., 47, 29–43, https://doi.org/10.1144/qjegh2013-028, 2013.

Aleotti, P.: A warning system for rainfall-induced shallow failures, Eng. Geol., 73, 247–265, 2004.

Baum, R. L. and Godt, J. W.: Early warning of rainfall-induced shallow landslides and debris flows in the USA, Landslides, 7, 259–272, 2010.

Bogaard, T. and Greco, R.: Invited perspectives: Hydrological perspectives on precipitation intensity-duration thresholds for landslide initiation: proposing hydro-meteorological thresholds, Nat. Hazards Earth Syst. Sci., 18, 31–39, https://doi.org/10.5194/nhess-18-31-2018, 2018.
Brocca, L., Ciabatta, L., Moramarco, T., Ponziani, F., Berni, N., Wagner, W., Petropoulos, G. P., Srivastava, P., and Kerr, Y.: Use of satellite soil moisture products for the operational mitigation of landslides risk in central Italy, in: Satellite Soil Moisture Retrievals: Techniques & Applications, Elsevier, Amsterdam, the Netherlands, 231–247, 2016.

Calvello, M., d'Orsi, R. N., Piciullo, L., Paes, N., Magalhaes, M. A., and Lacerda, W. A.: The Rio de Janeiro early warning system for rainfall-induced landslides: analysis of performance for the years 2010–2013, Int. J. Disast. Risk Re., 12, 3–15, https://doi.org/10.1016/j.ijdrr.2014.10.005, 2015.

Canli, E., Mergili, M., and Glade, T.: Probabilistic landslide ensemble prediction systems: Lessons to be learned from hydrology, Nat. Hazards Earth Syst. Sci. Discuss., https://doi.org/10.5194/nhess-2017-427, in review, 2017.

Cannon, S., Boldt, E., Laber, J., Kean, J., and Staley, D.: Rainfall intensity–duration thresholds for postfire debris-flow emergency response planning, Nat. Hazards, 59, 209–236, 2011.

Chleborad, A. F.: Preliminary evaluation of a precipitation threshold for anticipating the occurrence of landslides in the Seattle, Washington, area, US Geological Survey Open-File Report 03-463, 2003.

Ciarapica, L. and Todini, E.: TOPKAPI: A model for the representation of the rainfall–runoff process at different scales, Hydrol. Process., 16, 207–229, 2002.

Crozier, M. J.: Prediction of rainfall-triggered landslides: a test of the Antecedent Water Status Model, Earth Surf. Proc. Land., 24, 825–833, 1999.

Glade, T., Crozier, M., and Smith, P.: Applying probability determination to refine landslide-triggering rainfall thresholds using an empirical "Antecedent Daily Rainfall Model", Pure Appl. Geophys., 157, 1059–1079, 2000.

Jemec, M. and Komac, M.: Rainfall patterns for shallow landsliding in perialpine Slovenia, Nat. Hazards, 67, 1011–1023, 2013.

Kim, S. K., Hong, W. P., and Kim, Y. M.: Prediction of rainfall triggered landslides in Korea, in: Landslides, vol. 2, edited by: Bell, D. H., A. A. Balkema, Rotterdam, 989–994, 1991.

Lagomarsino, D., Segoni, S., Fanti, R., and Catani, F.: Updating and tuning a regional-scale landslide early warning system, Landslides, 10, 91–97, 2013.

Laiolo, P., Gabellani, S., Campo, L., Silvestro, F., Delogu, F., Rudari, R., and Crapolicchio, R.: Impact of different satellite soil moisture products on the predictions of a continuous distributed hydrological model, Int. J. Appl. Earth Obs., 48, 131–145, https://doi.org/10.1016/j.jag.2015.06.002, 2015.

Leonarduzzi, E., Molnar, P., and McArdell, B. W.: Predictive performance of rainfall thresholds for shallow landslides in Switzerland from gridded daily data, Water Resour. Res., 53, 6612–6625, 2017.

Martelloni, G., Segoni, S., Fanti, R., and Catani, F.: Rainfall thresholds for the forecasting of landslide occurrence at regional scale, Landslides, 9, 485–495, 2012.

Martelloni, G., Segoni, S., Lagomarsino, D., Fanti, R., and Catani, F.: Snow accumulation/melting model (SAMM) for integrated use in regional scale landslide early warning systems, Hydrol. Earth Syst. Sci., 17, 1229–1240, https://doi.org/10.5194/hess-17-1229-2013, 2013.

Peres, D. J. and Cancelliere, A.: Estimating return period of landslide triggering by Monte Carlo simulation, J. Hydrol., 541, 256–271, 2016.

Piciullo, L., Gariano, S. L., Melillo, M., Brunetti, M. T., Peruccacci, S., Guzzetti, F., and Calvello, M.: Definition and performance of a threshold-based regional early warning model for rainfall-induced landslides, Landslides, 14, 995–1008, https://doi.org/10.1007/s10346-016-0750-2, 2017.

Ponziani, F., Pandolfo, C., Stelluti, M., Berni, N., Brocca, L., and Moramarco, T.: Assessment of rainfall thresholds and soil moisture modeling for operational hydrogeological risk prevention in the Umbria region (central Italy), Landslides, 9, 229–237, 2012.

Segoni, S., Battistini, A., Rossi, G., Rosi, A., Lagomarsino, D., Catani, F., Moretti, S., and Casagli, N.: Technical Note: An operational landslide early warning system at regional scale based on space–time-variable rainfall thresholds, Nat. Hazards Earth Syst. Sci., 15, 853–861, https://doi.org/10.5194/nhess-15-853-2015, 2015a.

Segoni, S., Lagomarsino, D., Fanti, R., Moretti, S., and Casagli, N.: Integration of rainfall thresholds and susceptibility maps in the Emilia Romagna (Italy) regional-scale landslide warning system, Landslides, 12, 773–785, 2015b.

Wieczorek, G. F.: Landslide triggering mechanisms, in: Landslides: Investigation and Mitigation, Transportation Research Board Special Report 247, edited by: Turner, A. K. and Schuster, R. L., National Academy Press, Washington, D.C., 76–89, 1996.

Zêzere, J. L., Trigo, R. M., and Trigo, I. F.: Shallow and deep landslides induced by rainfall in the Lisbon region (Portugal): assessment of relationships with the North Atlantic Oscillation, Nat. Hazards Earth Syst. Sci., 5, 331–344, https://doi.org/10.5194/nhess-5-331-2005, 2005.
# Tag Info

1

You're actually dealing with the Potts model, which is a slight generalization of Ising. Not that it really matters, as you won't need any results from Potts. The point of mean field theory is typically to make each site independent of its neighbors, which allows you to evaluate the partition function by only iterating through the possible states of one ...

-1

A point in configuration space represents a configuration of the system, i.e. the positions of the constituent particles. A point in phase space represents a state of the system, i.e. the positions and velocities of the constituent particles together. No. Liouville's theorem has no simple analogue in the configuration space. Depends on what is the task at hand and what are ...

1

You should think of the definite integral operation as a function of two arguments: a region over which to integrate (here, $[x_0,x_1]$), and another function $f$ called the integrand (here, $f:\xi \mapsto (E-V(\xi))^{-\frac{1}{2}}$). So first of all, in my definition of $f$ above, we could have used (almost) any other symbol instead of $\xi$ and the ...

1

Here is an outline of the reduction from the Nambu-Goto (NG) action to the light-cone (LC) formulation from a Hamiltonian perspective: The starting point is the Hamiltonian formulation of the NG string, cf. e.g. this Phys.SE post. The Hamiltonian density is of the form "Lagrange multipliers times constraints"$^1$ $${\cal H}~=~\lambda^{\alpha} ...

0

The first equation has just made explicit the definition of momentum. As a matter of fact, $p=\partial \mathscr{L} / \partial \dot{q}$.

3

I) In this alternative answer we resolve the singular Hessian $H_{\mu\nu}$ of the Nambu-Goto string action by introducing two auxiliary variables from the outset, thereby indirectly showing that the Hessian $H_{\mu\nu}$ must have co-rank 2. The target space metric has $(-,+,\ldots,+)$ sign convention, and $c=1=\hbar$. Consider the extended Nambu-Goto ...
1

I'm not so sure if this is really what you're looking for, but you can of course solve this easy problem analytically. To do this, it is clever to first analyze the easier Hamiltonian $H_0 = 2g (\vec L \cdot \vec S)$, where the $L_i$ and $S_j$ fulfill independent SU(2) algebras $$[L_i, L_j] = i \epsilon_{ijk} L_k,\qquad [S_i, S_j] = i \epsilon_{ijk} S_k.$$ ...

0

The following might help: $H = \frac{1}{2}(mv^2 + kx^2) + \gamma mkvx$ decays exponentially with time along the solutions of the damped system. Check by integrating $H$ with respect to $t$ and using the equations of the system. So the "energy" $H$ decays exponentially instead of remaining constant.

4

I) In this answer we will consider the standard Nambu-Goto string and show that the Hessian has co-rank 2. The target space metric has $(-,+,\ldots,+)$ sign convention, and $c=1=\hbar$. The Nambu-Goto Lagrangian density is $${\cal L}_{NG}~:=~-T_0\sqrt{{\cal L}_{(1)}},$$ {\cal L}_{(1)}~:=~-\det\left(\partial_{\alpha} X\cdot \partial_{\beta} ...

1

Some of the mathematical aspects of the Liouville operator can be found in the second book by Reed and Simon, in section X.14 (it is not a comprehensive account, but it gives the basic ideas and proofs). In the notes at the end of chapter X, in the part dedicated to section X.14, there is also a quite extensive bibliography that may be useful.

1

Normally we do NOT calculate the phase space density of a system. In the phase space formulation of classical statistical mechanics, the phase space density $\rho(p,q;t)$ has its specified form for different ensembles. Normally, for systems at equilibrium, the density $\rho$ has no explicit time dependence and thus we work with $\rho(p,q)$. (1) For ...

1

The first thing we can do is to split up $\Gamma$ according to the number of particles in the given states. Let $\gamma_N$ be a state with $N$ particles.
The grand canonical partition function is then \begin{align} \mathcal{Z} = & \sum_\Gamma \exp\left(-\beta(\mathcal{H} - \mu N)\right)\\ =& \sum_{N=0}^\infty\exp\left(\beta \mu N ...

1

You are at a point where you'll need $v_1$ and $v_2$. Observe from the original transformation that $$v_2 = v_1 - v \implies V = \frac{(m_1+m_2)v_1 - m_2v}{m_1+m_2},$$ $$v_1 = V + \frac{m_2v}{m_1+m_2}.$$ We also get, by a similar procedure, $$v_2 = V - \frac{m_1v}{m_1+m_2}.$$ We have expressed $v_1$ and $v_2$ in terms of the new variables, $V$ and $v$. ...

3

The answer is Yes. Define the function $g(q):= \frac{1}{f(q)}$ for later convenience. Then the classical Hamiltonian reads $$2h~=~g(q)p^2.$$ One may show that the Weyl-ordered Hamiltonian reads $$2H_W~=~ (g(q)p^2)_W ~=~ \frac{1}{4}P^2 g(Q)+\frac{1}{2} Pg(Q)P+\frac{1}{4} g(Q)P^2~=~ Pg(Q)P - \frac{1}{4}\hbar^2 g^{\prime\prime}(Q),$$ see e.g. Ref. 1 and this ...

1

This depends on whether the corresponding quadratures have physical meaning in your specific example. This is because if $a=x+ip$, then changing $a\mapsto a'=e^{i\theta}a$ corresponds to the canonical transformation \begin{align} x\mapsto x'= \cos(\theta)\, x -\sin(\theta)\, p, \\ p\mapsto p'=\sin(\theta)\, x +\cos(\theta)\, p. \end{align} This could be ...

0

The momentum is a covector because it is a gradient, and gradients are always covariant. It does what it says on the tin. However, you are right that this is a subtle point and it's not particularly clear at first sight. For a Lagrangian of the form $L=T-V$ with $V$ independent of $\dot q$, the canonical momentum is given by p=\frac{\partial L}{\partial ...

-1

I think the normal is always time-like because when you slice your space-time you do it in such a way that the normal vector to this hyper-surface is time-like. Thus, time components of the original metric are absent in the induced metric. Which reference are you reading from?

2

1) The spacelike hypersurface has three spacelike directions tangent to it.
Any vector that is normal to all three spacelike directions in the enveloping space is necessarily timelike. Equivalently, the spacelike surfaces can be thought of as labeled by a function $\tau$ which gives the "time coordinate"'s value on those surfaces. The normal to the ...

0

At least a partial answer to your question is that commuting Hamiltonians help you to solve the physical system described by one of them: in particular, if your system has $N$ degrees of freedom and you have $N$ commuting Hamiltonians, there is good hope that you can trivialize the problem and solve it exactly. In classical mechanics, this is known as ...

0

After thinking about Nick P's answer and re-reading the relevant chapter of Sussman's Structure and Interpretation of Classical Mechanics, I came up with the following elaboration of Nick's argument. It's not water-tight, but it convinced me, and perhaps it will help someone else. I will use Sussman's unorthodox but precise notation. The first step (and ...

1

For simplicity consider the 1-d case, with $\psi =\sqrt{n} e^{2i\phi}$; then $$i \psi_t =\frac{i}{2} \frac{\dot{n}}{\sqrt{n}} e^{2i\phi} -\sqrt{n} e^{2i\phi}\, 2\dot{\phi}.$$ Similarly, \frac{\partial H}{\partial \psi^*} = \frac{\partial H}{\partial n}\frac{\partial n}{\partial \psi^*} + \frac{\partial H}{\partial \phi}\frac{\partial \phi}{\partial ...

Top 50 recent answers are included
# Ordered spectral statistics in one-dimensional disordered supersymmetric quantum mechanics and Sinai diffusion with dilute absorbers

@article{Texier2012OrderedSS,
  title={Ordered spectral statistics in one-dimensional disordered supersymmetric quantum mechanics and Sinai diffusion with dilute absorbers},
  author={Christophe Texier},
  journal={Physica Scripta},
  year={2012},
  volume={86}
}

• C. Texier • Published 1 May 2012 • Physics • Physica Scripta

Some results on the ordered statistics of eigenvalues for one-dimensional random Schrödinger Hamiltonians are reviewed. In the case of supersymmetric quantum mechanics with disorder, the existence of low-energy delocalized states induces eigenvalue correlations and makes the ordered statistics problem non-trivial. The resulting distributions are used to analyze the problem of classical diffusion in a random force field (Sinai problem) in the presence of weakly concentrated absorbers. It is…

## References (showing 1–10 of 31)

### One-dimensional classical diffusion in a random force field with weakly concentrated absorbers

• Physics • 2009

A one-dimensional model of classical diffusion in a random force field with a weak concentration ρ of absorbers is studied. The force field is taken as a Gaussian white noise with ⟨ϕ(x)⟩=0 and

### Statistical Distribution of Quantum Entanglement for a Random Bipartite State

• Physics • 2011

We compute analytically the statistics of the Renyi and von Neumann entropies (standard measures of entanglement), for a random pure state in a large bipartite quantum system.
The full probability

### Sinai model in presence of dilute absorbers

We study the Sinai model for the diffusion of a particle in a one-dimensional random potential in the presence of a small concentration $\rho$ of perfect absorbers using the asymptotically exact real space

### TOPICAL REVIEW: Functionals of Brownian motion, localization and metric graphs

• Mathematics • 2005

We review several results related to the problem of a quantum particle in a random environment. In an introductory part, we recall how several functionals of Brownian motion arise in the study of

### Extreme value problems in random matrix theory and other disordered systems

• Mathematics • 2007

We review some applications of central limit theorems and extreme values statistics in the context of disordered systems. We discuss several problems, in particular concerning random matrix theory

### Individual energy level distributions for one-dimensional diagonal and off-diagonal disorder

We study the distribution of the n-th energy level for two different one-dimensional random potentials. This distribution is shown to be related to the distribution of the distance between two

### Introduction to the Theory of Disordered Systems

• Physics • 1988

General Properties of Disordered Systems. The Density of States in One-Dimensional Systems. States, Localization, and Conductivity in One-Dimensional Systems. The Fluctuation Region of the Spectrum.

### On the basic states of one-dimensional disordered structures

• Mathematics • 1983

The purpose of this paper is to study a limit probability distribution of the set of the first κ eigenvalues λ1(ℒ)<λ2(ℒ)<...<λκ(ℒ) (with a fixed κ and ℒ→∞) of the boundary problem on the interval [0,
bug-lilypond

From: Paul Morris
Subject: Re: page-break-permission = ##f doesn't work for the final manual \pageBreak
Date: Mon, 24 Feb 2014 16:57:12 -0500

On Feb 24, 2014, at 3:38 PM, Federico Bruni <address@hidden> wrote:

> No, my fault: you should change the value of ragged-last-bottom:
>
> \paper {
>   ragged-last-bottom = ##f
> }

Ah, ok, thanks. Glad to know there's already a way to achieve this.

> If the point of setting page-break-permission = ##f is to "insert page
> breaks at explicit \pageBreak commands and nowhere else" as the docs say...
> and yet there is an easily reproducible case where breaks are always
> inserted where there is no explicit \pageBreak, despite the presence of an
> explicit \pageBreak where the break should go, and would go if there were
> more music after it... I'd call that a bug.
>
> I assume that fixing this would entail improving \pageBreak so it works even
> when there is no object following it? (i.e. when it is the last command in
> the input)
>
> You should ask a comment from a developer, but I don't think that this
> request makes sense. Also because of ragged-last-bottom, which already fixes
> this case.

Hmmm... one way to look at this is that the docs here:

http://lilypond.org/doc/v2.18/Documentation/notation/explicit-breaks

give the impression that LilyPond has a mode that will _only_ insert breaks
where there are explicit break commands (***and nowhere else***). But is that
actually the case? The following leads you to think there is such a "manual
break only" mode:

"When line-break-permission is overridden to false, Lily will insert line
breaks at explicit \break commands ***and nowhere else***. When
page-break-permission is overridden to false, Lily will insert page breaks at
explicit \pageBreak commands ***and nowhere else***."
But what it says at the top of that page is a little different: "Lily sometimes rejects explicit \break and \pageBreak commands. There are two commands to override this behavior:" If that is the more accurate account of what these commands do, then they don't really prevent automatic breaks from being inserted. They just prevent explicit breaks from being ignored/rejected. If that's the case then I think the next part should be revised like this: "When line-break-permission is overridden to false, Lily will _always_ insert line breaks at explicit \break commands. When page-break-permission is overridden to false, Lily will _always_ insert page breaks at explicit \pageBreak commands." Thanks for your help and for considering this. I've started another thread
# Factors Affecting Rate Of Chemical Reaction

From the collision theory of chemical reactions, we know that a few factors affect the rate of a reaction:

• Concentration
• Pressure
• Particle size
• Temperature
• Presence of catalyst

### Effect Of Concentration On Rate Of Chemical Reactions

An increase in the concentration of one or more of the reactants will increase the rate of reaction. Why?

When the concentration of one or more of the reactants increases, the following sequence of events may occur:

• There will be more reactant particles in a given volume (i.e. a higher number of reactant particles per unit volume).
• Reactant particles will collide more often.
• Number of collisions per unit volume will increase.
• Number of effective collisions increases.
• Rate of reaction increases.

### Effect Of Pressure On Rate Of Chemical Reactions

A change in pressure will only affect the rate of reaction for chemical reactions involving gaseous reactants. An increase in pressure will lead to an increase in the rate of reaction. Why?

When the pressure increases, the following sequence of events may occur:

• The increase in pressure forces the gaseous reactant particles closer together.
• Number of reactant particles per unit volume increases.
• Number of collisions per unit volume increases.
• Number of effective collisions increases.
• Rate of reaction increases.

High pressure is frequently used in industrial processes to improve the rate of chemical reactions. This is because a higher rate of reaction means more products are made per unit time (i.e. more profits for the companies). A common example of such an industrial process is the Haber Process, where a pressure of 200 atm is used to speed up the process and increase the yield.

### Effect Of Particle Size On Rate Of Chemical Reactions

A decrease in the particle size of a solid reactant will increase the rate of reaction. Why?
When the particle size of a solid reactant is decreased, the following sequence of events may occur:

• The particle size of a solid reactant is decreased by breaking up the solid reactant into smaller pieces.
• This action will increase the total surface area.
• The area of contact between the reactant particles increases.
• Number of collisions per unit time increases.
• Number of effective collisions per unit time increases.
• Rate of reaction increases.

### Effect Of Temperature On Rate Of Chemical Reactions

An increase in the temperature will increase the rate of reaction for most chemical reactions. Why?

When the temperature is increased, the following sequence of events may occur:

• Reactant particles have more kinetic energy (i.e. they move faster).
• Frequency of collision between reactant particles increases AND a larger number of reactant particles have energy equal to or more than the activation energy.
• Number of collisions per unit time increases.
• Number of effective collisions per unit time increases.
• Rate of reaction increases.

### Effect Of Catalyst On Rate Of Chemical Reactions

A catalyst is a chemical substance that changes the rate of reaction without itself undergoing any permanent chemical change at the end of the reaction. A catalyst works by providing an alternative reaction pathway for the reaction, i.e. one that has a much lower activation energy (as shown in the figure above). This means that the presence of a catalyst will increase the rate of reaction. Why?

With a catalyst, the following sequence of events may occur:

• An alternative reaction pathway with a lower activation energy is now available.
• More reactant particles will have sufficient energy to overcome the energy barrier.
• Number of effective collisions per unit time increases.
• Rate of reaction increases.
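The temperature and catalyst effects above are commonly quantified with the Arrhenius equation, k = A exp(-Ea/(RT)). This goes beyond the qualitative treatment here, but it follows directly from collision theory. A minimal sketch, with activation energies chosen only for illustration:

```python
import math

R = 8.314  # molar gas constant, J/(mol*K)

def rate_constant_ratio(Ea_J, T1, T2):
    """Ratio k(T2)/k(T1) from the Arrhenius equation k = A*exp(-Ea/(R*T)),
    assuming the pre-exponential factor A stays the same."""
    return math.exp(-Ea_J / R * (1.0 / T2 - 1.0 / T1))

def catalyst_speedup(Ea_uncat_J, Ea_cat_J, T):
    """Speed-up from a catalyst that lowers the activation energy at fixed T."""
    return math.exp((Ea_uncat_J - Ea_cat_J) / (R * T))

# Temperature effect: a 10 K rise for Ea = 50 kJ/mol near room temperature.
temp_effect = rate_constant_ratio(50_000, 300.0, 310.0)  # roughly doubles k

# Catalyst effect: lowering Ea from 50 to 30 kJ/mol at T = 300 K.
cat_effect = catalyst_speedup(50_000, 30_000, 300.0)  # a very large speed-up
```

With Ea around 50 kJ/mol, the 10 K rise roughly doubles the rate constant, which is the familiar rule of thumb, while even a modest drop in activation energy from a catalyst speeds the reaction up by orders of magnitude.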
# Corresponding Sides of Two Similar Triangles Are in the Ratio 1 : 3. If the Area of the Smaller Triangle Is 40 cm², Find the Area of the Larger Triangle. - Mathematics

Sum: Corresponding sides of two similar triangles are in the ratio 1 : 3. If the area of the smaller triangle is 40 cm², find the area of the larger triangle.

#### Solution

The ratio of the areas of two similar triangles is equal to the ratio of the squares of any two corresponding sides:

\text{(Area of smaller triangle)}/\text{(Area of larger triangle)} = \text{(Corresponding side of smaller triangle)}^2/\text{(Corresponding side of larger triangle)}^2

\text{(Area of smaller triangle)}/\text{(Area of larger triangle)} = 1^2/3^2

40/\text{(Area of larger triangle)} = 1/9

Area of larger triangle = (40xx9)/(1) = 360 cm^2

Hence the area of the larger triangle is 360 cm^2.

Concept: Triangles Examples and Solutions

#### APPEARS IN

RD Sharma Class 10 Maths Chapter 7 Triangles, Q 19, Page 126
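The area-scales-as-the-square-of-the-side-ratio argument can be checked mechanically. This sketch uses Python's exact `Fraction` arithmetic:

```python
from fractions import Fraction

# Areas of similar triangles scale with the square of the ratio of corresponding sides.
side_ratio = Fraction(1, 3)      # smaller : larger
area_small = 40                  # cm^2
area_large = area_small / side_ratio ** 2
# area_large is exactly 360 cm^2
```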
# Cards

Suppose there are three cards in a hat. One is red on both sides, one is black on both sides, and the third is red on one side and black on the other. We pull one card out of the hat at random and see that one of its sides is red. What is the probability that the other side is also red?

p = 2/3

### Step-by-step explanation:

There are six faces in total, three of which are red, and each red face is equally likely to be the one we see. Two of the three red faces belong to the red–red card, so

$p = \frac{2}{3} \approx 0.67$

The tempting answer 1/2 is the classic fallacy of Bertrand's box paradox: it ignores that the red–red card is twice as likely as the mixed card to be showing a red face.
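A quick Monte Carlo simulation supports 2/3 rather than 1/2. This is an illustrative sketch; the trial count and seed are arbitrary:

```python
import random

def trial(rng):
    """Draw a random card, show a random face; return (shown, hidden) colors."""
    cards = [("R", "R"), ("B", "B"), ("R", "B")]   # red-red, black-black, mixed
    card = rng.choice(cards)
    side = rng.randrange(2)
    return card[side], card[1 - side]

rng = random.Random(42)
red_shown = other_red = 0
for _ in range(100_000):
    shown, hidden = trial(rng)
    if shown == "R":                # condition on seeing a red face
        red_shown += 1
        other_red += hidden == "R"

p = other_red / red_shown           # hovers near 2/3, not 1/2
```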
# Boolean Rectangles [duplicate]

Inspired by Braille graphics.

Given a Boolean matrix (i.e., consisting of only 1s and 0s), output an ASCII-art-style Unicode representation of the closed-polygon shape created by the 1s in the matrix, using the Unicode drawing characters ┌ ─ ┐ │ └ ┘ (code points 0x250c, 0x2500, 0x2510, 0x2502, 0x2514, and 0x2518, respectively, separated by spaces). Replace the 0s with spaces, so you're left with just the line drawing. Output or return that drawing.

You won't have to handle invalid or ambiguous input - there will be only one unambiguous path of 1s through the matrix. In other words, every 1 has exactly two cardinal neighbors that are 1s, and two that are 0s or out of bounds. Additionally, since the shape is a closed polygon, this guarantees that the path is cyclical.

Examples:

[[1,1,1]
 [1,0,1]
 [1,1,1]]

┌─┐
│ │
└─┘

[[0,1,1,1,1,0],
 [1,1,0,0,1,1],
 [1,0,0,0,0,1],
 [1,1,1,1,1,1]]

 ┌──┐
┌┘  └┐
│    │
└────┘

[[1,1,1,0,0]
 [1,0,1,1,1]
 [1,0,0,0,1]
 [1,1,0,1,1]
 [0,1,1,1,0]]

┌─┐
│ └─┐
│   │
└┐ ┌┘
 └─┘

[[1,1,1,0]
 [1,0,1,1]
 [1,1,0,1]
 [0,1,1,1]]

┌─┐
│ └┐
└┐ │
 └─┘

- Leading or trailing newlines or whitespace are all optional, so long as the characters themselves line up correctly.
- Either a full program or a function is acceptable. If a function, you can return the output rather than printing it.
- If possible, please include a link to an online testing environment so other people can try out your code!
- Standard loopholes are forbidden.
- This is code-golf, so all usual golfing rules apply, and the shortest code (in bytes) wins.
## marked as duplicate by AdmBorkBork, Aug 31 '17 at 19:14

- What are the codepoints for the relevant drawing characters? – Peter Taylor Aug 31 '17 at 18:24
- @PeterTaylor Good point. Added. – AdmBorkBork Aug 31 '17 at 18:29
- I believe this is a dupe since my Jelly solution works with this input TIO. It could probably be shortened since there will only be one path here instead of multiple like in the other. – miles Aug 31 '17 at 18:44
- @miles Dagnabbit. I thought this was a duplicate, but 48+ hours in the Sandbox and several searches by me didn't find one. – AdmBorkBork Aug 31 '17 at 19:14
- @AdmBorkBork The usual tip: after you type the title, a list with possible duplicates goes below the title. You can check there, although the titles in this case were quite different. – Erik the Outgolfer Sep 1 '17 at 8:08

# Javascript (ES6), 125 bytes

a=>a.map((r,i)=>r.map((c,j)=>c&&"─ ┌└  ┐┘ |"[!(a[i-1]||0)[j]+!(a[i+1]||0)[j]*2+!r[j-1]*4+!r[j+1]*8-3]||" ").join``).join`
`

Example code snippet:

f=
a=>a.map((r,i)=>r.map((c,j)=>c&&"─ ┌└  ┐┘ |"[!(a[i-1]||0)[j]+!(a[i+1]||0)[j]*2+!r[j-1]*4+!r[j+1]*8-3]||" ").join``).join`
`

o.innerText=f([[1,1,1,0,0],
[1,0,1,1,1],
[1,0,0,0,1],
[1,1,0,1,1],
[0,1,1,1,0]])

<pre id=o>
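For readers who prefer an ungolfed version, here is a sketch of the same neighbor-classification idea in Python (not from the original thread): each 1-cell is mapped to a box-drawing character by the pair of cardinal directions in which it has 1-neighbors.

```python
def draw(matrix):
    """Render the closed path of 1s in a boolean matrix with box-drawing characters."""
    chars = {
        frozenset({"up", "down"}): "│",
        frozenset({"left", "right"}): "─",
        frozenset({"down", "right"}): "┌",
        frozenset({"down", "left"}): "┐",
        frozenset({"up", "right"}): "└",
        frozenset({"up", "left"}): "┘",
    }
    h, w = len(matrix), len(matrix[0])

    def cell(i, j):
        if not matrix[i][j]:
            return " "
        dirs = set()                              # directions with a 1-neighbor
        if i > 0 and matrix[i - 1][j]: dirs.add("up")
        if i < h - 1 and matrix[i + 1][j]: dirs.add("down")
        if j > 0 and matrix[i][j - 1]: dirs.add("left")
        if j < w - 1 and matrix[i][j + 1]: dirs.add("right")
        return chars[frozenset(dirs)]

    return "\n".join("".join(cell(i, j) for j in range(w)) for i in range(h))

out = draw([[1, 1, 1], [1, 0, 1], [1, 1, 1]])
# out is the 3x3 box:
# ┌─┐
# │ │
# └─┘
```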
# How do you simplify 2 5/6 - 2 5/22?

Aug 3, 2017

One can find $= \frac{20}{33}$

#### Explanation:

$\frac{17}{6} - \frac{49}{22}$

Multiply the numerator and denominator of the first fraction by 22 and of the second by 6 to reach the common denominator 132:

$= \frac{374}{132} - \frac{294}{132} = \frac{374 - 294}{132} = \frac{80}{132} = \frac{20}{33}$

The answer is $\frac{20}{33}$.

Aug 3, 2017

$\frac{20}{33}$

#### Explanation:

Subtract the whole numbers first, then find the LCD of $6$ and $22$ and make equivalent fractions.

$2 \frac{5}{6} - 2 \frac{5}{22} \text{ } \leftarrow L C D = 66$

$= 0 \frac{55 - 15}{66}$

$= \frac{40}{66} \text{ } \leftarrow$ now simplify

$= \frac{20}{33}$
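Both methods can be verified with exact rational arithmetic; a quick Python check:

```python
from fractions import Fraction

a = 2 + Fraction(5, 6)     # 2 5/6  = 17/6
b = 2 + Fraction(5, 22)    # 2 5/22 = 49/22
diff = a - b               # exactly 20/33
```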
# A nice divisibility result on binomial coefficients

I just came across this cute little result on Quora and generalised it to the following.

Proposition. For any integers $0<k<n$, $\displaystyle\frac{n}{(n,k)}$ divides $\displaystyle\binom{n}{k}$.

First proof. Note that $\displaystyle \frac{k}{(n,k)}\binom nk=\frac{n}{(n,k)}\binom{n-1}{k-1}$. Since $\displaystyle\left(\frac{n}{(n,k)},\frac{k}{(n,k)}\right)=1$, the result follows. $\square$

Second proof. Let $n=p_1^{a_1}\cdots p_r^{a_r}$ and $k=p_1^{b_1}\cdots p_r^{b_r}$ where $p_1,\dots,p_r$ are the prime factors of $nk$. For each $i$, the base $p_i$ representations of $n$ and $k$ have $a_i$ and $b_i$ trailing zeros respectively. Hence by Kummer’s theorem $p_i^{\max\{a_i-b_i,0\}}$ divides $\displaystyle\binom nk$. Hence $\displaystyle k\prod_{i=1}^rp_i^{\max\{a_i-b_i,0\}}=\prod_{i=1}^rp_i^{\max\{a_i,b_i\}}=[n,k]$ divides $\displaystyle k\binom nk$. Now the result follows using $[n,k]/k=n/(n,k)$. $\square$

A nice corollary is the following property of Pascal’s triangle.

Corollary. For any integers $0<k<n$, $\displaystyle\gcd\left(n,\binom nk\right)>1$.

Using the identity $\displaystyle\binom nk\binom kr=\binom nr\binom{n-r}{k-r}$ the argument in the first proof above can be adapted to prove the following generalisation:

Proposition. For any integers $0\le r\le k\le n$, $\displaystyle\frac{\binom nr}{\left(\binom nr,\binom kr\right)}$ divides $\displaystyle\binom{n}{k}$.

Corollary. Any two entries $\neq 1$ in a given row of Pascal’s triangle have a common factor $>1$.
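Both propositions are easy to verify numerically for small $n$; a quick Python check (the ranges are chosen arbitrarily):

```python
from math import comb, gcd

# Proposition: n/gcd(n,k) divides C(n,k) for all 0 < k < n.
for n in range(2, 60):
    for k in range(1, n):
        assert comb(n, k) % (n // gcd(n, k)) == 0

# Generalisation: C(n,r)/gcd(C(n,r), C(k,r)) divides C(n,k) for 0 <= r <= k <= n.
for n in range(2, 25):
    for k in range(1, n):
        for r in range(k + 1):
            d = comb(n, r) // gcd(comb(n, r), comb(k, r))
            assert comb(n, k) % d == 0
```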
# Compound Interest Math Problems And Answers

Compound interest is greater than simple interest: for the same principal and rate, a compounded account always ends up larger whenever there is more than one compounding period, because each period's interest is added to the principal and then itself earns interest.

This collection covers simple and compound interest and the effect of time, with 20 scaffolded questions that start relatively easy and end with some real challenges, plus revision notes on compound interest and depreciation. Problems dealing with financial interest are problems dealing with rates of growth.

Sample questions:

- What will the account balance be after 6 years? ($6,520)
- An investment earns at an annual interest rate of 4% compounded continuously.
- At the end of the first month, when you make a payment, how much of the $300 is interest owed?
- Find the compound interest on Rs 6400 for 2 years, compounded annually at 7.
More worked items:

- Interest earned for the investment = $800.
- Ryan borrowed $15,000 from a bank at 10% simple interest to buy a car.
- Every time the savings account reaches over $3,000, $2,000 is withdrawn into a CD account compounded yearly at a rate of 6%.

Simple interest works like this: you don't get any interest on your interest, only interest on the original balance. With compound interest, if you keep the interest in the bank too, it eventually earns its own interest. It's easy to calculate compound interest in your head with an easy number and interest rate, like the one in the example above; in other problems we instead want to solve for n, the number of years.

For your GCSE maths exam you need to know about two different types of interest rates: simple interest and compound interest.
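Solving for n (the number of years) is where logarithms come in. A small sketch with assumed example values:

```python
import math

# Solving A = P*(1 + r)**n for n requires a logarithm.
# Example (assumed values): how many years to double your money at 5% compounded annually?
r = 0.05
n = math.log(2) / math.log(1 + r)
# n is about 14.21 years
```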
Notation: I = simple interest, P = principal, T = time in years, R = rate of interest, and the amount A = P + I. With compound interest you receive interest not only on the initial amount but on the accumulated interest as well, and the rate may even vary from period to period (variable-rate compound interest). Interest may be compounded daily, weekly, monthly, quarterly or yearly. The compound interest rule is:

A = P(1 + r)^n

More problems:

- (Ryan's loan, continued) If he paid $9,000 as interest while clearing the loan, find the time for which the loan was given.
- If a sum of money grows to 144/121 times itself when invested for two years in a scheme where interest is compounded annually, how long will the same sum of money take to treble if invested at the same rate of interest in a scheme where interest is computed using the simple interest method?
Formulas for the three kinds of interest (simple, compound, and continuously compounded):

- Simple interest: A = P(1 + rt)
- Compound interest: A = P(1 + r/n)^(nt)
- Continuously compounded interest: A = Pe^(rt)

The effective annual yield is the simple interest rate that gives the same yearly return as a compound interest rate. (Historically, the Babylonians drew up multiplication tables and tables of reciprocals, squares, cubes and exponentials, and used them to calculate compound interest and mortgage repayments.)

For the following problems, use the compound interest formula:

- Brenda invests $4,848 in a savings account with a fixed annual interest rate of 5% compounded 2 times per year.
- Michael Arthur deposited $2,900 in a new regular savings account that earns 5.
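Brenda's deposit is a direct application of the compound interest formula, and it reproduces the $6,520 answer quoted earlier; the 6-year term is taken from that answer. The effective-annual-yield definition translates into code the same way (the 8%/monthly figures are assumed example values):

```python
def compound_amount(P, r, n, t):
    """A = P * (1 + r/n)**(n*t)."""
    return P * (1 + r / n) ** (n * t)

# Brenda: $4,848 at 5% annual interest, compounded semiannually, for 6 years.
A = compound_amount(4848, 0.05, 2, 6)
# A is about $6,520

def effective_annual_yield(nominal_rate, periods_per_year):
    """Simple-interest rate giving the same yearly return as the compound rate."""
    return (1 + nominal_rate / periods_per_year) ** periods_per_year - 1

# Assumed example: 8% nominal, compounded monthly.
y = effective_annual_yield(0.08, 12)
# y is about 0.0830, i.e. an effective yield of roughly 8.30%
```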
More definitions, facts, and problems:

- Principal: money that is invested in an account.
- The number e ≈ 2.71828 is the limit of (1 + 1/n)^n as n approaches infinity, an expression that arises in the study of compound interest.
- Note: banks usually charge compound interest, not simple interest.
- Use the compound interest formulas A = P(1 + r/n)^(nt) and A = Pe^(rt) to solve the problems; round your answers to the nearest pound where necessary.
- A five-year bond is opened with a deposit in it and an interest rate of %, compounded annually.
- You borrowed $59,000 for 2 years at 11%, which was compounded annually.
- What is the amount after fifteen years?
- The answer to this question is Rs. 1,768.
- Calculate the compound interest earned when $8,500 is invested at 8% per annum.
- A sum of money at simple interest amounts to 815 in 3 years and to 854 in 4 years. Find the sum.
- A spreadsheet example shows $5,000 put in an account for 3 years.

Module 2: Mathematics of Finance. Related worksheet topics: percent problems; percent change; markup, discount, and tax; proportions and proportion word problems; similar figures; simple and compound interest; logarithm word problems.
The same mathematics also models population growth, which we study through two compound interest examples. The compound interest formula is

A = P(1 + r)^t

where A represents the amount of money in the account at the end of the time period, P is the principal, r is the annual interest rate, and t is the time in years. In the formula, the growth rate r is expressed as a decimal, not as a percentage.

- One problem starts with monthly deposits of (300, 350, 350, 350, 400), each deposited into a savings account every month for a year.
- Worksheet convention: for each question it is assumed no money is withdrawn or deposited into the account after the original deposit.
- Example: the nominal annual interest rate is i = 7.
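A quick side-by-side comparison of the simple and compound formulas, with assumed example values:

```python
# Compare simple and compound growth of the same principal (assumed values).
P, r, t = 1000.0, 0.05, 10
simple = P * (1 + r * t)        # interest on the principal only
compound = P * (1 + r) ** t     # interest on interest as well

# simple is 1500.00; compound is about 1628.89.
# Compounding wins whenever there is more than one period.
```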
Interest can be charged in two ways: simple and compound. The problems this week involve compound interest; use the formulas A = P(1 + r/n)^(nt) and A = Pe^(rt) to solve them, where n (sometimes written k) is the number of compounding periods in one year.

Curriculum expectations covered:

- PF1 – compare simple and compound interest, relate compound interest to exponential growth, and solve problems involving compound interest;
- PF2 – compare services available from financial institutions, and solve problems involving the cost of making purchases on credit;
- PF3 – interpret.

Sample problems from Chapter 10:

- A total of $12,000 is invested in two funds paying 9% and 11% simple interest.
- Solution (Ryan's loan): principal P = $15,000, rate of interest R = 10% = 0.10.

Kids learn how to calculate interest and percent in money word problems, including simple and compound interest and figuring tips. The same formula answers the common question of how to compute compound interest in Excel.
More practice problems:

- Andrew puts $15,000 in an interest-bearing deposit for 5 months at 6% p.a.
- Ron invested $55,000 in a nine-year CD that pays twelve percent compounded annually.
- For the five-year bond: which of the following most closely approximates the total amount in the account after that period of time? First, break the problem into two segments.
- A sum of money lent at simple interest amounts to ₹3224 in 2 years and ₹4160 in 5 years. Find the sum.
- Suppose you have a balance owed on a car loan of $10,000.
- Your allowance of $190 got 11% compounded monthly for 1 2/3 years.

The term "compound interest" refers to the growth of money over time: interest is accumulated on any interest received. A special case worth knowing is compound interest when interest is compounded half-yearly. Economist GMAT Tutor's strategy for compound-interest problems that ask for a value: calculate the amount using the simple interest formula, then choose the answer that is slightly higher.

Percent word problems (answers on page 17). Directions: set up a basic percent problem. Students will use their knowledge of rational numbers to calculate simple and compound interest earnings.
Unlike simple interest, which is calculated only on the principal (multiply the amount invested by the interest rate and the time), compound interest is calculated on the new total amount of money each year, so interest is credited several times over the life of the investment, depending on the compounding schedule.

- A principal of $2,000 is placed in a savings account at 3% per annum compounded annually.
- Steve put $1,300 into an account and it earned 4% interest compounded monthly.
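Steve's monthly-compounded deposit can be computed for any term; the 5-year term below is an assumed value, with the continuous-compounding formula A = Pe^(rt) shown for comparison:

```python
import math

# Steve's deposit: $1,300 at 4% compounded monthly (the 5-year term is assumed).
P, r, t = 1300.0, 0.04, 5
monthly = P * (1 + r / 12) ** (12 * t)
continuous = P * math.exp(r * t)    # A = P*e^(rt), the continuous-compounding limit

# monthly is about 1587.29; continuous is slightly larger, about 1587.82.
```

More frequent compounding always helps, but the gain shrinks fast: monthly compounding already lands within cents of the continuous limit here.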
A typical finance unit runs: simple and compound interest; present value problems; future value of ordinary annuities; present value of ordinary annuities; payments and total interest.

Examples:

- Example 1: Suppose you make an initial deposit of $1,000 into a savings account at a bank which offers a 3% yearly simple interest rate.
- How much money is in the bank after 4 years?
- A health club offers to let you join for $50 down and payments of only $36 per month for 3 years.
- Find the simple interest and compound interest.
While there is a simple formula for determining the amount of money in an account after a given number of periods at a given interest rate, you do not need that formula (and should not use it) for these problems, as the focus of this assignment is loop structures in MATLAB. Note: banks usually charge compound interest, not simple interest. So you can see that to use recursion you need to break your problem down to a base case, where you can stop the recursion, and a recursive step, where you can define the problem in terms of a simpler problem. All worksheets are free, formatted for easy printing, and include an option to view the answers. The formula for compound interest with a finite number of calculations is an exponential equation. You can choose to include answers and step-by-step solutions. Michael Arthur deposited $2,900 in a new regular savings account. Solution: Let P = 20000, r = 6%, n = 3; using the formula $A = P\left(1 + \frac{r}{100}\right)^n$. If 2/3 of the pie was eaten, what fraction remains? How much will your investment be worth after one year at an annual interest rate of 8%? The answer is $108. Topic: Logarithm Word Problems, Worksheet 1. Here is a list of some basic definitions and formulas for solving problems on interest.
Bankers do all sorts of math, from simple addition and subtraction to complicated problems involving compound interest. Every time you click the New Worksheet button, you will get a brand-new printable PDF worksheet on compound interest. Rates of Interest. This is the aptitude questions and answers section on "Compound Interest Important Formulas", with explanations, for various interviews, competitive examinations, and entrance tests. Ron invested $55,000 in a nine-year CD that pays twelve percent compounded annually. Round your answer to the nearest hundredth of a percent. You may wish to read Introduction to Interest first. It gives you step-by-step answers along with explanations. Solve each problem below by entering a dollar amount with cents. r = interest rate as a decimal. For the second month, if no payment is made, interest is charged on the sum of the principal, the accumulated interest so far, and the late fee. Best Answer: 2) You buy a home entertainment system on credit. Young mathematicians put their skills to the test in the real world during this four-lesson consumer math unit. Remember: until you actually solve questions using these tricks, you won't be able to memorize and understand them.
Compound interest: $1,000 × (1 + 5%)^5 ≈ $1,276, so the compound interest earned is $1,276 − $1,000 = $276; the value of (1 + 5%)^5 comes from a future-value-of-$1 table (5 periods at a 5% interest rate). Interest is a great thing. Simple Interest Math Problems Worksheet. Calculate the compound interest earned when $8,500 is invested at 8% per annum. Simple interest is where the amount of interest earned is fixed over time. The interest is said to be compounded. Find the total amount of simple interest that is paid over a period of five years on a principal of $30,000 at a simple interest rate of 6%. We will use the compound interest formula to solve these compound interest word problems. The basic idea is that after the first interest period, the interest earned is added to the principal, so later interest is computed on the new, larger balance. If you borrow from the bank to buy a car, the bank will charge you interest for its use. Compound interest is a way of life in our society. This includes your principal plus any interest that you previously earned that still remains in the account. Students can also use a compound interest calculator to solve compound interest problems more easily. Solved examples with detailed answer descriptions and explanations are given and are easy to understand. The following chart is a record of the activity in a certain account that earns compound interest. A blend of directed, guided, and investigative instruction.
Simple interest is an easy method of calculating the interest charge based on the principal amount of a deposit or a loan. Apart from the material given above, if you want to know more about the simple interest worksheet with answers, please click here. Continuous compounding is not actually possible, but it is well defined nevertheless as the upper bound of "regular" compound interest. The simple interest on a sum of money for 3 years at 6⅔% per annum is $6,750. The trouble is, Andrew did not consult his investment advisor before putting his money down, and this sneaky company has locked him into an investment that pays simple interest, not compound interest, on his principal. Continuous Compound Interest Formula. CCSS Math: Standards for Mathematical Practice; CCSS Math 7. The term "compound interest" refers to the growth of money over time. Interest: money that is paid out for investing principal. Quantitative aptitude: concept-wise explanations with practice problems on simple and compound interest, with solutions. So in the short term, it does only a little better than simple interest. • A compound interest account, starting with $1,000, at a rate of 5% annually. The compound interest rule is $A = P(1 + r)^n$, where $P$ is the principal, $r$ the interest rate per period (as a decimal), and $n$ the number of periods.
Learn the math behind your money. Plus, model problems explained step by step. Installment loan math problems. When the interest rate is applied to the original principal and any accumulated interest, this is called compound interest. $P_N$ is the balance in the account after $N$ years. Chapter 4: Math of Finance Problems 18. We have free practice compound interest sums, shortcuts, and useful tips. For example, if you have $100 and invest it, and the bank pays 5% interest, then in one year you will have an extra $5. I = simple interest, P = principal, T = time in years, R = rate of interest, and A = P + I (the amount is the principal plus the interest). The interest rate is 8% per year compounded monthly and your monthly payment is $300.
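The two kinds of interest described above can be sketched in Python; the figures reuse problems quoted in the worksheet text ($2000 at 3% compounded annually for 4 years, and the 5-period 5% example):

```python
def simple_interest_total(principal, rate, years):
    """Amount after `years` with simple interest: interest on the principal only."""
    return principal * (1 + rate * years)

def compound_interest_total(principal, rate, years, periods_per_year=1):
    """Amount with interest compounded `periods_per_year` times a year."""
    n = periods_per_year * years
    return principal * (1 + rate / periods_per_year) ** n

# $2000 at 3% per annum compounded annually for 4 years
amount = compound_interest_total(2000, 0.03, 4)        # about 2251.02
# $1,000 at 5% for 5 periods, matching the future-value table: about 1276.28
five_year = compound_interest_total(1000, 0.05, 5)
# Compound beats simple whenever there is more than one compounding period
assert five_year > simple_interest_total(1000, 0.05, 5)
```

The only difference between the two functions is whether previously earned interest joins the principal before the next period's interest is computed.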
{}
# Square and 2016

Geometry Level 3

$$ABCD$$ is a square with side length 2016. $$E$$ and $$F$$ are the midpoints of $$AD$$ and $$AB$$ respectively. $$G$$ is the intersection point of $$CF$$ and $$BE$$. Find the length of $$DG$$.
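One way to check the answer is coordinate geometry (a sketch, not the intended synthetic solution): place $A$ at the origin, write the two lines as equations, solve for $G$, and compute $DG$ exactly with rational arithmetic.

```python
from fractions import Fraction as Frac

# A = (0,0), B = (2016,0), C = (2016,2016), D = (0,2016)
# E = midpoint of AD = (0,1008), F = midpoint of AB = (1008,0)
# Line BE: x + 2y = 2016;  line CF: y = 2x - 2016.  Substituting gives 5x = 3*2016.
x = Frac(3 * 2016, 5)
y = 2 * x - 2016
G = (x, y)
assert x + 2 * y == 2016          # G lies on BE as well as CF

D = (Frac(0), Frac(2016))
dg_squared = (G[0] - D[0]) ** 2 + (G[1] - D[1]) ** 2
assert dg_squared == 2016 ** 2    # so DG = 2016, the side length itself
```

The legs of the right triangle from $D$ to $G$ come out as $\tfrac{3}{5}$ and $\tfrac{4}{5}$ of the side, a 3-4-5 triangle, which is why $DG$ equals the side length exactly.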
{}
Answer the questions on the basis of the following information. Neelu, Pavitra, Sinthia, Pallavi and Madhvi are five friends. Each of them has $6$ balls. Neelu gives three balls to Pallavi, who further gives two balls each to Pavitra and Madhvi. Sinthia gives $4$ balls to Neelu, who in turn gives $3$ balls each to Pavitra and Madhvi. Pavitra gives $5$ balls to Sinthia and Madhvi gives $4$ balls to Pallavi. How many balls does Pavitra have with her?

1. $6$
2. $7$
3. $8$
4. $9$
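The transfers can be simulated directly, which makes the bookkeeping hard to get wrong (a sketch of the tally):

```python
balls = {name: 6 for name in ["Neelu", "Pavitra", "Sinthia", "Pallavi", "Madhvi"]}

def give(giver, receiver, n):
    balls[giver] -= n
    balls[receiver] += n

give("Neelu", "Pallavi", 3)
give("Pallavi", "Pavitra", 2); give("Pallavi", "Madhvi", 2)
give("Sinthia", "Neelu", 4)
give("Neelu", "Pavitra", 3); give("Neelu", "Madhvi", 3)
give("Pavitra", "Sinthia", 5)
give("Madhvi", "Pallavi", 4)

assert balls["Pavitra"] == 6          # option 1
assert sum(balls.values()) == 30      # balls are only exchanged, never created
```

Pavitra receives 2 + 3 = 5 balls and gives away 5, ending exactly where she started, with 6.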
{}
# Jack Savoretti – Calling Me Back To You Lyrics Jack Savoretti – Calling Me Back To You Lyrics These crazy waters, I’m living in Got me feeling like I’m sinking like a stone Far from the shore, far from everyone Feels like I’m sailing this ship on my own In the middle of the night I’m dancing with my ghost Where the sun don’t shine and we never know where to go But now I know, there’s a lighthouse calling me From the edge of every cliff dived to the bottom of the sea Oh, there’s a lighthouse calling me Pulling me from the blue (From the blue) Calling me back to you (Back to you) Calling me, calling me, calling me, calling me, calling me back to you (Back to you) Calling me, calling me, calling me, calling me back to you (Back to you) These shifting tides are out of my control (Out of my control) But they can’t keep me away from you (Away from you) It’s harder than you now, harder than you think Easier to drift, easier to sink But I keep on coming back, back to you In the middle of the night I’m dancing with my ghost Where the sun don’t shine and we never know where to go But now I know, there’s a lighthouse calling me From the edge of every cliff dived to the bottom of the sea Oh, there’s a lighthouse calling me Pulling me from the blue (From the blue) Calling me back to you (Back to you) Calling me, calling me, calling me, calling me, calling me back to you (Back to you) Calling me, calling me, calling me, calling me back to you (Back to you) Shine, shine your light on me Shine, shine your light on me Shine, shine your light on me Shine, shine your light on me Shine, shine your light on me Shine, shine your light on me Shine, shine your light on me Shine, shine your light on me Oh, there’s a lighthouse calling me From the edge of every cliff dived to the bottom of the sea Oh, there’s a lighthouse calling me Pulling me from the blue (From the blue) Calling me back to you (Back to you) Calling me, calling me, calling me, calling me, calling me back to you (Back to 
you) Calling me, calling me, calling me, calling me back to you (Back to you)
{}
G5 | Maths Made Easy

# G5

Question: Valentina is going for a bike ride. Below is a distance-time graph that describes her full journey. Work out:

a) how long she was stationary for
b) the total distance travelled during her journey
c) her average speed in kilometres per hour between 17:15 and 17:45

$\text{Gradient } = \dfrac{20}{0.5} = 40 \text{ km/h}$
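The gradient calculation for part c is just distance over time; as a sketch (the 20 km figure is read from the graph, and 17:15 to 17:45 is half an hour):

```python
# Average speed = distance / time
distance_km = 20
time_h = 30 / 60                      # 30 minutes as a fraction of an hour
speed_kmh = distance_km / time_h
assert speed_kmh == 40.0              # matches the gradient of 20 / 0.5
```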
{}
# Neural network models module

Some basics of neural network models with the PyTorch package.

fynance.models.neural_network.BaseNeuralNet(): Base object for neural network models with PyTorch.
fynance.models.neural_network.MultiLayerPerceptron(X, y): Neural network with MultiLayer Perceptron architecture.

class fynance.models.neural_network.BaseNeuralNet
Bases: torch.nn.modules.module.Module
Base object for neural network models with PyTorch. Inherits from the torch.nn.Module object with some higher-level methods.
Attributes: criterion : torch.nn.modules.loss A loss function. optimizer : torch.optim An optimizer algorithm. N, M : int Respectively, the input and output dimensions.
Methods
set_optimizer(criterion, optimizer, **kwargs) Set the optimizer object with the specified criterion (loss function) and any optional parameters.
train_on(X, y) Trains the neural network on X as inputs and y as outputs.
predict(X) Predicts the outputs of the neural network model for X as inputs.
__init__(self) Initialize.
predict(self, X) Predicts outputs of the neural network model. Parameters: X : torch.Tensor Inputs to compute the prediction. Returns: torch.Tensor Output prediction.
set_data(self, X, y, x_type=None, y_type=None) Set data inputs and outputs. Parameters: X, y : array-like Respectively, input and output data. x_type, y_type : torch.dtype Respectively, input and output data types. Default is None.
set_optimizer(self, criterion, optimizer, **kwargs) Set the optimizer object. Set the optimizer object with the specified criterion as loss function and any kwargs as optional parameters. Parameters: criterion : torch.nn.modules.loss A loss function. optimizer : torch.optim An optimizer algorithm. kwargs : dict Keyword arguments of the optimizer, cf. the PyTorch documentation [1]. Returns: NeuralNetwork Self object model. References
train_on(self, X, y) Trains the neural network model. Parameters: X, y : torch.Tensor Respectively, inputs and outputs to train the model. Returns: torch.nn.modules.loss Loss outputs.
class fynance.models.neural_network.MultiLayerPerceptron(X, y, layers=[], activation=None, drop=None)
Neural network with MultiLayer Perceptron architecture. Referred to as the vanilla neural network model, with n hidden layers such that n $$\geq$$ 1, each with a specified number of neurons.
Attributes: criterion : torch.nn.modules.loss A loss function. optimizer : torch.optim An optimizer algorithm. n : int Number of hidden layers. layers : list of int List with the number of neurons for each hidden layer. f : torch.nn.Module Activation function.
Methods
set_optimizer(criterion, optimizer, **kwargs) Set the optimizer object with the specified criterion (loss function) and any optional parameters.
train_on(X, y) Trains the neural network on X as inputs and y as outputs.
predict(X) Predicts the outputs of the neural network model for X as inputs.
set_data(X, y) Set the input and output data tensors.
__init__(self, X, y, layers=[], activation=None, drop=None) Initialize. Parameters: X, y : array-like Respectively, input and output data. layers : list of int List of the number of neurons in each hidden layer. activation : torch.nn.Module Activation function of the layers. drop : float, optional Probability of an element to be zeroed.
forward(self, x) Forward computation.
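The set_optimizer / train_on / predict cycle above follows the standard PyTorch training loop: forward pass, loss, backward pass, optimizer step. As a dependency-free sketch of what one train_on step amounts to, here is a hand-rolled gradient-descent step for a single linear neuron with a squared-error criterion; the function names mirror the API above, but this is an illustration, not the fynance implementation:

```python
def predict(w, b, X):
    # Forward pass of a single linear neuron
    return [w * x + b for x in X]

def train_on(w, b, X, y, lr=0.1):
    """One optimization step; returns updated parameters and the loss."""
    n = len(X)
    preds = predict(w, b, X)
    loss = sum((p - t) ** 2 for p, t in zip(preds, y)) / n          # MSE criterion
    grad_w = sum(2 * (p - t) * x for p, t, x in zip(preds, y, X)) / n
    grad_b = sum(2 * (p - t) for p, t in zip(preds, y)) / n
    return w - lr * grad_w, b - lr * grad_b, loss                   # optimizer step

X, y = [0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0]   # target relation: y = 2x + 1
w, b = 0.0, 0.0
for _ in range(500):
    w, b, loss = train_on(w, b, X, y)
assert abs(w - 2.0) < 1e-3 and abs(b - 1.0) < 1e-3
```

In the real classes, criterion and optimizer are pluggable torch objects set via set_optimizer, and autograd replaces the hand-computed gradients.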
{}
Welcome to the Quiz on Programming in C#

1. What will be the output of the following code?
using System; namespace MyApplication1 { class MyClass { static void Main(string[] args) { byte a = 200; byte b = 230; byte c = a + b; Console.WriteLine($"Sum = {c}"); } } }

2. What will be the output of the following code?
using System; namespace MyApplication1 { class MyClass { static void Main(string[] args) { byte a = 200; byte b = 230; byte c = (byte)(a + b); Console.WriteLine($"Sum = {c}"); } } }

3. What is the error in the following code?
double a = 200; int b = 230; int c = a + b; Console.WriteLine($"Sum = {c}");

4. What is the error in the following code?
int a = 200; int b = 230; double c = a + b; Console.WriteLine($"Sum = {c}");

5. What will happen if you execute the following code?
int c; Console.WriteLine($"C = {c}");

6. What is the reason for the error in the following code?
int a; byte b; double c; c = a + b;

7. What is the output of the following code?
int a=2; byte b=3; double c; c = a + b; Console.WriteLine("Sum of {0} and {1} is {2}", a, b);

8. What is the output of the following code?
sbyte a=-20; sbyte b=-30; byte c; c = (byte)(a + b); Console.WriteLine(c);

9. What is wrong with the following statement?
int c = 3 + 4M;

10. What is the error in the following code?
ulong x = 5; long i = 9; i = x; x = i;
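Questions 2 and 8 both hinge on C#'s byte being an unsigned 8-bit type, so the explicit cast keeps only the low 8 bits of the sum. Python has no fixed-width byte type, but the same wraparound can be modeled with `% 256` (a cross-language illustration, not C# itself):

```python
# Question 2: (byte)(200 + 230) keeps 430 mod 256
assert (200 + 230) % 256 == 174

# Question 8: sbyte -20 + sbyte -30 is -50; reinterpreted as an unsigned byte,
# that is 256 - 50.  Python's % already returns a non-negative result.
assert (-20 + -30) % 256 == 206
```

Question 1, by contrast, does not compile at all: in C#, `a + b` on two bytes is an `int`, and assigning it back to a byte needs an explicit cast.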
{}
# Fourier transform time and frequency question? So there is an example in my book where g(k) is converted to G(f) and it is written $$g(k)\Longleftrightarrow G(f)$$ So: $$a^ku(k)\Longleftrightarrow \frac{1}{1-ae^{-j2\pi f}}$$ My question is, how do we find that $a^ku(k)\Longleftrightarrow 1/(1-ae^{-j2 \pi f})$ ? $$G(f)=\sum_{k=-\infty}^{\infty}g(k)e^{-j2\pi kf}=\sum_{k=0}^{\infty}a^ke^{-j2\pi kf}=\frac{1}{1-ae^{-j2\pi f}},\quad |a|<1$$ where the last equality follows from the formula for the geometric sum, which is valid if $|a|<1$ holds.
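The geometric-series step can be checked numerically for concrete values of $a$ and $f$ (chosen here purely for illustration):

```python
import cmath
import math

a, f = 0.5, 0.3                       # any |a| < 1 works
closed_form = 1 / (1 - a * cmath.exp(-2j * math.pi * f))
# Partial sum of a^k e^{-j 2 pi k f}; the tail beyond k = 200 is below 1e-60
partial_sum = sum(a**k * cmath.exp(-2j * math.pi * k * f) for k in range(200))
assert abs(partial_sum - closed_form) < 1e-12
```

Each term is the geometric ratio $r = a e^{-j2\pi f}$ raised to the $k$-th power, so the sum is $1/(1-r)$ exactly when $|r| = |a| < 1$.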
{}
# Calculate fundamental limits using l'Hospital's rule So I have this essay where a question is "Calculate the three fundamental limits using l'Hospital's rule" I find it easy to calculate $\lim_{x \rightarrow 0}\frac{\sin(x)}{x}$ and $\lim_{x \rightarrow 0}\frac{e^x - 1}{x}$, however the one I can't understand is the limit $\lim_{x \rightarrow +\infty}\left(1 + \frac{1}{x}\right)^x$... How exactly am I supposed to use l'Hospital's rule here? I tried writing $\left(1 + \frac{1}{x}\right)^x$ as $\frac{(x+1)^x}{x^x}$ and utilizing the fact that $\frac{d(x^x)}{dx} = x^x(\ln(x) + 1)$, but instead of simplifying, using l'Hospital's rule that way actually makes it worse... Can anyone point me in the right direction? • I presume that sen(x) means \sin x – DanielWainfleet Aug 9 '18 at 1:10 • A strange topic for an essay, since using l'Hospital for those limits is circular reasoning (with the way that the derivatives of $\sin x$ and $e^x$ are usually derived in basic calculus courses)... – Hans Lundmark Aug 9 '18 at 10:57 By the well known exponential manipulation $A^B=e^{B\log A}$, we have $$\left(1 + \frac{1}{x}\right)^x=\large{e^{x\log \left(1 + \frac{1}{x}\right)}}=\large{e^{\frac{\log \left(1 + \frac{1}{x}\right)}{\frac1x}}}$$ and $\frac{\log \left(1 + \frac{1}{x}\right)}{\frac1x}$ is an indeterminate form $\frac{0}{0}$. Hint: $(1+1/x)^x=e^{x \ln(1+1/x)} = e^{\ln(1+t)/t}$ where $t=1/x$.
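A quick numerical sanity check of the rewriting $A^B = e^{B \log A}$, and of the limit itself (the sample values of $x$ are arbitrary):

```python
import math

# The two forms are the same expression, so they must agree numerically,
# and both approach e as x grows.
for x in [1e3, 1e6]:
    direct = (1 + 1 / x) ** x
    via_exp = math.exp(math.log(1 + 1 / x) / (1 / x))   # the 0/0 rewriting
    assert abs(direct - via_exp) < 1e-8

assert abs((1 + 1 / 1e6) ** 1e6 - math.e) < 1e-4
```

Applying l'Hospital to $\frac{\log(1 + 1/x)}{1/x}$ then gives the exponent's limit as 1, so the original limit is $e^1 = e$.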
{}
# How do I calculate the event horizon? Tags: 1. Sep 12, 2015 ### Spring I am clearly talking about black holes. The event horizon is the limit from within which even a photon won't escape. I tried to calculate it the easy way using an energy calculation, m * MG/R = mc^2 / 2, but I do not know if I am using the right equation, or even whether I can divide by the m, because it equals zero, and dividing by zero undermines the foundations of math and physics. If the way to calculate it is tricky and technical I will be disappointed, because I want to understand it well, but I will still try and listen. 2. Sep 13, 2015 ### stevebd1 The first equation is correct but the second one (to my knowledge) should be the Newtonian equation for kinetic energy: $$E_k=\frac{1}{2}mv^2$$ If you replace the second equation with this one, then you should be able to rearrange to get the equation for escape velocity and, from that, establish an equation for the event horizon (the Schwarzschild radius). This is a basic way of establishing the EH; for a more accurate, GR-based solution, you should look at the Schwarzschild metric. You might also find the following thread of interest: Deriving the Schwarzschild radius? Last edited: Sep 13, 2015 3. Sep 14, 2015 ### Spring The second equation is ½mv2, but I used c (the speed of light) to calculate it for light. Once again, I am unsure, because I divided by the mass of the photon, which equals zero (dividing by nothing). Also, there is a different way to calculate the energy of a photon: Ep = hf. That would mean that anywhere, a photon with a high enough frequency could escape. 4. Sep 14, 2015 ### AgentSmith Zero is the rest mass of a photon. Of course, photons are not normally at rest. High frequency implies high energy.
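Putting the two posts together: setting $\tfrac{1}{2}mv^2 = \frac{GMm}{R}$, the test mass $m$ cancels algebraically before any division by zero arises, and substituting $v = c$ gives the Schwarzschild radius $R = 2GM/c^2$ (a heuristic Newtonian derivation; the full GR treatment happens to give the same formula). A short sketch with standard physical constants:

```python
# (1/2) m v^2 = G M m / R  =>  v^2 = 2 G M / R; setting v = c gives R = 2 G M / c^2
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 299_792_458      # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

def schwarzschild_radius(M):
    return 2 * G * M / c**2

rs = schwarzschild_radius(M_sun)
assert 2.9e3 < rs < 3.0e3    # roughly 2.95 km for one solar mass
```

So a body of one solar mass would have to be squeezed inside a sphere of radius of about 3 km to become a black hole.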
{}
# How do I solve this matrix equation for infinitesimal rotations? I have a matrix equation, taken from Wikipedia (Infinitesimale Drehungen), that looks not that complicated (note $$a$$ is a scalar, actually an angle, used as the input parameter for the rotation matrix): $$R(a)=\exp(aJ)$$ In my case I would like to obtain $$J$$ while $$R(a)$$ is given: $$R(a)=\left( \begin{array}{ccc} \cos\left(\sqrt{5}a\right)&-\frac{2\sin\left(\sqrt{5}a\right)}{\sqrt{5}}&\frac{\sin\left(\sqrt{5}a\right)}{\sqrt{5}}\\ \frac{2\sin\left(\sqrt{5}a\right)}{\sqrt{5}} & \frac{1}{5}\left(4\cos\left(\sqrt{5}a\right)+1\right) & -\frac{2}{5}\left(\cos\left(\sqrt{5}a\right)-1\right)\\ -\frac{\sin\left(\sqrt{5}a\right)}{\sqrt{5}} & -\frac{2}{5}\left(\cos\left(\sqrt{5}a\right)-1\right) & \frac{1}{5}\left(\cos\left(\sqrt{5}a\right)+4\right) \\ \end{array} \right)$$ I tried the following J[a_] := MatrixLog[R[a]]/a which does not work. Based on the equation $$J=\left.\frac{dR(a)}{da}\right|_{a=0}$$ that is also provided on the above-given Wikipedia page, I tried J[a_] := D[R[a], a] as well, which did not work either. My full listing looks as follows: ClearAll["Global`*"]; R[a_] := {{Cos[\[Sqrt]5 a], -((2 Sin[\[Sqrt]5 a])/(\[Sqrt]5)), Sin[\[Sqrt]5 a]/(\[Sqrt]5)}, {(2 Sin[\[Sqrt]5 a])/(\[Sqrt]5), (1/5) (1 + 4 Cos[\[Sqrt]5 a]), -(2/5) (-1 + Cos[\[Sqrt]5 a])}, {-(Sin[\[Sqrt]5 a]/(\[Sqrt]5)), -(2/5) (-1 + Cos[\[Sqrt]5 a]), (1/5) (4 + Cos[\[Sqrt]5 a])}}; J[a_] := MatrixLog[R[a]]/a; J[a_] := D[R[a], a]; Print[FullSimplify[MatrixExp[a*J]]]; Print[FullSimplify[Limit[MatrixPower[IdentityMatrix[3] + (a/n) J, n], n -> Infinity]]]; The last two Print statements should yield the original matrix $$R(a)$$. I would be grateful for any help on obtaining the matrix $$J$$.
• Asymptotic[R[a] - MatrixExp[a R'[a]] // Simplify, a -> 0] shows that expression is small O[a^2] Aug 22, 2022 at 11:05 • With a quick look, it seems to me the notation is a bit inconsistent, ie $a$ is in a dot product with $J$ in the 1st equation (hence is a matrix), but then appears as a scalar later on. I think this should be clarified before any attempts at a solution are made ;-) Aug 22, 2022 at 11:10 • You are right - $a$ is a scalar (an angle actually). And I fixed it in the OP to be treated consistently as a scalar. Aug 22, 2022 at 11:14 • Ok, thanks it's then more clear. Aug 22, 2022 at 11:16 • The use of Print is unnecessary in expressions like Print[FullSimplify[MatrixExp[a*J]]]; Just don't suppress the output with the semi-colon. Aug 22, 2022 at 14:21 Generally, Mathematica is strict when using multivalued functions (like Log), and that is why it won't fully "simplify" your expression unless you use some tricky functions (like PowerExpand or ComplexExpand). The following works: Clear[R, J]; R[a_] := {{Cos[\[Sqrt]5 a], -((2 Sin[\[Sqrt]5 a])/(\[Sqrt]5)), Sin[\[Sqrt]5 a]/(\[Sqrt]5)}, {(2 Sin[\[Sqrt]5 a])/(\[Sqrt]5), (1/ 5) (1 + 4 Cos[\[Sqrt]5 a]), -(2/5) (-1 + Cos[\[Sqrt]5 a])}, {-(Sin[\[Sqrt]5 a]/(\[Sqrt]5)), -(2/5) (-1 + Cos[\[Sqrt]5 a]), (1/5) (4 + Cos[\[Sqrt]5 a])}}; J = PowerExpand[FullSimplify[(MatrixLog[R[a]]/a)]]; J // MatrixForm Check the result: MatrixExp[a J] == R[a] // Reduce (* True *) • Works brilliantly: Print[FullSimplify[MatrixExp[a*J]]] yields the original matrix. Many thanks! Aug 22, 2022 at 11:20 • Just to provide a double-check: the limit-based equation FullSimplify[Limit[MatrixPower[IdentityMatrix[3] + (a/n) J, n], n -> Infinity]] yields the original rotation matrix as well. Aug 22, 2022 at 12:37 The derivative formula you found, $$\left.\frac{dR}{da}\right|_{a=0} = J,$$ is much easier to use than the MatrixLog functions; you just neglected to set $$a$$ to zero after you took the derivative. This can be done like so: J = D[R[a], a] /. 
a -> 0 (* {{0, -2, 1}, {2, 0, 0}, {-1, 0, 0}} *) Verifying: Simplify[MatrixExp[a J] == R[a]] (* True *) Simplify[Limit[MatrixPower[IdentityMatrix[3] + (a/n) J, n], n -> Infinity] == R[a]] (* True *) Alternately, the matrix exponential is defined in such a way that $$\frac{dR}{da} = J R(a) \quad \Rightarrow \quad J = R^{-1} \frac{dR}{da}$$ which can also be done in Mathematica: J = Simplify[Inverse[R[a]] . D[R[a], a]] (* same as above *) • This is really a very efficient alternative, which I also just tried and works great. Thank you! Aug 22, 2022 at 17:35 An alternative approach that might be useful in some cases. You can compute the MatrixLog in the limit of small a Limit[MatrixLog[Normal[Series[R[a], {a, 0, 1}]]]/a, a -> 0] (* {{0, -2, 1}, {2, 0, 0}, {-1, 0, 0}} *) • Thank you for this interesting approach, which works greatly as well. Do you have a reference (maybe wiki or math book) for this trick? On the Wikipedia page for Matrix Log I could not find it. Aug 23, 2022 at 5:44 • Your relationship is true for all values of `a`, so it must be true in the limiting value of small `a`. Aug 23, 2022 at 11:38
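As a cross-check outside Mathematica, one can confirm numerically that the generator found above really reproduces $R(a)$. This Python sketch implements the matrix exponential by its power series $\sum_k M^k/k!$ (an independent verification, not part of the original answers):

```python
import math

J = [[0.0, -2.0, 1.0],
     [2.0,  0.0, 0.0],
     [-1.0, 0.0, 0.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def expm(M, terms=40):
    # Truncated power series sum_k M^k / k!; 40 terms is ample for ||M|| ~ 1.6
    result = [[float(i == j) for j in range(3)] for i in range(3)]
    power = [[float(i == j) for j in range(3)] for i in range(3)]
    fact = 1.0
    for k in range(1, terms):
        power = matmul(power, M)
        fact *= k
        result = [[result[i][j] + power[i][j] / fact for j in range(3)]
                  for i in range(3)]
    return result

def R(a):
    s5 = math.sqrt(5)
    c, s = math.cos(s5 * a), math.sin(s5 * a)
    return [[c, -2 * s / s5, s / s5],
            [2 * s / s5, (4 * c + 1) / 5, -2 * (c - 1) / 5],
            [-s / s5, -2 * (c - 1) / 5, (c + 4) / 5]]

a = 0.7
aJ = [[a * J[i][j] for j in range(3)] for i in range(3)]
E, Ra = expm(aJ), R(a)
err = max(abs(E[i][j] - Ra[i][j]) for i in range(3) for j in range(3))
assert err < 1e-9
```

The entries of $J$ also match differentiating $R(a)$ entry by entry at $a = 0$: for instance, $\frac{d}{da}\left[-\tfrac{2}{\sqrt 5}\sin(\sqrt 5\, a)\right] = -2\cos(\sqrt 5\, a)$, which is $-2$ at $a = 0$.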
{}
MathSciNet bibliographic data MR2776142 54H25 (47H10) Karapınar, Erdal. Fixed point theory for cyclic weak $\varphi$-contraction. Appl. Math. Lett. 24 (2011), no. 6, 822–825. Article
{}
# What is LaTeX? LaTeX is a typesetting language based on TeX (which was created by Donald Knuth), used to render math expressions. We have pre-populated some LaTeX buttons for your use. The full complement of LaTeX commands is not supported, although many are. If you are having difficulty finding the LaTeX command you want, check out these resources: LaTeX cheat sheet, LaTeX Math Symbols. CodeCogs is also a great resource, and you can usually cut and paste from CodeCogs into EquatIO.
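For example, here are a few common math constructions typed directly in LaTeX (sample expressions for illustration, not a list of EquatIO's buttons):

```latex
\frac{a}{b}                          % fraction
x^{2} + y_{i}                        % superscript and subscript
\sqrt{x^{2} + 1}                     % square root
\int_{0}^{1} x \, dx                 % definite integral
\sum_{n=1}^{\infty} \frac{1}{n^{2}}  % infinite sum
```

Pasting any of these between math delimiters should render the corresponding symbol.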
{}
## Peirce’s 1880 “Algebra Of Logic” Chapter 3 • Selection 7 ### Chapter 3. The Logic of Relatives (cont.) #### §4. Classification of Relatives 225.   Individual relatives are of one or other of the two forms $\begin{array}{lll} \mathrm{A : A} & \qquad & \mathrm{A : B}, \end{array}$ and simple relatives are negatives of one or other of these two forms. 226.   The forms of general relatives are of infinite variety, but the following may be particularly noticed. Relatives may be divided into those all whose individual aggregants are of the form $\mathrm{A : A}$ and those which contain individuals of the form $\mathrm{A : B}.$  The former may be called concurrents, the latter opponents. Concurrents express a mere agreement among objects.  Such, for instance, is the relative ‘man that is ──’, and a similar relative may be formed from any term of singular reference.  We may denote such a relative by the symbol for the term of singular reference with a comma after it;  thus $(m,\!)$ will denote ‘man that is ──’ if $(m)$ denotes ‘man’.  In the same way a comma affixed to an $n$-fold relative will convert it into an $(n + 1)$-fold relative.  Thus,  $(l)$ being ‘lover of ──’,  $(l,\!)$ will be ‘lover that is ── of ──’. The negative of a concurrent relative will be one each of whose simple components is of the form $\mathrm{\overline{A : A}},$ and the negative of an opponent relative will be one which has components of the form $\mathrm{\overline{A : B}}.$ We may also divide relatives into those which contain individual aggregants of the form $\mathrm{A : A}$ and those which contain only aggregants of the form $\mathrm{A : B}.$  The former may be called self-relatives, the latter alio-relatives.  We also have negatives of self-relatives and negatives of alio-relatives. ### References • Peirce, C.S. (1880), “On the Algebra of Logic”, American Journal of Mathematics 3, 15–57.  Collected Papers (CP 3.154–251), Chronological Edition (CE 4, 163–209). 
• Peirce, C.S., Collected Papers of Charles Sanders Peirce, vols. 1–6, Charles Hartshorne and Paul Weiss (eds.), vols. 7–8, Arthur W. Burks (ed.), Harvard University Press, Cambridge, MA, 1931–1935, 1958.  Volume 3 : Exact Logic, 1933. • Peirce, C.S., Writings of Charles S. Peirce : A Chronological Edition, Peirce Edition Project (eds.), Indiana University Press, Bloomington and Indianapolis, IN, 1981–.  Volume 4 (1879–1884), 1986. ## Relations & Their Relatives : 4 Right, the “divisor of” relation signified by $x|y$ is a dyadic relation on the set of positive integers $\mathbb{M},$ so it can be understood as a subset of the cartesian product $\mathbb{M} \times \mathbb{M}.$  It is an example of a partial order, whereas the “less than or equal to” relation signified by $x \le y$ is an example of a total order relation. And yes, the mathematics of relations can be applied most felicitously to semiotics, but here we must bump the adicity or arity up to three.  We take any sign relation $L$ to be subset of a cartesian product $O \times S \times I,$ where $O$ is the set of objects under consideration in a given discussion, $S$ is the set of signs, and $I$ is the set of interpretant signs involved in the same discussion. One thing we need to understand here is that the sign relation $L \subseteq O \times S \times I$ relevant to a given level of discussion can be rather more abstract than what we would call a sign process proper, that is, a structure extended through a dimension of time.  Indeed, many of the most powerful sign relations are those that generate sign processes through iteration or recursion or other operations of that sort.  When this happens, the most penetrating analysis of the sign process or semiosis in view will come through grasping the core sign relation that generates it. ## Mathematical Demonstration & the Doctrine of Individuals : 2 ### Selection from C.S. Peirce, “Logic Of Relatives” (1870), CP 3.45–149 93.   
In reference to the doctrine of individuals, two distinctions should be borne in mind.  The logical atom, or term not capable of logical division, must be one of which every predicate may be universally affirmed or denied.  For, let $\mathrm{A}$ be such a term.  Then, if it is neither true that all $\mathrm{A}$ is $\mathrm{X}$ nor that no $\mathrm{A}$ is $\mathrm{X},$ it must be true that some $\mathrm{A}$ is $\mathrm{X}$ and some $\mathrm{A}$ is not $\mathrm{X};$  and therefore $\mathrm{A}$ may be divided into $\mathrm{A}$ that is $\mathrm{X}$ and $\mathrm{A}$ that is not $\mathrm{X},$ which is contrary to its nature as a logical atom. Such a term can be realized neither in thought nor in sense. Not in sense, because our organs of sense are special — the eye, for example, not immediately informing us of taste, so that an image on the retina is indeterminate in respect to sweetness and non-sweetness.  When I see a thing, I do not see that it is not sweet, nor do I see that it is sweet;  and therefore what I see is capable of logical division into the sweet and the not sweet.  It is customary to assume that visual images are absolutely determinate in respect to color, but even this may be doubted.  I know no facts which prove that there is never the least vagueness in the immediate sensation. In thought, an absolutely determinate term cannot be realized, because, not being given by sense, such a concept would have to be formed by synthesis, and there would be no end to the synthesis because there is no limit to the number of possible predicates. A logical atom, then, like a point in space, would involve for its precise determination an endless process.  We can only say, in a general way, that a term, however determinate, may be made more determinate still, but not that it can be made absolutely determinate.  
Such a term as “the second Philip of Macedon” is still capable of logical division — into Philip drunk and Philip sober, for example;  but we call it individual because that which is denoted by it is in only one place at one time.  It is a term not absolutely indivisible, but indivisible as long as we neglect differences of time and the differences which accompany them.  Such differences we habitually disregard in the logical division of substances.  In the division of relations, etc., we do not, of course, disregard these differences, but we disregard some others.  There is nothing to prevent almost any sort of difference from being conventionally neglected in some discourse, and if $I$ be a term which in consequence of such neglect becomes indivisible in that discourse, we have in that discourse, $[I] = 1.$ This distinction between the absolutely indivisible and that which is one in number from a particular point of view is shadowed forth in the two words individual (τὸ ἄτομον) and singular (τὸ καθ᾿ ἕκαστον);  but as those who have used the word individual have not been aware that absolute individuality is merely ideal, it has come to be used in a more general sense. ### Note Peirce explains his use of the square bracket notation at CP 3.65. I propose to denote the number of a logical term by enclosing the term in square brackets, thus, $[t].$ The number of an absolute term, as in the case of $I,$ is defined as the number of individuals it denotes. ### References • Peirce, C.S. (1870), “Description of a Notation for the Logic of Relatives, Resulting from an Amplification of the Conceptions of Boole’s Calculus of Logic”, Memoirs of the American Academy of Arts and Sciences 9, 317–378, 26 January 1870. Reprinted, Collected Papers 3.45–149, Chronological Edition 2, 359–429. Online (1) (2) (3). • Peirce, C.S., Collected Papers of Charles Sanders Peirce, vols. 1–6, Charles Hartshorne and Paul Weiss (eds.), vols. 7–8, Arthur W. 
Burks (ed.), Harvard University Press, Cambridge, MA, 1931–1935, 1958. • Peirce, C.S., Writings of Charles S. Peirce : A Chronological Edition, Peirce Edition Project (eds.), Indiana University Press, Bloomington and Indianapolis, IN, 1981–. ## Mathematical Demonstration & the Doctrine of Individuals : 1 ### Selection from C.S. Peirce, “Logic Of Relatives” (1870), CP 3.45–149 92.   Demonstration of the sort called mathematical is founded on suppositions of particular cases.  The geometrician draws a figure;  the algebraist assumes a letter to signify a single quantity fulfilling the required conditions.  But while the mathematician supposes an individual case, his hypothesis is yet perfectly general, because he considers no characters of the individual case but those which must belong to every such case.  The advantage of his procedure lies in the fact that the logical laws of individual terms are simpler than those which relate to general terms, because individuals are either identical or mutually exclusive, and cannot intersect or be subordinated to one another as classes can.  Mathematical demonstration is not, therefore, more restricted to matters of intuition than any other kind of reasoning.  Indeed, logical algebra conclusively proves that mathematics extends over the whole realm of formal logic;  and any theory of cognition which cannot be adjusted to this fact must be abandoned.  We may reap all the advantages which the mathematician is supposed to derive from intuition by simply making general suppositions of individual cases. ### References • Peirce, C.S. (1870), “Description of a Notation for the Logic of Relatives, Resulting from an Amplification of the Conceptions of Boole’s Calculus of Logic”, Memoirs of the American Academy of Arts and Sciences 9, 317–378, 26 January 1870. Reprinted, Collected Papers 3.45–149, Chronological Edition 2, 359–429. Online (1) (2) (3). • Peirce, C.S., Collected Papers of Charles Sanders Peirce, vols. 
1–6, Charles Hartshorne and Paul Weiss (eds.), vols. 7–8, Arthur W. Burks (ed.), Harvard University Press, Cambridge, MA, 1931–1935, 1958. • Peirce, C.S., Writings of Charles S. Peirce : A Chronological Edition, Peirce Edition Project (eds.), Indiana University Press, Bloomington and Indianapolis, IN, 1981–. ## Relations & Their Relatives : 3 Here are two ways of looking at the divisibility relation, a dyadic relation of fundamental importance in number theory. Table 1 shows the first few ordered pairs of the relation on positive integers that corresponds to the relative term, “divisor of”.  Thus, the ordered pair ${i\!:\!j}$ appears in the relation if and only if ${i}$ divides ${j},$ for which the usual notation is ${i|j}.$ $\begin{array}{|c||*{11}{c}|} \multicolumn{12}{c}{\text{Table 1. Elementary Relatives for the “Divisor Of” Relation}} \\[4pt] \hline i|j &1&2&3&4&5&6&7&8&9&10&\ldots \\ \hline\hline 1&1\!:\!1&1\!:\!2&1\!:\!3&1\!:\!4&1\!:\!5&1\!:\!6&1\!:\!7&1\!:\!8&1\!:\!9&1\!:\!10&\dots \\ 2&&2\!:\!2&&2\!:\!4&&2\!:\!6&&2\!:\!8&&2\!:\!10&\dots \\ 3&&&3\!:\!3&&&3\!:\!6&&&3\!:\!9&&\dots \\ 4&&&&4\!:\!4&&&&4\!:\!8&&&\dots \\ 5&&&&&5\!:\!5&&&&&5\!:\!10&\dots \\ 6&&&&&&6\!:\!6&&&&&\dots \\ 7&&&&&&&7\!:\!7&&&&\dots \\ 8&&&&&&&&8\!:\!8&&&\dots \\ 9&&&&&&&&&9\!:\!9&&\dots \\ 10&&&&&&&&&&10\!:\!10&\dots \\ \ldots&\ldots&\ldots&\ldots&\ldots&\ldots& \ldots&\ldots&\ldots&\ldots&\ldots&\ldots \\ \hline \end{array}$ Table 2 shows the same information in the form of a logical matrix.  This has a coefficient of ${1}$ in row ${i}$ and column ${j}$ when ${i|j},$ otherwise it has a coefficient of ${0}.$  (The zero entries have been omitted here for ease of reading.) $\begin{array}{|c||*{11}{c}|} \multicolumn{12}{c}{\text{Table 2.
Logical Matrix for the “Divisor Of” Relation}} \\[4pt] \hline i|j &1&2&3&4&5&6&7&8&9&10&\ldots \\ \hline\hline 1&1&1&1&1&1&1&1&1&1&1&\dots \\ 2& &1& &1& &1& &1& &1&\dots \\ 3& & &1& & &1& & &1& &\dots \\ 4& & & &1& & & &1& & &\dots \\ 5& & & & &1& & & & &1&\dots \\ 6& & & & & &1& & & & &\dots \\ 7& & & & & & &1& & & &\dots \\ 8& & & & & & & &1& & &\dots \\ 9& & & & & & & & &1& &\dots \\ 10&& & & & & & & & &1&\dots \\ \ldots&\ldots&\ldots&\ldots&\ldots&\ldots& \ldots&\ldots&\ldots&\ldots&\ldots&\ldots \\ \hline \end{array}$ Just as matrices in linear algebra represent linear transformations, these logical arrays and matrices represent logical transformations. ## Relations & Their Relatives : 2 It may help to clarify the relationship between logical relatives and mathematical relations.  The word relative as used in logic is short for relative term — as such it refers to an article of language that is used to denote a formal object.  So what kind of object is that?  The way things work in mathematics, we are free to make up a formal object that corresponds directly to the term, so long as we can form a consistent theory of it, but it’s probably easier and more practical in the long run to relate the relative term to the kinds of relations that are ordinarily treated in mathematics and universally applied in relational databases. In these contexts a relation is just a set of ordered tuples and — if you are a fan of strong typing like I am — such a set is always set in a specific setting, namely, it’s a subset of a specified Cartesian product. Peirce wrote $k$-tuples $(x_1, x_2, \ldots, x_{k-1}, x_k)$ in the form $x_1 : x_2 : \ldots : x_{k-1} : x_k$ and he referred to them as elementary $k$-adic relatives.  He expressed a set of $k$-tuples as a “logical aggregate” or “logical sum”, what we would call a logical disjunction of elementary relatives, and he frequently regarded them as being arranged in the form of $k$-dimensional arrays.
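The tables above translate directly into code. Here is a minimal Python sketch (my own illustration; Peirce's pair notation appears only in the comments) that builds the logical matrix of Table 2 and the set of elementary relatives of Table 1:

```python
# Build the logical matrix of Table 2 for the "divisor of" relation.

def divisor_matrix(n):
    """Return the n-by-n 0/1 matrix with a 1 in row i, column j iff i|j."""
    return [[1 if j % i == 0 else 0 for j in range(1, n + 1)]
            for i in range(1, n + 1)]

def elementary_relatives(n):
    """The ordered pairs (i, j) with i|j, i.e. Peirce's elementary
    relatives i:j aggregated into one dyadic relation."""
    return {(i, j) for i in range(1, n + 1)
                   for j in range(1, n + 1) if j % i == 0}

M = divisor_matrix(10)
print(M[0])   # row for i = 1: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
print(M[1])   # row for i = 2: [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
print((2, 6) in elementary_relatives(10))   # True
```

Row 1 of the matrix is all ones because 1 divides every positive integer, matching the first row of Table 2.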
Time for some concrete examples, which I will give in the next post. ## Relations & Their Relatives : 1 Sign relations are just special cases of triadic relations, in much the same way that binary operations in mathematics are special cases of triadic relations.  It does amount to a minor complication that we participate in sign relations whenever we talk or think about anything else, but it still makes sense to try and tease the separate issues apart as much as we possibly can. As far as relations in general go, relative terms are often expressed by slotted frames like “brother of __”, “divisor of __”, and “sum of __ and __”.  Peirce referred to these kinds of incomplete expressions as rhemes or rhemata and Frege used the adjective ungesättigt or unsaturated to convey more or less the same idea. Switching the focus to sign relations, it’s a fair question to ask what kinds of objects might be denoted by pieces of code like “brother of __”, “divisor of __”, and “sum of __ and __”.  And while we’re at it, what is this thing called denotation, anyway?
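The analogy between rhemes and unsaturated functions can be made concrete with partial application. A small Python sketch (the function names are mine, chosen to echo the slotted frames above):

```python
from functools import partial

# A triadic relative like "sum of __ and __" is a function with two
# open slots; saturating one slot leaves a dyadic relative.
def sum_of(x, y):
    return x + y

sum_of_3 = partial(sum_of, 3)   # "sum of 3 and __": one slot still open
print(sum_of_3(4))              # 7

# A dyadic relative "divisor of __" over a fixed universe of discourse:
def divisors_of(n):
    """Everything standing in the divisor-of relation to n."""
    return [i for i in range(1, n + 1) if n % i == 0]

print(divisors_of(12))          # [1, 2, 3, 4, 6, 12]
```

Filling every slot of a rheme yields a complete expression, just as supplying every argument of a function yields a value.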
# Properties Label 68208bw Number of curves $6$ Conductor $68208$ CM no Rank $0$ Graph # Learn more about Show commands for: SageMath sage: E = EllipticCurve("68208.be1") sage: E.isogeny_class() ## Elliptic curves in class 68208bw sage: E.isogeny_class().curves LMFDB label Cremona label Weierstrass coefficients Torsion structure Modular degree Optimality 68208.be5 68208bw1 [0, -1, 0, -614672, 198801408] [2] 1179648 $$\Gamma_0(N)$$-optimal 68208.be4 68208bw2 [0, -1, 0, -10026592, 12223470400] [2, 2] 2359296 68208.be3 68208bw3 [0, -1, 0, -10218672, 11730977280] [2, 2] 4718592 68208.be1 68208bw4 [0, -1, 0, -160425232, 782144188288] [2] 4718592 68208.be6 68208bw5 [0, -1, 0, 9785088, 52042554432] [4] 9437184 68208.be2 68208bw6 [0, -1, 0, -33295712, -60103232832] [2] 9437184 ## Rank sage: E.rank() The elliptic curves in class 68208bw have rank $$0$$. ## Modular form 68208.2.a.be sage: E.q_eigenform(10) $$q - q^{3} + 2q^{5} + q^{9} - 4q^{11} + 2q^{13} - 2q^{15} - 2q^{17} - 4q^{19} + O(q^{20})$$ ## Isogeny matrix sage: E.isogeny_class().matrix() The $$i,j$$ entry is the smallest degree of a cyclic isogeny between the $$i$$-th and $$j$$-th curve in the isogeny class, in the Cremona numbering. $$\left(\begin{array}{rrrrrr} 1 & 2 & 4 & 4 & 8 & 8 \\ 2 & 1 & 2 & 2 & 4 & 4 \\ 4 & 2 & 1 & 4 & 2 & 2 \\ 4 & 2 & 4 & 1 & 8 & 8 \\ 8 & 4 & 2 & 8 & 1 & 4 \\ 8 & 4 & 2 & 8 & 4 & 1 \end{array}\right)$$ ## Isogeny graph sage: E.isogeny_graph().plot(edge_labels=True) The vertices are labelled with Cremona labels.
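As a quick plausibility check on the table above, the isogeny matrix should be symmetric (every cyclic isogeny has a dual of the same degree), have ones on the diagonal, and contain only powers of 2 for a class linked by 2-isogenies. A short Python verification, transcribing the matrix as printed:

```python
# Isogeny matrix for class 68208bw as printed above (Cremona numbering).
M = [
    [1, 2, 4, 4, 8, 8],
    [2, 1, 2, 2, 4, 4],
    [4, 2, 1, 4, 2, 2],
    [4, 2, 4, 1, 8, 8],
    [8, 4, 2, 8, 1, 4],
    [8, 4, 2, 8, 4, 1],
]

n = len(M)
# Symmetric: a cyclic isogeny of degree d has a dual of the same degree.
assert all(M[i][j] == M[j][i] for i in range(n) for j in range(n))
# Diagonal ones: the identity is a degree-1 isogeny.
assert all(M[i][i] == 1 for i in range(n))
# All degrees are powers of 2, as expected for a 2-isogeny class.
assert all(M[i][j] in (1, 2, 4, 8) for i in range(n) for j in range(n))
print("isogeny matrix passes basic consistency checks")
```

This only checks internal consistency of the printed data; the curves themselves would need Sage or the LMFDB to verify.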
even/odd graph • August 9th 2009, 09:27 AM live_laugh_luv27 even/odd graph Are the graphs of $\sqrt{x}$ $2^x$ $\log_2 x$ even or odd? I don't see any symmetry with respect to y axis or origin. Thanks! • August 9th 2009, 09:40 AM Jhevon Quote: Originally Posted by live_laugh_luv27 Are the graphs of $\sqrt{x}$ $2^x$ $\log_2 x$ even or odd? I don't see any symmetry with respect to y axis or origin. Thanks! A function $f(x)$ is even if $f(-x) = f(x)$ A function $f(x)$ is odd if $f(-x) = -f(x)$ can you continue? • August 9th 2009, 10:10 AM live_laugh_luv27 • August 9th 2009, 10:15 AM live_laugh_luv27 I'm still not exactly sure how to set that up, but have another question... even if a function does not have symmetry in respect to the y-axis, can it still be even? similarly, even if the function does not have symmetry in respect to the origin, can it still be odd? thanks! • August 9th 2009, 10:17 AM Plato Quote: Originally Posted by live_laugh_luv27 What does any of the above have to do with the basic definitions of odd & even? $\text{Odd functions are such that }f(-x)=-f(x)$ $\text{Even functions are such that }f(-x)=f(x)$ • August 9th 2009, 10:27 AM Jhevon Quote: Originally Posted by live_laugh_luv27 I'm still not exactly sure how to set that up, but have another question... even if a function does not have symmetry in respect to the y-axis, can it still be even? similarly, even if the function does not have symmetry in respect to the origin, can it still be odd? thanks! no, even and odd follow the exact definitions i gave you. you don't even have to look at the graphs. 
For example, state whether (a) $f(x) = x^2$, (b) $f(x) = \sin x$, and (c) $f(x) = \frac {x^2}{x^3 + 1}$ are even, odd, or neither. (a) $f(x) = x^2$ is even, since $f(-x) = (-x)^2 = x^2 = f(x)$ (b) $f(x) = \sin x$ is odd, since $f(-x) = \sin (-x) = - \sin x = - f(x)$ (c) $f(x) = \frac {x^2}{x^3 + 1}$ is neither even nor odd, since $f(-x) = \frac {(-x)^2}{(-x)^3 + 1} = \frac {x^2}{1 - x^3} \ne f(x) \text{ or } -f(x)$ notice that i didn't even draw any graphs. now, try again • August 9th 2009, 10:29 AM live_laugh_luv27 Quote: Originally Posted by Plato What does any of the above have to do with the basic definitions of odd & even? $\text{Odd functions are such that }f(-x)=-f(x)$ $\text{Even functions are such that }f(-x)=f(x)$ so these answers are incorrect? http://www.mathhelpforum.com/math-he...19f85201-1.gif - even http://www.mathhelpforum.com/math-he...84e8019b-1.gif - odd http://www.mathhelpforum.com/math-he...069c3f11-1.gif - even • August 9th 2009, 10:31 AM Jhevon yes, they are wrong. did you try what i said? • August 9th 2009, 10:33 AM live_laugh_luv27 http://www.mathhelpforum.com/math-he...19f85201-1.gif = $\sqrt{-x}$ = odd http://www.mathhelpforum.com/math-he...84e8019b-1.gif = $2^{-x}$ = odd? http://www.mathhelpforum.com/math-he...069c3f11-1.gif = $\frac{ln(-x )}{ln2}$ ? • August 9th 2009, 10:36 AM Jhevon Quote: Originally Posted by live_laugh_luv27 http://www.mathhelpforum.com/math-he...19f85201-1.gif = $\sqrt{-x}$ = odd ?! are you telling me that, for example, $\sqrt 2 = \sqrt {-2}$? and if so, does that comply with the definition of an odd function as defined in my and Plato's posts? Quote: http://www.mathhelpforum.com/math-he...84e8019b-1.gif = $2^{-x}$ = odd? interesting. say $x = 1$, is it true that $2^1 = 2^{-1}$ or $-2^1 = 2^{-1}$ ? Quote: http://www.mathhelpforum.com/math-he...069c3f11-1.gif = $\frac{ln(-x )}{ln2}$ ? 
tell me, what's $\log_2 (-5)$, say • August 9th 2009, 10:52 AM live_laugh_luv27 http://www.mathhelpforum.com/math-he...19f85201-1.gif = http://www.mathhelpforum.com/math-he...a3ae9cbf-1.gif = $-\sqrt{x}$ = $-f(x)$ ? http://www.mathhelpforum.com/math-he...2950423e-1.gif is not true http://www.mathhelpforum.com/math-he...f1a02f81-1.gif is $\frac{ln(-5)}{ln(2)}$ ? • August 9th 2009, 10:57 AM Jhevon Quote: Originally Posted by live_laugh_luv27 http://www.mathhelpforum.com/math-he...19f85201-1.gif = http://www.mathhelpforum.com/math-he...a3ae9cbf-1.gif = $-\sqrt{x}$ = $-f(x)$ ? http://www.mathhelpforum.com/math-he...2950423e-1.gif is not true http://www.mathhelpforum.com/math-he...f1a02f81-1.gif is $\frac{ln(-5)}{ln(2)}$ ? ok, as far as real numbers are concerned, the square root function and the logarithm function are not defined for negative real numbers.... you should know this. please look this up and make sure you get it and clearly $2^1 \ne 2^{-1}$ nor does $-2^1 = 2^{-1}$ so that $f(x) = 2^x$ is neither even nor odd.
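The whole thread boils down to applying the definitions $f(-x) = f(x)$ (even) and $f(-x) = -f(x)$ (odd) while keeping the domain in mind. A hedged Python sketch that tests these identities numerically at a few sample points (a heuristic check, not a proof):

```python
import math

def classify(f, samples=(0.5, 1.0, 2.0, 3.7)):
    """Heuristically classify f as 'even', 'odd', or 'neither' by
    comparing f(-x) with f(x) and -f(x) at a few sample points."""
    try:
        even = all(math.isclose(f(-x), f(x)) for x in samples)
        odd = all(math.isclose(f(-x), -f(x)) for x in samples)
    except ValueError:
        # f(-x) undefined for x > 0 (e.g. sqrt or log): the domain is
        # not symmetric about 0, so f can be neither even nor odd.
        return "neither"
    if even:
        return "even"
    if odd:
        return "odd"
    return "neither"

print(classify(lambda x: x**2))   # even
print(classify(math.sin))         # odd
print(classify(math.sqrt))        # neither: undefined for x < 0
print(classify(lambda x: 2**x))   # neither
```

Note how the square root and logarithm fail for the domain reason Jhevon points out, while $2^x$ fails both identities outright.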
## Friday, March 31, 2017 ### lecture 26: double integrals with polar coordinates I'll compute the center of mass for a half disk and set up a double integral that gives the polar moment of inertia for a disk that is offset from the origin. ## Wednesday, March 29, 2017 ### exam two results (updated) Last Thursday, 157 brave calcunauts took the second exam. The average is now 77.6 and the quartile scores are 70, 79, and 87.7. This outcome is nearly identical to exam one! If you are unhappy with your score, talk to your discussion leader, or me. Make an appointment if you can't attend our office hours. ### lecture 25: a trig substitution We'll use double integrals to calculate some volumes where the region of integration is a sector of a circle. A smart way to evaluate these integrals is to make an inverse substitution using the polar coordinate system (13.3). The Jacobian determinant will make an appearance. ## Monday, March 27, 2017 ### lecture 24: double integrals on non-rectangular regions We'll work on visualizing regions of integration and reversing the order of integration (13.2). We may even talk about rewriting double integrals using trig substitutions (13.3). ## Thursday, March 23, 2017 ### exam 2 solutions 1. Krishna Sai Chemudupati found a major mistake in my solution to problem 9. The mistake is fixed (4/24 at 9:25 pm). While the exam is still fresh in your mind, take a look at my solutions. Please let me know if you find any bogus math or fuzzy explanations. As Bang says, "My head hurts when I look at your answers." ### effective learning techniques An engineering professor answers the question "Which is the most effective learning technique you have experienced so far?" ... I didn’t just do a math homework problem and turn it in. Instead, particularly if it was an important homework problem, I would work it and rework it fresh, spacing the practice out over several days. I wouldn’t peek at the answer unless I absolutely had to. 
That ensured I really could solve the problem myself—that I wasn’t just fooling myself that I knew it. After I was comfortable that I could really solve the problem by myself on paper, I then “went mental,” practicing the steps in my mind until the solution could flow like a sort of mental song. I could perform this kind of mental practice at times people often don’t think to use for studying—like in the shower, or when I was walking to class. I found that this attention to chunking eventually gave me sort of magic powers—I could glance at many problems, even ones I’d never seen before, and know virtually instantly how to solve them. ### exam two approaches! Exam two begins this afternoon (Thursday, March 23) at 5:15 pm. This exam covers section 11.6 and all of chapter 12 (except for 12.3). No electronic devices are allowed at the exam. We will provide you with this equation sheet. Be careful, some facts are not on the equation sheet. You need to know how to compute the dot and cross products, how to integrate and differentiate, and understand the properties of the gradient vector. You should also know the cosine and sine of common angles like $0$, $\pi/6$, $\pi/4$, $\pi/3$, $\pi/2$, $\pi$, and $2\pi$ radians. This exam is your opportunity to demonstrate to us that you understand the material. Be sure to read each question carefully, and draw sketches where appropriate. We expect complete solutions and correct notation. Be careful with the T/F questions; think, don't react. Your exam room is a function of the first four letters of your last name. • Aaaa through Hanc, go to CR 302 • Hans through Pont, go to CR 306 • Post through Zzzz, go to CR 310 • We are sharing the rooms with Calculus I and II students. Make sure you are not sitting next to another Calculus III student. To practice for the exam, use the problems from MyMathLab, discussion, and the mock exams and examples from the text. 
If you don't understand something, ask questions at your discussion section and during our office hours. ## Wednesday, March 22, 2017 ### uncertain office hours for march 22 I have an appointment this afternoon that interferes with my office hours. I'll try to get back as soon as possible. ### lecture 22: volumes by double integrals We'll use double integrals to calculate the volume that lies above a rectangle (13.1) in the $xy$-plane and beneath a surface $z=f(x,y)$. If there is time, we'll also create a double integral that gives the volume above a triangle (13.2) and below the surface. ## Monday, March 20, 2017 ### lecture 21: lagrange multiplier method refresher I'll work two or three examples where we look for the extreme values of functions on curves or surfaces. ## Friday, March 10, 2017 ### lecture 20: lagrange multiplier method In calc I a function that is continuous on a closed interval is guaranteed to have an absolute maximum and an absolute minimum value on the interval. We'll chat about analogs to closed intervals in $\mathbb{R}^2$ and $\mathbb{R}^3$. And, we'll solve some problems (12.9) using gradient vectors to find the extreme value(s) of various functions on sets of points in both $\mathbb{R}^2$ and $\mathbb{R}^3$. A tiny amount of calculus will occur. ## Wednesday, March 8, 2017 ### lecture 19: critical points and the second derivatives test We'll use the gradient vector to identify and classify critical points for a function of two variables: $f(x,y)=x^3+y^3-3xy$. Once we know how the function works we'll use algebra to make sure we located all the critical points, and the second derivatives test to check our interpretations of those critical points (12.8). ## Tuesday, March 7, 2017 ### the big picture In chapter 11 we learned to describe lines and curves in $\mathbb{R}^3$ using vector functions of the sort $\vec{r}(t)$. These vector functions have one independent variable, $t$, because curves are one dimensional. 
The derivative of the vector function is tangent to, or parallel to, the space curve. In chapter 12 we describe surfaces in $\mathbb{R}^3$ with single Cartesian equations that depended on some combination of $x$, $y$, and $z$. If the equation of the surface is expressed as $g(x,y,z)=0$ (or any constant) then the gradient of the function $g(x,y,z)$ is normal to, or perpendicular to, the surface. Both tangent and normal vectors are used in the final weeks of the semester, when we integrate the tangential component of some vector field along a curve, or the normal component of another vector field over a surface. ## Monday, March 6, 2017 ### lecture 18: tangent planes and differentials We'll use the gradient vector to attach tangent planes to surfaces that are described by implicit or explicit equations. The tangent plane lies very close to the surface at points near the point of attachment, so the tangent plane equation can be rearranged to give a linear approximation and a total differential. We'll work three examples. ## Friday, March 3, 2017 ### lecture 17: properties of the gradient vector We'll finish the example problem from Wednesday and then look at the properties of the gradient vector (12.6). They are: 1. The range of the directional derivative is $-| \, \vec{\nabla}f \,| \le D_{\hat{u}}f \le | \, \vec{\nabla}f \,|$. 2.  $\vec{\nabla}f$ is the direction in which $f$ increases most rapidly, aka the direction of maximum increase. 3. $-\vec{\nabla}f$ is the direction in which $f$ decreases most rapidly, aka the direction of maximum decrease. 4. $\vec{\nabla}f$ is perpendicular to level curves of $f(x,y)$ in $\mathbb{R}^2$ or level surfaces of $f(x,y,z)$ in $\mathbb{R}^3$. The fourth property gives us a spiffy way to create tangent planes to surfaces. And, tangent planes are a gateway to linear approximations. ## Wednesday, March 1, 2017 ### lecture 16: chain rule and directional derivatives I'll work two more chain rule examples. 
In one case, we'll rid the world of the scourge of implicit differentiation (12.5). Then we'll find the rate of change of a function in an arbitrary direction in the function's domain. Dot products will appear as will an amazing vector, the gradient vector, that is constructed from the first derivatives of the function (12.6).
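The property $D_{\hat{u}}f = \vec{\nabla}f \cdot \hat{u}$ from lecture 17 is easy to verify numerically. A sketch using central differences on the lecture-19 example function (the sample point and direction are my own choices, for illustration):

```python
import math

def grad(f, x, y, h=1e-6):
    """Numerical gradient of f(x, y) via central differences."""
    fx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    fy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return (fx, fy)

def directional_derivative(f, x, y, u, h=1e-6):
    """Rate of change of f at (x, y) in the unit direction u."""
    return (f(x + h * u[0], y + h * u[1])
            - f(x - h * u[0], y - h * u[1])) / (2 * h)

f = lambda x, y: x**3 + y**3 - 3 * x * y   # the lecture-19 example
x0, y0 = 1.0, 2.0
u = (1 / math.sqrt(2), 1 / math.sqrt(2))   # unit vector at 45 degrees

gx, gy = grad(f, x0, y0)            # analytically (-3, 9) at (1, 2)
lhs = directional_derivative(f, x0, y0, u)
rhs = gx * u[0] + gy * u[1]         # D_u f = grad(f) . u
print(abs(lhs - rhs) < 1e-5)        # True
```

By hand: $\vec{\nabla}f = (3x^2 - 3y,\ 3y^2 - 3x) = (-3, 9)$ at $(1, 2)$, so $D_{\hat{u}}f = 6/\sqrt{2} \approx 4.24$, matching the numerical value.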
# BIOS 135 Week 6 Quiz

This file of BIOS 135 Week 6 Quiz shows the solutions to the following problems:

1. Which of these crosses will only produce heterozygous offspring?
2. A true-breeding plant that produces yellow seeds is crossed with a true-breeding plant that produces green seeds. The seeds of all of the offspring are yellow. Why?
3. What is the genotype of an individual who is heterozygous for dimples?
4. In eukaryotic cells, repressor proteins inhibit transcription by binding to ______.
5. Repressors block binding of RNA polymerase by attaching to ______.
6. After replication, ______.
7. Ethical dilemmas raised by DNA technology and knowledge of the human genome include ______.
8. The end result of recombinant technology is a transgenic bacterium with a human gene that codes for marketable quantities of a human gene product. However, molecular biologists frequently have problems with the product. One problem might be ______.
9. Inheritance of certain genes increases the risk of getting certain cancers; thus, it can be said that ______.
10. Transcription factors attach to ______.
11. A father who is a hemophiliac marries a woman who does not carry the disease. While pregnant with their first child, the ultrasound reveals it will be a boy and the tech tells them that they are very lucky, because now they know their child will not be a hemophiliac! Explain how they know this.
# Soft question: Lucrative careers for applied mathematics [closed]

I am currently a math/CS undergraduate. I enjoy applied math a lot and want to go to graduate school for it. I've been thinking that I will do a PhD, but I also know I don't want to stay in academia, so recently I've been reevaluating my options and thinking about whether to just pursue a Master's instead. To put it bluntly, my goal is to make a lot of money. Not to be a billionaire, but I'd like to get to a $500K salary down the road. I realize this is neither at all easy nor at all likely, because otherwise anyone would do it. I just want to know how to maximize my chances of attaining this goal.

Some careers for highly educated math people with high earnings potential:

- Finance
- Actuary
- Data Science
- Software
- Consulting

However, I've realized that out of these, only data science, finance, and consulting would even care about a PhD in applied math vs. a master's, and even in those it's possible for just an MS to get a job. Should I forget about the PhD? I don't want to spend more than 4-5 years on it, and though I love applied math, I know a career in academia is not what I want. Next, are there other fields for applied mathematics/CS people with a high earnings potential? Finally, which field should I pick to maximize my likelihood of attaining an eventual $500k salary relatively quickly? (Again, I realize that this likelihood is still very small.) 
Things that I believe mean that my plan has at least a nonzero chance of success:

- Strong coding skills, lots of CS experience including internships
- Top undergraduate univ, top graduate univ [if I don't get into top PhD programs I will just do a master's at a top school, as master's programs are easier to get into and I have basically a guarantee of getting into the master's program at my current school if nothing else]
- Strong communication skills
- Drive and willingness to adapt (sounds stupid, but is still true)

- ## closed as off-topic by Chris Janjigian, Claude Leibovici, Brian Fitzpatrick, Semsem, Daryl Mar 18 at 7:45 This question appears to be off-topic. The users who voted to close gave this specific reason: • "This question is not about mathematics, within the scope defined in the help center." – Chris Janjigian, Claude Leibovici, Brian Fitzpatrick, Semsem, Daryl If this question can be reworded to fit the rules in the help center, please edit the question. Hate to say it but people who pull down 500k a year usually have no college degree at all. Think Bill Gates, Mark Zuckerberg, Steve Jobs, LeBron James... –  Vladhagen Mar 18 at 3:36 But those guys are billionaires (or 100-millionaires). I'm not looking to become one, because realistically the probability of that is basically zero. I'm not the entrepreneurial type and I don't want to essentially 'play the lottery' by joining a random startup. There are a non-insignificant number of people in the fields I listed that do make that kind of money, and I want to know how to become one of those people. –  antawn Mar 18 at 3:38 It is not a recipe. –  copper.hat Mar 18 at 3:50 This isn't really the point of my question. Obviously there is no set recipe, otherwise - as I said - everyone would do it. However, as with anything, there are things you can do to increase your chances. 
Were I to do a PhD in a completely abstract area, never seek an internship, and get poor grades, I would be reducing my chances, I think. Conversely, there must be some things that increase it. –  antawn Mar 18 at 4:20 I'm passionate about applied mathematics and certain types of programming. I don't have a real preference within that subfield. Why shouldn't I pursue a good life for myself as well? Also, you might like to think that, but that's certainly not true for everyone - believe me, I know more counterexamples than people who validate your hypothesis. Also, I'm hardly wasting my time. It took about 5 minutes to write this question and all these comments. It's not like anybody does what they're passionate about every waking moment of every day. –  antawn Mar 18 at 5:11 If you do get the PhD, the reason will be to get a job in finance. $200k starting salary, plus bonuses, is par for the course. The other option is the startup route, where your goal is to get bought out after a few years. Founding a startup carries a lot more risk than joining a hedge fund, but will probably be more fun, you don't have to invest years getting a PhD that is otherwise useless to you, and you get to keep your soul. Here you will be leaning more on your CS skills than math skills. In terms of specific fields to look at, machine learning (/big data) is super hot right now, if crowded. In my personal opinion, if I had to give you the 2014 versions of "plastics," it would be "3D printing." - Okay. Are you sure though? I read that D.E. Shaw only hires PhD's as quants. I doubt I could get into D.E. Shaw itself, but are they an exception to the rule? (E.g. quantstart.com/articles/…) – antawn Mar 18 at 4:53 @antawn You're right, if you want a quant job, you do need a PhD, for better or worse. I've amended my answer. – user7530 Mar 18 at 4:59 OK. And just to clarify, I do like research. I just don't want to spend my whole life in academia. 
– antawn Mar 18 at 5:09 From another math/cs undergrad: Don't go for a PhD if your ultimate goal is to earn money. Just my 2 cents. - What path would you suggest instead? A masters and then what? Or not even a masters? – antawn Mar 18 at 3:41 That I have no idea (I'm just an undergrad student like you). I'm assuming that it'd depend on the field you plan on entering. – Junichi Koganemaru Mar 18 at 3:43 Honestly, I don't care as long as it involves math - whichever one maximizes my earning potential. – antawn Mar 18 at 3:44 I disagree with this answer: a PhD can significantly increase pay. – qwr Mar 18 at 4:10 ^What types of companies like math PhDs? – antawn Mar 18 at 4:21 I believe it's extremely unlikely you'll get \$500k doing any type of math job. In fact, almost no job will get you \$500k (almost greedy), even if you're an anesthesiologist or heart surgeon, unless you're at the very top of your field. In my opinion, you won't be able to get the top salaries in finance without some business or managing degree. Being a statistician or an aerospace engineer for a Fortune 500 company can earn quite a bit of money. I think the closer you are to actual number crunching, the less you'll make because people can replace you with software. A PhD, versus a master's, does go quite far toward a higher salary. Disclaimer: This is just based on people I know. - I think usually the numbers have PhD's making less, on average, than those with master's. – Charles Mar 18 at 4:55 Being a statistician would interest me. What types of jobs are there for statisticians at "Fortune 500s"? I don't think your \$500k outlook for general jobs is totally accurate, because I know (well) a few established surgeons that exceed that number, but they aren't at the top of their field or anything. –  antawn Mar 18 at 4:57
OBJECTIVE—To compare the cost-effectiveness of different type 2 diabetes screening strategies using population-based data (KORA Survey; Augsburg, Germany; subjects aged 55–74 years), including participation data. RESEARCH DESIGN AND METHODS—The decision analytic model, which had a time horizon of 1 year, used the following screening strategies: fasting glucose testing, the oral glucose tolerance test (OGTT) following fasting glucose testing in impaired fasting glucose (IFG) (fasting glucose + OGTT), OGTT only, and OGTT if HbA1c was >5.6% (HbA1c + OGTT), all with or without first-step preselection (p). The main outcome measures were costs (in Euros), true-positive type 2 diabetic cases, and incremental cost-effectiveness ratios (ICERs), evaluated from both the third-party-payer and societal perspectives. RESULTS—After dominated strategies were excluded, the OGTT and HbA1c + OGTT from the perspective of the statutory health insurance remained, as did fasting glucose + OGTT and HbA1c + OGTT from the societal perspective. OGTTs (€4.90 per patient) yielded the lowest costs from the perspective of the statutory health insurance and fasting glucose + OGTT (€10.85) from the societal perspective. HbA1c + OGTT was the most expensive (€21.44 and €31.77) but also the most effective (54% detected cases). ICERs, compared with the next less effective strategies, were €771 from the statutory health insurance perspective and €831 from the societal perspective. In the Monte Carlo analysis, dominance relations remained unchanged in 100 and 68% (statutory health insurance and societal perspective, respectively) of simulated populations. CONCLUSIONS—The most effective screening strategy was HbA1c combined with OGTT because of high participation. However, costs were lower when screening with fasting glucose tests combined with OGTT or OGTT alone. The decision regarding which is the most favorable strategy depends on whether the goal is to identify a high number of cases or to incur lower costs at reasonable effectiveness. 
Undetected diabetes may be as prevalent as diagnosed type 2 diabetes (1,2). In a population-based study in Germany, the prevalence of known diabetes was 8.4% among 55- to 74-year-old subjects and 8.2% had previously undiagnosed diabetes (3). There is a lack of data on the effectiveness of type 2 diabetes screening with respect to reduced morbidity or mortality (4). Nevertheless, the topic is widely discussed, particularly in regards to subjects aged ≥45 years (5–7). Several screening strategies have been suggested, including fasting glucose, oral glucose tolerance, or HbA1c testing and preceding risk factor assessment (8–10). Although there are a variety of recommendations that screening for type 2 diabetes should be implemented, there has been limited consideration of the economic aspects involved (11–13). A 1998 study (14) considered quality-adjusted life-years gained as an outcome measure, but this was based on type 1 diabetes data. There are only two existing studies that investigated different screening procedures (15,16). Neither, however, evaluated incremental cost-effectiveness or considered the incomplete participation of the target population in screening programs. The aim of our study was to evaluate the cost-effectiveness of type 2 diabetes screening for several recommended strategies. The outcome measure was costs (in Euros) per correctly identified diabetic subject. The economic evaluation used carefully assessed primary data from a population-based study conducted in southern Germany (3). We also used a population practice study to consider the participation of subjects in screening programs. We created a cost-effectiveness model for screening a population-based sample of subjects, aged 55–74 years and who had not been previously diagnosed with diabetes, for type 2 diabetes within 1 year. We compared four strategies. 1) In fasting glucose testing only, diabetes was assumed when fasting glucose was ≥7.0 mmol/l (8). 
2) In fasting glucose + OGTT, when fasting glucose was ≥6.1 and <7.0 mmol/l (impaired fasting glucose [IFG]), an OGTT was performed during a second visit. Diabetes was considered when fasting glucose was ≥7.0 mmol/l or the 2-h postglucose load value was ≥11.1 mmol/l (9). 3) In OGTT only, diabetes was considered when fasting glucose was ≥7.0 mmol/l or the 2-h postglucose load value was ≥11.1 mmol/l (9). 4) In HbA1c + OGTT, if HbA1c was >5.6% (10), then an OGTT was performed during a second visit. For strategies 1–3, we assumed the necessity of separate visits, since fasting glucose tests and OGTTs require a fasting state, whereas HbA1c measurements can be done during a regular visit. Nearly all subjects aged ≥55 years in Germany have at least one instance of contact with the health care system during 1 year. We considered two different models (yielding a total of eight screening procedures for evaluation). In model A, all subjects of the screening population were included for the above-mentioned screening strategies. In model B, a first-step preselection (p) was performed (pfasting glucose, pfasting glucose + OGTT, pOGTT, and pHbA1c + OGTT). Further screening was carried out only among subjects who fulfilled at least one of the following criteria: family history of type 2 diabetes, obesity (BMI >30 kg/m2), hypertension (blood pressure >140/90 mmHg), or fasting triglycerides >2 mmol/l. Since actual data on the selection criteria are assumed to be unavailable for the majority of patients, patient assessment was considered the first step of the screening program and was associated with screening costs. ### Main outcome measures The main outcomes were screening costs, the number of newly detected true-positive cases of type 2 diabetes related to the whole screening population, and incremental cost-effectiveness ratios (ICERs).
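As a rough illustration (not the authors' code), the expected yield and cost of a single-test strategy in this model can be sketched as participation × prevalence × test sensitivity, and as the participation-weighted test cost. The numbers below are the OGTT-only values from Tables 1 and 2 (whole population, no preselection) and reproduce the 0.027 detected cases and €4.90 per study subject reported in Table 3:

```python
# Minimal sketch of the one-year screening model for a single-test strategy.
# Expected true positives per subject = participation x prevalence x sensitivity;
# expected cost per subject = participation x cost per participant.

def expected_outcomes(participation, prevalence, sensitivity, cost_per_participant):
    """Return (detected cases per subject, cost per subject) for one strategy."""
    detected = participation * prevalence * sensitivity
    cost = participation * cost_per_participant
    return detected, cost

# OGTT-only: 30% participation, 8.9% prevalence of undiagnosed diabetes,
# sensitivity 1.0 (the OGTT is the diagnostic gold standard), EUR 16.34 per test.
detected, cost = expected_outcomes(0.30, 0.089, 1.0, 16.34)
print(f"OGTT: {detected:.4f} cases and EUR {cost:.2f} per screened subject")
```

The two-stage strategies (fasting glucose + OGTT, HbA1c + OGTT) follow the same logic but chain a second participation and conditional prevalence onto the first test.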
Furthermore, we analyzed the percentage of identified diabetic cases in relation to all subjects with previously undiagnosed diabetes in the study population for each screening strategy. ### Determining cost-effectiveness The economic evaluations were conducted from the perspective of the statutory health insurers who cover the direct costs of the screening program (a third-party payer system) as well as from a societal viewpoint, which is the most comprehensive viewpoint, since it covers both direct and indirect costs. Economic evaluations were performed as cost-effectiveness analyses. Because of the short-term perspective of this study, discounting was not required. We compared screening strategies using ICERs (additional costs were divided by the additional effect when a screening strategy was compared with the next less expensive or less effective one). We ruled out strategies that were less effective and more expensive than others (dominated) and those with lower effectiveness and a higher ICER (extended dominance) (17). ### Clinical and epidemiological data and survey estimation The clinical and epidemiological parameters are presented in Table 1. With the exception of the proportions of participation, which were derived from a population-based practice study in the U.K. (18), all data stem from the population-based KORA (Co-operative Health Research in the Region of Augsburg) Survey 2000 (3). The KORA Survey population was selected as a stratified sample from the city of Augsburg, Germany, and the surrounding districts (southern Germany, population of ∼600,000 in 1999). A total of 1,653 of 2,656 subjects aged 55–74 years (62%) could be included. After 131 subjects with known diabetes were excluded, 1,522 remained eligible, 1,353 of whom completed an OGTT. All data from the KORA Survey were calculated, accounting for the sample design. Cost data are provided in Table 2. Direct medical costs included practitioner and laboratory testing fees. 
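The ICER comparison and dominance rules defined above can be sketched as follows. This is a simplified illustration, not the study's actual SAS/Stata code; the inputs are the rounded Table 3 values (statutory health insurance, whole population), which reproduce the published dominance pattern and an ICER close to, though not exactly, the published €771:

```python
# Hedged sketch of incremental cost-effectiveness analysis with dominance
# and extended-dominance filtering. Tuples: (name, cost/subject EUR, cases/subject).
strategies = [
    ("fasting glucose",        5.17, 0.018),
    ("fasting glucose + OGTT", 5.80, 0.023),
    ("OGTT",                   4.90, 0.027),
    ("HbA1c + OGTT",          21.44, 0.048),
]

def efficient_frontier(strats):
    """Drop strongly dominated strategies, then those ruled out by extended dominance."""
    s = sorted(strats, key=lambda t: t[1])  # order by cost
    # strong dominance: at least as costly as a strategy that detects more cases
    s = [x for x in s
         if not any(y[1] <= x[1] and y[2] > x[2] for y in s if y is not x)]
    # extended dominance: remove a strategy whose ICER exceeds that of the next one
    changed = True
    while changed:
        changed = False
        for i in range(1, len(s) - 1):
            icer_here = (s[i][1] - s[i-1][1]) / (s[i][2] - s[i-1][2])
            icer_next = (s[i+1][1] - s[i][1]) / (s[i+1][2] - s[i][2])
            if icer_here > icer_next:
                del s[i]
                changed = True
                break
    return s

frontier = efficient_frontier(strategies)
for prev, cur in zip(frontier, frontier[1:]):
    icer = (cur[1] - prev[1]) / (cur[2] - prev[2])
    print(f"{cur[0]} vs {prev[0]}: EUR {icer:.0f} per additional detected case")
```

With these rounded inputs, both fasting glucose strategies are dominated by the OGTT, leaving OGTT and HbA1c + OGTT on the frontier, as in Table 3.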
Medical costs were calculated using a price scale set by the German health care system (Einheitlicher Bewertungsmaßstab, average point value in 2002 of €0.04). When considering the societal perspective, costs were calculated as an approximation of opportunity costs. The calculation also took the productivity losses of patients (time away from work to visit the doctor) into account. We calculated a total of 1 h for each separate visit. For the OGTT, however, we considered an additional 2 h required for the test. We assessed productivity losses using the human capital approach, where the average labor cost of an employee was our approximate measure (17). We estimated the time cost of nonworking and retired subjects by applying the replacement approach (19). The proportions of retired subjects as well as fees for the general working population were derived from German statistics, available through a personal communication from the North Rhine-Westphalia Statistics Bureau and from published sources (20,21). All cost data were calculated for 2002 prices and given in Euros (31 December 2002: $1 U.S. = €1.12347). ### Sensitivity analysis We varied the following input parameters (Tables 1 and 2): 1) the prevalences of disturbed glucose metabolism (type 2 diabetes, IFG, and/or elevated HbA1c), 2) the proportions of participation for the separate visits, and 3) the labor costs included in the analyses from the societal viewpoint. In the univariate sensitivity analyses, we decreased and increased baseline values of the input variable by 20% each. We conducted a multivariate sensitivity analysis with simultaneous random variation of the parameters using a Monte Carlo simulation with 1,000 iterations. We entered ranges for the input data according to the KORA Survey, the U.K. practice study, or estimates (for social costs) (Table 2). We assumed that distributions were either binomial or log normal (for social costs).
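A minimal sketch of such a Monte Carlo check follows. It is a hypothetical simplification of the authors' simulation: only the effectiveness side is modeled, the Table 1 proportions are redrawn as binomial samples of the KORA sample size, and the cost side and log-normal social-cost draws are ignored:

```python
# Simplified Monte Carlo sensitivity sketch (assumption: effectiveness only).
# For each simulated population, redraw participation and prevalence values
# binomially and check whether HbA1c + OGTT stays more effective than OGTT alone.
import random

random.seed(0)
N = 1353          # KORA subjects without known diabetes who completed an OGTT
ITERATIONS = 1000

def draw_proportion(p, n=N):
    """Binomial draw of a proportion around base value p."""
    return sum(random.random() < p for _ in range(n)) / n

hba1c_more_effective = 0
for _ in range(ITERATIONS):
    part_ogtt = draw_proportion(0.30)            # OGTT participation
    prev_dm = draw_proportion(0.089)             # prevalence of undiagnosed diabetes
    prev_hba1c = draw_proportion(0.462)          # prevalence of HbA1c > 5.6%
    part_after_hba1c = draw_proportion(0.72)     # OGTT uptake after elevated HbA1c
    prev_dm_in_hba1c = draw_proportion(0.144)    # diabetes prevalence given HbA1c > 5.6%
    effect_ogtt = part_ogtt * prev_dm
    effect_hba1c = prev_hba1c * part_after_hba1c * prev_dm_in_hba1c
    if effect_hba1c > effect_ogtt:
        hba1c_more_effective += 1

print(f"HbA1c + OGTT more effective in {hba1c_more_effective / ITERATIONS:.0%} of runs")
```

At the base values this gives 0.048 vs. 0.027 detected cases per subject, so HbA1c + OGTT remains more effective in essentially all simulated populations, consistent with the stability reported in RESULTS.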
We fitted multiplicative regression models with and without interaction to estimate an approximate multiplicative equation among costs, effects, cost-effectiveness ratios (CERs) compared with “no intervention” (average CERs), and the varied parameters. The relative relations of the four strategies to one another (dominance or extended dominance) were analyzed systematically on the simulated datasets to evaluate the proportion of simulated populations in which these relations would change. Further details of the sensitivity analyses are included in the online appendix (available at http://care.diabetesjournals.org). ### Screening costs for the whole target-age population in Augsburg and surrounding districts We estimated the total screening costs for inhabitants aged 55–74 years in Augsburg, Germany, and the surrounding districts based on the costs listed in the model (adjusted for sex and age; sample design based). We performed all analyses using SAS for UNIX (version 8.2; SAS Institute, Cary, NC) and Stata Statistical Software (version 7.0; Stata, College Station, TX). ### Detected subjects with undiagnosed diabetes The proportions of subjects with undetected type 2 diabetes per screening strategy among the study population are shown in Table 3. The pfasting glucose strategy (fasting glucose testing after preselection) delivered the lowest percentage of detected cases. Using HbA1c + OGTT (HbA1c testing combined with an OGTT in the whole screening population without first-step preselection) as a screening strategy was the most effective in detecting cases. The age and cardiovascular risk profile (BMI, blood pressure, triglycerides, and HDL cholesterol) of the subjects identified by the selected screening strategies were very similar (data not shown). ### Costs of screening and diagnostic testing The costs of the various screening strategies are presented in Table 3 and Fig. 1. There was a large variation in costs.
The highest costs per study subject were incurred by the HbA1c + OGTT strategy because of the large number of subjects entering this screening pathway (100% participation in the HbA1c testing). ### Cost-effectiveness The strategies with a first-step preselection were all dominated and could be ruled out, both from the perspective of the statutory health insurance and from the societal viewpoint (Table 3 and Fig. 1). Fasting glucose testing and fasting glucose + OGTT were dominated by the OGTT strategy (which was more effective and incurred lower costs) and could be excluded when considered from the perspective of the statutory health insurance. HbA1c + OGTT was more effective than OGTT but incurred higher costs. ICERs considered from the societal viewpoint show that fasting glucose testing was dominated by fasting glucose + OGTT as a screening strategy and OGTT by HbA1c + OGTT (both extended dominance). Among the remaining strategies, fasting glucose + OGTT incurred lower costs but was also less effective than HbA1c + OGTT (Fig. 1). ICERs (additional costs per additional detected case) from both perspectives, statutory health insurance and society, are presented in Table 3. ### Sensitivity analysis HbA1c + OGTT remained the most effective and most expensive strategy in all 1,000 populations generated by the Monte Carlo simulation (Table 4). After a systematic analysis of dominance, the relations among the strategies, and hence the decisions about ruling out or selecting strategies, remained unchanged in all of the simulated datasets from the perspective of the statutory health insurance and in 68.3% from the societal perspective. The results of the regression analysis show that the variation of the parameters included in the sensitivity analyses had only moderate effects on the costs, effects, and CERs of the different strategies. The variation in prevalence of disturbed glucose metabolism had the largest effect on CERs.
This was particularly true for the HbA1c + OGTT strategy (each 20% decrease and increase of prevalence resulted in a 1.40-fold increase and a 0.76-fold decrease, respectively, of CERs from the societal perspective). The results remained unchanged when we included interaction terms (data not shown). Figure 1 demonstrates the minimum and maximum costs and effects resulting from the univariate sensitivity analyses of the prevalence of disturbed glucose metabolism (dotted lines). ### Estimation of screening costs in the whole target-age population of the study region (Augsburg and surrounding southern Germany) The population of 55- to 74-year-olds in Augsburg and its surrounding regions was 123,226 in the year 2000. From the KORA Survey (3), we calculated that 10,351 subjects had known diabetes and that 10,105 would have previously undiagnosed diabetes in this age-group. From the perspective of the statutory health insurance, the costs of screening (having ruled out the already diagnosed subjects) for previously undiagnosed diabetes were €503,779 using the OGTT as a screening strategy and €2,203,502 using HbA1c + OGTT. From the societal perspective, these costs were €1,115,142 for the fasting glucose + OGTT strategy and €3,264,646 for HbA1c + OGTT. The fasting glucose + OGTT strategy detected a total of 2,351 true cases, the OGTT 2,736 cases, and HbA1c + OGTT 4,939 cases. We present a decision analytic model for the evaluation of the cost-effectiveness of a number of screening procedures for type 2 diabetes, using population-based data from southern Germany. The OGTT incurred the lowest costs from the perspective of the statutory health insurance system, and a combination of the OGTT and fasting glucose testing incurred the lowest costs from the perspective of society (after dominated strategies were ruled out).
However, both of these screening strategies detected only about one-third (OGTT alone) and one-fourth (fasting glucose + OGTT) of subjects with previously undiagnosed diabetes. HbA1c testing followed by an OGTT in those subjects who proved to have elevated HbA1c yielded the highest rate of detected type 2 diabetes (more than half of all cases detected). This strategy, however, incurred the highest costs. The strategies performed after a first-step preselection of subjects (considered at high risk for type 2 diabetes) were all ruled out because of the additional screening costs involved in preselection. The high effectiveness of HbA1c testing combined with the OGTT in detecting previously undiagnosed diabetes can be explained by the complete participation of all subjects in HbA1c testing (which requires no special visit to the doctor but can be performed as an extension of another scheduled visit). The study results remained rather stable in the sensitivity analyses. However, if a participation level of >59.5% for fasting glucose testing and >54.5% for the OGTT were achieved, an OGTT alone would be the most effective strategy and would dominate HbA1c testing combined with an OGTT (data not shown). Such high participation rates for fasting glucose testing and the OGTT can probably be achieved only in a study setting. The participation levels in the present study, which were taken from the results of a practice study in the U.K. (30–35% participation), seem to be reasonable estimates of participation and are transferable across regional settings (18). In Germany, participation in a health check that included fasting glucose testing offered by the statutory health insurance was ∼20–25% (22). It is difficult to compare our results with data from other studies. A U.S. study reported $758 (U.S.) in screening costs per truly detected case of diabetes from the societal perspective when using the fasting glucose test rather than no screening strategy (16).
This is a higher cost expenditure than ours, which is €499. However, country-specific CERs largely depend on unit costs. At the same time, purchasing power parities are insufficient to adjust for differences in unit costs. Therefore, results from one country cannot be directly translated to other countries, and the CERs in the present study may vary when our model is applied elsewhere. The OGTT and HbA1c testing may be more or less expensive in other health care systems. Our most important result, however, is that HbA1c testing combined with the OGTT is the most effective, as well as the most expensive, screening strategy, and this finding should not vary across health care systems. Thus, our results should also be valid in other countries. We assumed that actual information concerning BMI, blood lipids, blood pressure, and family history of diabetes would not be available for the majority of patients. This assumption is based on health care research data in Germany (22). We therefore considered it necessary to include costs for preselection procedures. If actual data were available, no preselection testing would be necessary, but the practitioner would still have to select patients at high risk for diabetes screening, resulting in added cost. Generally, however, preselection reduces the effectiveness of all screening strategies. As in the population practice study of Lawrence et al. (18), we defined successful detection of previously undiagnosed diabetes using only one fasting glucose test or OGTT. However, the American Diabetes Association recommends that a diabetes diagnosis be followed by a confirmation test. We chose a single fasting glucose test and OGTT as the screening strategy, however, because we could use data on participation from a carefully designed general practice study (18). If we had included confirmation testing, the costs per case detected might have been higher than in the present study. Twofold testing, however, would probably reduce participation.
Several limitations of the present study must be considered. Like many other cost-effectiveness analyses of screening, we used an intermediate outcome: the number of truly positive cases detected. Including information on potential costs following the screening procedure and on the benefits of treatment would provide a more complete picture of the cost-effectiveness of screening for diabetes. However, no population-based data regarding the natural disease process of early detected diabetes or results describing the effectiveness of early intervention after diabetes screening are available so far (4,5). The results of the present study apply to a one-time screening situation and may not be applicable to ongoing screening. In the case of ongoing screening, the prevalence of undiagnosed diabetes would become lower. As can be concluded from the sensitivity analysis, a lower prevalence of undiagnosed diabetes would raise the costs per case identified and thus affect the CERs. However, the relation among strategies would probably remain the same. The greatest strength of the present study is that it uses highly valid population-based data, whereas previous analyses used several external data sources. The sensitivity analysis showed reasonably stable results after varying the prevalence of disturbed glucose metabolism (type 2 diabetes, IFG, and elevated HbA1c), participation, and social costs. In particular, the relation among strategies with respect to their costs and level of effectiveness remained stable in the majority of simulated populations. In general, a cost analysis cannot determine which strategies should be implemented. The choice depends on the goal of the screening program, which may be to identify as many cases of previously undiagnosed diabetes as possible or to pursue lower costs per case identified.
Cost-effectiveness analyses can only indicate which strategies are dominated by others and can therefore be ruled out, and show ICERs to determine which program is more effective although it incurs higher costs. A decision maker can use this information to choose the most suitable screening procedure for a program by taking into account the maximum amount to be spent per additional case detected. Further studies are warranted to answer the question of which screening procedure is most appropriate. To achieve better and less costly screening, participation in screening tests needs to become more widely accepted by the target population. The most important issue is to evaluate the effectiveness of early intervention in diabetic subjects. Figure 1— Cost-effectiveness for the various strategies from the perspective of the statutory health insurance. The gradient of the line reflects the incremental cost-effectiveness ratio. Dotted lines indicate variation of diabetes and pre-diabetes prevalence (each 20% decrease and increase).
Table 1— Clinical and epidemiological data in the whole and the preselected study population

Values are given as: whole population without previously diagnosed diabetes* | preselected population.
n: 1,353 | 938
Sex (% male): 47 | 49
Age (years) [mean (range)]: 64 (55–74) | 64 (55–74)
Age distribution (% aged 55–64 years): 58 | 56
Prevalences and participation
Prevalence of diabetes: 8.9 (7.3–10.5) | 11.5 (9.2–13.8)
Prevalence of IFG: 15.2 (13.6–16.9) | 19.2 (17.0–21.4)
Prevalence of diabetes in IFG subjects: 11.8 (8.3–15.3) | 12.6 (8.8–16.4)
Prevalence of HbA1c >5.6%: 46.2 (40.3–52.2) | 48.1 (42.5–53.7)
Prevalence of diabetes in subjects with HbA1c >5.6%: 14.4 (12.5–16.4) | 18.2 (15.4–21.0)
Participation at fasting glucose testing (%): 35 (estimated range 28–42) | 35 (estimated range 28–42)
Participation at OGTT (%): 30 (estimate; estimated range 24–36) | 30 (estimate; estimated range 24–36)
Participation at OGTT in IFG (%): 72 | 72
Participation at OGTT in subjects with HbA1c >5.6% (%): 72 (estimate) | 72 (estimate)
Test parameters
Sensitivity of fasting glucose testing (%): 59.0 (50.8–67.3)§ | 58.4 (49.9–67.0)§
Specificity of fasting glucose testing (%): 100 | 100
Sensitivity of OGTT (%): 100 | 100
Specificity of OGTT (%): 100 | 100
Sensitivity of fasting glucose + OGTT (%): 79.4 (72.0–86.8)§ | 79.5 (70.2–88.7)§
Specificity of fasting glucose + OGTT (%): 100 | 100
Sensitivity of HbA1c >5.6% + OGTT (%): 75.1 (62.1–88.1)§ | 75.9 (64.5–87.3)§
Specificity of HbA1c >5.6% + OGTT (%): 100.0 | 100.0
Sensitivity of HbA1c >5.6% (%): 75.1 (62.1–88.1)§ | 75.9 (64.5–87.3)§
Specificity of HbA1c >5.6% (%): 56.6 (50.8–62.4)§ | 55.5 (49.9–61.2)§

Data are percent (95% CI) unless otherwise indicated. With the exception of the participation proportions, which were derived from a practice study in the U.K. (18), all data were taken from the KORA Survey 2000 (3). Prevalences and test parameters are sample design based. *KORA Survey population without previously diagnosed diabetes. The preselected population includes subjects with at least one of the risk factors: family history of type 2 diabetes, BMI >30 kg/m2, blood pressure >140/90 mmHg, triglycerides >2 mmol/l. It is assumed that the participating population does not differ from the nonparticipants with respect to the evaluated parameters; the preselected population was considered to participate in the same proportion as the whole study population. §Not included in the sensitivity analysis. According to the definition of type 2 diabetes, fasting glucose testing has a specificity of 1.0, with the OGTT as the gold standard.
The OGTT, which defines diabetes according to the World Health Organization's 1999 criteria, has a sensitivity and specificity of 1.0 by definition.

Table 2— Cost data

Entries are given as: procedure or parameter | units, unit costs, and proportions | sources and comments.
Fasting glucose testing* | €14.78 | EBM: consultation fee (item 2), advice conversation fee (item 10), laboratory testing (fasting glucose: items 3661, 3707)
OGTT* | €16.34 | EBM: consultation fee (item 2), advice conversation fee (item 10), laboratory testing (OGTT: items 3661 ×3, 3707 ×3)
HbA1c testing | €16.00 | EBM: advice conversation fee (item 2), laboratory testing (item 3722)
Preselection testing | €12.18 | EBM: conversation fee (item 10), laboratory testing (triglycerides: item 3667)
Time required for the visit, including the subject's travel time to and from the practice (all separate visits except OGTT) | 1 h | Estimate
Time required for the OGTT visit, including the subject's travel time to and from the practice | 3 h | Estimate
Proportion of working subjects (%) | year 2000, Statistics Bureau of North Rhine-Westphalia (personal communication):
  Men aged 55–59 years: 74.5; men aged 60–64 years: 28.9; women aged 55–59 years: 47.6; women aged 60–64 years: 13.1; all aged ≥65 years: 0.0
Average labor cost per hour of an employee (assumed for working subjects) | €29.19 (estimated range 23.35–35.03) | annual labor cost for 1996 (Statistics Bureau 1999), multiplied by an annual 3% increase in the gross income of employees up to the year 2002; 2002 working hours per year
Average labor cost per hour in the civil service (assumed for subjects not working or retired) | €5.37 (estimated range 4.30–6.44) | annual labor cost for 1996 (Bureau of Civil Services 1999), multiplied by an annual 3% increase in the gross income of employees up to the year 2002; 2002 working hours per year

*Separate visit. Remaining subjects are assumed not to work or to be retired. EBM, Einheitlicher Bewertungsmaßstab (see RESEARCH DESIGN AND METHODS for details).
Table 3— Cost, effectiveness, and ICERs for the different strategies: perspective of the statutory health insurance and societal perspective

Columns: strategy | proportion of detected cases among all previously unknown diabetic subjects (%) | total costs per study subject (€) | effectiveness (detected cases per study subject) | additional costs per 1,000 of study population (€) | additional detected cases per 1,000 of study population (n) | ICER (costs per detected case).

Statutory health insurance perspective, whole screening population:
Fasting glucose test | 20.7 | 5.17 | 0.018 | — | — | Dominated
Fasting glucose + OGTT | 25.8 | 5.80 | 0.023 | — | — | Dominated
OGTT | 30.0 | 4.90 | 0.027 | Base case | Base case | Base case
HbA1c >5.6% + OGTT | 54.1 | 21.44 | 0.048 | 16,539 | 21.4 | 771
Statutory health insurance perspective, preselected population:
Fasting glucose test | 18.2 | 15.76 | 0.016 | — | — | Dominated
Fasting glucose + OGTT | 22.9 | 16.31 | 0.021 | — | — | Dominated
OGTT | 26.7 | 15.58 | 0.024 | — | — | Dominated
HbA1c >5.6% + OGTT | 48.6 | 27.18 | 0.044 | — | — | Dominated
Societal perspective, whole screening population:
Fasting glucose test | 20.7 | 8.98 | 0.018 | — | — | Dominated (extended)
Fasting glucose + OGTT | 25.8 | 10.85 | 0.023 | Base case | Base case | Base case
OGTT | 30.0 | 14.68 | 0.027 | — | — | Dominated (extended)
HbA1c >5.6% + OGTT | 54.1 | 31.77 | 0.048 | 20,916 | 25.2 | 831
Societal perspective, preselected population:
Fasting glucose test | 18.2 | 18.36 | 0.016 | — | — | Dominated
Fasting glucose + OGTT | 22.9 | 19.98 | 0.021 | — | — | Dominated
OGTT | 26.7 | 22.24 | 0.024 | — | — | Dominated
HbA1c >5.6% + OGTT | 48.6 | 34.47 | 0.044 | — | — | Dominated

Table 4— Results of the Monte Carlo analysis

Columns: strategy | proportion of detected cases among all previously unknown diabetic subjects (%) | total costs per study subject (€) | effectiveness (detected cases per study subject).

Perspective of the statutory health insurance:
Fasting glucose test | 20.7 ± 0.01 (19.7–21.7) | 5.17 ± 0.19 (4.94–5.42) | 0.018 ± 0.002 (0.016–0.021)
Fasting glucose + OGTT | 25.8 ± 0.01 (24.5–27.3) | 5.81 ± 0.24 (5.52–6.11) | 0.023 ± 0.003 (0.020–0.026)
OGTT | 30.1 ± 0.01 (28.5–31.6) | 4.92 ± 0.20 (4.66–5.17) | 0.027 ± 0.003 (0.023–0.030)
HbA1c >5.6% + OGTT | 53.9 ± 0.02 (51.0–56.9) | 21.44 ± 0.48 (20.85–22.06) | 0.048 ± 0.008 (0.038–0.059)
Societal perspective:
Fasting glucose test | 20.7 ± 0.01 (19.7–21.7) | 9.37 ± 1.63 (7.63–11.63) | 0.018 ± 0.002 (0.016–0.021)
Fasting glucose + OGTT | 25.8 ± 0.01 (24.5–27.3) | 11.39 ± 2.13 (9.05–14.16) | 0.023 ± 0.003 (0.020–0.026)
OGTT | 30.1 ± 0.01 (28.5–31.6) | 15.74 ± 4.12 (11.36–21.56) | 0.027 ± 0.003 (0.023–0.030)
HbA1c >5.6% + OGTT | 53.9 ± 0.02 (51.0–56.9) | 32.83 ± 4.57 (27.74–39.08) | 0.048 ± 0.008 (0.038–0.059)

Data are mean ± SD (10th to 90th percentile).

The study was supported by institutional funding (German Diabetes Research Institute) from the German Ministry of Health and by the Ministry of Science of North Rhine-Westphalia. The study was supported in part by a research grant from the German Diabetes Foundation.

1. Mooy JM, Grootenhuis PA, de Vries H, Valkenburg HA, Bouter LM, Kostense PJ, Heine RJ: Prevalence and determinants of glucose intolerance in a Dutch Caucasian population: the Hoorn Study (short report). Diabetes Care 18: 1270–1273, 1995
2. DECODE Study Group on behalf of the European Diabetes Epidemiology Study Group: Will new diagnostic criteria for diabetes change phenotype of patients with diabetes? Re-analysis of European epidemiological data. BMJ 317: 371–375, 1998
3. Rathmann W, Haastert B, Icks A, Löwel H, Meisinger C, Holle R, Giani G: High prevalence of undiagnosed diabetes in southern Germany: target populations for efficient screening: the KORA Survey 2000. Diabetologia 46: 182–189, 2003
4.
Lauritzen T, Griffin S, Borch-Johnsen K, Wareham NJ, Wolffenbuttel BHR, Rutten G, for the ADDITION Study Group: The ADDITION study: proposed trial of the cost-effectiveness of an intensive multifactorial intervention on morbidity and mortality among people with type 2 diabetes detected by screening. Int J Obes 24: S6–S11, 2000
5. Wareham N, Griffin SJ: Should we screen for type 2 diabetes? Evaluation against National Screening Committee criteria. BMJ 322: 986–988, 2001
6. Streets P: Undiagnosed diabetes must be detected. BMJ 323: 453–454, 2001
7. Harris MI, Eastman RC: Early detection of undiagnosed diabetes mellitus: a US perspective. Diabetes Metab Res Rev 16: 230–236, 2000
8. The Expert Committee on the Diagnosis and Classification of Diabetes Mellitus: Report of the Expert Committee on the Diagnosis and Classification of Diabetes Mellitus. Diabetes Care 20: 1183–1197, 1997
9. World Health Organization: Definition, Diagnosis and Classification of Diabetes Mellitus and its Complications. Part 1: Diagnosis and Classification of Diabetes Mellitus: Report of a WHO Consultation. Geneva, World Health Organization, 1999
10. Jesudason DR, Dunstan K, Leong D, Wittert GA: Macrovascular risk and diagnostic criteria for type 2 diabetes. Diabetes Care 26: 485–490, 2003
11. Raikou M, McGuire A: The economics of screening and treatment in type 2 diabetes mellitus. Pharmacoeconomics 8: 543–564, 2003
12. Lee DS, Remington P, Madagame J, Blustein J: A cost analysis of community screening for diabetes in the central Wisconsin Medicare population. WMJ 99: 39–44, 2000
13. Chen THH, Yen MF, Tung TH: A computer simulation model for cost-effectiveness analysis of mass screening for type 2 diabetes mellitus. Diabetes Res Clin Pract 54 (Suppl. 1): S37–S42, 2001
14. Engelgau M, Venkat Narayan K, Thomson T, CDC Diabetes Cost-Effectiveness Study Group: The cost-effectiveness of screening for type 2 diabetes. JAMA 280: 1757–1763, 1998
15. Shirasaya K, Miyakawa M, Yoshida K, Takahashi E, Shimada N, Kondo T: Economic evaluation of alternative indicators for screening for diabetes mellitus. Prev Med 29: 79–86, 1999
16. Zhang P, Engelgau M, Valdez R, Benjamin SM, Cadwell B, Venkat Narayan KM: Costs of screening for pre-diabetes among U.S. adults. Diabetes Care 26: 2536–2542, 2003
17. Gold MR: Cost-Effectiveness in Health and Medicine. New York, Oxford University Press, 1996
18. Lawrence JM, Bennett P, Young A, Robinson AM: Screening for diabetes in general practice: cross-sectional population study. BMJ 323: 548–551, 2001
19. Drummond MF, O'Brien BJ, Stoddart GL, Torrance GW: Methods for the Economic Evaluation of Health Care Programs. 2nd ed. New York, Oxford University Press, 1997
20. Statistics Bureau: Economy and Statistics. Stuttgart, Germany, Metzler Poeschel, 1999
21. Bureau for Civil Services: Guidelines for the Process of the Civil Service. Köln, Germany, Bureau for the Civil Service, 1999
22. Kahl H, Hölling H, Kamtsiuris P: Utilization of health screening programs and interventions for health promotion. Gesundheitswesen 61 (Suppl.): S163–S168, 1999
Additional information for this article can be found in an online appendix at http://care.diabetesjournals.org.