The Extremely Efficient New Model in Milling of Complicated Surfaces in 5-Axis Machining for 3-Dimensional Contact

Yi Lou1, Huran Liu1, Bo Tan2
1Department of Mechanical Engineering, Zhejiang University of Science and Technology, Hangzhou, China. 2The State Key Laboratory of Digital Manufacturing Equipment and Technology, Huazhong University of Science and Technology, Wuhan, China.

Although most producers of 5-axis NC milling machine tools claim that their machines support simultaneous 5-axis machining, and most commercial CAD/CAM software is reported to support it as well, there are very few analyses of the resulting machining efficiency. This paper aims to show why the new method is highly effective for surface machining. In the recent paper "The Extreme Efficiency of the New Model in Milling of Complicated Surfaces," we discussed the efficiency of the new model in 5-axis machining for 2-dimensional contact; here we treat the same problem in its more difficult form, 5-axis machining for 3-dimensional contact. As the research of Dr. Liu Huran opened a new field in the 5-axis machining of complicated sculptured surfaces using mathematical theories, and following the results presented in [1], this paper was prepared as a follow-up under a new title corresponding to that purpose.

Keywords: Surface, Contact, NC Machining

Lou, Y., Liu, H. and Tan, B.
(2018) The Extremely Efficient New Model in Milling of Complicated Surfaces in 5-Axis Machining for 3-Dimensional Contact. World Journal of Engineering and Technology, 6, 625-630. doi: 10.4236/wjet.2018.63038.

With the rapid development of computer and control technology, five-axis NC machines play an increasingly important role in high-efficiency, high-precision manufacturing. Small-batch, high-precision products give the 5-axis NC machine a wide field of application and development, and fundamentally change NC programming [2]. This paper reviews current NC machining methods, addresses the machining efficiency and precision of NC machines, and concludes that a local-contact tool path can increase machining efficiency, while the swept surface of the cutting circle on a close-contact path can increase machining precision [3]. The swept surface of the cutting circle in cylinder-end milling along a close-contact path is calculated from the relative movement. The paper compares the curvature of the swept surface and the part surface along any direction in the tangent plane, discusses the condition for avoiding gouging, and obtains the optimal tool position and orientation. It also analyzes the tolerance of five-axis NC machining along the cutting direction and across the cutting strip [4]. The authors have studied the "cutter-contact" principle of differential geometry in NC machining, analyzed the condition of "close contact" in NC machining and the condition of third-order close contact in a local coordinate system, and used section figures to analyze the degree of contact between the cutting circle and the surface [5]. A method for simulating the machining process in AutoCAD is also presented, which analyzes the generation of the 3D solid and obtains the geometry and topology data of the solid's boundary surfaces.
This paper uses NURBS surfaces to describe the surface model, and uses Boolean operations to simulate the machining process from the file of tool positions and orientation angles [6] [7] [8] [9].

Explanation of the extreme efficiency of the new NC method — the line of reasoning [1]: a) the local development, or local reconfiguration, of the surface at any one point; b) a non-vertical transformation into the vertical standard system; c) the condition for the best machining posture, to obtain the highest NC machining efficiency; d) the extremely efficient cutting method at the local point to be machined; e) the steps of the calculation (taking an ellipsoid as an example); f) the coordinate transformation into the coordinate system of the machine tool.

2. The 2nd-Order Approximation of the Surface

Use the Maclaurin series to expand the surface in the neighborhood of the point $M_0$ (Figure 1):

$$r(u,v) = r(u_0,v_0) + \left(\frac{\partial}{\partial u}\Delta u + \frac{\partial}{\partial v}\Delta v\right) r + \frac{1}{2!}\left(\frac{\partial}{\partial u}\Delta u + \frac{\partial}{\partial v}\Delta v\right)^{2} r + \cdots$$

$$MM_0 = r(u,v) - r(u_0,v_0) = \left(\frac{\partial}{\partial u}\Delta u + \frac{\partial}{\partial v}\Delta v\right) r + \frac{1}{2!}\left(\frac{\partial}{\partial u}\Delta u + \frac{\partial}{\partial v}\Delta v\right)^{2} r + \cdots$$

3. The Condition of the Local Contact

As shown in Figure 2, the coordinate transformation in the local system is

$$x = r(\cos\theta - 1)\cos\lambda\cos\omega - r\sin\theta\sin\omega$$
$$y = r(\cos\theta - 1)\cos\lambda\sin\omega + r\sin\theta\cos\omega$$
$$z = -r(\cos\theta - 1)\sin\lambda$$

4.
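As a numerical sanity check on the second-order expansion above (a sketch of my own, not from the paper; the point values and step sizes are arbitrary choices), one can compare the truncated series against the exact surface, using the ellipsoid of Section 4 as the test surface:

```python
import numpy as np

# Hedged sketch: verify that the 2nd-order Maclaurin-type expansion
# r(u0+du, v0+dv) ~ r + (du d/du + dv d/dv) r + (1/2)(du d/du + dv d/dv)^2 r
# matches the true surface up to third-order terms. Test surface: ellipsoid.
A, B, C = 120.0, 40.0, 20.0  # semi-axes (my choice, matching Section 5's values)

def r(u, v):
    return np.array([A*np.cos(u)*np.cos(v), B*np.cos(u)*np.sin(v), C*np.sin(u)])

def partials(u, v, h=1e-5):
    # central finite differences for the first and second partial derivatives
    ru  = (r(u+h, v) - r(u-h, v)) / (2*h)
    rv  = (r(u, v+h) - r(u, v-h)) / (2*h)
    ruu = (r(u+h, v) - 2*r(u, v) + r(u-h, v)) / h**2
    rvv = (r(u, v+h) - 2*r(u, v) + r(u, v-h)) / h**2
    ruv = (r(u+h, v+h) - r(u+h, v-h) - r(u-h, v+h) + r(u-h, v-h)) / (4*h**2)
    return ru, rv, ruu, ruv, rvv

u0, v0, du, dv = 0.3, 0.5, 1e-2, 1e-2
ru, rv, ruu, ruv, rvv = partials(u0, v0)
second_order = (r(u0, v0) + ru*du + rv*dv
                + 0.5*(ruu*du**2 + 2*ruv*du*dv + rvv*dv**2))
# residual should be O(du^3), far smaller than the first-order residual
err = np.linalg.norm(r(u0+du, v0+dv) - second_order)
```

The residual of the second-order approximation is on the order of the neglected cubic terms, which is what makes the local reconfiguration of step a) accurate in a neighborhood of the contact point.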
The 2nd-Order Local Contact Method in 5-Axis NC Milling of Complicated Surfaces (Taking an Ellipsoid as an Example)

The surface to be machined is an ellipsoid:

$$r(u,v) = (A\cos u\cos v,\; B\cos u\sin v,\; C\sin u)$$
$$r_u = [-A\sin u\cos v,\; -B\sin u\sin v,\; C\cos u]$$
$$r_v = [-A\cos u\sin v,\; B\cos u\cos v,\; 0]$$
$$n(u,v) = (BC\cos u\cos v,\; AC\cos u\sin v,\; AB\sin u)/D$$
$$D = \sqrt{B^2C^2\cos^2 u\cos^2 v + A^2C^2\cos^2 u\sin^2 v + A^2B^2\sin^2 u}$$

Figure 1. The distance to the tangent plane. Figure 2. The position and coordinates of the cutter.

$$r_{uu} = [-A\cos u\cos v,\; -B\cos u\sin v,\; -C\sin u]$$
$$r_{uv} = [A\sin u\sin v,\; -B\sin u\cos v,\; 0]$$
$$r_{vv} = [-A\cos u\cos v,\; -B\cos u\sin v,\; 0]$$
$$E = r_u\cdot r_u = A^2\sin^2 u\cos^2 v + B^2\sin^2 u\sin^2 v + C^2\cos^2 u$$
$$F = r_u\cdot r_v = (A^2 - B^2)\sin u\cos u\sin v\cos v \quad (= 0 \text{ when } A = B)$$
$$G = r_v\cdot r_v = A^2\cos^2 u\sin^2 v + B^2\cos^2 u\cos^2 v$$
$$n\cdot r_{uu} = \frac{ABC}{D},\quad a = \frac{ABC}{2DE}$$
$$n\cdot r_{uv} = 0,\quad b = 0$$
$$n\cdot r_{vv} = \frac{ABC}{D}\cos^2 u,\quad c = \frac{ABC}{2DG}\cos^2 u$$

5.
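The fundamental-form coefficients above, and the local frame needed for the transformation of the next section, can be checked numerically. This is a sketch of my own (the point $u = v = 10°$ and the Gram–Schmidt construction are my choices, not the paper's procedure):

```python
import numpy as np

# Hedged check: fundamental forms of the ellipsoid and an orthonormal local
# frame (e1, e2, n) for mapping the cutter axis into the machine system.
A, B, C = 120.0, 40.0, 20.0
u, v = np.radians(10.0), np.radians(10.0)
su, cu, sv, cv = np.sin(u), np.cos(u), np.sin(v), np.cos(v)

ru = np.array([-A*su*cv, -B*su*sv, C*cu])
rv = np.array([-A*cu*sv,  B*cu*cv, 0.0])

E, F, G = ru @ ru, ru @ rv, rv @ rv
E_closed = A**2*su**2*cv**2 + B**2*su**2*sv**2 + C**2*cu**2
G_closed = A**2*cu**2*sv**2 + B**2*cu**2*cv**2
F_closed = (A**2 - B**2)*su*cu*sv*cv   # vanishes only when A = B

# Orthonormal frame at the point; Gram-Schmidt handles the case F != 0.
e1 = ru / np.linalg.norm(ru)
t2 = rv - (rv @ e1) * e1
e2 = t2 / np.linalg.norm(t2)
n = np.cross(e1, e2)
R = np.column_stack([e1, e2, n])       # local -> machine rotation matrix

# cutter-axis direction q = (sin(lam), 0, cos(lam)) in the local system,
# mapped into the machine frame; lam ~ 7.5 deg matches (0.13, 0, 0.99)
lam = np.radians(7.5)
q_machine = R @ np.array([np.sin(lam), 0.0, np.cos(lam)])
```

Because $R$ is orthonormal by construction, the cutter-axis direction keeps unit length under the transformation, which is the property the coordinate change of Section 5 relies on.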
The Coordinate Transformation from the Local System to the System of the Machine Tool

The orientation of the cutter axis in the local system is

$$q = (\sin\lambda,\; 0,\; \cos\lambda) = (0.13,\; 0,\; 0.99)$$

The unit tangent vectors of the surface are

$$e_u = \frac{r_u}{|r_u|} = \frac{1}{\sqrt{E}}\, r_u, \qquad e_v = \frac{r_v}{|r_v|} = \frac{1}{\sqrt{G}}\, r_v$$

Numerically, with $A = 120$, $B = 40$, $C = 20$ and $u = v = 10°$, the paper reports $e_u \approx 0.986\,i - 0.06\,j + 0.03\,k$; the value of $e_v$ follows in the same way.

6. Computer-Aided Simulation (Figure 3 and Figure 4)

Figure 3. The machine tool. Figure 4. The machined surface.

5-axis machining is difficult. In an ordinary factory, the NC program is generated automatically by software, and its efficiency is unknown. This paper presents a machining theory for the problem. With a general machining program, the gap between the tool and the workpiece is large; with the method presented here, the cutter and the surface are in close contact, so the machining efficiency is high.

Supported by the State Key Laboratory of Digital Manufacturing Equipment and Technology of China, Huazhong University of Science and Technology: DMETKF2015016. Supported by the State Key Laboratory of Tribology of China, Tsinghua University: SKLTK14A06.

[1] Liu, H.R. (2013) The Extreme Efficiency of the New Model in Milling of Complicated Surfaces. International Journal of Advanced Manufacturing Technology, 67, 2765-2770.
[2] Li, S.X. (1994) 5-Axis Machining of Sculptured Surfaces with a Flat-End Cutter. Computer-Aided Design, 26, 165-178.
[3] Lee, Y.-S.
(1998) Non-Isoparametric Tool Path Planning by Machining Strip Evaluation for 5-Axis Sculptured Surface Machining. Computer-Aided Design, 30, 559-570.
[4] Lee, Y.-S. (1997) Admissible Tool Orientation Control of Gouging Avoidance for 5-Axis Complex Surface Machining. Computer-Aided Design, 29, 507-521.
[5] Choi, B.K. (1993) Cutter-Location Data Optimization in 5-Axis Surface Machining. Computer-Aided Design, 25, 377-386.
[6] Kim, B.H. (1994) Effect of Cutter Mark on Surface Roughness and Scallop Height in Sculptured Surface Machining. Computer-Aided Design, 26, 179-188.
[7] Liu, X.-W. (1995) Five-Axis NC Cylindrical Milling of Sculptured Surfaces. Computer-Aided Design, 27, 887-894.
[8] Hwang, J.S. (1992) Interference-Free Tool-Path Generation in the NC Machining of Parametric Compound Surfaces. Computer-Aided Design, 24, 667-676.
[9] Lo, C.-C. (2000) CNC Machine Tool Surface Interpolator for Ball-End Milling of Free-Form Surfaces. International Journal of Machine Tools & Manufacture, 40, 307-326.
Rewrite the following integral expressions as single integrals.

1) $\int_{-3}^{-5} f(x)\,dx + \int_{-5}^{-3} g(x)\,dx$
Hint: $\int_{a}^{b} f(x)\,dx = -\int_{b}^{a} f(x)\,dx$, so reversing the limits on the first term gives $\int_{-5}^{-3} \left(g(x) - f(x)\right) dx$.

2) $3\int_{1}^{6} f(x)\,dx + 5\int_{1}^{6} g(x)\,dx$
Translation: three times the area under $f(x)$ from $x = 1$ to $x = 6$, plus five times the area under $g(x)$ over the same interval. Written as a single integral: $\int_{1}^{6} \left(3f(x) + 5g(x)\right) dx$.

3) $\int_{6}^{11} f(x)\,dx + \int_{11}^{6} f(x)\,dx$
Reversing the limits flips the sign, so the two terms cancel: the sum is $0$.

4) $\int_{7}^{10} f(t)\,dt - \int_{7}^{9} f(t)\,dt$
Since an integral is the area underneath or above a curve, if you take a smaller portion of that area and subtract it from a larger portion, both starting at the same point, you are left with the remaining piece: $\int_{9}^{10} f(t)\,dt$.
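All four rewrites can be confirmed numerically. This is a sketch of my own; the concrete choices $f(x) = x^2$ and $g(x) = \sin x$ are arbitrary test functions:

```python
import math

# Hedged numerical check of the single-integral rewrites, via a midpoint rule.

def integral(fn, a, b, n=20000):
    # midpoint rule; a > b automatically yields the sign-flipped value
    h = (b - a) / n
    return h * sum(fn(a + (i + 0.5) * h) for i in range(n))

f = lambda x: x**2
g = lambda x: math.sin(x)

# (1) flip the limits on the f-term, then combine over [-5, -3]
lhs1 = integral(f, -3, -5) + integral(g, -5, -3)
rhs1 = integral(lambda x: g(x) - f(x), -5, -3)

# (2) constants move inside the integral
lhs2 = 3 * integral(f, 1, 6) + 5 * integral(g, 1, 6)
rhs2 = integral(lambda x: 3 * f(x) + 5 * g(x), 1, 6)

# (3) the two terms cancel exactly
total3 = integral(f, 6, 11) + integral(f, 11, 6)

# (4) what remains is the piece from 9 to 10; exact value is 271/3
rest4 = integral(f, 7, 10) - integral(f, 7, 9)
```

The midpoint rule is crude but more than accurate enough here, since each identity holds exactly; only the fourth comparison depends on the quadrature error at all.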
Introduction to Chemical Engineering Processes/Multiple Components in Multiple Processes

Introduction to Problem Solving with Multiple Components and Processes

In the vast majority of chemical processes, in which some raw materials are processed to yield a desired end product or set of end products, there will be more than one raw material entering the system and more than one unit operation through which the product must pass in order to achieve the desired result. The calculations for such processes, as you can probably guess, are considerably more complicated than those for either a single component or a single-operation process. Therefore, several techniques have been developed to aid engineers in their analyses. This section describes these techniques and how to apply them to an example problem.

Degree of Freedom Analysis

For problems more complex than the single-component or single-operation problems explored so far, it is essential to have a method of determining whether a problem is even solvable given the information you have. There are three ways to describe a problem in terms of its solvability: If the problem has a finite (not necessarily unique!) set of solutions, it is called well-defined. The problem can be overdetermined (also known as overspecified), which means that you have too much information and it is either redundant or inconsistent.
This could possibly be fixed by consolidating multiple data into a single function or, in extreme cases, a single value (such as the slope of a linear correlation), or by removing an assumption that had been made about the system. The problem can be underdetermined (or underspecified), which means that you don't have enough information to solve for all your unknowns. There are several ways of dealing with this. The most obvious is to gather additional information, such as measuring additional temperatures, flow rates, and so on, until you have a well-defined problem. Another way is to use additional equations or information about what we want out of a process, such as how much conversion you obtain in a reaction, how efficient a separation process is, and so on. Finally, we can make assumptions in order to simplify the equations; perhaps they will simplify enough to become solvable. The method of analyzing systems to see whether they are over- or underspecified, or whether they are well-defined, is called a degree of freedom analysis. It works as follows for mass balances on a single process: From your flowchart, determine the number of unknowns in the process. What qualifies as an unknown depends on what you're looking for, but in a material balance calculation, masses and concentrations are the most common. In equilibrium and energy balance calculations, temperature and pressure also become important unknowns. In a reactor, you should include the conversion as an unknown unless it is given OR you are doing an atom balance. Subtract the number of equations you can write on the process. This can include mass balances, energy balances, equilibrium relationships, relations between concentrations, and any equations derived from additional information about the process. The number you are left with is the degrees of freedom of the process. If the degrees of freedom are negative, the unit operation is overspecified.
If it is positive, the operation is underspecified. If it is zero, the unit operation is well-defined, meaning that it is theoretically possible to solve for the unknowns with a finite set of solutions.

Degrees of Freedom in Multiple-Process Systems

Multiple-process systems are tougher, but not undoable. Here is how to analyze them to see if a problem is uniquely solvable: Label a flowchart completely with all the relevant unknowns. Perform a degree of freedom analysis on each unit operation, as described above. Add the degrees of freedom for each of the operations. Subtract the number of variables in intermediate streams, i.e. streams between two unit operations. This is because each of these was counted twice, once for the operation it leaves and once for the one it enters. The number you are left with is the process degrees of freedom, and this is what will tell you whether the process as a whole is overspecified, underspecified, or well-defined. If any single process is overspecified and found to be inconsistent, then the problem as a whole cannot be solved, regardless of whether the process as a whole is well-defined or not.

Using Degrees of Freedom to Make a Plan

Once you have determined that your problem is solvable, you still need to figure out how you'll solve for your variables. This is the suggested method: Find a unit operation or combination of unit operations for which the degrees of freedom are zero. Calculate all of the unknowns involved in this combination. Recalculate the degrees of freedom for each process, treating the calculated values as known rather than as variables. Repeat these steps until everything is calculated (or at least everything you seek). You must be careful when recalculating the degrees of freedom in a process. Be aware of the sandwich effect, in which calculations from one unit operation can trivialize balances on another operation.
For example, suppose you have three processes lined up like this: A → B → C. Suppose also that, through mass balances on operations A and C, you calculate the exit composition of A and the inlet composition of C. Once these are performed, the mass balances on B are already completely defined. The moral of the story is that before you claim that you can write an equation to solve an unknown, write the equation and make sure that it contains an unknown. Do not count equations that have no unknowns in your degree of freedom analysis.

Multiple Components and Multiple Processes: Orange Juice Production

Consider a process in which raw oranges are processed into orange juice. A possible process description follows: The oranges enter a crusher, in which all of the water contained within the oranges is released. The now-crushed oranges enter a strainer. The strainer is able to capture 90% of the solids; the remainder exits with the orange juice as pulp. The velocity of the orange juice stream was measured to be 30 m/s, and the radius of the piping was 8 inches. Calculate: a) The mass flow rate of the orange juice product. b) The number of oranges per year that can be processed with this process if it is run 8 hours a day and 360 days a year. Ignore changes due to unsteady state at startup. Use the following data: Mass of an orange: 0.4 kg. Water content of an orange: 80%. Density of the solids: since they are mostly sugars, it is about the density of glucose, which is 1.540 g/cm³. This time we have multiple processes, so it's especially important to label each one as it's given in the problem. Notice how the 90% capture of solids is changed into an algebraic equation relating the mass of solids in the solid waste to the mass in the feed. This will be important later, because it is an additional piece of information that is necessary to solve the problem. Also note that from here on, "solids" are referred to as S and "water" as W.
Step 2: Degree of Freedom Analysis

Recall that for each stream there are C independent unknowns, where C is the number of components in the individual stream. These are generally the concentrations of C−1 species and the total mass flow rate, since with C−1 concentrations we can find the last one, but we cannot obtain the total mass flow rate from concentrations alone. Let us apply the previously described algorithm to determine whether the problem is well-defined.

On the strainer: There are 6 unknowns: m2, xS2, m3, xS3, m4, and xS4. We can write 2 independent mass balances on the overall system (one for each component). We are given a conversion, and enough information to write the mass flow rate of the product in terms of the concentration of only one component (which eliminates one unknown); thus we have 2 additional pieces of information. The degrees of freedom of the strainer are therefore 6 − 2 − 2 = 2 DOF. We are given the mass of an individual orange, but since we cannot use that information alone to find the total mass flow rate of oranges in the feed, and we have already used up our allotment of C−1 independent concentrations, we cannot count it as "given information." If, however, we were told the number of oranges processed per year, we could use the two pieces of information in tandem to eliminate a single unknown (because then we could find the mass flow rate).

On the crusher: There are 3 unknowns (m1, m2, and xS2). We can write 2 independent mass balances. Thus the crusher has 3 − 2 = 1 DOF.

Therefore, for the system as a whole: Sum of DOF for unit operations = 2 + 1 = 3 DOF. Number of intermediate variables = 2 (m2 and xS2). Total DOF = 3 − 2 = 1 DOF. Hence the problem is underspecified.

So how do we solve it?

In order to solve an underspecified problem, one way to obtain an additional specification is to make an assumption. What assumptions could we make that would reduce the number of unknowns (or, equivalently, increase the number of variables we do know)?
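As a quick check before choosing an assumption, the degree-of-freedom tallies above can be written as plain arithmetic (a sketch; the helper function and names are mine, not from the text):

```python
# Hedged sketch: the DOF bookkeeping for the orange-juice flowsheet.

def dof(unknowns, balances, extra_info=0):
    # DOF = unknowns - independent equations - additional specifications
    return unknowns - balances - extra_info

strainer_dof = dof(unknowns=6, balances=2, extra_info=2)  # m2, xS2, m3, xS3, m4, xS4
crusher_dof  = dof(unknowns=3, balances=2)                # m1, m2, xS2

intermediate_vars = 2   # m2 and xS2 are counted in both units
process_dof = strainer_dof + crusher_dof - intermediate_vars  # 1 => underspecified
```

A positive process DOF confirms that one more specification (the assumption discussed next) is needed before the system can be solved.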
The most common type of assumption is to assume that something relatively insignificant is zero. In this case, one could ask: will the solid stream from the strainer contain any water? It might, of course, but this amount is probably very small compared to both the amount of solids that are captured and the amount that is strained, provided that the strainer is cleaned regularly and designed well. If we make this assumption, then we specify that the mass fraction of water in the waste stream is zero (or, equivalently, that the mass fraction of solids is one). Therefore, we know one additional piece of information, and the degrees of freedom for the overall system become zero.

Step 3: Convert Units

This step should be done after the degree of freedom analysis, because that analysis is independent of your unit system, and if you don't have enough information to solve a problem (or worse, you have too much), you shouldn't waste time converting units; you should instead spend your time defining the problem more precisely and/or seeking out appropriate assumptions to make. Here, the most sensible choice is to convert everything either to the cgs system or to the m-kg-s system, since most values are already metric. The latter route is taken here:

r₄ = 8 in × (2.54 cm / 1 in) × (1 m / 100 cm) = 0.2032 m

ρ_S = 1.54 g/cm³ = 1540 kg/m³

Now that everything is in the same system, we can move on to the next step. First we have to relate the velocity and area given to us to the mass flow rate of stream 4, so that we can actually use that information in a mass balance.
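The two conversions above, written out explicitly (variable names are my own):

```python
# Hedged sketch: unit conversions for the pipe radius and solids density.
r4_m  = 8 * (2.54 / 100)   # 8 in -> 0.2032 m  (2.54 cm/in, 100 cm/m)
rho_s = 1.54 * 1000.0      # 1.54 g/cm^3 -> 1540 kg/m^3 (factor of 1000)
```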
From chapter 2, we can start with the equation:

ρ_n · v_n · A_n = ṁ_n

Since the pipe is circular and the area of a circle is πr²:

A₄ = π × 0.2032² = 0.1297 m²

ρ₄ × 30 × 0.1297 = 3.8915 × ρ₄ = ṁ₄

Now, to find the density of stream 4, we assume that volumes are additive, since the solids and water are essentially immiscible (does an orange dissolve when you wash it?). Hence we can use the ideal-fluid model for density:

1/ρ₄ = x_S4/ρ_S + x_W4/ρ_W = x_S4/ρ_S + (1 − x_S4)/ρ_W = x_S4/1540 + (1 − x_S4)/1000

Hence we have the equation we need, with only concentrations and mass flow rates:

x_S4/1540 + (1 − x_S4)/1000 = 3.8915/ṁ₄

Now we have an equation, but we haven't used either of our two (why two?) independent mass balances yet. We of course have a choice of which two to use. In this particular problem, since we are directly given information concerning the amount of solid in stream 4 (the product stream), it makes more sense to do the balance on this component. Since we don't have information on stream 2, and finding it would be pointless in this case (all parts of it are the same as those of stream 1), let's do an overall-system balance on the solids:

Σ ṁ_S,in − Σ ṁ_S,out = 0

Since there is no reaction, the generation term is 0, even for individual-species balances.
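The flow-rate relation and the additive-volume density model can be packaged as functions of the unknown solids fraction (a sketch; the function names are mine):

```python
import math

# Hedged sketch: mass flow from the measured velocity, and the additive-volume
# density model for stream 4, both as functions of the unknown x_S4.
r4, v4 = 0.2032, 30.0                  # pipe radius (m), velocity (m/s)
A4 = math.pi * r4**2                   # circular cross-section, ~0.1297 m^2

def rho4(x_s4, rho_s=1540.0, rho_w=1000.0):
    # 1/rho4 = x_S4/rho_S + (1 - x_S4)/rho_W   (volumes additive)
    return 1.0 / (x_s4 / rho_s + (1.0 - x_s4) / rho_w)

def m_dot4(x_s4):
    return rho4(x_s4) * v4 * A4        # rho * v * A
```

Plugging in the final answer x_S4 ≈ 0.0244 reproduces the stream-4 flow rate found below, about 3925 kg/s.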
Expanding the mass balance in terms of mass fractions gives:

ṁ₁ · x_S1 = ṁ₃ · x_S3 + ṁ₄ · x_S4

Plugging in the known values, with the assumption that stream 3 is pure solids (no water) and hence x_S3 = 1:

0.2 · ṁ₁ = (0.9 × 0.2 × ṁ₁) × 1 + x_S4 · ṁ₄

Finally, we can utilize one further mass balance, so let's use the easiest one: the overall mass balance. This one again assumes that the total flow rate of stream 3 equals the solids flow rate.

ṁ₁ = 0.9 × 0.2 × ṁ₁ + ṁ₄

We now have three equations in three unknowns (ṁ₁, ṁ₄, x_S4), so the problem is solvable. This is where all those system-solving skills come in handy. If you don't like solving by hand, there are numerous computer programs to help you solve equations like this, such as MATLAB, POLYMATH, and many others. You'll probably want to learn to use the one your school prefers eventually, so why not now? Using either method, the results are:

ṁ₁ = 4786 kg/s
ṁ₄ = 3925.07 kg/s
x_S4 = 0.0244

We're almost done: now we just have to calculate the number of oranges per year.

4786 kg/s × (1 orange / 0.4 kg) × 3600 s/hr × 8 hr/day × 360 day/year

Yearly production: 1.24 × 10¹¹ oranges/year
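The three equations above happen to decouple, so no solver package is actually needed; direct substitution gives the same answers (a sketch of my own working, not the text's method):

```python
import math

# Hedged sketch: solve the three equations by substitution.
A4 = math.pi * 0.2032**2
coeff = 30.0 * A4                      # v4 * A4, the "3.8915" factor above

# overall balance: m1 = 0.9*0.2*m1 + m4        =>  m4 = 0.82 * m1
# solids balance:  0.2*m1 = 0.18*m1 + xS4*m4   =>  xS4 = 0.02 / 0.82
x_s4 = 0.02 / 0.82
# density relation: xS4/1540 + (1 - xS4)/1000 = coeff / m4
m4 = coeff / (x_s4 / 1540 + (1 - x_s4) / 1000)
m1 = m4 / 0.82

oranges_per_year = (m1 / 0.4) * 3600 * 8 * 360   # 0.4 kg/orange, 8 h/day, 360 day/yr
```

This reproduces ṁ₁ ≈ 4786 kg/s, ṁ₄ ≈ 3925 kg/s, x_S4 ≈ 0.0244, and about 1.24 × 10¹¹ oranges per year.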
- New Liouville theorems for linear second order degenerate elliptic equations in divergence form (Moschini, Luisa)
- Kondratiev, Vladimir; Liskevich, Vitali; Moroz, Vitaly
- Super-critical boundary bubbling in a semilinear Neumann problem (del Pino, Manuel; Musso, Monica; Pistoia, Angela)
- Comparison results and steady states for the Fujita equation with fractional laplacian (Birkner, Matthias; López-Mimbela, José Alfredo; Wakolbinger, Anton)
- Escobedo, M.; Mischler, S.; Rodriguez Ricard, M.
- $L^p$ estimates for the spatially homogeneous Boltzmann equation (Desvillettes, Laurent; Mouhot, Clément)
- Malchiodi, A.; Ni, Wei-Ming; Wei, Juncheng
- On a Cahn-Hilliard model for phase separation with elastic misfit
- Nonlinear eigenvalues and bifurcation problems for Pucci's operators (Busca, Jérôme; Esteban, Maria J.; Quaas, Alexander)
- Weak solutions to a nonlinear variational wave equation with general data
- On the existence of blowing-up solutions for a mean field equation (Esposito, Pierpaolo; Grossi, Massimo; Pistoia, Angela)
- $\mathrm{BV}\left(S^2, S^1\right)$
- On backward-time behavior of the solutions to the 2-D space periodic Navier-Stokes equations
- Spikes in two coupled nonlinear Schrödinger equations (Lin, Tai-Chia; Wei, Juncheng)
- Global solutions to vortex density equations arising from superconductivity (Masmoudi, Nader; Zhang, Ping)
- Blowing up solutions for an elliptic Neumann problem with sub- or supercritical nonlinearity. Part II: $N \ge 4$ (Rey, Olivier; Wei, Juncheng)
- A compactness theorem of n (Wang, Chang You)
- On the boundary ergodic problem for fully nonlinear equations in bounded domains with general nonlinear Neumann boundary conditions (Barles, Guy; Da Lio, Francesca)
- Pointwise curvature estimates for F-stable hypersurfaces
- H-surface index formula
- Nonoccurrence of the Lavrentiev phenomenon for nonconvex variational problems
- Multi-bump type nodal solutions having a prescribed number of nodal domains: I (Liu, Zhaoli; Wang, Zhi-Qiang)
- Multi-bump type nodal solutions having a prescribed number of nodal domains: II
- Cheridito, Patrick; Soner, H. Mete; Touzi, Nizar
- Homogenization of degenerate second-order PDE in periodic and almost periodic environments and applications (Lions, Pierre-Louis; Souganidis, Panagiotis E.)
- Stability results for obstacle problems with measure data
- On the three-dimensional Euler equations with a free boundary subject to surface tension
- Semilinear equations with exponential nonlinearity and measure data (Bartolucci, Daniele; Leoni, Fabiana; Orsina, Luigi; Ponce, Augusto C.)
- Markov structures and decay of correlations for non-uniformly expanding dynamical systems (Alves, José F.; Luzzatto, Stefano; Pinheiro, Vilton)
Structural Reliability of Monopods Under Storm Overload | J. Offshore Mech. Arct. Eng. | ASME Digital Collection

B. F. Ronalds, N. R. Anthony, S. Tuty, E. Fakas
Center for Oil & Gas Engineering, The University of Western Australia, Perth, Australia

Contributed by the OOAE Division for publication in the JOURNAL OF OFFSHORE MECHANICS AND ARCTIC ENGINEERING. Manuscript received August 2001; final revision, April 2002. Associate Editor: A. Naess.

Ronalds, B. F., Anthony, N. R., Tuty, S., and Fakas, E. (April 16, 2003). "Structural Reliability of Monopods Under Storm Overload." ASME. J. Offshore Mech. Arct. Eng. May 2003; 125(2): 114–118. https://doi.org/10.1115/1.1555113

The structural reliability of a generic caisson under storm overload is investigated using typical Australian North West Shelf (NWS) environmental data. The bending moment due to the hydrodynamic loading is shown to increase in proportion to the fourth power of the wave height in certain critical cases, rather than the more usual exponent of α ≈ 2. Combined with the steep wave-height return-period curve on the NWS, this may lead to low structural reliability, as much as two orders of magnitude lower than for more typical structures in other locations. The wave height exponent α varies with water depth and environmental force type (e.g. shear or bending moment) and increases rapidly near the water line. This nonproportional nature of the loading is not addressed in design codes of practice or in traditional pushover analysis.

Keywords: hydrodynamics, load distribution, reliability, structural engineering, storms, marine systems, failure (mechanical), buckling, ocean waves, bending, mechanical engineering, design, stress, water, caissons, shear (mechanics)
Canjar Filters (2017)
Osvaldo Guzmán, Michael Hrušák, Arturo Martínez-Celis
A filter $\mathcal{F}$ on $\omega$ is Canjar if the corresponding Mathias forcing does not add a dominating real. We prove that any Borel Canjar filter is $F_{\sigma}$, solving a problem of Hrušák and Minami. We give several examples of Canjar and non-Canjar filters; in particular, we construct a $\mathsf{MAD}$ family such that the corresponding Mathias forcing adds a dominating real. This answers a question of Brendle. Then we prove that in all the "classical" models of $\mathsf{ZFC}$ there are $\mathsf{MAD}$ families whose Mathias forcing does not add a dominating real. We also study ideals generated by branches, and we uncover a close relation between Canjar ideals and the selection principle $S_{\mathrm{fin}}(\Omega,\Omega)$ on subsets of the Cantor space.
Osvaldo Guzmán, Michael Hrušák, Arturo Martínez-Celis. "Canjar Filters." Notre Dame Journal of Formal Logic 58(1), 79–95, 2017. https://doi.org/10.1215/00294527-3496040
Keywords: Canjar filters, dominating reals, MAD families, Mathias forcing
751.17 Concrete Slab Bridges - Engineering Policy Guide
(Formerly 751.20 Continuous Concrete Slab Bridges; renamed and renumbered per BR.)
==751.17.1 General==
This article illustrates the general design procedure for a continuous concrete slab bridge using the AASHTO LRFD Bridge Design Specifications.
===751.17.1.1 Material Properties===
Unit weight of reinforced concrete, γc = 0.150 kcf
Unit weight of future wearing surface, γfws = 140 lb/ft³
Concrete:
* Continuous cast-in-place solid/voided concrete slab: Class B-2, f′c = 4.0 ksi, n = 8
* Precast prestressed multicell voided concrete girders: Class A-1, f′c = 6.0 ksi, f′ci = 4.5 ksi, n = 8
* Intermediate bent columns and end bents (below construction joint at bottom of slab) in continuous concrete slab bridges: f′c = 4.0 ksi, n = 8
* Class B (open bent, footing): f′c = 3.0 ksi, n = 10
Concrete modulus of elasticity: Ec = 33,000 K1 (wc^1.5) √f′c, with f′c in ksi, where
* wc = unit weight of nonreinforced concrete = 0.145 kcf
* K1 = correction factor for source of aggregate = 1.0 unless determined by physical testing
The modulus of rupture, fr, is a function of √f′c (LRFD 5.4.2.6).
Reinforcing steel: minimum yield strength fy = 60.0 ksi; steel modulus of elasticity Es = 29,000 ksi.
===751.17.2.1 Limit States and Load Factors===
'''[[751.2_Loads#Load Modifiers|Load Modifiers]]'''
Load combinations take the form
Q = Σ ηi γi Qi ≤ φ Rn = Rr
where Q is the total factored force effect, Qi are the individual force effects, ηi are the load modifiers, γi are the load factors, φ is the resistance factor, Rn is the nominal resistance, and Rr is the factored resistance.
The following limit states shall be considered for slab and edge beam design: STRENGTH I, SERVICE I, and EXTREME EVENT II. For the STRENGTH limit state, separate resistance factors φ apply to flexure and tension of reinforced concrete and to shear and torsion.
===751.17.2.2 Loads===
'''Permanent (Dead) Loads'''
Permanent loads include the following:
* Future wearing surface: a 3 in. thick future wearing surface (35 psf) shall be considered on the roadway.
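The limit-state inequality above can be sketched numerically. This is only an illustration of the bookkeeping; the load factors, force effects, and resistance below are made-up example numbers, not design values from this article:

```python
# Illustrative check of the LRFD limit-state inequality
#   Q = sum(eta_i * gamma_i * Q_i) <= phi * R_n = R_r
# All numbers here are invented for the example.

def factored_load(effects, eta=1.0):
    """Sum eta_i * gamma_i * Q_i over all load components."""
    return sum(eta * gamma * q for gamma, q in effects)

# (gamma_i, Q_i) pairs, e.g. dead- and live-load moments in kip-ft
effects = [(1.25, 200.0),   # permanent (dead) load
           (1.75, 120.0)]   # vehicular live load
Q = factored_load(effects)

phi, R_n = 0.90, 600.0      # resistance factor and nominal resistance
R_r = phi * R_n             # factored resistance
print(Q, R_r, Q <= R_r)     # 460.0, 540.0, True: the section checks out
```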
* Barrier/railing: for slab overhang design, assume the weight of the barrier or railing acts at the centroid of the barrier or railing. For deck overhang design (LRFD 3.6.1.3.1), 12 in. applies; 2′-0″ (min.) applies to the design of all other components.
'''Application of Live Load to Slab'''
'''Gravity Live Loads'''
Gravity live loads include vehicular load, dynamic load allowance, and pedestrian loads. The design vehicular live load HL-93 shall be used. It consists of either the design truck or a combination of design truck and design lane load. For slab design, where the primary strips are longitudinal, the force effects shall be determined on the following basis: the longitudinal strips shall be designed for all loads specified in AASHTO Article 3.6.1.3.3, including lane load. For the purpose of slab design, the lane load consists of a load equal to 0.640 klf uniformly distributed over 10 feet in the transverse direction.
For precast prestressed multicell girders, live load shall be distributed according to AASHTO LRFD Tables 4.6.2.2.2b-1, 4.6.2.2.2d-1, 4.6.2.2.3a-1 and 4.6.2.2.3b-1 for both moment and shear.
The dynamic load allowance replaces the effect of impact used in the AASHTO Standard Specifications. It accounts for wheel load impact from moving vehicles. For slabs, the static effect of the vehicle live load shall be increased by the percentage specified in the table below.
Dynamic Load Allowance, IM:
* Deck joints, all limit states: 75%
* All other limit states: 33%
The factor to be applied to the static load shall be taken as (1 + IM). The dynamic load allowance is not to be applied to pedestrian or design lane loads.
'''Multiple Presence Factor, m'''
The multiple presence factor accounts for the probability of multiple trucks passing over a multilane bridge simultaneously.
* m = 1.20 for 1 loaded lane
* m = 1.00 for 2 loaded lanes
* m = 0.65 for more than 3 loaded lanes
Pedestrian live load on sidewalks greater than 2 ft wide shall be PL = 0.075 ksf. This does not include bridges designed exclusively for pedestrians or bicycles.
For additional design information, see LRFD 5.14.4.2.
===751.17.3.1 Solid Slabs===
'''SLAB LONGITUDINAL SECTIONS - SOLID SLABS''' (end spans, intermediate spans)
All longitudinal dimensions shown are horizontal.
===751.17.3.2 C.I.P. Voided Slabs===
'''SLAB LONGITUDINAL SECTIONS - CAST-IN-PLACE VOIDED SLAB'''
(*) 3′-0″ or greater than or equal to 5% of span length.
(**) By design (6 in. increments measured normal to the centerline of bent); the minimum is equal to the column diameter + 2′-6″.
For sections A-A and B-B, see below.
'''SLAB CROSS SECTION'''
Sonovoids are produced in half sizes 2 in. to 18 in.; D = 4 in. to 36 in.; T = 19 in. (minimum preferred; consult the Structural Project Manager prior to the use of a thinner slab).
(**) For roadways with slab drains, use 10 in. minimum. For roadways that require additional reinforcement for resisting the moment of the edge beam, use 20 in. minimum. Check for adequate space for development of barrier or railing reinforcement.
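As a quick illustration of how the live-load factors above combine, here is a sketch using the tabulated IM = 33% and multiple presence factors; the truck and lane force effects are invented example numbers:

```python
# Sketch of applying the dynamic load allowance IM and the multiple
# presence factor m to a static live-load force effect.

IM = 0.33                        # all limit states other than deck joints
m_factors = {1: 1.20, 2: 1.00}   # multiple presence factors from the table

truck_effect = 100.0   # static design-truck force effect (illustrative, kip-ft)
lane_effect = 40.0     # design lane load effect (IM does not apply to it)

def live_load_effect(n_lanes):
    # (1 + IM) amplifies the truck only; the lane load stays static
    return m_factors[n_lanes] * (truck_effect * (1 + IM) + lane_effect)

print(live_load_effect(1))   # one loaded lane
print(live_load_effect(2))   # two loaded lanes
```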
Stochastic Simultaneous Stabilization and Parking Control of the Brockett Integrator
J. Dyn. Sys., Meas., Control
School of Civil and Mechanical Engineering, Curtin University, Kent Street, Bentley, WA 6102. Corresponding author e-mail: duc@curtin.edu.au
Do, K. D. (February 21, 2022). "Stochastic Simultaneous Stabilization and Parking Control of the Brockett Integrator." ASME. J. Dyn. Sys., Meas., Control. May 2022; 144(5): 051004. https://doi.org/10.1115/1.4053640
This paper proposes a design of stochastic control laws for simultaneous stabilization and parking (i.e., tracking a reference trajectory that converges to the desired configuration) of the Brockett integrator based on Lyapunov's direct method, recent developments in the stabilization of stochastic systems, and Itô's formula for nonsmooth (weakly differentiable) functions. The control laws use two independent Wiener processes with the same covariance and the reference trajectory update. The proposed control design guarantees global K∞-exponential stability of the closed-loop system in probability.
Keywords: Brockett integrator, Wiener processes, K∞-exponential stability, Lyapunov direct method, design, probability, stability, trajectories (physics)
Introduction to Chemical Engineering Processes/Steady state energy balance

General Balance Equation Revisited
Recall the general balance equation that was derived for any system property:
In − Out + Generation − Consumption = Accumulation
When we derived the mass balance, we did so by citing the law of conservation of mass, which states that the total generation of mass is 0, and therefore
Accumulation = In − Out
There is one other major conservation law which provides an additional equation we can use: the law of conservation of energy. This states that if E denotes the entire amount of energy in the system,
E_in − E_out = E_accumulated
In order to write an energy balance, we need to know what kinds of energy can enter or leave a system. Here are some examples (this is not an exhaustive list by any means) of the types of energy that can be gained or lost:
* A system could gain or lose kinetic energy, if we're analyzing a moving system.
* Again, if the system is moving, there could be potential energy changes.
* Heat could enter the system via conduction, convection, or radiation.
* Work (either expansion work or shaft work) could be done on, or by, the system.
The total amount of energy entering the system is the sum of all of the different types entering the system. Here are the expressions for the different types of energy:
* From physics, recall that KE = (1/2)mv². If the system itself is not moving, this is zero.
* The gravitational potential energy of a system is GPE = mgh, where g is the acceleration due to gravity, m is the mass in kg, and h is the height of the center of mass of the system. If the system does not change height, there is no change in GPE.
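As a minimal numeric check of the two mechanical-energy expressions above (the mass, velocity, and height are arbitrary example values):

```python
# The two mechanical energy terms, KE = (1/2) m v^2 and GPE = m g h,
# evaluated for a small example with SI units.

g = 9.81  # acceleration due to gravity, m/s^2

def kinetic_energy(m, v):
    """KE = (1/2) m v^2, in joules for SI inputs."""
    return 0.5 * m * v**2

def potential_energy(m, h):
    """GPE = m g h, in joules for SI inputs."""
    return m * g * h

print(kinetic_energy(10.0, 2.0))    # 20.0 J for a 10 kg mass at 2 m/s
print(potential_energy(10.0, 3.0))  # 294.3 J for the same mass raised 3 m
```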
* The heat entering the system is denoted by Q, regardless of the mechanism by which it enters (the means of calculating this will be discussed in a course on transport phenomena). According to this book's conventions, heat entering a system is positive and heat leaving a system is negative, because the system in effect gains energy when heat enters.
* The work done by or on the system is denoted by W. Work done BY a system is negative, because the system has to "give up" energy to do work on its surroundings. For example, if a system expands, it loses energy to account for that expansion. Conversely, work done ON a system is positive.

Energy Flows due to Mass Flows
Accumulation of anything is 0 at steady state, and energy is no exception. If, as we have the entire time, we assume that the system is at steady state, we obtain the energy balance equation:
E_in = E_out
This is the starting point for all of the energy balances below. Consider a system in which a mass, such as water, enters a system, such as a cup.
The mass flow into (or out of) the system carries a certain amount of energy, associated with how fast it is moving (kinetic energy), how high off the ground it is (potential energy), and its temperature (internal energy). It is possible for it to have other types of energy as well, but for now let's assume that these are the only three types of energy that are important. If this is true, then we can say that the total energy carried in the flow itself is:
Ė_i = ((1/2)ṁv² + ṁgh + U̇)_i
However, there is one additional factor that must be taken into account. When a mass stream flows into a system it expands or contracts and therefore performs work on the system. An expression for the work due to this expansion is:
W_exp = P·V̇_i
Since this work is done on the system, it enters the energy balance as a positive quantity.
Therefore the total energy flow into the system due to mass flow is as follows:
Ė_i = ((1/2)ṁv² + ṁgh + U̇)_i + P·V̇_i
Now, to simplify the math a little bit, we generally don't use the internal energy and the PV term separately. Instead, we combine these terms and call the result the enthalpy of the stream. Enthalpy is just the combination of internal energy and expansion work due to the stream's flow, and is denoted by the letter H:
H = U + PV
Therefore, we obtain the following important equation for the energy flow carried by mass:
In stream i, if only KE, GPE, internal energy, and expansion work are considered, the energy carried by mass flow is:
Ė_i = ((1/2)ṁv² + ṁgh + Ḣ)_i
Kinetic energy and potential energy are generally very small compared to the enthalpy, except in cases of very rapid flow or when there are no significant temperature changes occurring in the system. Therefore, they are often neglected when performing energy balances.

Other energy flows into and out of the system
The other types of energy flows that could occur in and out of a system are heat and work. Heat is defined as energy flow due to a change in temperature, and always flows from higher temperature to lower temperature. Work is defined as energy transferred by a force. If there is no heat flow into or out of a system, it is referred to as adiabatic. If there are no mechanical parts connected to a system, and the system is not able to expand, then the work is essentially 0. Some systems which have mechanical parts that perform work are turbines, mixers, engines, stirred tank reactors, agitators, and many others. The type of work performed by these parts is called shaft work, to distinguish it from work due to expansion of the system itself (which is called expansion work).
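A small numeric sketch of the stream-energy expression above, using invented values for a water-like stream, shows why the kinetic and potential terms are usually negligible next to the enthalpy term:

```python
# Energy carried by a flowing stream:
#   E_i = (1/2) m_dot v^2 + m_dot g h + H_dot
# All stream values are illustrative, not data from the text.

g = 9.81            # m/s^2
m_dot = 2.0         # mass flow rate, kg/s
v = 3.0             # stream velocity, m/s
h = 5.0             # height above the reference, m
h_spec = 200e3      # specific enthalpy, J/kg (assumed value)

ke_rate = 0.5 * m_dot * v**2    # 9.0 W
pe_rate = m_dot * g * h         # 98.1 W
H_dot = m_dot * h_spec          # 400,000 W

E_i = ke_rate + pe_rate + H_dot
print(E_i)   # ~400,107 W: the enthalpy term dominates by ~3 orders of magnitude
```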
An "insulated system" is generally interpreted as being essentially adiabatic, though how good this assumption is depends on the quality of the insulation. A system that cannot expand is sometimes described as "rigid".
The notation for these values is as follows:
* Heat flows: Q̇_j, at the jth location
* Shaft work: Ẇ_s
* Expansion work: P·(ΔV/Δt)
Note that the above implies that there is no expansion work at steady state, because at steady state nothing about the system, including the volume, changes with time, i.e.
ΔV/Δt = 0 at steady state

Overall steady-state energy balance
If we combine all of these components together, remembering that heat flow into a system and work done on a system are positive, we obtain the following:
Steady State Energy Balance on an Open System
Σ((1/2)ṁv² + ṁgh + Ḣ)_{i,in} − Σ((1/2)ṁv² + ṁgh + Ḣ)_{i,out} + ΣQ̇_j + Ẇ_s = 0
If the system is closed AND at steady state, the total heat flow must equal the total work done in magnitude, and be opposite in sign. However, according to another law of thermodynamics, the second law, it is impossible to convert ALL of the heat flow into work, even in the most ideal case. In an adiabatic system with no work done, the total amount of energy carried by mass flows is equal between those flowing in and those flowing out. However, that DOES NOT imply that the temperature remains the same, as we will see in a later section. Some substances have a greater capacity to hold heat than others, hence the term heat capacity.
If the conditions inside the system change over time, then we CANNOT use this form of the energy balance. The next section has information on what to do in the case that the energetics of the system change.
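The overall balance can be rearranged to back out the shaft work from known stream energies and heat duties. A sketch with illustrative numbers (all in watts, signed per the conventions above):

```python
# Rearranging the steady-state open-system balance
#   sum(E_in) - sum(E_out) + sum(Q_j) + W_s = 0
# for the shaft work: W_s = sum(E_out) - sum(E_in) - sum(Q_j).
# The stream energies and heat duty below are invented example values.

def shaft_work(E_in, E_out, Q):
    """Shaft work implied by the steady-state open-system energy balance."""
    return sum(E_out) - sum(E_in) - sum(Q)

E_in = [500e3]    # one inlet stream, W
E_out = [470e3]   # one outlet stream, W
Q = [-50e3]       # heat removed from the system (negative by convention), W

W_s = shaft_work(E_in, E_out, Q)
print(W_s)        # 20000.0 W of work done ON the system
```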
Abelian variety
In mathematics, particularly in algebraic geometry, complex analysis and algebraic number theory, an abelian variety is a projective algebraic variety that is also an algebraic group, i.e., has a group law that can be defined by regular functions. Abelian varieties are at the same time among the most studied objects in algebraic geometry and indispensable tools for much research on other topics in algebraic geometry and number theory. An abelian variety can be defined by equations having coefficients in any field; the variety is then said to be defined over that field. Historically the first abelian varieties to be studied were those defined over the field of complex numbers. Such abelian varieties turn out to be exactly those complex tori that can be embedded into a complex projective space. Abelian varieties defined over algebraic number fields are a special case, which is important also from the viewpoint of number theory. Localization techniques lead naturally from abelian varieties defined over number fields to ones defined over finite fields and various local fields. Since a number field is the fraction field of a Dedekind domain, for any nonzero prime of the Dedekind domain, there is a map from the Dedekind domain to the quotient of the Dedekind domain by the prime, which is a finite field for all finite primes. This induces a map from the fraction field to any such finite field. Given a curve with equation defined over the number field, we can apply this map to the coefficients to get a curve defined over some finite field, where the choices of finite field correspond to the finite primes of the number field.

Analytic theory
Riemann conditions
The Jacobian of an algebraic curve
Every algebraic curve C of genus g ≥ 1 is associated with an abelian variety J of dimension g, by means of an analytic map of C into J. As a torus, J carries a commutative group structure, and the image of C generates J as a group.
More accurately, J is covered by C^g:[1] any point in J comes from a g-tuple of points in C. The study of differential forms on C, which give rise to the abelian integrals with which the theory started, can be derived from the simpler, translation-invariant theory of differentials on J. The abelian variety J is called the Jacobian variety of C, for any non-singular curve C over the complex numbers. From the point of view of birational geometry, its function field is the fixed field of the symmetric group on g letters acting on the function field of C^g.

Abelian functions
One important structure theorem of abelian varieties is Matsusaka's theorem. It states that over an algebraically closed field every abelian variety A is the quotient of the Jacobian of some curve; that is, there is some surjection of abelian varieties J → A where J is a Jacobian. This theorem remains true if the ground field is infinite.[2]

Structure of the group of points
Polarisation and dual abelian variety
Dual abelian variety
Polarisations
A polarisation of an abelian variety is an isogeny from an abelian variety to its dual that is symmetric with respect to double-duality for abelian varieties and for which the pullback of the Poincaré bundle along the associated graph morphism is ample (so it is analogous to a positive-definite quadratic form). Polarised abelian varieties have finite automorphism groups. A principal polarisation is a polarisation that is an isomorphism. Jacobians of curves are naturally equipped with a principal polarisation as soon as one picks an arbitrary rational base point on the curve, and the curve can be reconstructed from its polarised Jacobian when the genus is > 1. Not all principally polarised abelian varieties are Jacobians of curves; see the Schottky problem.
A polarisation induces a Rosati involution on the endomorphism ring End(A) ⊗ ℚ.

Polarisations over the complex numbers
Abelian scheme
For an abelian scheme A / S, the group of n-torsion points forms a finite flat group scheme. The union of the p^n-torsion points, for all n, forms a p-divisible group. Deformations of abelian schemes are, according to the Serre–Tate theorem, governed by the deformation properties of the associated p-divisible groups.
For example, let A, B ∈ ℤ be such that x³ + Ax + B has no repeated complex roots. Then the discriminant Δ = −16(4A³ + 27B²) is nonzero. Let R = ℤ[1/Δ], so that Spec R is an open subscheme of Spec ℤ. Then Proj R[x, y, z]/(y²z − x³ − Axz² − Bz³) is an abelian scheme over Spec R. It can be extended to a Néron model over Spec ℤ, which is a smooth group scheme over Spec ℤ, but the Néron model is not proper and hence is not an abelian scheme over Spec ℤ.

Non-existence
V. A. Abrashkin[3] and Jean-Marc Fontaine[4] independently proved that there are no nonzero abelian varieties over ℚ with good reduction at all primes. Equivalently, there are no nonzero abelian schemes over Spec ℤ. The proof involves showing that the coordinates of the p^n-torsion points generate number fields with very little ramification and hence of small discriminant, while, on the other hand, there are lower bounds on discriminants of number fields.[5]

Semiabelian variety
^ Bruin, N. "N-Covers of Hyperelliptic Curves" (PDF). Math Department, Oxford University. Retrieved 14 January 2015.
^ Milne, J. S., "Jacobian varieties", in Arithmetic Geometry, eds. Cornell and Silverman, Springer-Verlag, 1986.
^ Abrashkin, V. A., "Group schemes of period p over the ring of Witt vectors", Dokl. Akad. Nauk SSSR, 283:6 (1985), 1289–1294. Retrieved 2020-08-23.
^ Fontaine, Jean-Marc, Il n'y a pas de variété abélienne sur Z [There is no abelian variety over Z]. OCLC 946402079.
^ "There is no Abelian scheme over Z" (PDF). Archived (PDF) from the original on 23 Aug 2020.
* Birkenhake, Christina; Lange, H. (1992), Complex Abelian Varieties, Berlin, New York: Springer-Verlag, ISBN 978-0-387-54747-3. A comprehensive treatment of the complex theory, with an overview of the history of the subject.
* Dolgachev, I. V. (2001) [1994], "Abelian scheme", Encyclopedia of Mathematics, EMS Press.
* Mumford, David (2008) [1970], Abelian Varieties, Tata Institute of Fundamental Research Studies in Mathematics, vol. 5, Providence, R.I.: American Mathematical Society, ISBN 978-81-85931-86-9, MR 0282985, OCLC 138290.
* Venkov, B. B.; Parshin, A. N. (2001) [1994], "Abelian variety", Encyclopedia of Mathematics, EMS Press.
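The elliptic-curve example in the Abelian scheme section above can be checked numerically: compute Δ = −16(4A³ + 27B²), factor it to find the primes where the reduction is bad, and count points after reducing modulo a good prime. A brute-force sketch with the arbitrary choice A = B = 1:

```python
# The scheme y^2 z = x^3 + A x z^2 + B z^3 is abelian over Spec Z[1/Delta],
# i.e. away from the primes dividing Delta = -16(4A^3 + 27B^2).
# A = B = 1 and the prime p = 5 are arbitrary illustrative choices.

def discriminant(A, B):
    return -16 * (4 * A**3 + 27 * B**2)

def prime_factors(n):
    """Set of prime divisors of |n|, by trial division."""
    n, p, out = abs(n), 2, set()
    while p * p <= n:
        while n % p == 0:
            out.add(p)
            n //= p
        p += 1
    if n > 1:
        out.add(n)
    return out

def count_points(A, B, p):
    """Projective points on y^2 = x^3 + A x + B over F_p (brute force)."""
    sq = {}
    for y in range(p):                      # tabulate the squares mod p
        sq.setdefault(y * y % p, []).append(y)
    return 1 + sum(len(sq.get((x**3 + A * x + B) % p, []))
                   for x in range(p))       # 1 accounts for the point at infinity

d = discriminant(1, 1)
print(d, prime_factors(d))    # -496 {2, 31}: good reduction away from 2 and 31
print(count_points(1, 1, 5))  # 9 points over F_5, a prime of good reduction
```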
Introduction to Chemical Engineering Processes/How to Analyze a Recycle System

Differences between Recycle and non-Recycle systems
The biggest difference between recycle and non-recycle systems is that the extra splitting and recombination points must be taken into account, and the properties of the streams change from before to after these points. To see what is meant by this, consider any arbitrary process in which a change occurs between two streams:
Feed -> Process -> Outlet
If we wish to implement a recycle system on this process, we often will do something like this:
Feed -> (Recombination) -> Process -> (Split) -> Outlet, with the recycle stream returning from the split to the recombination point.
The "extra" stream between the splitting and recombination point must be taken into account, but the way to do this is not to do a mass balance on the process, since the recycle stream itself does not go into the process; only the recombined stream does. Instead, we take it into account by performing a mass balance on the recombination point and one on the splitting point.

Assumptions at the Splitting Point
The recombination point is relatively unpredictable because the composition of the stream leaving depends on both the composition of the feed and the composition of the recycle stream. However, the splitting point is special because when a stream is split, it generally is split into two streams with equal composition. This is a piece of information that counts towards "additional information" when performing a degree of freedom analysis. As an additional specification, it is common to know the ratio of splitting, i.e.
how much of the exit stream from the process will be put into the outlet and how much will be recycled. This also counts as "additional information".

Assumptions at the Recombination Point
The recombination point is generally not specified like the splitting point is, and the recycle stream and feed stream are very likely to have different compositions. The important thing to remember is that you can generally use the properties of the stream coming from the splitting point for the stream entering the recombination point, unless it goes through another process in between (which is entirely possible).

Degree of Freedom Analysis of Recycle Systems
Degree of freedom analyses are similar for recycle systems to those for other systems, but with a couple of important points that the engineer must keep in mind:
* The recombination point and the splitting point must be counted in the degree of freedom analysis as "processes", since they can have unknowns that aren't counted anywhere else.
* When doing the degree of freedom analysis on the splitting point, you should not label the concentrations as the same but leave them as separate unknowns until after you complete the DOF analysis, in order to avoid confusion, since labeling the concentrations as identical "uses up" one of your pieces of information and then you can't count it.
As an example, let's do a degree of freedom analysis on the hypothetical system above, assuming that all streams have two components:
* Recombination point: 6 variables (3 concentrations and 3 total flow rates) − 2 mass balances = 4 DOF
* Process: assuming it's not a reactor and there are only 2 streams, there are 4 variables and 2 mass balances = 2 DOF
* Splitting point: 6 variables − 2 mass balances − 1 (knowing the compositions are the same) − 1 (splitting ratio) = 2 DOF
So the total is 4 + 2 + 2 − 6 (in-between variables) = 2 DOF. Therefore, if the feed is specified, then this entire system can be solved!
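The tally above can be written out explicitly; a sketch of the same arithmetic:

```python
# Degree-of-freedom tally for the hypothetical two-component recycle system.
# Each unit contributes (variables - balances - extra information); the
# stream variables shared between units are then subtracted once so they
# are not double-counted.

units = {
    "recombination point": 6 - 2,          # 4 DOF
    "process":             4 - 2,          # 2 DOF
    "splitting point":     6 - 2 - 1 - 1,  # 2 DOF
}
in_between_variables = 6   # stream variables counted by two units each

total_dof = sum(units.values()) - in_between_variables
print(total_dof)   # 2: specifying the feed (flow rate + composition) closes it
```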
Of course the results will be different if the process has more than 2 streams, if the splitting is 3-way, if there are more than two components, and so on.

Suggested Solving Method
The solving method for recycle systems is similar to those of other systems we have seen so far, but as you've likely noticed, these problems are increasingly complicated. Therefore, making a plan becomes of the utmost importance. The way to make a plan is generally as follows:
1. Draw a completely labeled flowchart for the process.
2. Do a DOF analysis to make sure the problem is solvable.
3. If it is solvable, the best place to start is often with a set of overall system balances, sometimes in combination with balances on processes on the border. The reason for this is that the overall system balance cuts out the recycle stream entirely, since the recycle stream does not enter or leave the system as a whole but merely travels between two processes, like any other intermediate stream. Often, the composition of the recycle stream is unknown, so this simplifies the calculations a good deal.
4. Find a set of independent equations that will yield values for a certain set of unknowns (this is often most difficult the first time; sometimes one of the unit operations in the system will have 0 DOF, so start with that one; otherwise it'll take some searching).
5. Considering those variables as known, do a new DOF balance until something has 0 DOF. Calculate the variables on that process.
6. Repeat until all processes are specified completely.

Example problem: Improving a Separation Process
This example helps to show that this is true and also shows some limitations of the use of recycle on real processes. Consider the following proposed system without recycle.
A mixture of 50% A and 50% B enters a separation process that is capable of splitting the two components into two streams: one containing 60% of the entering A and half of the B, and one with 40% of the A and half of the B (all by mass). If 100 kg/hr of feed containing 50% A by mass enters the separator, what are the concentrations of A in the exit streams?

A degree of freedom analysis on this process: 4 unknowns ($\dot m_2$, $x_{A2}$, $\dot m_3$, and $x_{A3}$), 2 mass balances, and 2 pieces of information (knowing that 40% of the A and half of the B leave in stream 3 is not independent of knowing that 60% of the A and half of the B leave in stream 2) = 0 DOF.

The methods of previous chapters can be used to determine that $\dot m_2 = 55\ \text{kg/hr}$, $x_{A2} = 0.545$, $\dot m_3 = 45\ \text{kg/hr}$, and $x_{A3} = 0.444$. This is good practice for the interested reader.

If we want a greater separation than this, one thing we can do is use a recycle system, in which a portion of one of the streams is siphoned off and remixed with the feed stream so that it is re-separated. The choice of which stream to siphon off depends on the desired properties of the exit streams. The effects of each choice will now be assessed.

Implementing Recycle on the Separation Process

Suppose that in the previous example a recycle system is set up in which half of stream 3 is siphoned off and recombined with the feed (which is still the same composition as before). Recalculate the concentrations of A in streams 2 and 3. Is the separation more or less effective than that without recycle? Can you see a major limitation of this method? How might it be overcome?

This is a rather involved problem and must be taken one step at a time. The analyses of the cases for recycling each stream are similar, so the first case will be considered in detail and the second is left for the reader.
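Before adding recycle, the base-case separator numbers quoted above can be checked with a few lines of arithmetic (a sketch; the variable names mirror the stream numbering in the text):

```python
feed = 100.0          # kg/hr of fresh feed
xA_feed = 0.5         # mass fraction of A in the feed
A_in, B_in = feed * xA_feed, feed * (1 - xA_feed)

# Separator split fractions (by mass): 60% of A and half of B go to stream 2
A2, B2 = 0.6 * A_in, 0.5 * B_in
A3, B3 = A_in - A2, B_in - B2       # the rest leaves in stream 3

m2, m3 = A2 + B2, A3 + B3
xA2, xA3 = A2 / m2, A3 / m3
print(round(m2, 2), round(xA2, 3))  # 55.0 0.545
print(round(m3, 2), round(xA3, 3))  # 45.0 0.444
```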
You must be careful when drawing the flowchart, because the separator separates 60% of all the A that enters it into stream 2, not 60% of the fresh feed stream. (Note: there is a mistake in the flow scheme; $\dot m_6$ and $x_{A6}$ before the process should actually be $\dot m_4$ and $x_{A4}$.)

Step 2: Do a Degree of Freedom Analysis

Recall that you must include the recombination and splitting points in your analysis.

Recombination point: 4 unknowns - 2 mass balances = 2 degrees of freedom

Separator: 6 unknowns (nothing is specified) - 2 independent pieces of information - 2 mass balances = 2 DOF

Splitting point: 6 unknowns (again, nothing is specified) - 2 mass balances - 1 assumption that the concentration remains constant - 1 splitting ratio = 2 DOF

Total = 2 + 2 + 2 - 6 = 0. Thus the problem is completely specified.

Step 3: Devise a Plan and Carry It Out

First, look at the entire system, since none of the individual processes has 0 DOF.

Overall mass balance on A: $0.5 \cdot 100\ \text{kg/h} = \dot m_2 x_{A2} + \dot m_6 x_{A6}$

Overall mass balance on B: $50\ \text{kg/h} = \dot m_2 (1 - x_{A2}) + \dot m_6 (1 - x_{A6})$

We have 4 unknowns and 2 equations at this point, and this is where the problem solving requires some ingenuity. First, let's see what happens when we combine this information with the splitting ratio and the constant concentration at the splitter:

Splitting ratio: $\dot m_6 = \dfrac{\dot m_3}{2}$

Constant concentration: $x_{A6} = x_{A3}$

Plugging these into the overall balances we have:

$50 = \dot m_2 x_{A2} + \dfrac{\dot m_3}{2} x_{A3}$

$50 = \dot m_2 (1 - x_{A2}) + \dfrac{\dot m_3}{2} (1 - x_{A3})$

We still have more unknowns than equations, but we know how to relate everything in these two equations to the inlet concentrations of the separator.
This is due to the conversions we are given:

60% of the entering A goes into stream 2 means $\dot m_2 x_{A2} = 0.6\, x_{A4} \dot m_4$, and therefore $\dot m_3 x_{A3} = 0.4\, x_{A4} \dot m_4$.

50% of the entering B goes into stream 2 means $\dot m_2 (1 - x_{A2}) = 0.5 (1 - x_{A4}) \dot m_4$, and therefore $\dot m_3 (1 - x_{A3}) = 0.5 (1 - x_{A4}) \dot m_4$.

Spend some time figuring out where these equations come from; it is all the definition of mass fraction and translating words into algebraic equations.

Plugging all of these into the existing balances, we finally obtain 2 equations in 2 unknowns:

On A: $50 = 0.6\, \dot m_4 x_{A4} + \dfrac{0.4}{2} \dot m_4 x_{A4}$

On B: $50 = 0.5\, \dot m_4 (1 - x_{A4}) + \dfrac{0.5}{2} \dot m_4 (1 - x_{A4})$

Solving gives $\dot m_4 = 129.17\ \text{kg/h}$ and $x_{A4} = 0.484$.

Notice that two things happened as expected: the concentration of the stream entering the separator went down (because the feed is mixing with a more dilute recycle stream), and the total flow rate went up (again due to the contribution from the recycle stream). This is always a good rough check on whether your answer makes sense; for example, if the flow rate were lower than the feed rate, you would know something went wrong.

Once these values are known, you can choose to do a balance either on the separator or on the recombination point, since both now have 0 degrees of freedom. We choose the separator because that leads directly to what we're looking for.
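The final pair of balances collapses to one linear relation for the A entering the separator and one for the B, so they can be solved directly (a numerical check of the result quoted above):

```python
# The two combined balances reduce to linear equations in the products
# m4*xA4 (A entering the separator) and m4*(1 - xA4) (B entering it):
#   50 = (0.6 + 0.4/2) * m4*xA4      -> A balance
#   50 = (0.5 + 0.5/2) * m4*(1-xA4)  -> B balance
A4 = 50 / (0.6 + 0.4 / 2)   # kg/h of A entering the separator
B4 = 50 / (0.5 + 0.5 / 2)   # kg/h of B entering the separator

m4 = A4 + B4
xA4 = A4 / m4
print(round(m4, 2), round(xA4, 3))  # 129.17 0.484
```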
The mass balances on the separator can be solved using the same method as without a recycle system; the results are:

$\dot m_2 = 70.83\ \text{kg/hr}$, $x_{A2} = 0.530$, $\dot m_3 = 58.33\ \text{kg/hr}$, $x_{A3} = 0.429$

Now, since we know the flow rate of stream 3 and the splitting ratio, we can find the rate of stream 6:

$\dot m_6 = \dfrac{\dot m_3}{2} = 29.165\ \text{kg/hr}$, $x_{A6} = x_{A3} = 0.429$

You should check that $\dot m_2$ and $\dot m_6$ add up to the total feed rate; otherwise you have made a mistake.

Now we can assess how effective the recycle is. The concentration of A in the dilute stream was reduced, but only by a small margin of 0.015 mass fraction, and this extra reduction came at a cost: the flow rate of the dilute stream was significantly reduced, from 45 to 29.165 kg/hr! This limitation is important to keep in mind and also explains why we bother trying to make very efficient separation processes.
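As a closing numerical check, the whole recycle case fits in a few lines; the separator inlet comes from the overall balances, and the last assertion verifies that streams 2 and 6 close the overall mass balance (results printed to two or three decimal places):

```python
A4 = 50 / (0.6 + 0.4 / 2)   # kg/h of A entering the separator (overall A balance)
B4 = 50 / (0.5 + 0.5 / 2)   # kg/h of B entering the separator (overall B balance)

A2, B2 = 0.6 * A4, 0.5 * B4          # 60% of A and half of B go to stream 2
A3, B3 = A4 - A2, B4 - B2            # the rest leaves in stream 3
m2, xA2 = A2 + B2, A2 / (A2 + B2)
m3, xA3 = A3 + B3, A3 / (A3 + B3)
m6 = m3 / 2                          # half of stream 3 is recycled

assert abs(m2 + m6 - 100) < 1e-9     # overall mass closes with the 100 kg/hr feed
print(round(m2, 2), round(xA2, 3))   # 70.83 0.529
print(round(m3, 2), round(xA3, 3))   # 58.33 0.429
```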
Development of Polymeric Nerve Guidance Conduits That Contain Anisotropic Cues Including Aligned Microfibers and Gradients of Adsorbed Laminin-1 | J. Med. Devices | ASME Digital Collection

Jared M. Cregg (Houghton, MI, USA); Han Bing Wang; Michael E. Mullins, Ph.D.; Ryan J. Gilbert, Ph.D.

Cregg, J. M., Wang, H. B., Mullins, M. E., and Gilbert, R. J. (June 12, 2008). "Development of Polymeric Nerve Guidance Conduits That Contain Anisotropic Cues Including Aligned Microfibers and Gradients of Adsorbed Laminin-1." ASME. J. Med. Devices. June 2008; 2(2): 027524. https://doi.org/10.1115/1.2934348

Structures that direct neurite extension are important for regeneration following spinal cord injury and peripheral nerve injury. Within the spinal cord, neurons encounter a glial scar environment that impedes regeneration. In the peripheral nervous system, endogenous regeneration cannot occur across nerve gaps greater than 2 mm. Current repair strategies use guidance conduits to channel axonal growth towards distal targets. While showing promise, conduit walls do not provide a suitable environment for neuronal attachment or extension, and axonal growth within conduits remains tortuous. Hence, there is a need for the development of three-dimensional (3D) structures that use contact guidance—rather than confinement—as a means of guided regeneration. Our laboratory has developed aligned, electrospun fiber matrices that have been shown to direct neurite extension in vitro. In addition, a gradient of the glycoprotein laminin-1 has been adsorbed onto aligned microfiber matrices to stimulate directional growth. These matrices were then manipulated into 3D conduit structures. Novel polymeric conduits that utilize contact guidance and contain gradients of molecules that stimulate directional growth have the potential to foster fast, directed regeneration into and through conduit structures.
Keywords: biomedical materials, molecular biophysics, neurophysiology, proteins, tissue engineering

Topics: Anisotropy, Polyester fibers, Spinal cord, Wounds, Biomaterials, Fibers, Maintenance, Molecular biophysics, Nervous system, Neurophysiology, Proteins, Tissue engineering
System of Particles and Rotational Motion, Popular Questions: Karnataka Class 11-science Physics, Physics PUC I (Part 1) - Meritnation

A car starts from rest. After 10 s its wheels rotate 360 times in 1 min. If the radius of a wheel is 50 cm, find (i) the angular acceleration and (ii) the angular velocity after 30 s.

Derive an expression for the moment of inertia of a rectangular plate of sides a and b, where a is the longer side, about an axis parallel to side b and passing through the centre. What will be the moment of inertia about an axis perpendicular to the plane of the plate and passing through its centre?

Find the magnetic dipole moment. (Options as given: $\frac{mg}{2}$, $\frac{mg}{4}$)

(1) $\frac{v_0}{R}$  (2) $\frac{\sqrt{2}\,v_0}{R}$  (3) $\frac{2v_0}{R}$  (4) $\frac{v_0}{\sqrt{2}\,R}$

A solid sphere of mass m and radius r starts from rest and rolls down an inclined plane as shown. (a) Write an expression for the moment of inertia of the sphere about the axis passing through its centre. (b) Why is the moment of inertia also called rotational inertia? (c) Find the velocity when it reaches the ground.

v × mv = 0 ??? Why?

Options for a moment-of-inertia question (length $\ell$): $M\left(\frac{R^2+r^2}{2}\right)+\frac{M\ell^2}{12}$, $M\left(\frac{R^2+r^2}{2}\right)+\frac{M\ell^2}{3}$, $M\left(\frac{\ell^2+3R^2-3r^2}{12}\right)$, $M\left(\frac{R^2+r^2}{4}\right)+\frac{M\ell^2}{12}$

A rod of mass m and length l lies along the y-axis with one end at the origin. Suddenly an impulse is given to the rod such that, immediately afterwards, the end at the origin has velocity $v_0\hat i$ and the other end has velocity $2v_0\hat i$. What is the magnitude of the angular momentum of the rod about the origin at this instant? (a) $\frac{2}{3}mv_0l$ (b) $\frac{3}{2}mv_0l$ (c) $\frac{5}{6}mv_0l$ (d) $\frac{7}{8}mv_0l$

Four point masses P, Q, R and S with respective masses 1 kg, 1 kg, 2 kg and 2 kg form the corners of a square of side a. The centre of mass of the system will be farthest from which mass?

What is the moment of inertia of a combination of 2 discs of the same mass M and radius R kept in contact, about the tangent passing through the point of contact and lying in the plane of the discs, as shown?

Q. A uniform rod AB of mass M and length $\sqrt{2}\,R$ is moving in a vertical plane inside a hollow sphere of radius R. The sphere rolls on a fixed horizontal surface without slipping, with the velocity of its centre equal to 2v. When the end B is at the lowest position, its speed is found to be v, as shown in the figure. If the kinetic energy of the rod at this instant is $\frac{4}{K}Mv^2$, find K.

Q. A uniform rod of mass M and length a lies on a smooth horizontal plane. A particle of mass m moving at speed v perpendicular to the length of the rod strikes it at a distance a/4 from the centre and stops after the collision. Find (a) the velocity of the centre of the rod and (b) the angular velocity of the rod about its centre just after the collision.

What is the meaning of rotational analogue?

Q no. 2... Answer is A

Q.2. An infinite number of uniform discs are pivoted at their respective centres and arranged so that the centres of all the discs are at the same horizontal level. The radius of the biggest disc is R, and each successive disc has a radius 1/3 that of the disc to its left. A rod having the same mass as the largest disc is placed over the discs, touching all of them. There is no friction between the discs, but friction between the rod and the discs is sufficient that the rod does not slip. The acceleration of the rod is: $\frac{8g}{25}$, $\frac{3g}{25}$, $\frac{g}{25}$, $\frac{g}{50}$

Two identical blocks, each of mass 1 kg, are joined together with a compressed spring. At any time after release they are moving with unequal speeds in opposite directions, as shown in the figure. Which of the following is true: whatever the speeds of the blocks, the centre of mass remains stationary; the centre of mass of the system moves with a velocity of 2 m/s; the centre of mass of the system moves with a velocity of 1 m/s?

A force $-F\hat k$ acts at the origin of the coordinate system. The torque about the point (1, -1) is: (a) $-F(\hat i+\hat j)$ (b) $F(\hat i+\hat j)$ (c) $-F(\hat i-\hat j)$ (d) $F(\hat i-\hat j)$
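The last torque question is a one-line cross product: the lever arm from (1, -1, 0) to the origin is (-1, 1, 0), and crossing it with the force gives option (a). The sketch below checks this with an arbitrary magnitude standing in for the symbolic F:

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

F = 2.0                              # any positive magnitude stands in for the symbolic F
force = (0.0, 0.0, -F)               # -F k-hat, applied at the origin
r = (0.0 - 1.0, 0.0 - (-1.0), 0.0)   # lever arm from (1, -1, 0) to the origin
tau = cross(r, force)
print(tau)   # (-2.0, -2.0, 0.0)  i.e. -F(i + j), option (a)
```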
Lemma 10.156.3 (05WR)—The Stacks project

Section 10.156: Henselization and quasi-finite ring maps

Lemma 10.156.3. Let $R \to S$ be a ring map. Let $\mathfrak q$ be a prime of $S$ lying over $\mathfrak p$ in $R$. Assume $R \to S$ is quasi-finite at $\mathfrak q$. Let $\kappa _2^{sep}/\kappa (\mathfrak q)$ be a separable algebraic closure and denote $\kappa _1^{sep} \subset \kappa _2^{sep}$ the subfield of elements separable algebraic over $\kappa (\mathfrak p)$ (Fields, Lemma 9.14.6). The commutative diagram \[ \xymatrix{ R_{\mathfrak p}^{sh} \ar[r] & S_{\mathfrak q}^{sh} \\ R_{\mathfrak p} \ar[u] \ar[r] & S_{\mathfrak q} \ar[u] } \] of Lemma 10.155.10 identifies $S_{\mathfrak q}^{sh}$ with the localization of $R_{\mathfrak p}^{sh} \otimes _{R_{\mathfrak p}} S_{\mathfrak q}$ at the prime ideal which is the kernel of the map \[ R_{\mathfrak p}^{sh} \otimes _{R_{\mathfrak p}} S_{\mathfrak q} \longrightarrow \kappa _1^{sep} \otimes _{\kappa (\mathfrak p)} \kappa (\mathfrak q) \longrightarrow \kappa _2^{sep} \] Moreover, the ring map $R_{\mathfrak p}^{sh} \to S_{\mathfrak q}^{sh}$ is a finite local homomorphism of local rings whose residue field extension is the extension $\kappa _2^{sep}/\kappa _1^{sep}$, which is both finite and purely inseparable.

Proof. Since $R \to S$ is quasi-finite at $\mathfrak q$ we see that the extension $\kappa (\mathfrak q)/\kappa (\mathfrak p)$ is finite, see Definition 10.122.3 and Lemma 10.122.2. Hence $\kappa _1^{sep}$ is a separable algebraic closure of $\kappa (\mathfrak p)$ (small detail omitted). In particular Lemma 10.155.10 does really apply. Next, the compositum of $\kappa (\mathfrak q)$ and $\kappa _1^{sep}$ in $\kappa _2^{sep}$ is separably algebraically closed and hence equal to $\kappa _2^{sep}$. We conclude that $\kappa _2^{sep}/\kappa _1^{sep}$ is finite. By construction the extension $\kappa _2^{sep}/\kappa _1^{sep}$ is purely inseparable.
The ring map $R_{\mathfrak p}^{sh} \to S_{\mathfrak q}^{sh}$ is indeed local and induces the residue field extension $\kappa _2^{sep}/\kappa _1^{sep}$, which is indeed finite purely inseparable. Note that $R_{\mathfrak p}^{sh} \otimes _ R S$ is quasi-finite over $R_{\mathfrak p}^{sh}$ at the prime ideal $\mathfrak q'$ given in the statement of the lemma, see Lemma 10.122.6. Hence the localization $S'$ of $R_{\mathfrak p}^{sh} \otimes _{R_{\mathfrak p}} S_{\mathfrak q}$ at $\mathfrak q'$ is henselian and finite over $R_{\mathfrak p}^{sh}$, see Lemma 10.153.4. Note that the residue field of $S'$ is $\kappa _2^{sep}$ as the map $\kappa _1^{sep} \otimes _{\kappa (\mathfrak p)} \kappa (\mathfrak q) \to \kappa _2^{sep}$ is surjective by the discussion in the previous paragraph. Furthermore, as a localization $S'$ is a filtered colimit of étale $R_{\mathfrak p}^{sh} \otimes _{R_{\mathfrak p}} S_{\mathfrak q}$-algebras. By Lemma 10.155.12 we see that $S_{\mathfrak q}^{sh}$ is the strict henselization of $R_{\mathfrak p}^{sh} \otimes _{R_{\mathfrak p}} S_{\mathfrak q}$ at $\mathfrak q'$. Thus $S' = S_{\mathfrak q}^{sh}$ by the uniqueness result of Lemma 10.154.7. $\square$
Double wishbone independent suspension - Simulink - MathWorks Switzerland

Independent Suspension - Double Wishbone

The Independent Suspension - Double Wishbone block implements an independent double wishbone suspension for multiple axles with multiple tracks per axle. The block uses a linear spring and damper to model the vertical dynamic effects of the suspension system. Using the relative positions and velocities of the vehicle and wheel carrier, the block calculates the vertical suspension forces on the wheel and vehicle. The block uses a linear equation that relates the vertical damping and compliance to the suspension height, the suspension height rate of change, and the absolute value of the steering angle. The block implements this equation:

$F_{wz_{a,t}} = F_{z0_a} + k_{z_a}\left(z_{v_{a,t}} - z_{w_{a,t}} + m_{hsteer_a}\,|\delta_{steer_{a,t}}|\right) + c\left(\dot z_{v_{a,t}} - \dot z_{w_{a,t}}\right) + F_{zhstop_{a,t}} + F_{zaswy_{a,t}}$

The damping coefficient, c, depends on the Enable active damping parameter setting.
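A minimal sketch of the vertical-force equation above (passive damping case); the function name and all numeric values are illustrative assumptions, not part of the block's interface:

```python
def suspension_force_z(z_v, z_w, zdot_v, zdot_w, delta_steer,
                       F_z0, k_z, m_hsteer, c, F_hstop=0.0, F_aswy=0.0):
    """Vertical force on the wheel: preload + spring (with steer-height offset)
    + damper + hardstop + anti-sway terms, per the equation above."""
    compression = z_v - z_w + m_hsteer * abs(delta_steer)
    return F_z0 + k_z * compression + c * (zdot_v - zdot_w) + F_hstop + F_aswy

# Example: static preload only (no relative motion, no steering)
F = suspension_force_z(0.0, 0.0, 0.0, 0.0, 0.0,
                       F_z0=3500.0, k_z=45000.0, m_hsteer=0.0, c=3000.0)
print(F)   # 3500.0
```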
Enable active damping setting:

Off — constant damping, $c = c_{z_a}$.

On — c comes from a lookup table that is a function of the active damper duty cycle and the actuator velocity: $c = f\left(duty,\ \dot z_{v_{a,t}} - \dot z_{w_{a,t}}\right)$

The block transfers forces and moments from the wheel to the vehicle as:

$F_{vx_{a,t}} = F_{wx_{a,t}}$, $F_{vy_{a,t}} = F_{wy_{a,t}}$, $F_{vz_{a,t}} = -F_{wz_{a,t}}$

$M_{vx_{a,t}} = M_{wx_{a,t}} + F_{wy_{a,t}}\left(Re_{wy_{a,t}} + H_{a,t}\right)$, $M_{vy_{a,t}} = M_{wy_{a,t}} + F_{wx_{a,t}}\left(Re_{wx_{a,t}} + H_{a,t}\right)$, $M_{vz_{a,t}} = M_{wz_{a,t}}$

In the longitudinal and lateral directions, the wheel carrier follows the vehicle:

$x_{w_{a,t}} = x_{v_{a,t}}$, $y_{w_{a,t}} = y_{v_{a,t}}$, $\dot x_{w_{a,t}} = \dot x_{v_{a,t}}$, $\dot y_{w_{a,t}} = \dot y_{v_{a,t}}$

Anti-sway geometry and forces:

$\theta_{0a} = \tan^{-1}\left(\dfrac{z_0}{r}\right)$, $\Delta\theta_{a,t} = \tan^{-1}\left(\dfrac{r\tan\theta_{0a} - z_{w_{a,t}} + z_{v_{a,t}}}{r}\right)$

$\theta_a = -\tan^{-1}\left(\dfrac{r\tan\theta_{0a} - z_{w_{a,1}} + z_{v_{a,1}}}{r}\right) - \tan^{-1}\left(\dfrac{r\tan\theta_{0a} - z_{w_{a,2}} + z_{v_{a,2}}}{r}\right)$, $\tau_a = k_a \theta_a$

$F_{zaswy_{a,t}} = \left(\dfrac{\tau_a}{r}\right)\cos\left(\theta_{0a} - \tan^{-1}\left(\dfrac{r\tan\theta_{0a} - z_{w_{a,t}} + z_{v_{a,t}}}{r}\right)\right)$ for $t = 1, 2$

Camber, caster, and toe angles vary with suspension height and steering angle:

$\xi_{a,t} = \xi_{0a} + m_{hcamber_a}\left(z_{w_{a,t}} - z_{v_{a,t}} - m_{hsteer_a}|\delta_{steer_{a,t}}|\right) + m_{cambersteer_a}|\delta_{steer_{a,t}}|$

$\eta_{a,t} = \eta_{0a} + m_{hcaster_a}\left(z_{w_{a,t}} - z_{v_{a,t}} - m_{hsteer_a}|\delta_{steer_{a,t}}|\right) + m_{castersteer_a}|\delta_{steer_{a,t}}|$

$\zeta_{a,t} = \zeta_{0a} + m_{htoe_a}\left(z_{w_{a,t}} - z_{v_{a,t}} - m_{hsteer_a}|\delta_{steer_{a,t}}|\right) + m_{toesteer_a}|\delta_{steer_{a,t}}|$

Steering angle applied to the wheel:

$\delta_{whlsteer_{a,t}} = \delta_{steer_{a,t}} + m_{htoe_a}\left(z_{w_{a,t}} - z_{v_{a,t}} - m_{hsteer_a}|\delta_{steer_{a,t}}|\right) + m_{toesteer_a}|\delta_{steer_{a,t}}|$

Suspension power and energy are evaluated from the damping-force lookup at the suspension velocity and steering angle:

$P_{susp_{a,t}} = F_{wzlookup_a}\left(\dot z_{v_{a,t}} - \dot z_{w_{a,t}},\ \dot z_{v_{a,t}} - \dot z_{w_{a,t}},\ \delta_{steer_{a,t}}\right)$, and $E_{susp_{a,t}}$ is computed from the same lookup.

Suspension height and wheel-carrier vertical position:

$H_{a,t} = -\left(z_{v_{a,t}} - z_{w_{a,t}} + \dfrac{F_{z0_a}}{k_{z_a}} + m_{hsteer_a}|\delta_{steer_{a,t}}|\right)$, $z_{wtr_{a,t}} = Re_{w_{a,t}} + H_{a,t}$

Port signals are arrays indexed by axle a and track t (shown here for two axles with two tracks each):

WhlPz $= z_w = [z_{w_{1,1}}\ z_{w_{1,2}}\ z_{w_{2,1}}\ z_{w_{2,2}}]$; WhlRe $= Re_w$, WhlVz $= \dot z_w$, WhlFx $= F_{wx}$, and WhlFy $= F_{wy}$ are 1-by-4 arrays laid out the same way.

WhlM $= M_w$, VehF $= F_v$, VehM $= M_v$, and WhlF $= F_w$ are 3-by-4 arrays whose rows hold the x, y, and z components for each axle/track.

VehP and VehV stack the vehicle positions $(x_v, y_v, z_v)$ and velocities $(\dot x_v, \dot y_v, \dot z_v)$; WhlP and WhlV stack the wheel-carrier positions $(x_w, y_w, z_{wtr})$ and velocities; StrgAng $= \delta_{steer} = [\delta_{steer_{1,1}}\ \delta_{steer_{1,2}}]$; WhlAng stacks the camber, caster, and toe angles $(\xi, \eta, \zeta)$.

Enable active damping — Include damping

Selecting this parameter creates:

Damping coefficient map, f_act_susp_cz

Damping actuator duty cycle breakpoints, f_act_susp_duty_bpt

Damping actuator velocity breakpoints, f_act_susp_zdot_bpt

Damping coefficient map, f_act_susp_cz — Lookup table

[10000 10000;10000 10000] (default) | M-by-N array
Damping coefficient table as a function of active damper duty cycle and actuator compression velocity, in N·s/m. Each value specifies the damping for a specific combination of actuator duty cycle and velocity. The array dimensions must match the duty cycle, M, and actuator velocity, N, breakpoint vector dimensions.

Damping actuator duty cycle breakpoints, f_act_susp_duty_bpt — Duty cycle breakpoints

[0 1] (default) | 1-by-M vector

Damping actuator duty cycle breakpoints, dimensionless.

Damping actuator velocity breakpoints, f_act_susp_zdot_bpt — Velocity breakpoints

[-1 1] (default) | 1-by-N vector

Damping actuator velocity breakpoints, in m/s.

See Also: Independent Suspension - MacPherson | Independent Suspension - Mapped | Independent Suspension - K and C
Form regulator given state-feedback and estimator gains - MATLAB reg - MathWorks España

Form regulator given state-feedback and estimator gains

rsys = reg(sys,K,L)
rsys = reg(sys,K,L,sensors,known,controls)

rsys = reg(sys,K,L) forms a dynamic regulator or compensator rsys given a state-space model sys of the plant, a state-feedback gain matrix K, and an estimator gain matrix L. The gains K and L are typically designed using pole placement or LQG techniques. The function reg handles both continuous- and discrete-time cases. This syntax assumes that all inputs of sys are controls and all outputs are measured. The regulator rsys is obtained by connecting the state-feedback law u = –Kx and the state estimator with gain matrix L (see estim). For a plant with equations

$\dot x = Ax + Bu$, $\quad y = Cx + Du$

this yields the regulator

$\dot{\hat x} = \left[A - LC - (B - LD)K\right]\hat x + Ly$, $\quad u = -K\hat x$

This regulator should be connected to the plant using positive feedback.

rsys = reg(sys,K,L,sensors,known,controls) handles more general regulation problems where:

The plant inputs consist of controls u, known inputs ud, and stochastic inputs w.

Only a subset y of the plant outputs is measured.

The index vectors sensors, known, and controls specify y, ud, and u as subsets of the outputs and inputs of sys. The resulting regulator uses [ud ; y] as inputs to generate the commands u (see next figure). For a plant with seven outputs and four inputs, suppose you have designed a state estimator with gain L using outputs 4, 7, and 1 of the plant as sensors, and input 3 of the plant as an additional known input. You can then connect the controller and estimator and form the complete regulation system by

regulator = reg(sys,K,L,sensors,known,controls)

See Also: estim | kalman | lqgreg | lqr | dlqr | place
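What reg assembles can be sketched outside MATLAB. The helper below forms the regulator matrices $A - LC - (B - LD)K$, $L$, and $-K$ for a toy plant; the plain-list linear algebra and the gain values are illustrative, not a substitute for the toolbox function:

```python
def mat_mul(X, Y):
    """Matrix product of two list-of-lists matrices."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

def mat_sub(X, Y):
    """Elementwise difference of two equally sized matrices."""
    return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def form_regulator(A, B, C, D, K, L):
    """Regulator x̂' = (A - LC - (B - LD)K) x̂ + L y,  u = -K x̂."""
    B_LD = mat_sub(B, mat_mul(L, D))
    Ar = mat_sub(mat_sub(A, mat_mul(L, C)), mat_mul(B_LD, K))
    Br = L                                   # driven by the measurements y
    Cr = [[-k for k in row] for row in K]    # output is the command u = -K x̂
    return Ar, Br, Cr

# Double-integrator plant with scalar input/output and illustrative gains
A = [[0, 1], [0, 0]]; B = [[0], [1]]; C = [[1, 0]]; D = [[0]]
K = [[1, 2]]; L = [[3], [2]]
Ar, Br, Cr = form_regulator(A, B, C, D, K, L)
print(Ar)   # [[-3, 1], [-3, -2]]
```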
The Movement of Orbits and Their Effect on the Encoding of Letters in Partition Theory

Rahmah J. Shareef, Ammar S. Mahmood*

University of Mosul, Department of Mathematics, College of Education for Pure Science, Mosul, Iraq.

This research aims to study the movement of orbits proposed by Mohammed et al. in 2015 and 2016, and their impact on the encoding of letters adopted by Mahmood and Mahmood in 2019, in order to make the latter more difficult to read, in the theory of partition.

Partition Theory, Encoding, E-Abacus Diagram

Shareef, R. and Mahmood, A. (2019) The Movement of Orbits and Their Effect on the Encoding of Letters in Partition Theory. Open Access Library Journal, 6, 1-7. doi: 10.4236/oalib.1105834.

A sequence $\mu = (\mu_1, \mu_2, \cdots, \mu_n)$ is a partition of $r$ if $|\mu| = \sum_{i=1}^{n}\mu_i = r$ and $\mu_i \ge \mu_{i+1}$ for $i \ge 1$. For a partition $\mu$ with $b$ parts, let $\beta_i = \mu_i + b - i$, $1 \le i \le b$; then $\{\beta_1, \beta_2, \cdots, \beta_b\}$ is said to be the set of β-numbers for $\mu$. Let e be a positive integer greater than or equal to 2; we can represent the β-numbers by a diagram called the e-abacus diagram, in which each β-number is represented by a bead (●) that takes its location in the e-abacus diagram [2].

The logic by which the value of a partition is found has led some researchers to choose the concept of orbits for any form of e-abacus diagram; see [3] and [4]. The basic idea of forming these orbits is that for $e \ge 2$ the e-abacus diagram appears in an order very similar to matrix form. Now, if we consider that everything inside the outer frame of this (matrix) frame is a second orbit, and so on, we obtain Table 1.

Table 1. The relation between the value of e and the number of orbits.

Figure 1. The relation between the value of e and the number of orbits ($e = 3$ and $e = 4$).

This concept will be applied to the coding of the English letters adopted by Mahmood and Mahmood in [5] and [6].

3. Encoding English Letters

It is well known that there is a lot of research concerning coding or encoding of English letters; it all depends on each letter having a corresponding number from 0 to 25, after which a particular process begins, with special conditions, using the concept (mod 26). The exception is Mahmood and Mahmood [5] and [6], who changed this concept through the use of the e-abacus diagram, away from the concept of (mod 26), as shown in Table 2.

Table 2. The partition of each English letter.

Note that they are based on the value of e, with the number of rows equal to 5; this is how, for example, R and H are written. Based on Figure 3, we have three orbits for each letter, as shown below.

3.1. Behavior of Each Orbit in e = 5

1) 1st orbit: the outer orbit and the largest of all, with 16 sites. We can change the locations clockwise by one movement, two movements, ..., up to 16 movements, until we return to the origin of the orbit. Since we always need the first location of this orbit to be empty so that we can read the partition (see Mahmood [7]), we cancel this orbit's movement and keep it as it is, to preserve the overall frame of the form of the partition.

2) 2nd orbit: the middle orbit, with 8 locations.

3) 3rd orbit: the last orbit, which is usually fixed in place because it contains only one location.

If we assume that $w_t$ is the motion of the t-th orbit, $t = 1, 2, 3$, then we write $[w_1; w_2; w_3]$ with $w_1 = w_3 = 0$ and $w_2 = 0, 1, \cdots, 7$ (i.e., $w_2 \bmod 8$).

For $w_2 = 1$, the locations of the 2nd orbit move according to Figure 4.

Figure 4. The movement when $w_2 = 1$.
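The β-number construction defined at the start of the paper can be sketched directly; the helper name is hypothetical, and padding μ with zeros up to b parts follows the standard convention for β-numbers:

```python
def beta_numbers(mu, b):
    """Set of beta-numbers beta_i = mu_i + b - i for 1 <= i <= b,
    where mu is padded with zero parts up to length b."""
    parts = list(mu) + [0] * (b - len(mu))
    return {parts[i - 1] + b - i for i in range(1, b + 1)}

# Example partition mu = (3, 2, 2) displayed with b = 5 beads
print(sorted(beta_numbers((3, 2, 2), 5)))   # [0, 1, 4, 5, 7]
```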
Thus, we can make the following proposition:

Rule 3.2.1: When choosing a partition for any letter of the English language with $e = 5$, any $\beta_i$ equal to the location $a_{\alpha\lambda}$ will, under $[0; 1; 0]$, become
$$a_{\alpha\lambda} \to a_{\alpha(\lambda \mp 1)}, \quad \forall \alpha = 2\ (\text{or } 4) \wedge \lambda = 2,3\ (\text{or } 3,4)\ \text{respectively},$$
$$a_{\alpha\lambda} \to a_{(\alpha \mp 1)\lambda}, \quad \forall \alpha = 2,3\ (\text{or } 3,4) \wedge \lambda = 2\ (\text{or } 4)\ \text{respectively}.$$

Then we have Table 3.

Table 3. The partition of each English letter after applying $w_2 = 1$.

If $2 \le w_2 \le 7$ is applied to Table 2 separately, we produce the following rule:

Rule 3.3.1: When choosing a partition for any letter of the English language with $e = 5$, any $\beta_i$ equal to the location $a_{\alpha\lambda}$ will, under $[0; w_2; 0]$, become:

For $w_2 = 2$:
$$a_{\alpha\lambda} \to \begin{cases} a_{\alpha(\lambda \mp 2)}, & \forall \alpha = 2\ (\text{or } 4) \wedge \lambda = 2\ (\text{or } 4)\ \text{respectively}\\ a_{(\alpha \mp 2)\lambda}, & \forall \alpha = 2\ (\text{or } 4) \wedge \lambda = 4\ (\text{or } 2)\ \text{respectively}\\ a_{(\alpha + 1)(\lambda \mp 1)}, & \forall \alpha = 2\ (\text{or } 3) \wedge \lambda = 3\ (\text{or } 4)\ \text{respectively}\\ a_{(\alpha - 1)(\lambda \mp 1)}, & \forall \alpha = 3\ (\text{or } 4) \wedge \lambda = 2\ (\text{or } 3)\ \text{respectively} \end{cases}$$

For $w_2 = 3$:
$$a_{\alpha\lambda} \to \begin{cases} a_{(\alpha \mp 1)(\lambda + 2)}, & \forall \alpha = 2\ (\text{or } 3) \wedge \lambda = 2\ \text{respectively}\\ a_{(\alpha \mp 1)(\lambda - 2)}, & \forall \alpha = 3\ (\text{or } 4) \wedge \lambda = 4\ \text{respectively}\\ a_{(\alpha + 2)(\lambda \mp 1)}, & \forall \alpha = 2 \wedge \lambda = 3\ (\text{or } 4)\ \text{respectively}\\ a_{(\alpha - 2)(\lambda \mp 1)}, & \forall \alpha = 4 \wedge \lambda = 2\ (\text{or } 3)\ \text{respectively} \end{cases}$$

For $w_2 = 4$:
$$a_{\alpha\lambda} \to \begin{cases} a_{(\alpha \mp 2)(\lambda \mp 2)}, & \forall \alpha = 2\ (\text{or } 4) \wedge \lambda = 2\ (\text{or } 4)\ \text{respectively}\\ a_{(\alpha \mp 2)(\lambda \pm 2)}, & \forall \alpha = 2\ (\text{or } 4) \wedge \lambda = 4\ (\text{or } 2)\ \text{respectively}\\ a_{(\alpha \mp 2)\lambda}, & \forall \alpha = 2\ (\text{or } 4) \wedge \lambda = 3\ \text{respectively}\\ a_{\alpha(\lambda \pm 2)}, & \forall \alpha = 3 \wedge \lambda = 4\ (\text{or } 2)\ \text{respectively} \end{cases}$$

For $w_2 = 5$:
$$a_{\alpha\lambda} \to \begin{cases} a_{(\alpha \mp 2)(\lambda \mp 1)}, & \forall \alpha = 2\ (\text{or } 4) \wedge \lambda = 2\ (\text{or } 4)\ \text{respectively}\\ a_{(\alpha \mp 2)(\lambda \pm 1)}, & \forall \alpha = 2\ (\text{or } 4) \wedge \lambda = 3\ \text{respectively}\\ a_{(\alpha \mp 1)(\lambda \mp 2)}, & \forall \alpha = 3\ (\text{or } 4) \wedge \lambda = 2\ (\text{or } 4)\ \text{respectively}\\ a_{(\alpha \mp 1)(\lambda \pm 2)}, & \forall \alpha = 2\ (\text{or } 4) \wedge \lambda = 4\ (\text{or } 2)\ \text{respectively} \end{cases}$$

For $w_2 = 6$:
$$a_{\alpha\lambda} \to \begin{cases} a_{(\alpha \mp 2)\lambda}, & \forall \alpha = 2\ (\text{or } 4) \wedge \lambda = 2\ (\text{or } 4)\ \text{respectively}\\ a_{\alpha(\lambda \pm 2)}, & \forall \alpha = 2\ (\text{or } 4) \wedge \lambda = 4\ (\text{or } 2)\ \text{respectively}\\ a_{(\alpha \mp 1)(\lambda \mp 1)}, & \forall \alpha = 3 \wedge \lambda = 2\ (\text{or } 4)\ \text{respectively}\\ a_{(\alpha \mp 1)(\lambda \pm 1)}, & \forall \alpha = 2\ (\text{or } 4) \wedge \lambda = 3\ \text{respectively} \end{cases}$$

For $w_2 = 7$:
$$a_{\alpha\lambda} \to \begin{cases} a_{(\alpha \mp 1)\lambda}, & \forall \alpha = 2,3\ (\text{or } 3,4) \wedge \lambda = 2\ (\text{or } 4)\ \text{respectively}\\ a_{\alpha(\lambda \pm 1)}, & \forall \alpha = 2\ (\text{or } 4) \wedge \lambda = 3,4\ (\text{or } 2,3)\ \text{respectively} \end{cases}$$

For example, see Table 4.

Table 4.
The partition of R and N after applying $2 \le w_2 \le 7$.

[1] Mathas, A. (1999) Iwahori-Hecke Algebras and Schur Algebras of the Symmetric Group. University Lecture Series, Vol. 15, American Mathematical Society. https://doi.org/10.1090/ulect/015/02
[2] James, G. (1978) Some Combinatorial Results Involving Young Diagrams. Mathematical Proceedings of the Cambridge Philosophical Society, 83, 1-10. https://doi.org/10.1017/S0305004100054220
[3] Mahommed, E.F., Ahmad, N., Ibrahim, H. and Mahmood, A.S. (2015) Embedding Chain Movement in James Diagram for Partitioning Beta Number. AIP Conference Proceedings, 1691, 040019.
[4] Mahommed, E.F., Ahmad, N., Ibrahim, H. and Mahmood, A.S. (2016) Nested Chain Movement of Length 1 of Beta Number in James Abacus Diagram. Global Journal of Pure and Applied Mathematics, 12, 2953-2969. https://www.ripublication.com/gjpam16/gjpamv12n4_17.pdf
[5] Mahmood, A.B. and Mahmood, A.S. (2019) Secret-Word by e-Abacus Diagram I. Iraqi Journal of Science, 60, 638-646.
[6] Mahmood, A.B. and Mahmood, A.S. (2019) Secret-Text by e-Abacus Diagram II. Iraqi Journal of Science, 60, 840-846.
[7] Mahmood, A.S. (2011) On the Intersection of Young's Diagrams Core. Journal of Education and Science, 24, 149-157. https://doi.org/10.33899/edusj.1999.58795
Explain why there are an infinite number of antiderivatives for each function. Demonstrate this fact with an example. Recall that the +C in an antiderivative represents a vertical shift. Explain why two functions that are the same in every way except for their vertical position would have the exact same slope at every value of x.
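One possible worked example (our choice of function; any function behaves the same way):

```latex
\frac{d}{dx}\left(x^{2}\right) = 2x,
\qquad
\frac{d}{dx}\left(x^{2}+5\right) = 2x,
\qquad
\frac{d}{dx}\left(x^{2}+C\right) = 2x \quad \text{for every constant } C.
```

Since the constant term vanishes under differentiation, $x^2 + C$ is an antiderivative of $2x$ for every real number $C$: the infinitely many antiderivatives are vertical shifts of one another and therefore share the same slope at every value of $x$.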
verify/neighborhood - verify that a point is within a neighborhood of another

Calling Sequence
verify(expr1, expr2, neighborhood(dist, opt1, opt2, ...))
verify(expr1, expr2, neighbourhood(dist, opt1, opt2, ...))

Parameters
expr1, expr2 - algebraic objects or lists of algebraic objects
dist - algebraic object with a non-negative signum

Description
The verify(expr1, expr2, neighborhood(dist, opt1, opt2, ...)) calling sequence returns true if it can determine that the distance between expr1 and expr2 is less than dist. By default, the distance is measured in Euclidean space, that is, the square root of the sum of the squares of the differences between the points. This can be modified by using the option p=N, where N can be any value from 0 to $\infty$, inclusive. The distance is then given by:

$$\left( \sum_{i} |{\mathrm{expr1}}_{i} - {\mathrm{expr2}}_{i}|^{p} \right)^{1/p}$$

By default, the neighborhood is open, that is, the distance must be strictly less than dist. This can be modified by using the option closed to indicate that the distance must be less than or equal to dist, or by using boundary to indicate that the distance must be exactly equal to dist.
verify(Pi, 3, 'neighborhood(1)');
        true
verify(3, 4, 'neighborhood(1)');
        false
verify(3, 4, 'neighborhood(1, open)');
        false
verify([sqrt(2)/2, sqrt(2)/2], [0, 0], 'neighborhood(1)');
        false
verify([sqrt(2)/2, sqrt(2)/2], [0, 0], 'neighborhood(1, closed)');
        true
verify([sqrt(2)/2, sqrt(2)/2], [0, 0], 'neighborhood(1, p=3)');
        true
verify([1, 1], [0, 0], 'neighborhood(1, p=infinity)');
        false
verify([1, 1], [0, 0], 'neighborhood(1, p=infinity, closed)');
        true
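Outside of Maple, the same open/closed p-norm membership test can be sketched in a few lines of Python. The function `in_neighborhood` below is our own stand-in, not a Maple API; it covers p > 0 and p = infinity, mirroring the open-by-default semantics described above:

```python
import math

def in_neighborhood(p1, p2, dist, p=2, closed=False):
    """Check whether p1 lies within a p-norm neighborhood of p2.

    The neighborhood is open by default (strict inequality) and closed
    when closed=True. p=math.inf gives the Chebyshev (max) distance.
    """
    diffs = [abs(a - b) for a, b in zip(p1, p2)]
    if p == math.inf:
        d = max(diffs)
    else:
        d = sum(x ** p for x in diffs) ** (1.0 / p)
    return d <= dist if closed else d < dist

print(in_neighborhood([math.pi], [3], 1))          # True: |pi - 3| < 1
print(in_neighborhood([3], [4], 1))                # False: distance is exactly 1, open by default
print(in_neighborhood([3], [4], 1, closed=True))   # True
print(in_neighborhood([1, 1], [0, 0], 1, p=math.inf, closed=True))  # True
```

Note that with floating-point arithmetic, points lying exactly on the boundary (such as [sqrt(2)/2, sqrt(2)/2] at Euclidean distance 1 from the origin) may not compare exactly, unlike Maple's symbolic evaluation.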
Use a Riemann sum with 20 rectangles to approximate the following integrals. Then use the numerical integration feature of your graphing calculator to check your answer. The left-endpoint Riemann sum is $\displaystyle \sum_{i=0}^{n-1} \Delta x\, f(a + i\,\Delta x)$.

$\displaystyle \int_0^4 \left(2 - 4x^{3/2}\right) dx$

Riemann approximation: $\displaystyle \sum_{i=0}^{19} \frac{1}{5} f\!\left(0 + \frac{i}{5}\right) = \sum_{i=0}^{19} \frac{1}{5}\left(2 - 4\left(\frac{i}{5}\right)^{3/2}\right) \approx -40.04$

Your calculator should reveal that $\displaystyle \int_0^4 \left(2 - 4x^{3/2}\right) dx = -43.2$.

$\displaystyle \int_1^8 \sqrt{4x + 3}\, dx$
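The same left-endpoint sum can be checked programmatically; a short Python sketch (the function name is ours, not from the exercise):

```python
# Left-endpoint Riemann sum for the integral of f over [a, b] with n rectangles.
def left_riemann_sum(f, a, b, n):
    dx = (b - a) / n
    return sum(dx * f(a + i * dx) for i in range(n))

f = lambda x: 2 - 4 * x ** 1.5          # the integrand 2 - 4x^(3/2)

approx = left_riemann_sum(f, 0, 4, 20)   # 20 rectangles, dx = 1/5
print(round(approx, 2))                  # -40.04
```

Increasing n drives the sum toward the exact value of -43.2 found by the Fundamental Theorem of Calculus.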
On The Dilution Effect | monkey's uncle
Conservation, Human Ecology, Infectious Disease
18 March 2013, jhj1

A new paper written by Dan Salkeld (formerly of Stanford), Kerry Padgett (CA Department of Public Health), and myself just came out in the journal Ecology Letters this week. One of the most important ideas in disease ecology is a hypothesis known as the "dilution effect". The basic idea behind the dilution effect hypothesis is that biodiversity -- typically measured by species richness, or the number of different species present in a particular spatially defined locality -- is protective against infection with zoonotic pathogens (i.e., pathogens transmitted to humans through animal reservoirs). The hypothesis emerged from analysis of Lyme disease ecology in the American Northeast by Richard Ostfeld and his colleagues and students from the Cary Institute of Ecosystem Studies in Millbrook, New York. Lyme disease ecology is incredibly complicated, and there are a couple different ways that the dilution effect can come into play even in this one disease system, but I will try to render it down to something easily digestible. Lyme disease is caused by a spirochete bacterium, Borrelia burgdorferi. It is a vector-borne disease transmitted by hard-bodied ticks of the genus Ixodes. These ticks are what is known as hemimetabolous, meaning that they experience incomplete metamorphosis involving larval and nymphal stages. Rather than forming a pupa, these larvae and nymphs resemble little bitty adults. An Ixodes tick takes three blood meals in its lifetime: once as a larva, once as a nymph, and once as an adult. At different life-cycle stages, the ticks have different preferences for hosts. Larval ticks generally favor the white-footed mouse (Peromyscus leucopus) for their blood meal, and this is where the catch is. It turns out that white-footed mice are extremely efficient reservoirs for Lyme disease.
In fact, an infected mouse has as much as a 90% chance of transmitting infection to a larva feeding on it. The larvae then molt into nymphs and overwinter on the forest floor. Then, in spring or early summer a year after they first hatch from eggs, nymphs seek vertebrate hosts. If an individual tick acquired infection as a larva, it can now transmit to its next host. Nymphs are less particular about their choice of host and are happy to feed on humans (or just about any other available vertebrate host). This is where the dilution effect comes in. The basic idea is that if there are more potential hosts such as chipmunks, shrews, squirrels, or skunks, there are more chances that an infected nymph will take its blood meal on one of these animals rather than on a person. Furthermore, most of these hosts are much less efficient at transmitting the Lyme spirochete than are white-footed mice. This lowers the prevalence of infection and makes it more likely that the pathogen will go extinct locally. It's not difficult to imagine the dilution effect working at the larval-stage blood meal too: if there are more species present (and the larvae are not picky about their blood meal), the risk of initial infection is also diluted. In the highly fragmented landscape of northeastern temperate woodlands, when there is only one species in a forest fragment, it is quite likely to be the white-footed mouse. These mice are very adaptable generalists that occur in a wide range of habitats from pristine woodland to degraded forest. Therefore, species-poor habitats tend to have mice but no other species. The idea behind the dilution effect is that by adding different species to the baseline of a highly depauperate assemblage of simply white-footed mice, the prevalence of nymphal infection will decline and the risk of zoonotic infection of people will be reduced.
It is not an exaggeration to say that the dilution-effect hypothesis is one of the two or three most important ideas in disease ecology, and much of the explosion of interest in disease ecology can be attributed in part to such ideas. The dilution effect is also a nice idea. Wouldn't it be great if every dollar we invested in the conservation of biodiversity potentially paid a dividend in reduced disease risk? However, neither its importance to the field nor the beauty of the idea guarantees that it is actually scientifically correct. One major issue with the dilution effect hypothesis is its problem with scale, arguably the central question in ecology. Numerous studies have shown that pathogen diversity is positively related to overall biodiversity at larger spatial scales. For example, in an analysis of global risk of emerging infectious diseases, Kate Jones and her colleagues from the Zoological Society of London showed that globally, mammalian biodiversity is positively associated with the odds of an emerging disease. Work by Pete Hudson and colleagues at the Center for Infectious Disease Dynamics at Penn State showed that healthy ecosystems may actually be richer in parasite diversity than degraded ones. Given these quite robust findings, how is it that diversity at a smaller scale is protective? We use a family of statistical tools known as "meta-analysis" to aggregate the results of a number of previous studies into a single synthetic test of the dilution-effect hypothesis. It is well known that inferences drawn from small samples generally have lower precision (i.e., the estimates carry more uncertainty) than inferences drawn from larger samples. A nice demonstration of this comes from classical asymptotic statistics.
The expected value of a sample mean is the "true mean" of the underlying normal distribution, and the standard deviation of the sample mean is given by the standard error, which is defined as the standard deviation of the distribution divided by the square root of the sample size. Say that for two studies we estimate the standard deviation of the observations to be 10. In the first study, the estimate of the mean is based on a single observation, whereas in the second, it is based on a sample of 100 observations. The estimate of the mean in the second study is 10 times more precise than that of the first, because $10/\sqrt{1} = 10$ while $10/\sqrt{100} = 1$. Meta-analysis allows us to pool estimates from a number of different studies to increase our sample size and, therefore, our precision. One of the primary goals of meta-analysis is to estimate the overall effect size and its corresponding uncertainty. The simplest way to think of effect size in our case is the difference in disease risk (e.g., as measured by the prevalence of infected hosts) between a species-rich area and a species-poor area. Unfortunately, a surprising number of studies don't publish this seemingly basic result. For such studies, we have to calculate a surrogate of effect size based on the test statistics that the authors report. This is not completely ideal -- we would much rather calculate effect sizes directly, but to paraphrase a dubious source, you do a meta-analysis with the statistics that have been published, not with the statistics you wish had been published. On this note, one of our key recommendations is that disease ecologists do a better job of reporting effect sizes to facilitate future meta-analyses. In addition to allowing us to estimate the mean effect size across studies and its associated uncertainty, another goal of meta-analysis is to test for the existence of publication bias.
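The 10/sqrt(n) claim above is easy to see by simulation. A quick Python sketch (a toy illustration of the standard error, not part of the paper's analysis; the helper name is ours):

```python
import random
import statistics

# Simulate many repeated "studies" of size n, each estimating the mean of
# a normal distribution with true mean 0 and standard deviation 10; the
# spread of those estimates should be close to 10 / sqrt(n).
random.seed(42)
TRUE_MEAN, SD = 0.0, 10.0

def spread_of_sample_means(n, trials=2000):
    """Standard deviation of sample means across many studies of size n."""
    means = [statistics.fmean(random.gauss(TRUE_MEAN, SD) for _ in range(n))
             for _ in range(trials)]
    return statistics.stdev(means)

print(spread_of_sample_means(1))    # close to 10/sqrt(1)   = 10
print(spread_of_sample_means(100))  # close to 10/sqrt(100) = 1
```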
Stanford's own John Ioannidis has written on the ubiquity of publication bias in medical research. The term "bias" has a general meaning that is not quite the same as the technical meaning. By "publication bias", there is generally no implication of nefarious motives on the part of the authors. Rather, it typically arises through a process of selection at both the level of individual authors and the institutional level of the journals to which authors submit their papers. An author, who is under pressure to be productive by her home institution and funding agencies, is not going to waste her time submitting a paper that she thinks has a low chance of being accepted. This means that there is a filter at the level of the author against publishing negative results. This is known as the "file-drawer effect", referring to the hypothetical 19 studies with negative results that never make it out of the authors' desks for every one paper publishing positive results. Of course, journals, editors, and reviewers prefer papers with results to those without as well. These very sensible responses to incentives in scientific publication unfortunately aggregate into systematic biases at the level of the broader literature in a field. We use a couple of methods for detecting publication bias. The first is a graphical device known as a funnel plot. We expect studies done on large samples to have estimates of the effect size that are close to the overall mean effect, because estimates based on large samples have higher precision. On the other hand, smaller studies will have effect-size estimates that are more widely dispersed, because random error can have a bigger influence in small samples. If we plot the precision (e.g., measured by the standard error) against the effect size, we would expect to see an inverted-triangle shape -- a funnel -- in the scatter plot. Note -- and this is important -- that we expect the scatter around the mean effect size to be symmetrical.
Random variation that causes effect-size estimates to deviate from the mean is just as likely to push the estimates above the mean as below it. However, if there is a tendency not to publish studies that fail to support the hypothesis, we should see an asymmetry in our funnel. In particular, there should be a deficit of studies that have low power and effect-size estimates opposite to the hypothesis. This is exactly what we found. Only studies supporting the dilution-effect hypothesis are published when they have very small samples. Here is what our funnel plot looked like. Note that there are no points in the lower right quadrant of the plot (where species richness and disease risk would be positively related). While the graphical approach is great and provides an intuitive feel for what is happening, it is nice to have a more formal way of evaluating the effect of publication bias on our estimates of effect size. Note that if there is publication bias, we will over-estimate our precision, because the studies that are missing are far away from the mean (and on the wrong side of it). The method we use to measure the impact of publication bias on our estimate of uncertainty formalizes this idea. Known as "trim-and-fill", it uses an algorithm to find the most divergent asymmetric observations. These are removed and the precision of the mean effect size is calculated. This sub-sample is known as the "truncated" sample. Then a sample of missing values is imputed (i.e., simulated from the implied distribution) and added to the base sample. This is known as the "augmented" sample. The precision is then re-calculated. If there is no publication bias, these estimates should not be too different. In our sample, we find that estimates of precision differ quite a bit between the truncated and augmented samples. We estimate that between 4 and 7 studies are missing from the sample.
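The file-drawer filter described above can be mimicked in a few lines. A toy Python simulation (our own construction, not the paper's trim-and-fill analysis) showing how censoring small "wrong-sided" studies shifts the pooled estimate:

```python
import random
import statistics

# Simulate many studies of varying size estimating a true effect of ZERO,
# then apply a file-drawer filter: small studies only get "published"
# when their estimate lands on the hypothesized (negative) side.
random.seed(1)
studies = []
for _ in range(500):
    n = random.choice([5, 10, 20, 50, 100, 400])
    effect = statistics.fmean(random.gauss(0, 1) for _ in range(n))
    studies.append((n, effect))

# Publication filter: small studies (n < 50) are published only if the
# estimated effect is negative (i.e., "supports" the hypothesis).
published = [(n, e) for n, e in studies if n >= 50 or e < 0]

mean_all = statistics.fmean(e for _, e in studies)
mean_pub = statistics.fmean(e for _, e in published)
print(mean_pub < mean_all)  # True: censoring biases the pooled mean downward
```

Plotting 1/standard-error against effect size for `published` would show the same one-sided funnel the post describes: the lower quadrant on the unsupportive side is empty.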
Most importantly, we find that the 95% confidence interval for our estimated mean effect size crosses zero. That is, while the mean effect size is slightly negative (suggesting that biodiversity is protective against disease risk), we can't confidently say that it is actually different than zero. Essentially, our large sample suggests that there is no simple relationship between disease risk and biodiversity. On Ecological Mechanisms One of the main conclusions of our paper is that we need to move beyond simple correlations between species richness and disease risk and focus instead on ecological mechanisms. I have no doubt that there are specific cases where the negative correlation between species richness and disease risk is real (note our title says that we think this link is idiosyncratic). However, I suspect where we see a significant negative correlation, what is really happening is that some specific ecological mechanism is being aliased by species richness. For example, a forest fragment with a more intact fauna is probably more likely to contain predators and these predators may be keeping the population of efficient reservoir species in check. I don't think that this is an especially controversial idea. In fact, some of the biggest advocates for the dilution effect hypothesis have done some seminal work advancing our understanding of the ecological mechanisms underlying biodiversity-disease risk relationships. Ostfeld and Holt (2004) note the importance of predators of rodents for regulating disease. They also make the very important point that not all predators are created equally when it comes to the suppression of disease. A hallmark of simple models of predation is the cycling of abundances of predators and prey. A specialist predator which induces boom-bust cycles in a disease reservoir probably is not optimal for infection control. 
Indeed, it may exacerbate disease risk if, for example, rodents become more aggressive and are more frequently infected in agonistic encounters with conspecifics during steep growth phases of their population cycle. This phenomenon has been cited in the risk of zoonotic transmission of Sin Nombre Virus in the American Southwest. I have a lot more to write on this, so, in the interest of time, I will end this post now but with the expectation that I will write more in the near future! 2 thoughts on "On The Dilution Effect" Thanks for the informative post! I'm a college senior pursuing a bachelor's degree in biology, and I'm working on a term paper on the dilution effect for my ecology class. Superficially, the dilution effect seems like a fairly straight-forward concept, but when you really start to think about it you realize that such an over-simplified and generalized explanation of disease transmission rates, no matter how politically or economically appealing, is dangerous and its perpetuation as an all-encompassing rule represents non-progressive science. Hi, just stumbled on this post. I like your take on publication bias and how it related to research on biodiversity/disease relationships. I'm currently researching the effects of community composition and structure on "disease risk". I noticed that you tend to define disease risk as prevalence: "The simplest way to think of effect size in our case is the difference in disease risk (e.g., as measured in the prevalence of infected hosts) between a species rich area and a species poor area." I might be careful here. Prevalence can be uninformative when it comes to diseases that have density-dependent transmission modes (and I think most are!).
The percent of infection and density of infection are NOT the same thing and can be uncorrelated. Density-based metrics of disease risk are often brushed to the side because they are inherently difficult to work with mathematically (and perhaps statistically). I think prevalence-based metrics are often used because a prevalence is a percent, and thus one does not have to account for varying abundance among sites. Hence, I tend to advocate a more mechanistic approach to understanding disease while considering ecological context (i.e., the community). I missed that Salkeld paper...going to read through it now. Thanks.
Introduction to Chemical Engineering Processes/Atom balances - Wikibooks, open books for an open world

1 The concept of atom balances
2 Mathematical formulation of the atom balance
3 Degree of Freedom Analysis for the atom balance
4 Example of the use of the atom balance
4.1 Degree of Freedom Analysis
5 Example of balances with inert species
5.1 Step 1: Flowchart
5.2 Step 2: Degrees of Freedom
5.3 Step 3: Units
5.4 Step 4: Devise a plan
5.5 Step 5: Carry Out the Plan

The concept of atom balances

Let's begin this section by looking at the reaction of hydrogen with oxygen to form water:

$H_2 + O_2 \rightarrow H_2O$

We may attempt to do our calculations with this reaction, but there is something seriously wrong with this equation! It is not balanced; as written, it implies that an atom of oxygen is somehow "lost" in the reaction, but this is in general impossible. Therefore, we must compensate by writing:

$H_2 + \frac{1}{2} O_2 \rightarrow H_2O$

or some multiple thereof. Notice that in doing this we have made use of the following conservation law, which is actually the basis of the conservation of mass:

The number of atoms of any given element does not change in any reaction (assuming that it is not a nuclear reaction).

Since by definition the number of moles of an element is proportional to the number of atoms, this implies that $\dot{n}_{A,gen} = 0$, where A represents any element in atomic form.

Mathematical formulation of the atom balance

Now recall the general balance equation:

$In - Out + Generation - Consumption = Accumulation$

In this course we're assuming $Accumulation = 0$. Since the moles of atoms of any element are conserved, $generation = 0$ and $consumption = 0$.
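The conservation law above is easy to check mechanically. A small Python sketch (the `atoms` helper and the dict-style formulas are our own illustration, not part of the text):

```python
# Count total atoms of each element on one side of a reaction, where a
# side is a list of (stoichiometric coefficient, formula dict) pairs.
def atoms(side):
    total = {}
    for coeff, formula in side:
        for element, n in formula.items():
            total[element] = total.get(element, 0) + coeff * n
    return total

H2 = {"H": 2}
O2 = {"O": 2}
H2O = {"H": 2, "O": 1}

# Unbalanced: H2 + O2 -> H2O "loses" an oxygen atom
print(atoms([(1, H2), (1, O2)]) == atoms([(1, H2O)]))    # False
# Balanced: H2 + 1/2 O2 -> H2O conserves every element
print(atoms([(1, H2), (0.5, O2)]) == atoms([(1, H2O)]))  # True
```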
So we have the following balance on a given element A:

For a given element A, $\sum \dot{n}_{A,in} - \sum \dot{n}_{A,out} = 0$

When analyzing a reacting system you must choose either an atom balance or a molecular species balance but not both. Each has advantages; an atom balance often yields simpler algebra (especially for multiple reactions; the actual reaction that takes place is irrelevant!) but also will not directly tell you the extent(s) of reaction, and will not tell you if the system specifications are actually impossible to achieve for a given set of equilibrium reactions.

Degree of Freedom Analysis for the atom balance

As before, to do a degree of freedom analysis, it is necessary to count the number of unknowns and the number of equations one can write, and then subtract them. However, there are a couple of important things to be aware of with these balances. When doing atom balances, the extent of reaction does not count as an unknown, while with a molecular species balance it does. This is the primary advantage of this method: the extent of reaction does not matter, since atoms of elements are conserved regardless of how far the reaction has proceeded. You need to make sure each atom balance will be independent. This is difficult to tell unless you write out the equations and look to see if any two are identical. In reactions with inert species, each molecular balance on an inert species counts as an additional equation. This is because of the following important note:

When you're doing an atom balance you should only include reactive species, not inerts.

Suppose a mixture of nitrous oxide ($N_2O$) and oxygen is used in a natural gas burner. The reaction $CH_4 + 2O_2 \rightarrow 2H_2O + CO_2$ occurs in it. There would be four equations that you could write: 3 atom balances (C, H, and O) and a molecular balance on nitrous oxide.
You would not include the moles of nitrous oxide in the atom balance on oxygen.

Example of the use of the atom balance

Let's re-examine a problem from the previous section. In that section it was solved using a molecular species balance; here it will be solved using atom balances.

$4PH_3 + 8O_2 \rightarrow P_4O_{10} + 6H_2O$

For purposes of examination, the flowchart is re-displayed here. There are three elements involved in the system (P, H, and O), so we can write three atom balances on the system. There are likewise three unknowns (since the extent of reaction is NOT an unknown when using the atom balance): the outlet concentrations of $PH_3$, $P_4O_{10}$, and $H_2O$. Therefore, there are 3 - 3 = 0 degrees of freedom.

Let's start the same way we did in the previous section: by converting the given information into moles. The calculations of the previous section are repeated here:

$\dot{m}_{out} = \dot{m}_{in} = 100\ \text{kg}$

$\dot{n}_{PH_3,in} = 0.5 \times 100\ \text{kg} \times \frac{1\ \text{mol}}{0.034\ \text{kg}} = 1470.6\ \text{moles PH}_3\ \text{in}$

$\dot{n}_{O_2,in} = 0.5 \times 100\ \text{kg} \times \frac{1\ \text{mol}}{0.032\ \text{kg}} = 1562.5\ \text{moles O}_2\ \text{in}$

$\dot{n}_{O_2,out} = 0.25 \times 100\ \text{kg} \times \frac{1\ \text{mol}}{0.032\ \text{kg}} = 781.25\ \text{moles O}_2\ \text{out}$

Now we start to diverge from the path of molecular balances and instead write atom balances on each of the elements in the reaction. Let's start with phosphorus. How many moles of phosphorus atoms are entering?

Inlet: Only $PH_3$ provides P, so the inlet moles of P are just $1 \times 1470.6 = 1470.6$ moles P in.

Outlet: There are two ways phosphorus leaves: as unused $PH_3$ or as the product $P_4O_{10}$.
Therefore, the moles of P out are $1 \times n_{PH_3,out} + 4 \times n_{P_4O_{10},out}$. Note that the 4 in this expression comes from the fact that there are 4 phosphorus atoms in every mole of $P_4O_{10}$. Therefore the atom balance on phosphorus becomes:

$1 \times n_{PH_3,out} + 4 \times n_{P_4O_{10},out} = 1470.6$

Similarly, on oxygen we have:

Inlet: $2 \times n_{O_2,in} = 2 \times 1562.5 = 3125$ moles O in

Outlet: $2 \times n_{O_2,out} + 10 \times n_{P_4O_{10},out} + 1 \times n_{H_2O,out} = 1562.5 + 10 \times n_{P_4O_{10},out} + 1 \times n_{H_2O,out}$

So the oxygen balance is:

$1562.5 + 10 \times n_{P_4O_{10},out} + 1 \times n_{H_2O,out} = 3125$

Finally, check to see if you can get the following hydrogen balance as a practice problem:

$2 \times n_{H_2O,out} + 3 \times n_{PH_3,out} = 4411.8$

Solving these three linear equations, the solutions are:

$n_{PH_3,out} = 1080,\quad n_{H_2O,out} = 586,\quad n_{P_4O_{10},out} = 97.66$

All of these answers are identical to those obtained using extents of reaction. Since the remainder of the solution is identical to that in the previous section, the reader is referred there for its completion.

Example of balances with inert species

Sometimes it's difficult to choose which type of balance you want, because both are possible but one is significantly easier than the other. As an example, let's consider a basic pollution control system. Suppose that you are running a power plant whose burner releases a lot of pollutants into the air. The flue gas has been analyzed to contain 5% $SO_2$, 3% $NO_2$, 7% $O_2$, and 15% $CO_2$ by moles; the remainder was determined to be inert. Local regulations require that the emissions of sulfur dioxide be less than 200 ppm (by moles) from your plant. They also require you to reduce nitrogen dioxide emissions to less than 50 ppm.
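As an aside, the three atom balances of the phosphorus example above form a small linear system, so the quoted solution can be verified numerically. A sketch using NumPy (an assumed tool; the wikibook does not prescribe one):

```python
import numpy as np

# Rows: P, O, H balances; unknowns: [n_PH3_out, n_P4O10_out, n_H2O_out] (moles)
A = np.array([[1.0,  4.0, 0.0],   # P: n_PH3 + 4*n_P4O10   = 1470.6
              [0.0, 10.0, 1.0],   # O: 10*n_P4O10 + n_H2O  = 3125 - 1562.5
              [3.0,  0.0, 2.0]])  # H: 3*n_PH3 + 2*n_H2O   = 3 * 1470.6
b = np.array([1470.6, 3125.0 - 1562.5, 3 * 1470.6])
n_PH3, n_P4O10, n_H2O = np.linalg.solve(A, b)
print(round(n_PH3, 1), round(n_P4O10, 2), round(n_H2O, 1))  # ~ 1080.0 97.66 585.9
```

The solver reproduces the quoted values: $n_{PH_3,out} \approx 1080$, $n_{P_4O_{10},out} \approx 97.66$, and $n_{H_2O,out} \approx 586$ moles.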
You decide that the most economical method of control for your plant is to utilize ammonia-based processes. The proposed system is as follows: put the flue gas through a denitrification system, into which (pure) ammonia is pumped. The amount of ammonia pumped in is three times as much as would theoretically be needed to use all of the nitrogen dioxide in the flue gas. Allow it to react a specified amount of time, then pump it into a desulfurization system. Nothing new is injected there; it just has a different catalyst than the denitrification system, and the substrates are at a different temperature and pressure. The reactions that occur are:

$2NO_2 + 4NH_3 + O_2 \rightarrow 3N_2 + 6H_2O$

$H_2O + 2NH_3 + SO_2 \rightarrow (NH_4)_2SO_3$

If your plant makes $130\ \frac{ft^3}{s}$ of flue gas at $T = 900\ K$ and $P = 2\ \text{atm}$, how much ammonia do you need to purchase for each 8-hour shift? How much of it remains unused? Why do we want to have a significant amount of excess ammonia? Assume that the flue gas is an ideal gas, and recall the ideal gas law, $PV = nRT$, with $R = 0.0821\ \frac{L \cdot atm}{mol \cdot K}$.

Step 1: Flowchart

Flowcharts are becoming especially important now as a means of organizing all of that information!

Step 2: Degrees of Freedom

Let's consider an atom balance on each reactor.

Denitrification system: 9 unknowns (all concentrations in stream 3, and $\dot{n}_2$) - 3 atom balances (N, H, and O) - 3 inert species ($CO_2$, $SO_2$, inerts) - 1 additional piece of information (3X stoichiometric feed) = 2 DOF.

Desulfurization system: 15 unknowns - 4 atom balances (N, H, O, and S) - 5 inerts ($CO_2$, $O_2$, $NO_2$, $N_2$, inerts) = 6 DOF.

Total: 2 + 6 - 8 shared = 0 DOF, hence the problem has a unique solution. We can also perform the same type of analysis on molecular balances.
Denitrification system: 10 unknowns (now the conversion $X_1$ is also unknown) - 8 molecular species balances - 1 additional piece of information = 1 DOF.

Desulfurization system: 16 unknowns (now the conversion $X_2$ is unknown) - 9 balances = 7 DOF.

Total: 1 + 7 - 8 shared = 0 DOF. Therefore the problem is theoretically solvable by both methods.

Step 3: Units

The only odd units in this problem (everything is given in moles already, so there is no need to convert) are in the volumetric flowrate, which is given in $\frac{ft^3}{s}$. Let's convert this to $\frac{moles}{s}$ using the ideal gas law. To use the law with the given value of $R$ it is necessary to change the flowrate to units of $\frac{L}{s}$:

$130\ \frac{ft^3}{s} \times \frac{28.317\ L}{ft^3} = 3681.2\ \frac{L}{s}$

$P\dot{V} = \dot{n}RT \rightarrow 2 \times 3681.2 = \dot{n}_1 (0.0821)(900)$

$\dot{n}_1 = 99.64\ \frac{moles}{s}$

Now that everything is in good units we can move on to the next step.

Step 4: Devise a plan

We can first determine the value of $\dot{n}_2$ using the additional information. Then we should look to an overall system balance. Since neither of the individual reactors is completely solvable by itself, it is necessary to look to combinations of processes to solve the problem. The best way to do an overall system balance with multiple reactions is to treat the entire system as if it were a single reactor in which multiple reactions are occurring. In this case, the flowchart will be revised to look like this:

Before we try solving anything, we should check to make sure that we still have no degrees of freedom. There are 8 unknowns (don't count conversions when doing atom balances), 4 types of atoms (H, N, O, and S), 2 species that never react, and 1 additional piece of information (3X stoichiometric), so there is 1 DOF.
This is obviously a problem, and it occurs because atom balances cannot distinguish between species that react in only ONE reaction and those that take part in more than one. In this case, then, it is necessary to look to molecular species balances.

Molecular species balances: here there are 10 unknowns, but we can do molecular species balances on 9 species ($SO_2$, $NO_2$, $NH_3$, $N_2$, $O_2$, $CO_2$, $H_2O$, $(NH_4)_2SO_3$, inerts) and we have the additional information, so there are 0 DOF when using this method. Once we have all this information, getting the information about stream 3 is trivial from the definition of extent of reaction.

Step 5: Carry Out the Plan

First we can determine $\dot{n}_2$ by using the definition of a stoichiometric feed.

$\dot{n}_{NO_2,in} = 0.03 \times 99.64 = 2.9892\ \frac{mol}{s}$

The stoichiometric amount of ammonia needed to react with this is, from the reaction,

$\frac{4\ \text{moles NH}_3}{2\ \text{moles NO}_2} \times 2.9892 = 5.96\ \frac{\text{moles NH}_3}{s}$

Since the problem states that three times this amount is injected into the denitrification system, we have:

$\dot{n}_2 = 17.88\ \frac{moles}{s}$

Now we are going to have a fairly complex system of equations from the 9 molecular balances. This may be a good time to invest in some equation-solving software. See if you can derive the following system of equations from the overall-system flowchart above.
$NH_3:\ \dot{n}_4 x_{NH_3,4} = 17.88 - 4X_1 - 2X_2$

$SO_2:\ \dot{n}_4 \times 2 \times 10^{-4} = 0.05 \times 99.64 - X_2$

$NO_2:\ \dot{n}_4 \times 5 \times 10^{-5} = 0.03 \times 99.64 - 2X_1$

$N_2:\ \dot{n}_4 x_{N_2,4} = 3X_1$

$O_2:\ \dot{n}_4 x_{O_2,4} = 0.07 \times 99.64 - X_1$

$H_2O:\ \dot{n}_4 x_{H_2O,4} = 6X_1 - X_2$

$CO_2:\ \dot{n}_4 x_{CO_2,4} = 0.15 \times 99.64$

$(NH_4)_2SO_3:\ \dot{n}_4 x_{(NH_4)_2SO_3,4} = X_2$

$Inerts:\ \dot{n}_4 (1 - 2 \times 10^{-4} - 5 \times 10^{-5} - x_{NH_3,4} - x_{N_2,4} - x_{O_2,4} - x_{H_2O,4} - x_{CO_2,4} - x_{(NH_4)_2SO_3,4}) = 0.7 \times 99.64$

Using an equation-solving package, the following results were obtained:

$X_1 = 1.492$ moles, $X_2 = 4.961$ moles, $\dot{n}_4 = 105.62\ \frac{mol}{s}$

$x_{NH_3,4} = 0.01884$, $x_{N_2,4} = 0.04238$, $x_{O_2,4} = 0.05191$, $x_{H_2O,4} = 0.03778$, $x_{CO_2,4} = 0.1415$, $x_{(NH_4)_2SO_3,4} = 0.04697$

$x_I = 1 - \Sigma\ \text{(other components)} = 0.6606$

Stream 3

Now that we have completely specified the composition of stream 4, it is possible to go back and find the composition of stream 3 using the extents of reaction and the feed composition. Although this is not necessary to answer the problem statement, it should be done, so that we can then test whether all of the numbers we have obtained are consistent.
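Since the nine balances above are linear in the outlet molar flows and the two extents of reaction, specialized equation-solving software is not strictly required. The sketch below rebuilds the system with NumPy (the variable names and the total-flow closure row are mine, not the wikibook's) and reproduces the quoted results:

```python
import numpy as np

# Inlet molar flows (mol/s); n1 comes from the ideal gas law in Step 3.
n1 = 2.0 * (130 * 28.317) / (0.0821 * 900)   # = P*V/(R*T), about 99.64 mol/s
nh3_in = 17.88                               # 3x stoichiometric ammonia feed
so2_in, no2_in = 0.05 * n1, 0.03 * n1
o2_in, co2_in = 0.07 * n1, 0.15 * n1
inerts_in = 0.70 * n1
x_so2, x_no2 = 2e-4, 5e-5                    # required outlet mole fractions

# Unknowns z = [n4, f_NH3, f_N2, f_O2, f_H2O, f_CO2, f_salt, f_inerts, X1, X2],
# where f_i are outlet molar flows and X1, X2 are the extents of the two reactions.
A = np.zeros((10, 10))
b = np.zeros(10)
A[0, [1, 8, 9]] = [1, 4, 2]; b[0] = nh3_in      # NH3: out + 4*X1 + 2*X2 = in
A[1, [0, 9]] = [x_so2, 1];   b[1] = so2_in      # SO2: n4*x_SO2 + X2 = in
A[2, [0, 8]] = [x_no2, 2];   b[2] = no2_in      # NO2: n4*x_NO2 + 2*X1 = in
A[3, [2, 8]] = [1, -3]                          # N2:  out = 3*X1
A[4, [3, 8]] = [1, 1];       b[4] = o2_in       # O2:  out + X1 = in
A[5, [4, 8, 9]] = [1, -6, 1]                    # H2O: out = 6*X1 - X2
A[6, 5] = 1;                 b[6] = co2_in      # CO2: inert in both reactions
A[7, [6, 9]] = [1, -1]                          # (NH4)2SO3: out = X2
A[8, 7] = 1;                 b[8] = inerts_in   # inerts pass straight through
A[9, 0] = 1 - x_so2 - x_no2; A[9, 1:8] = -1     # closure: n4 = sum of outlet flows

z = np.linalg.solve(A, b)
n4, X1, X2 = z[0], z[8], z[9]
print(round(n4, 2), round(X1, 3), round(X2, 3))  # ~ 105.62 1.492 4.961
```

Dividing the component flows by n4 recovers the quoted mole fractions (for instance, z[1]/n4 gives the NH3 fraction of about 0.0188).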
Stake BRIGHT - Bright Union

Join the Union by staking BRIGHT. Many cryptocurrencies allow for staking in a staking pool. This can be seen as an interest-bearing savings account which requires you to lock your funds for a certain amount of time. The reason why, at many good projects, your crypto earns a reward is that it is put to work instead of sitting idle. Your BRIGHT token can be put to work in two ways:

Stake your BRIGHT tokens with Bright Union (see below)
Stake BRIGHT and ETH tokens into a Uniswap liquidity pool (Stake BRIGHT/ETH UNI V2)

Benefits of staking your BRIGHT tokens: Your tokens will accumulate a reward while being "locked" in the protocol. A cooldown period of 7 days exists before unstaked tokens can be withdrawn back to your wallet. Once the DAO is operational, staking BRIGHT tokens will provide you voting rights proportional to the amount of BRIGHT tokens you have staked.

How to Stake your BRIGHT tokens - YouTube tutorial

Press "Stake BRIGHT" and select an amount
Approve spending of the BRIGHT token in your Metamask wallet
Approve the transaction in your Metamask wallet
Your wallet now contains stkBRIGHT as proof that you staked BRIGHT (these tokens can now be sent to another wallet if required)

How to Unstake your BRIGHT token

Select the amount of BRIGHT to unstake
Approve the transaction in your Metamask wallet
Wait until the cooldown period has passed (7 days)
Press "Unstake" in the next 24 hours
Your wallet now contains BRIGHT (including the rewards earned during cooldown)

Note: While the cooldown period is ongoing, the amount of BRIGHT to be unstaked can be adjusted at any time.

The Annual Percentage Yield (APY) is the reward the user receives for staking BRIGHT tokens for an entire year. The APY is a dynamic number which can increase or decrease based on the total amount being staked by all users together.
As the amount of BRIGHT tokens in circulation will grow over three years, it pays off to start staking early, as the rewards will be shared among fewer users.

Tokens released per block: Bright Union will release 0.3 BRIGHT for each block released on the Ethereum blockchain (~every 12 seconds). This amount can be adjusted through votes of the DAO.

Blocks per year: There are roughly 2.3M blocks released per year. This implies that nearly 1.15M BRIGHT tokens are released per year, roughly ~1% of the total supply (which is available after 3 years).

Total BRIGHT staked: The total amount of BRIGHT tokens staked by all users together.

APY = TokensReleasedPerBlock * BlocksPerYear / TotalBRIGHTStaked

Yield for user: The amount of BRIGHT tokens the user will receive for his/her stake over the duration of a single block.

BRIGHT staked by user: The amount of BRIGHT tokens staked by the user.

YieldForUser (BRIGHT per block) = TokensReleasedPerBlock * BRIGHTStakedByUser / TotalBRIGHTStaked

Equivalently, substituting the APY definition: YieldForUser (BRIGHT per block) = BRIGHTStakedByUser * APY / BlocksPerYear

During the staking period the user will have a hidden amount of stkBRIGHT in their wallet as proof of staking the BRIGHT tokens. This is a derivative token that exists as proof of staking and can be traded as an ERC-20 token with any Ethereum wallet.
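The APY and per-user yield formulas above can be sketched in code. The staking total used below is an invented illustration, not a protocol figure; the per-block and blocks-per-year constants are the rough figures quoted in the text:

```python
TOKENS_PER_BLOCK = 0.3        # BRIGHT released per Ethereum block (figure from the text)
BLOCKS_PER_YEAR = 2_300_000   # rough figure from the text

def apy(total_bright_staked):
    # APY = TokensReleasedPerBlock * BlocksPerYear / TotalBRIGHTStaked
    return TOKENS_PER_BLOCK * BLOCKS_PER_YEAR / total_bright_staked

def yield_per_block(bright_staked_by_user, total_bright_staked):
    # YieldForUser = TokensReleasedPerBlock * BRIGHTStakedByUser / TotalBRIGHTStaked
    return TOKENS_PER_BLOCK * bright_staked_by_user / total_bright_staked

total = 6_900_000             # hypothetical total staked, for illustration only
print(apy(total))             # ~ 0.1, i.e. 10% APY at this total
print(yield_per_block(1000, total) * BLOCKS_PER_YEAR)  # yearly BRIGHT for a 1000-token staker
```

Note that a 1000-token staker earning the per-block yield for a full year collects 1000 times the APY, which is how the two formulas agree.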
RAS200 — engaging citizens with astronomy across cultural divides

The night skies and the planet on which we live can be inspirational to young and old alike. In the run-up to its 200th anniversary in 2020, the U.K.'s Royal Astronomical Society has put together a £1 million scheme to fund outreach and engagement activities for groups that are less well served in terms of access to astronomy and geophysics. This article outlines the projects funded and the impact they are starting to have.

The ancient Babylonians knew them as mulmul, to the Hawai’ians they are the makali’i, for the Japanese they are subaru, to Māori they are Matariki, and in western astronomy they are the Pleiades, to list but a few. The little formation of seven closely grouped stars in the constellation of Taurus has clearly excited note and interest for thousands of years and across many cultures. After all, anyone, wherever they are, rich or poor, can look up at the stars and wonder. But as it approaches its 200th anniversary in 2020, Great Britain’s Royal Astronomical Society, which covers the United Kingdom and Ireland and has members across the world, has become acutely aware that it actually has less appeal across cultural boundaries in the British Isles and elsewhere than its subject warrants, despite the efforts of its own Fellows and many thousands of amateur and professional astronomers. And that is a failure of the duty to communicate with its fellow citizens that was placed on the scientific community as a whole over 30 years ago [Royal Society of London, 1985]. RAS200 is one of the Society’s programmes designed to address this. Set up in 2014 [A&G editorial, 2014] and run by its own Steering Group, the scheme involves a £1 million investment from RAS reserves to fund grass-roots projects to reach groups in society less well served than the “usual suspects” who attend public lectures or visit museums and planetaria.
The widest community input to the scheme was encouraged right from the start [see Bowler, 2014]. “If the things we are doing for outreach worked across society, we wouldn’t need to be here today,” explained Steering Group member Helen Fraser, at the scheme’s 2014 launch event [Bowler, 2014]. At its heart, RAS200 had ambitious aims to break down cultural barriers to understanding, appreciating and engaging with the sciences of astronomy and geophysics, and the Society realized that it needed help to achieve these ends. One of the cultural barriers that had to be breached was that — for a Society with aspirations to understand the entire universe — the RAS was heavily based in London. So Steering Group members attended meetings right across the British Isles — Scotland, Wales, the Isle of Man and the various English regions away from the English capital, as well as Ireland. To open up to new groups, the Society deliberately set out to work with partner organisations that already worked with young people who have dropped out of the education and training systems, with adults whose education at school left a lot to be desired or whose lives have led them to get on the wrong side of the judicial system, and with people whose lives are so occupied by caring duties that they have little-to-no time to call their own. RAS200 has run two rounds of funding — in 2014–15 and 2016–17 — attracting some 150 proposals from across the British Isles and abroad. As a result, the RAS is working with 10 British partners, one Irish and one from South Africa to ensure the sciences we support — astronomy and geophysics — are accessible [Bowler, 2015; Bowler, 2017]. Education — or the lack of it — is one of the main cultural barriers in western societies. So one of the first groups with which RAS200 was able to engage was the community served by the Prince’s Trust, set up in 1976 by the U.K.’s heir to the throne to help young people who felt society had turned its back on them. 
Liz Avery of the Royal Observatory Greenwich helped to train Prince’s Trust leaders so that they could deliver courses in astronomy and geophysics that would really engage young people who had rejected (or been rejected by) their schools and colleges: “They were enthusiastic, but nervous of talking about science and of the questions they might be asked — ‘What’s inside of a Black Hole?’ or ‘What will happen when the Sun dies?’ — as well as the moral and ethical side of science and how religion fits in.” [Avery cited by Bowler, 2018a]. As a result of that RAS200-supported training of its own trainers, astronomy is now a regular part of the camping-based courses the Trust runs, has its own dedicated “get started” course, and has been an inspiration for young film-makers breaking into the profession. The young film-makers’ course made use of the RAS headquarters in Burlington House, Piccadilly, for great location shots. One of the more unexpected outcomes of the Trust’s observing activities is that some of their hardest-to-reach young people open up about their personal problems and concerns whilst they are looking at the stars. If school and college did not provide quite what you wanted or needed from your education, the Workers’ Educational Association (WEA) may help to fill in the gaps in later life. Now RAS200 funding is helping the WEA make a major turn towards providing courses in science, technology, engineering and mathematics (STEM). The Association is concentrating its first courses in astronomy in the north of England, but plans to roll them out nationwide over the next few years. Bounce Back is an organization that works with serving prisoners, attempting to equip them with additional skills to help them after their release. “We were excited by the idea of enabling prisoners to ‘look at the skies’ and contemplate their place in the vast universe from inside the confined prison walls,” says Bounce Back’s Joanne Black.
Now an RAS200 project is using astronomy as an inspiration for one of the wallpaper design courses the charity runs in Brixton prison. And astronomy books are featuring in schemes that enable prison dads to read with their children, helping to ensure that they do not lose touch with their families whilst serving their sentences. Steven Gray has a small business running planetarium shows in Scotland. In 2014, he met Ruth MacLennan at an RAS200 “town hall” meeting in Glasgow. MacLennan is the manager of the charity “Care for Carers”, which organizes holidays for carers from rural Scotland, people who are usually too busy looking after their relatives to get any time for themselves. Gray and MacLennan’s project involves giving carers breaks on the Island of Coll in the Inner Hebrides, where the dark skies make it ideal for astronomy — so long as the weather holds. “When it’s clear, we are outside observing the Sun, Moon, planets and deep sky objects. When it’s cloudy, we’re in the planetarium. The feedback has been extremely positive,” Gray says. For some of the carers, the breaks are vital to overcome the sense of isolation that they feel. For others, they contribute enormously to their health and well-being: “Events connected with space and the stars, apart from being fun and interesting, have helped me through the night to get back to sleep on many occasions, since I find looking for and at the stars comforting and extremely pleasurable when I find them. I did not have this tool prior to these events,” explained one of the participants. If the four projects outlined above try to overcome the cultural barriers caused by missing out on educational opportunities, the RAS has also been keen to ensure that the historic cultural diversity of the British Isles is as well served as possible. Two new projects, starting this year, are set in Celtic regions.
Galway, on the west coast of Ireland, has an annual cultural festival to which RAS200 funding will add astronomy and geophysics. Galway’s festival, as well as appealing to its settled community, also reaches out to travellers and asylum seekers. According to Andy Shearer of the National University of Ireland, involvement in the Arts Festival will help astronomers to “learn different ways of communicating”. Cornwall, in the extreme south-west of England, is a largely rural county with poor town-to-town communication routes. “Cornwall Sea to Skies” emphasizes the rich fishing and navigation heritage of the region and its link both to the impressive Cornish geology and to its dark skies [A&G news, 2018]. As part of the RAS200 project they have acquired and equipped a travelling laboratory so as to be able to reach some of the most isolated communities in the county. Another major cultural divide is between the arts and humanities, on the one side, and the sciences (broadly understood), on the other — a divide immortalized by C.P. Snow’s much-contested 1959 lecture to Cambridge academics [Snow, 2012 [1959]]. Whilst all of the RAS200 projects have elements of tackling this issue, two are very explicitly focused on it. Welsh astronomer Geraint Jones explains: “I am sure there are many in the arts and humanities who are not naturally attracted to the sciences, and this was a fantastic idea to weave astronomy and geophysics with cultural activities.” So the project he is involved with in Wales links with the national and youth Eisteddfodau — annual cultural festivals that deliver their events using the Welsh language and Welsh artistic forms — to bring astronomy to arts-orientated audiences in the Principality through poetry and dance: a “planet clog dance” wowed crowds at the 2015 youth Eisteddfod. When Gustav Holst composed The Planets suite in 1916 he was more interested in the mythological aspects of our nearest space neighbours than the science.
But in the century that has followed, numerous space missions and observations with ever-larger and better-equipped telescopes have transformed our understanding of these other worlds. The National Space Centre in Leicester has developed a suite of full-dome planetarium shows based on Holst’s original music but making use of the latest science, as well as versions based on the response of modern composers and artists to our planetary system. Reviewing the new shows, Stephen Serjeant of the Open University applauds the daring and creativity of the NSC: “We humans are not just rational beings and science needs to engage on more than just an intellectual level.” [Serjeant, 2018]. For better or worse, the U.K. has had a long involvement with South Africa, including with the South African Astronomical Observatory (SAAO) in Cape Town, coincidentally celebrating its bicentenary in 2020 along with the RAS. SAAO scientist Sivuyile Manxoyi and his team will use this and the Iziko Museum and Planetarium to tell the story of astronomy in South Africa, including its latest phase as the country starts to host the massive Square Kilometre Array radio telescope. “We would love the public to be aware of what South Africa is contributing to science, as well as what the impact will be on our society and on our economic development,” Manxoyi says [Bowler, 2018a]. The U.K. has a tradition of skills-building organisations outside of the formal education system, of which scouting for young people has perhaps the greatest popularity. Girlguiding is the country’s girl-only youth organization, and it too is aiming to enhance its STEM provision for members from the ages of 5 to 25. Much of the organisation’s activity centres on its members achieving badges for demonstrating various life and societal skills. “We are passionate about giving girls opportunities that broaden their horizons, enable them to try new things and have adventures,” project leader Robyn McAllister explains.
“‘Reaching for the Stars’ encapsulates this perfectly.” Partnering with the U.K.’s National Space Agency, as well as the RAS, Girlguiding has an ambitious programme to develop space and astronomy activities that will allow its members to achieve badge proficiency at various levels [Bowler, 2018b]. When Leicester City, one of football’s less glamorous teams, became champions of the English Premier League in 2016, most of their fans were unaware that events surrounding this achievement had — quite literally — caused the Earth to move. For while their team scored goals, geology students at the University of Leicester had set up seismometers at a local school near the football ground and were measuring small earthquakes due to fans stomping their feet and otherwise applauding their team. This has inspired the Leicester-based National Youth Association and partners to work with RAS200 to create a “Geophysics in a Box” seismology set built out of Lego and other readily available components [Offer, 2018]. Although Leicester City do not seem likely to achieve the heights of the 2015–16 season this year, as the “Geophysics in a Box” project rolls out it is hoped to include many other football clubs, and even to get the project onto BBC Television’s popular “Match of the Day” programme. Some of the people most marginalized by society can be on the autism spectrum, so the National Autistic Society is working alongside several of the RAS200 projects to see how best they can be adapted and carried out with the needs of autistic people in mind. All of the projects are also being carefully and developmentally evaluated as part of RAS200 by Jenesys Associates, so that the lessons from this “experiment” in science communication and engagement can be learned and passed on to the community as a whole. RAS200 projects will last until 2022, taking them through the Society’s bicentennial.
But it is hoped that each of them will create a more long-lasting legacy of outreach, engagement, education and training activities and understanding. And RAS200 is also working to change the culture within the RAS itself. In the end, the Society has to be more outward looking and socially engaged if it is to survive for the next 200 years. A&G editorial (2014). ‘Astonishing and glorious: for 200 years’. Astronomy & Geophysics 55 (3), 3.4. https://doi.org/10.1093/astrogeo/atu091. A&G news (2018). ‘RAS 200 takes off in Truro’. Astronomy & Geophysics 59 (3), 3.8. https://doi.org/10.1093/astrogeo/aty125. Bowler, S. (2014). ‘RAS 200: reaching out further for 2020’. Astronomy & Geophysics 55 (5), 5.11. https://doi.org/10.1093/astrogeo/atu210. — (2015). ‘RAS 200: the first projects’. Astronomy & Geophysics 56 (3), 3.11. https://doi.org/10.1093/astrogeo/atv088. — (2017). ‘Winning ways forward: RAS 200 awards’. Astronomy & Geophysics 58 (3), 3.15. https://doi.org/10.1093/astrogeo/atx097. — (2018a). ‘Making big data connections with RAS 200’. Astronomy & Geophysics 59 (5), 5.10. https://doi.org/10.1093/astrogeo/aty227. — (2018b). ‘RAS 200 needs you!’ Astronomy & Geophysics 59 (1), 1.9. https://doi.org/10.1093/astrogeo/aty022. Offer, L. (2018). ‘Football and Lego bring seismology to kids’. Astronomy & Geophysics 59 (4), 4.14. https://doi.org/10.1093/astrogeo/aty189. Serjeant, S. (2018). ‘New RAS 200 show is all-round impressive’. Astronomy & Geophysics 59 (3), 3.11. https://doi.org/10.1093/astrogeo/aty140. Snow, C. P. (2012 [1959]). The two cultures. Cambridge, U.K.: Canto Classics, Cambridge University Press. https://doi.org/10.1017/cbo9781139196949. Steve Miller is Emeritus Professor of Science Communication and Planetary Science at University College London. He chairs the Steering Group for RAS 200 and was co-convenor of the two sessions on “Communicating science across cultures” at PCST 2018 in Dunedin, New Zealand/Aotearoa, from which this article originates. 
E-mail: s.miller@ucl.ac.uk. Sue Bowler is Visiting Research Fellow in the School of Earth and Environment at the University of Leeds. She is the Editor of Astronomy & Geophysics and a member of the RAS 200 Steering Group. E-mail: sbowler@ras.ac.uk. Sheila Kanani is the Education, Outreach and Diversity Officer at the Royal Astronomical Society. She is the Coordinator of the RAS 200 Steering Group. E-mail: skanani@ras.ac.uk. Miller, S., Bowler, S. and Kanani, S. (2018). ‘RAS200 — engaging citizens with astronomy across cultural divides’. JCOM 17 (04), CN03. https://doi.org/10.22323/2.17040303.
Use differentials to approximate the change in profit given $x = 10$ units and $dx = 0.2$, where the profit function is

$P(x) = -4x^2 + 90x - 128$

A differential is a method of approximating a change: $dP = P'(x)\,dx$. Here $P'(x) = -8x + 90$, so at $x = 10$ with $dx = 0.2$ we get $dP = (-8 \cdot 10 + 90)(0.2) = 2$.

The power rule for integration: $\int x^n\,dx = \frac{x^{n+1}}{n+1} + C$.

For the setup of the area problem we need to integrate the region between the x-axis, the curve, $x = 1$, and $x = 4$.
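A quick numerical sketch (my addition, not part of the original exercise) compares the differential approximation against the exact change in profit:

```python
def P(x):
    # profit function from the problem statement
    return -4 * x**2 + 90 * x - 128

def dP(x, dx):
    # differential approximation dP = P'(x) * dx, with P'(x) = -8x + 90
    return (-8 * x + 90) * dx

approx = dP(10, 0.2)       # (-80 + 90) * 0.2 = 2.0
exact = P(10.2) - P(10)    # ~ 1.84
print(approx, round(exact, 2))
```

The differential overestimates slightly (2 versus 1.84) because P is concave down, which is exactly the behavior the linear approximation ignores.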
International conference on dynamical systems in mathematical physics. Astérisque, no. 40 (1976), 200 p. http://numdam.org/item/AST_1976__40_/

Contents include: K-flows; Introduction to stochastic field theory (abstract); ℤ₊^N; Shiokawa, Ietaka; A generalized Ruelle Perron-Frobenius theorem and some applications.
PneumoniaCheck: A Device for Sampling Lower Airway Aerosols | J. Med. Devices | ASME Digital Collection

Tamera L. Scholz, G.W.W. School of Mechanical Engineering, e-mail: tamera.scholz@me.gatech.edu
Prem A. Midha, e-mail: prem@gatech.edu
Larry J. Anderson, Division of Viral Diseases, NCIRD, CoCID, e-mail: lja2@cdc.gov, 315 Ferst Drive, Room 2307, Atlanta, GA 30332

Scholz, T. L., Midha, P. A., Anderson, L. J., and Ku, D. N. (November 8, 2010). "PneumoniaCheck: A Device for Sampling Lower Airway Aerosols." ASME. J. Med. Devices. December 2010; 4(4): 041005. https://doi.org/10.1115/1.4002760

The pathogens causing pneumonia are difficult to identify because a high-quality specimen from the lower lung is difficult to obtain. A new specimen collection device is designed to collect aerosol specimens selectively from the lower lung generated during deep coughing. The PneumoniaCheck device utilizes a separation reservoir and Venturi valve to segregate contents from the upper and lower airways. The device also includes several specially designed features to exclude oral contaminants from the sample and a filter to collect the aerosolized pathogens. Verification testing of PneumoniaCheck demonstrates effective separation of upper airway gas from lower airway gas (p < 0.0001) and exclusion of both liquid and viscous oral material (p < 0.0001) from the collection chamber. The filters can collect 99.9997% of virus- and bacteria-sized particles from the sampled lower lung aerosols. The selective collection of specimens from the lower airway may aid in the diagnosis of specific pathogens causing pneumonia.
Keywords: aerosols, diseases, filters, microorganisms, patient diagnosis, pneumonia, medical device, lower airway separation, streptococcus pneumoniae, alcohol testing
n is a positive integer; write an integral to represent

\lim_{n \rightarrow \infty} \frac{1}{n}\left[\frac{1}{(1/n)} + \frac{1}{(2/n)} + \cdots + \frac{1}{(n/n)}\right]

Notice that this is a Riemann sum with infinitely many rectangles, and a Riemann sum with infinitely many rectangles is the definition of an integral. Can you rewrite this as an integral?

Since \lim_{n \rightarrow \infty} \frac{1}{n} = \lim_{\Delta x \rightarrow 0} \Delta x = dx, we can substitute dx for \frac{1}{n}; dx represents the infinitely small width of each rectangle.

Now let's find the height of each rectangle. Heights, of course, are represented by a function f(x). But what is f(x)? Since x is a variable, we let x represent the part of the sum that is changing, the sample points \frac{k}{n}, so f(x) = \frac{1}{x}. This is beginning to look more like an integral:

\int \frac{1}{x}\, dx

We still need to find the bounds of the integral. The smallest sample point is x = \frac{1}{n}, and since \lim_{n \rightarrow \infty} \frac{1}{n} = 0, the lower bound is 0.
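A quick numeric check (a Python sketch, not part of the original answer) confirms the integrand: the sum collapses to the harmonic number H_n ≈ ln n + γ, which is exactly the growth that an antiderivative ln x predicts.

```python
import math

def riemann_sum(n):
    """(1/n) * [1/(1/n) + 1/(2/n) + ... + 1/(n/n)], the sum from the question."""
    return sum(1.0 / (k / n) for k in range(1, n + 1)) / n

# Each term 1/(k/n) equals n/k, so the whole expression collapses to the
# harmonic number H_n = 1 + 1/2 + ... + 1/n, which grows like ln(n) + gamma.
# That matches the integrand 1/x, whose antiderivative is ln(x); the improper
# integral of 1/x down to the lower bound 0 diverges, just as H_n does.
gamma = 0.5772156649015329   # Euler-Mascheroni constant
n = 100_000
print(riemann_sum(n), math.log(n) + gamma)
```

The two printed values agree to several decimal places, the gap shrinking like 1/(2n).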
wildcard - Maple Help

verify a relation between two expressions, independent of variable names

Calling sequences:
verify(expr1, expr2, wildcard)
verify(expr1, expr2, wildcard(typ))
verify(expr1, expr2, wildcard(typ, ver))

The verify(expr1, expr2, wildcard) and verify(expr1, expr2, wildcard(typ, ver)) calling sequences return true if it can be determined that the expressions expr1 and expr2 are equivalent except for their subexpressions of type typ, either by testing for equality or by using verify with verification ver.

Concretely, this command determines all subexpressions of expr1 and expr2 of type typ and verifies that expr1 has the same number of distinct subexpressions of type typ as expr2 does. If this is not the case, then the command returns false. Otherwise, it goes through all possible matchings m that pair one subexpression of type typ of expr1 with one of expr2; for each m, Maple tests whether substituting the paired subexpressions into expr1 yields an expression that is equal to expr2. If ver is specified, then instead of testing for equality, Maple tests whether the result of the substitution is in the relation tested by that verification to expr2.

The default value for the type typ is name. The verifications wildcard and wildcard(typ) are symmetric. A verification wildcard(typ, ver) is symmetric if and only if the verification ver is symmetric. If expr1 and expr2 have many subexpressions of type typ, this command can take a long time.

This returns true, because both expressions contain a single name and substituting y for x in the first expression yields the second:

verify(x^2 - x, y^2 - y, 'wildcard');
                              true

In the following two examples, only the name a indexed by something is allowed to be considered a wildcard.
verify(x^2 - a[0], x^2 - a[1], 'wildcard'('specindex'('a')));
                              true

verify(x^2 - a[0], y^2 - a[1], 'wildcard'('specindex'('a')));
                              false

In the following examples, the order of the entries of the lists does not matter. You can specify this by using the verification as_set.

verify([x^2 - a, x + x*y], [s + s*z, s^2 - y], 'wildcard'('name', 'as_set'));
                              true

verify([x^2 - a, x + x*y + 1], [s + s*z, s^2 - y], 'wildcard'('name', 'as_set'));
                              false

The verify/wildcard command was introduced in Maple 2017.
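Outside Maple, the matching procedure described above (collect the wildcard subexpressions of both operands, try every pairing, substitute, and compare) can be illustrated with a small pure-Python toy. Expressions are nested tuples whose first element is an operator symbol and whose string leaves are variables. This is an illustration of the algorithm only, not Maple's implementation, and it handles just the default typ = name case with plain equality.

```python
from itertools import permutations

def variables(expr, acc=None):
    """Collect variable names from a nested-tuple expression
    whose first tuple element is always an operator symbol."""
    acc = set() if acc is None else acc
    if isinstance(expr, tuple):
        for sub in expr[1:]:
            variables(sub, acc)
    elif isinstance(expr, str):
        acc.add(expr)
    return acc

def substitute(expr, mapping):
    """Rename variables according to mapping, leaving operators intact."""
    if isinstance(expr, tuple):
        return (expr[0],) + tuple(substitute(s, mapping) for s in expr[1:])
    if isinstance(expr, str):
        return mapping.get(expr, expr)
    return expr

def verify_wildcard(e1, e2):
    """True if some bijection between the variable sets turns e1 into e2."""
    v1, v2 = sorted(variables(e1)), sorted(variables(e2))
    if len(v1) != len(v2):           # unequal wildcard counts: false, as in Maple
        return False
    return any(substitute(e1, dict(zip(v1, perm))) == e2
               for perm in permutations(v2))

# x^2 - x  vs  y^2 - y: equivalent up to renaming x -> y
e1 = ('-', ('^', 'x', 2), 'x')
e2 = ('-', ('^', 'y', 2), 'y')
print(verify_wildcard(e1, e2))   # prints True
```

As in the Maple help text, trying all pairings makes the cost factorial in the number of wildcard subexpressions, which is why the documentation warns the command can take a long time.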
Behavioral model of voltage-controlled oscillator - MATLAB

The Voltage-Controlled Oscillator block provides a behavioral model of a voltage-controlled oscillator (VCO). The output voltage is defined by the following equations:

v_{\mathrm{lim}} = \begin{cases} v_{\min} & \text{for } v_{in} < v_{\min} \\ v_{in} & \text{for } v_{\min} \le v_{in} \le v_{\max} \\ v_{\max} & \text{for } v_{in} > v_{\max} \end{cases}

\dot{\Phi} = 2\pi F(v_{\mathrm{lim}})

v_{out} = A \sin(2\pi f_{nom} t + \Phi) - i_{out} R_{out}

where:
v_{in} is the voltage applied across the 1+ and 1– ports.
v_{out} is the voltage across the 2+ and 2– ports.
f_{nom} is the oscillator frequency when the input control voltage is v_{nom}.
F is a linear function of v_{lim} or a lookup table function of v_{lim}.
A is the output voltage peak amplitude.
t is simulation time.

If you choose Linear for the Frequency dependence on input voltage parameter, then the function F is given by:

F = f_{nom} + k (v_{lim} - v_{nom})

where k is the rate of change of frequency with input voltage.

If you choose Tabulated for the Frequency dependence on input voltage parameter, then the function F is defined by the vectors of input voltages and corresponding output frequency deviations from nominal that you supply. The values for v_{\min} and v_{\max} are the first and the last values of the input voltage vector.

You can model the time delay between a change in the input control voltage and the oscillator frequency. Do this by modeling a first-order dynamic between v_{lim} and the value passed to the function F.
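The linear-mode equations above can be condensed into a few lines of Python. This is an illustrative discrete-time sketch with an open-circuit output (i_out = 0), not the Simscape implementation; the function name and argument list are ours.

```python
import math

def vco_output(v_in_samples, dt, f_nom, v_nom, k, v_min, v_max, A):
    """Discrete-time sketch of the linear-mode VCO equations: clamp v_in
    to [v_min, v_max], map it to a frequency F, accumulate the phase, and
    form v_out = A*sin(2*pi*f_nom*t + Phi), with i_out = 0."""
    phi, t, out = 0.0, 0.0, []
    for v_in in v_in_samples:
        v_lim = min(max(v_in, v_min), v_max)   # input clamp
        f = f_nom + k * (v_lim - v_nom)        # F = f_nom + k*(v_lim - v_nom)
        phi += 2 * math.pi * f * dt            # dPhi/dt = 2*pi*F(v_lim)
        out.append(A * math.sin(2 * math.pi * f_nom * t + phi))
        t += dt
    return out
```

Driving the model with a constant v_in = v_nom makes F collapse to f_nom; stepping v_in outside [v_min, v_max] exercises the clamp.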
Ports:

1+ — Positive input voltage. Electrical conserving port associated with the oscillator positive input voltage.
1- — Negative input voltage. Electrical conserving port associated with the oscillator negative input voltage.
2+ — Positive output voltage. Electrical conserving port associated with the oscillator positive output voltage.
2- — Negative output voltage. Electrical conserving port associated with the oscillator negative output voltage.

Parameters:

Frequency dependence on input voltage — Block parameterization: Linear (default) | Tabulated.
  Linear — Define a linear function by specifying the rate of change of frequency with input voltage. This is the default option.
  Tabulated — Provide the vectors of input voltages and corresponding output frequency deviations from nominal. The block determines the frequency deviation by table lookup based on these values.
Nominal frequency — The oscillator frequency when the input control voltage is at the nominal value.
Input voltage corresponding to nominal frequency — The input voltage corresponding to the oscillator nominal frequency. This parameter is visible only when you select Linear for the Frequency dependence on input voltage parameter.
Rate of change of frequency with input voltage — 1 Hz/V (default). The linear coefficient defining the rate of change of frequency with input voltage.
Minimum input voltage — The minimum input voltage that affects VCO frequency.
Maximum input voltage — The maximum input voltage that affects VCO frequency.
Input voltage vector — [0, .2, .4, .6, .8, 1] V (default). The vector of voltages for the tabulated VCO frequency. This parameter is visible only when you select Tabulated for the Frequency dependence on input voltage parameter.
Frequency deviation from nominal — [-1000, -329, -51, 162, 342, 500] Hz (default). The corresponding vector of VCO frequencies relative to the nominal frequency.
Output voltage peak amplitude — The peak amplitude of the voltage across the 2+ and 2– terminals.
Input resistance — The resistance seen at the 1+ and 1– terminals.
Output resistance — The value of the series output resistance.
Dynamics — No dynamics (default) | Model frequency tracking dynamics. Select one of the following methods for specifying dynamics:
  No dynamics — Do not model the time delay between a change in the input control voltage and the oscillator frequency. This is the default option.
  Model frequency tracking dynamics — Model a first-order dynamic between the input control voltage and the oscillator frequency.
Frequency tracking time constant — Time constant for the first-order filter that delays the measured input control voltage, to model the lag between a change in VCO demanded frequency and the resulting VCO frequency. This parameter is visible only when you select Model frequency tracking dynamics for the Dynamics parameter.
Initial frequency — The initial VCO output frequency.
In finance, a foreign exchange option (commonly shortened to just FX option or currency option) is a derivative financial instrument that gives the right but not the obligation to exchange money denominated in one currency into another currency at a pre-agreed exchange rate on a specified date.[1] See Foreign exchange derivative. The foreign exchange options market is the deepest, largest and most liquid market for options of any kind. Most trading is over the counter (OTC) and is lightly regulated, but a fraction is traded on exchanges like the International Securities Exchange, Philadelphia Stock Exchange, or the Chicago Mercantile Exchange for options on futures contracts. The global market for exchange-traded currency options was notionally valued by the Bank for International Settlements at $158.3 trillion in 2005.[citation needed]

For example, a GBPUSD contract could give the owner the right to sell £1,000,000 and buy $2,000,000 on December 31. In this case the pre-agreed exchange rate, or strike price, is 2.0000 USD per GBP (or GBP/USD 2.00 as it is typically quoted) and the notional amounts (notionals) are £1,000,000 and $2,000,000. This type of contract is both a call on dollars and a put on sterling, and is typically called a GBPUSD put, as it is a put on the exchange rate; although it could equally be called a USDGBP call. If the rate is lower than 2.0000 on December 31 (say 1.9000), meaning that the dollar is stronger and the pound is weaker, then the option is exercised, allowing the owner to sell GBP at 2.0000 and immediately buy it back in the spot market at 1.9000, making a profit of (2.0000 GBPUSD − 1.9000 GBPUSD) × 1,000,000 GBP = 100,000 USD in the process. If instead they take the profit in GBP (by selling the USD on the spot market) this amounts to 100,000 / 1.9000 = 52,632 GBP.

Call option – the right to buy an asset at a fixed date and price.
Put option – the right to sell an asset at a fixed date and price.
Foreign exchange option – the right to sell money in one currency and buy money in another currency at a fixed date and rate.
Strike price – the asset price at which the investor can exercise an option.
Spot price – the price of the asset at the time of the trade.
Forward price – the price of the asset for delivery at a future time.
Notional – the amount of each currency that the option allows the investor to sell or buy.
Ratio of notionals – the strike, not the current spot or forward.
Numéraire – the currency in which an asset is valued.
Non-linear payoff – the payoff for a straightforward FX option is linear in the underlying currency, denominating the payout in a given numéraire.
Change of numéraire – the implied volatility of an FX option depends on the numéraire of the purchaser, again because of the non-linearity of x ↦ 1/x.
In the money – for a put option, this is when the current price is less than the strike price, and would thus generate a profit were it exercised; for a call option the situation is inverted.

The difference between FX options and traditional options is that in the latter case the trade is to give an amount of money and receive the right to buy or sell a commodity, stock or other non-money asset. In FX options, the asset in question is also money, denominated in another currency. For example, a call option on oil allows the investor to buy oil at a given price and date. The investor on the other side of the trade is in effect selling a put option on the currency. To eliminate residual risk, traders match the foreign currency notionals, not the local currency notionals, else the foreign currencies received and delivered do not offset.
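The arithmetic in the GBPUSD example above is easy to script. The helper name below is ours (a sketch assuming the option is exercised at expiry):

```python
def fx_put_exercise_profit(strike, spot, notional_foreign):
    """Exercise profit, in domestic currency, of a put on the exchange rate:
    sell the foreign notional at the strike, buy it back at the lower spot."""
    return (strike - spot) * notional_foreign

usd_profit = fx_put_exercise_profit(2.0000, 1.9000, 1_000_000)  # ~100,000 USD
gbp_profit = usd_profit / 1.9000                                # ~52,632 GBP at spot
```

This reproduces both figures from the example: (2.0000 − 1.9000) × 1,000,000 GBP in USD, then converted to GBP at the spot rate.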
In the case of an FX option on a rate, as in the above example, an option on GBPUSD gives a USD value that is linear in GBPUSD using USD as the numéraire (a move from 2.0000 to 1.9000 yields a 0.10 × $2,000,000 / $2.0000 = $100,000 profit), but has a non-linear GBP value. Conversely, the GBP value is linear in the USDGBP rate, while the USD value is non-linear. This is because inverting a rate has the effect of x ↦ 1/x, which is non-linear.

Hedging

Corporations primarily use FX options to hedge uncertain future cash flows in a foreign currency. The general rule is to hedge certain foreign currency cash flows with forwards, and uncertain foreign cash flows with options. Suppose a United Kingdom manufacturing firm expects to be paid US$100,000 for a piece of engineering equipment to be delivered in 90 days. If the GBP strengthens against the US$ over the next 90 days the UK firm loses money, as it will receive fewer GBP after converting the US$100,000 into GBP. However, if the GBP weakens against the US$, then the UK firm receives more GBP. This uncertainty exposes the firm to FX risk. Assuming that the cash flow is certain, the firm can enter into a forward contract to deliver the US$100,000 in 90 days' time, in exchange for GBP at the current forward rate. This forward contract is free, and, presuming the expected cash arrives, exactly matches the firm's exposure, perfectly hedging their FX risk.
If the cash flow is uncertain, a forward FX contract exposes the firm to FX risk in the opposite direction, in the case that the expected USD cash is not received, typically making an option a better choice.[citation needed] Using options, the UK firm can purchase a GBP call/USD put option (the right to sell part or all of their expected income for pounds sterling at a predetermined rate), which:

protects the GBP value that the firm expects in 90 days' time (presuming the cash is received)
costs at most the option premium (unlike a forward, which can have unlimited losses)
yields a profit if the expected cash is not received but FX rates move in its favor

Valuation: the Garman–Kohlhagen model

As in the Black–Scholes model for stock options and the Black model for certain interest rate options, the value of a European option on an FX rate is typically calculated by assuming that the rate follows a log-normal process.[2] The earliest currency options pricing model was published by Biger and Hull (Financial Management, spring 1983); it preceded Garman and Kohlhagen's model. In 1983 Garman and Kohlhagen extended the Black–Scholes model to cope with the presence of two interest rates (one for each currency). Suppose that r_d is the risk-free interest rate to expiry of the domestic currency and r_f is the foreign currency risk-free interest rate (where the domestic currency is the currency in which we obtain the value of the option; the formula also requires that FX rates, both strike and current spot, be quoted in terms of "units of domestic currency per unit of foreign currency").
The results are also in the same units and, to be meaningful, need to be converted into one of the currencies.[3]

Then the domestic currency value of a call option into the foreign currency is

c = S_0 e^{-r_f T} \mathcal{N}(d_1) - K e^{-r_d T} \mathcal{N}(d_2)

The value of a put option is

p = K e^{-r_d T} \mathcal{N}(-d_2) - S_0 e^{-r_f T} \mathcal{N}(-d_1)

where

d_1 = \frac{\ln(S_0/K) + (r_d - r_f + \sigma^2/2) T}{\sigma \sqrt{T}}

d_2 = d_1 - \sigma \sqrt{T}

and:
S_0 is the current spot rate,
K is the strike price,
\mathcal{N}(x) is the cumulative normal distribution function,
r_d is the domestic risk-free simple interest rate,
r_f is the foreign risk-free simple interest rate,
T is the time to maturity (calculated according to the appropriate day count convention), and
\sigma is the volatility of the FX rate.

A wide range of techniques are in use for calculating the options risk exposure, or Greeks (for example the Vanna–Volga method). Although the option prices produced by every model agree (with Garman–Kohlhagen), risk numbers can vary significantly depending on the assumptions used for the properties of spot price movements, volatility surface and interest rate curves. After Garman–Kohlhagen, the most common models are SABR and local volatility[citation needed], although when agreeing risk numbers with a counterparty (e.g. for exchanging delta, or calculating the strike on a 25-delta option) Garman–Kohlhagen is always used.

^ "Foreign Exchange (FX) Terminologies: Forward Deal and Options Deal", International Business Times AU, February 14, 2011.
^ "British Pound (GBP) to Euro (EUR) exchange rate history". www.exchangerates.org.uk.
Retrieved 21 September 2016.
^ "Currency options pricing explained". www.derivativepricing.com. Retrieved 21 September 2016.
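The Garman–Kohlhagen call and put formulas quoted above translate directly into code. The following Python sketch uses math.erf for the cumulative normal; the function names are ours, and the only check performed is internal consistency (put–call parity), not a published price.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def garman_kohlhagen(S0, K, T, rd, rf, sigma, kind="call"):
    """Garman-Kohlhagen value, in domestic currency, of a European FX option.
    S0 and K are quoted as units of domestic currency per unit of foreign."""
    d1 = (math.log(S0 / K) + (rd - rf + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    if kind == "call":
        return S0 * math.exp(-rf * T) * norm_cdf(d1) - K * math.exp(-rd * T) * norm_cdf(d2)
    return K * math.exp(-rd * T) * norm_cdf(-d2) - S0 * math.exp(-rf * T) * norm_cdf(-d1)
```

By construction the prices satisfy put–call parity, c − p = S_0 e^{−r_f T} − K e^{−r_d T}, which is a useful sanity check on any implementation.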
Atrial fibrillation (AF) is the most common tachyarrhythmia encountered in clinical practice [1]. Due to progressive population aging, the prevalence of this condition, currently around 2–4% worldwide, is expected to double in the coming decades [1, 2, 3, 4]. Coronary artery disease (CAD) frequently coexists with AF, and management of these associated conditions can be challenging [5]. In addition, AF may induce angina-like chest pain and increase markers of myocardial damage, even in the absence of classical CAD [6]. Despite the frequent coexistence of these two cardiac conditions, the independent prognostic implication of AF in patients with stable CAD remains controversial. In particular, although preclinical and clinical evidence suggests that AF itself may promote a reduction in coronary blood flow [7, 8, 9, 10, 11], less is known regarding the impact of the arrhythmia on cardiac ischemic outcomes in stable CAD patients. The aim of the present systematic review and meta-analysis of prospective adjusted observational studies is, therefore, to assess the independent prognostic impact of concomitant AF in stable CAD patients in terms of mortality, coronary events, and cerebrovascular events. This systematic review and meta-analysis was performed in accordance with the PRISMA [12] and MOOSE [13] guidelines.

2.1 Search strategy and study selection

PubMed/MEDLINE and Google Scholar databases were screened for pertinent articles, using the following keywords: "coronary artery disease", "stable", "atrial fibrillation", "death", "myocardial infarction", "stroke", "coronary revascularization". The search was ended in May 2019. Two independent reviewers (AS and VV) screened the retrieved citations through the title and/or abstract, and all disparities were resolved through consensus.
Studies were included if they reported data from observational prospective studies describing the risk of all-cause death (primary outcome) and/or other cardiovascular outcomes (myocardial infarction, coronary revascularization, stroke) in patients with stable CAD and AF versus patients without history of the arrhythmia, provided that the risk estimates were adjusted for possible confounding variables. Studies that did not fulfil the aforementioned study design criteria or in which data were not adequately reported were excluded from the analysis. Risk of bias evaluation of the included studies was performed using the Newcastle-Ottawa Scale.

Categorical variables were reported as numbers and percentages; median (interquartile range, IQR) was used for the summary statistics of continuous variables. Pairwise meta-analysis of adjusted hazard ratios (HR) of the evaluated endpoints in stable CAD patients with versus without AF was performed after logarithmic transformation using a random-effects model (inverse-variance weighting). Forest plots for each outcome were reported. The I² statistic was used to assess heterogeneity among the included studies. Funnel plot analysis and Egger's test for funnel plot asymmetry were used to assess potential publication bias. Statistical analyses were performed with R version 4.0.0 (R Foundation for Statistical Computing, Vienna, Austria).

The initial search identified 6888 potential studies: among these, 6749 were screened for possible inclusion, 6671 were excluded through title and abstract because not relevant to the topic, and 78 full-text articles were carefully reviewed (Fig. 1). Finally, 5 studies were included in the present systematic review and meta-analysis [14, 15, 16, 17, 18], encompassing 30230 stable CAD patients (2844 with AF, 27386 without AF). Table 1 (Ref. [14, 15, 16, 17, 18]) reports the main characteristics of the studies, including the type of statistical adjustment used to control confounding.
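The analyses were run in R, whose code is not shown in the text. As an illustration only, the inverse-variance random-effects pooling of log-transformed hazard ratios described in the statistical-analysis section can be sketched in Python. The DerSimonian–Laird estimator of the between-study variance τ² is an assumption here (the paper does not name the estimator), and the function name is ours.

```python
import math

def pool_log_hr(hrs, ci_low, ci_high, z=1.96):
    """Random-effects pooling of hazard ratios reported with 95% CIs:
    log-transform, derive per-study variances from the CI width, estimate
    tau^2 (DerSimonian-Laird), then inverse-variance average.
    Returns (pooled HR, CI lower, CI upper, I^2 in percent)."""
    y = [math.log(h) for h in hrs]
    v = [((math.log(hi) - math.log(lo)) / (2 * z)) ** 2       # SE^2 from CI width
         for lo, hi in zip(ci_low, ci_high)]
    w = [1.0 / vi for vi in v]                                # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))    # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)                   # DL between-study variance
    wr = [1.0 / (vi + tau2) for vi in v]                      # random-effects weights
    mu = sum(wi * yi for wi, yi in zip(wr, y)) / sum(wr)
    se = math.sqrt(1.0 / sum(wr))
    i2 = max(0.0, (q - (len(y) - 1)) / q) * 100 if q > 0 else 0.0
    return math.exp(mu), math.exp(mu - z * se), math.exp(mu + z * se), i2
```

Feeding it the per-study adjusted HRs with their 95% CIs returns the pooled HR, its confidence interval, and I².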
The median follow-up duration was 4.8 (IQR 4–4.9) years. Table 2 summarizes pooled baseline features of the meta-analytic population. The majority of patients were men (63.4% and 68.0% in AF and non-AF patients, respectively) and median age was 69.2 and 64.1 years, in AF and non-AF patients, respectively. Median left ventricular ejection fraction was 52.8% and 56.6%, in AF and non-AF patients, respectively. A history of previous stroke/transient ischemic attack (TIA) was present in 22.2% and 16.2% of AF and non-AF patients, respectively. Previous myocardial infarction (MI), percutaneous coronary intervention (PCI) and coronary artery bypass grafting (CABG) were reported in 21.9% and 27.7%, 25.3% and 28.1%, and 6.8% and 5.2%, for AF and non-AF patients, respectively. All included studies showed low risk of bias according to the Newcastle-Ottawa Scale (Table 3, Ref. [14, 15, 16, 17, 18]).

Table 1. Main characteristics of the included studies.

Study (first author, year) | Patients, n | AF, n (%) | Non-AF, n (%) | Follow-up (years) | Statistical adjustment
Otterstad, 2006 [14] | 7665 | 313 (4.1) | 7352 (95.9) | 4.9 | Adjusted Cox regression
Marte, 2009 [15] | 613 | 57 (9.3) | 576 (90.7) | 4.0 | Adjusted Cox regression
Bouzas-Mosquera, 2010 [16] | 17100 | 619 (3.6) | 16481 (96.4) | 6.5 | Adjusted Cox regression
Rohla, 2015 [17] | 1434 | 146 (10.2) | 1288 (89.8) | 4.8 | Adjusted Cox regression
Han, 2018 [18] | 3418 | 1709 (50.0) | 1709 (50.0) | 2.2 | Adjusted Cox regression
n, number; AF, atrial fibrillation.

Table 2. Pooled baseline clinical features of the study population (30230 patients).
Variable | Overall, median (IQR) | AF group (N: 2824) | Non-AF group (N: 27406)
Male sex (%) | 67 (62.8–72) | 63.4 (62.7–71.9) | 68.0 (62.3–72.1)
Hypertension (%) | 51.8 (49.4–83.9) | 49.1 (49.0–83.6) | 52.0 (49.4–84.0)
Smoking (%) | 27.6 (20.7–39.7) | 16.9 (15.2–24.1) | 28.7 (20.9–41.3)
Diabetes (%) | 20.7 (15.6–29.7) | 16.0 (14.5–29.5) | 21.0 (15.7–29.7)
Dyslipidemia (%) | 63.8 (56.2–71.2) | 60.1 (51.9–62.6) | 63.8 (60.0–71.9)
Heart failure (%) | 8.9 (5.6–18.2) | 27.3 (17.2–27.7) | 6.8 (4.2–17.2)
Ejection fraction (%) | 56.3 (52.6–61.7) | 52.8 (50.2–53.4) | 56.6 (52.8–62.3)
Previous stroke/TIA (%) | 17.0 (12.4–21.5) | 22.2 (20–24.3) | 16.2 (11.5–20.8)
Peripheral artery disease (%) | 10.4 (8.7–12.2) | 10.9 (9.2–12.5) | 10.2 (8.4–12.0)
Chronic kidney disease (%) | 23.4 (16.4–30.5) | 30.2 (19.8–40.6) | 23.1 (16.1–30.0)
Previous MI (%) | 27.1 (22.2–39.2) | 21.9 (18.4–39.5) | 27.7 (22.6–39.4)
Previous PCI* (%) | 27.8 (18.5–36.5) | 25.3 (16.2–36.7) | 28.1 (18.6–36.6)
Previous CABG (%) | 5.4 (4.2–6.5) | 6.8 (5.3–8.2) | 5.2 (4.1–6.4)
Antiplatelet agent (%) | 89.3 (66.8–98.8) | 73.0 (52.5–85.9) | 90.0 (79.5–94.7)
Anticoagulant therapy (%) | 5.12 (3.98–6.26) | 33.9 (29.3–45.5) | 3.0 (2.6–3.1)

* Number of patients who had undergone PCI before enrolment in the study.
BMI, body mass index; CABG, coronary artery bypass graft; MI, myocardial infarction; PCI, percutaneous coronary intervention; TIA, transient ischemic attack.

Table 3. Risk of bias evaluation using the Newcastle-Ottawa Scale (NOS).

Study (first author, year) | Selection | Comparability | Outcome
Otterstad, 2006 [14] | **** | ** | ***
Marte, 2009 [15] | **** | ** | ***
Bouzas-Mosquera, 2010 [16] | **** | ** | ***
Rohla, 2015 [17] | **** | ** | ***
Han, 2018 [18] | **** | ** | **

Asterisks indicate the star rating according to the Newcastle-Ottawa Scale. Good quality is defined as: 3–4 stars in "Selection", 1–2 stars in "Comparability", and 2–3 stars in "Outcome". Fair quality is defined as: 2 stars in "Selection", 1–2 stars in "Comparability", and 2–3 stars in "Outcome".
Poor quality is defined as: 0–1 stars in "Selection", or 0 stars in "Comparability", or 0–1 stars in "Outcome".

All five included studies evaluated the primary outcome, while two studies reported adjusted risk estimates for stroke, three for myocardial infarction and two for coronary revascularization. Details on the adjustment performed in each study are reported in Supplementary Table 1 in the Supplementary Material.

Pooled analysis of adjusted observational results indicates an increased risk of death in stable CAD patients with concomitant AF compared to stable CAD patients without the arrhythmia (HR 1.39, 95% CI: 1.17–1.66). A low degree of heterogeneity was found for this outcome (I² = 35%), and funnel plot analysis (Supplementary Fig. 1) did not suggest potential publication bias (Egger's test p-value 0.28). Fig. 2 reports the forest plot for the primary outcome. Focusing on secondary outcomes (Fig. 3 and Supplementary Figs. 2–4), AF independently increased the risk of stroke in this group of patients (HR 1.88, 95% CI: 1.45–2.45, I² = 0%). By contrast, the risk of myocardial infarction (HR 0.90, 95% CI: 0.66–1.22, I² = 25%) and coronary revascularization (HR 0.96, 95% CI: 0.79–1.16, I² = 0%) did not differ between stable CAD patients with and without AF.

Fig. 2. Forest plot for the primary outcome (death). Fig. 3. Forest plots for secondary outcomes (stroke, myocardial infarction and coronary revascularization).

The main findings of the present systematic review and meta-analysis are the following:
• AF independently increases the risk of death in patients with stable CAD by 39%;
• patients with stable CAD and concomitant AF have a nearly twofold increase in the risk of stroke (+88%) compared to patients with stable CAD without AF;
• AF does not seem to translate into an increased risk of classically defined coronary events (myocardial infarction and coronary revascularization).
AF and CAD are two frequently coexisting conditions, sharing common risk factors such as age, hypertension, diabetes mellitus, sleep apnoea, obesity and smoking [5]. Considering increasing life expectancy, these two conditions are expected to coexist even more often in the near future. It is, therefore, critical to evaluate the independent impact (net of possible confounders) that the presence of the most common atrial arrhythmia exerts in patients with stable CAD. In particular, the possible impact of AF on the risk of future coronary events remains largely unexplored. The present systematic review highlights the unexpected paucity of data assessing the clinical impact that AF exerts per se on stable CAD patients: only 5 studies address the hardest clinical endpoint (death), and even fewer address the other cardiovascular outcomes (stroke, myocardial infarction and coronary revascularization). The present analysis shows, in any case, that AF has an independent prognostic influence in patients with stable CAD, conferring an additional 39% risk of death, as well as an 88% additional risk of incident stroke. Since CAD is a risk factor for stroke and death per se, this relationship entails an even greater risk of these complications compared to AF alone. On the other hand, interestingly, AF does not appear to confer a worse CAD-related outcome. However, before drawing definite conclusions about this relationship, the small number of studies included in the analysis must be taken into account, as it could entail statistical underpowering on the topic. Future studies are warranted to reach definitive conclusions. Moreover, evidence is needed investigating the effect of the AF-related "irregularly irregular" rhythm on the coronary circulation, both in terms of acute hemodynamics and of a potential pro-atherogenic effect. First, the observational design of the included studies carries an inherent risk of unaccounted confounders.
In addition, the lack of patient-level data prevented assessment of the possible prognostic implications of the specific AF subtype (paroxysmal, persistent, permanent). Moreover, data on the safety profile of combined therapy with anticoagulant and antiplatelet agents are missing, and considerations in this regard were not possible. Finally, the restricted number of studies evaluating cardiovascular outcomes other than death limits the inferential power to detect potentially significant differences in these outcomes among the groups of interest. In patients with stable CAD, AF exerts an independent negative prognostic effect, increasing the risk of death and stroke. However, the small number of eligible studies included in this analysis highlights the astonishing lack of data regarding the prognostic implications of concomitant AF in patients with stable CAD, stressing the need for future studies focused on this topic, as well as on the hemodynamic effects exerted by the arrhythmia on the coronary circulation. AS, VV and MA designed the research study and conducted the literature search. AS and VV drafted the manuscript. AB, HX, GMDF and MA helped draft and critically revised the manuscript. Benjamin EJ, Muntner P, Alonso A, Bittencourt MS, Callaway CW, Carson AP, et al. Heart disease and stroke statistics-2019 update: a report from the American Heart Association. Circulation. 2019; 139: e56–e528. Chugh SS, Havmoeller R, Narayanan K, Singh D, Rienstra M, Benjamin EJ, et al. Worldwide epidemiology of atrial fibrillation. Circulation. 2014; 129: 837–847. Krijthe BP, Kunst A, Benjamin EJ, Lip GYH, Franco OH, Hofman A, et al. Projections on the number of individuals with atrial fibrillation in the European Union, from 2000 to 2060. European Heart Journal. 2013; 34: 2746–2751. Michniewicz E, Mlodawska E, Lopatowska P, Tomaszuk-Kazberuk A, Malyszko J. Patients with atrial fibrillation and coronary artery disease - Double trouble. Advances in Medical Sciences. 2018; 63: 30–35.
Smit MD, Tio RA, Slart RHJA, Zijlstra F, Van Gelder IC. Myocardial perfusion imaging does not adequately assess the risk of coronary artery disease in patients with atrial fibrillation. Europace. 2010; 12: 643–648. Range FT, Paul M, Schäfers KP, Acil T, Kies P, Hermann S, et al. Myocardial perfusion in nonischemic dilated cardiomyopathy with and without atrial fibrillation. Journal of Nuclear Medicine. 2009; 50: 390–396. Range FT, Schäfers M, Acil T, Schäfers KP, Kies P, Paul M, et al. Impaired myocardial perfusion and perfusion reserve associated with increased coronary resistance in persistent idiopathic atrial fibrillation. European Heart Journal. 2007; 28: 2223–2230. Kochiadakis GE, Skalidis EI, Kalebubas MD, Igoumenidis NE, Chrysostomakis SI, Kanoupakis EM, et al. Effect of acute atrial fibrillation on phasic coronary blood flow pattern and flow reserve in humans. European Heart Journal. 2002; 23: 734–741. Wichmann J, Ertl G, Rudolph G, Kochsiek K. Effect of experimentally induced atrial fibrillation on coronary circulation in dogs. Basic Research in Cardiology. 1983; 78: 473–491. Saito D, Haraoka S, Ueda M, Fujimoto T, Yoshida H, Ogino Y. Effect of atrial fibrillation on coronary circulation and blood flow distribution across the left ventricular wall in anesthetized open-chest dogs. Japanese Circulation Journal. 1978; 42: 417–423. Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JPA, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration. British Medical Journal. 2009; 339: b2700. Stroup DF, Berlin JA, Morton SC, Olkin I, Williamson GD, Rennie D, et al. Meta-analysis of observational studies in epidemiology: A proposal for reporting. The Journal of the American Medical Association. 2000; 283: 2008–2012. Erik Otterstad J, Kirwan B, Lubsen J, De Brouwer S, Fox KAA, Corell P, et al. 
Incidence and outcome of atrial fibrillation in stable symptomatic coronary disease. Scandinavian Cardiovascular Journal. 2006; 40: 152–159. Marte T, Saely CH, Schmid F, Koch L, Drexel H. Effectiveness of atrial fibrillation as an independent predictor of death and coronary events in patients having coronary angiography. The American Journal of Cardiology. 2009; 103: 36–40. Bouzas-Mosquera A, Peteiro J, Broullón FJ, Alvarez-García N, Mosquera VX, Casas S, et al. Effect of atrial fibrillation on outcome in patients with known or suspected coronary artery disease referred for exercise stress testing. The American Journal of Cardiology. 2010; 105: 1207–1211. Rohla M, Vennekate CK, Tentzeris I, Freynhofer MK, Farhan S, Egger F, et al. Long-term mortality of patients with atrial fibrillation undergoing percutaneous coronary intervention with stent implantation for acute and stable coronary artery disease. International Journal of Cardiology. 2015; 184: 108–114. Han S, Park G, Kim Y, Hwang KW, Roh J, Won K, et al. Effect of atrial fibrillation in Asian patients undergoing percutaneous coronary intervention with drug-eluting stents for stable coronary artery disease. Medicine. 2018; 97: e13488.
Analytic Number Theory/Dirichlet series

For the remainder of this book, we shall use Riemann's convention of denoting complex numbers: $s = \sigma + it$.

Let $f$ be an arithmetic function. Then the Dirichlet series associated to $f$ is the series
$$\sum_{n=1}^{\infty} \frac{f(n)}{n^s},$$
where $s$ ranges over the complex numbers.

Convergence considerations

Theorem 5.2 (abscissa of absolute convergence): Let $f$ be an arithmetic function such that the series of absolute values associated to the Dirichlet series of $f$,
$$\sum_{n=1}^{\infty} \left| \frac{f(n)}{n^s} \right|,$$
neither diverges for all $s \in \mathbb{C}$ nor converges for all $s \in \mathbb{C}$. Then there exists $\sigma_a \in \mathbb{R}$, called the abscissa of absolute convergence, such that the Dirichlet series associated to $f$ converges absolutely for all $s = \sigma + it$ with $\sigma > \sigma_a$, and its associated series of absolute values diverges for all $s = \sigma + it$ with $\sigma < \sigma_a$.

Proof: Denote by $S$ the set of all real numbers $\sigma$ for which
$$\sum_{n=1}^{\infty} \left| \frac{f(n)}{n^{\sigma}} \right|$$
diverges. Due to the assumption, this set is neither empty nor all of $\mathbb{R}$. If $\sigma_0 \notin S$ and $\sigma > \sigma_0$, then $\sigma \notin S$, because
$$\left| \frac{f(n)}{n^{s_0}} \right| = \frac{|f(n)|}{n^{\sigma_0}} \geq \frac{|f(n)|}{n^{\sigma}} = \left| \frac{f(n)}{n^s} \right|$$
and the comparison test applies. It follows that $S$ is bounded above and thus has a supremum. Let $\sigma_a$ be that supremum.
By definition, for $\sigma > \sigma_a$ we have convergence, and if we had convergence for some $\sigma < \sigma_a$, the above argument would yield an upper bound for $S$ smaller than $\sigma_a$, contradicting the definition of $\sigma_a$ as the supremum of $S$. $\Box$

Theorem 5.3 (abscissa of conditional convergence):

Formulas

Theorem 8.4 (Euler product): Let $f$ be a strongly multiplicative function, and let $s \in \mathbb{C}$ be such that the corresponding Dirichlet series converges absolutely. Then for that series we have the formula
$$\sum_{n=1}^{\infty} \frac{f(n)}{n^s} = \prod_{p \text{ prime}} \frac{1}{1 - \frac{f(p)}{p^s}}.$$

Proof: This follows directly from theorem 2.11 and the fact that $f$ strongly multiplicative implies $\frac{f(n)}{n^s}$ strongly multiplicative. $\Box$
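The Euler product of Theorem 8.4 can be checked numerically for the simplest strongly multiplicative function, $f \equiv 1$, whose Dirichlet series at $s = 2$ is $\zeta(2) = \pi^2/6$. A small sketch comparing a partial sum of the series with a finite Euler product (the truncation limits are arbitrary):

```python
import math

def dirichlet_partial_sum(f, s, n_terms):
    """Partial sum of the Dirichlet series sum_{n>=1} f(n) / n^s."""
    return sum(f(n) / n**s for n in range(1, n_terms + 1))

def euler_product(f, s, primes):
    """Finite Euler product over the given primes for strongly multiplicative f."""
    prod = 1.0
    for p in primes:
        prod *= 1.0 / (1.0 - f(p) / p**s)
    return prod

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            for j in range(i * i, n + 1, i):
                sieve[j] = False
    return [i for i in range(2, n + 1) if sieve[i]]

f = lambda n: 1  # f = 1 is completely (hence strongly) multiplicative
s = 2.0
series = dirichlet_partial_sum(f, s, 100000)
product = euler_product(f, s, primes_up_to(1000))
# Both approximate zeta(2) = pi^2 / 6 ~ 1.6449.
print(series, product, math.pi**2 / 6)
```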
A Robust Nonlinear Observer for a Class of Neural Mass Models Xian Liu, Dongkai Miao, Qing Gao, "A Robust Nonlinear Observer for a Class of Neural Mass Models", The Scientific World Journal, vol. 2014, Article ID 215943, 5 pages, 2014. https://doi.org/10.1155/2014/215943 Xian Liu,1 Dongkai Miao,1 and Qing Gao1 1Key Lab of Industrial Computer Control Engineering of Hebei Province, Institute of Electrical Engineering, Yanshan University, Qinhuangdao 066004, China Academic Editor: G. Cheron A new method of designing a robust nonlinear observer is presented for a class of neural mass models by using the Lur’e system theory and the projection lemma. The observer is robust towards input uncertainty and measurement noise. It is applied to estimate the unmeasured membrane potential of neural populations from the electroencephalogram (EEG) produced by the neural mass models. An illustrative example shows the effectiveness of the proposed method. Mathematical modelling provides a powerful tool for studying mechanisms involved in the generation of different electroencephalogram (EEG) rhythms and neuronal processes of neurological disorders. There are two types of approaches to model neural signals. One is based on networks built with a large number of elementary cells to describe the activity of a given system. The other is a lumped-parameter approach in which neural populations are modeled as nonlinear oscillators. Neural mass models are based on the latter approach. These models comprise macrocolumns or cortical areas and represent the mean activity of the whole population using one or two state variables. It is seldom tractable to model EEG signals at the neuronal level due to the complexity of real neural networks. The use of neural mass models has been the preferred approach since the 1970s. Neural mass models originated from the seminal work of Lopes da Silva et al.
for alpha rhythm generation [1] and were later redesigned by Jansen and Rit to represent the generation of evoked potentials in the visual cortex [2]. The dynamical analysis [3–6] and control [7–9] of the neural mass models have been widely studied over the years. Despite the existence of these neural mass models for simulating distinct rhythms in EEG signals, neural activity is always measured through observing just a single variable such as voltage. A combination of noise in neurons and amplifiers, as well as uncertainties in recording equipment, leads to uncertainty in the measurement. The observation of states therefore plays a significant role in neuroscientific studies for better understanding of the human brain [10]. In general, neural mass models can be expressed as nonlinear systems of Lur'e type [11]. Observer design for nonlinear systems of Lur'e type [12–14] and for the neural mass models [15] has been widely investigated over the years. We here introduce a new method of designing a robust nonlinear observer for the neural mass models. The Lur'e system theory and new tools in the linear matrix inequality (LMI) method [16] are used to obtain the new reformulation. We should mention that this new reformulation takes input uncertainty and measurement noise into account. The superiority of the proposed method is demonstrated in the last section, which is devoted to numerical comparisons. Notation. The identity matrix is denoted by . The symmetric block component of a symmetric matrix is denoted by . The vector norm is denoted by . The norm is denoted by . The set of positive real numbers is denoted by . Let us consider a class of neural mass models that can be formulated as the following mathematical structure: where is the state vector, is the input, is the measurement output, is the measurement noise, , , , , , and are constant matrices, and : is a memoryless nonlinear vector-valued function which is continuously differentiable on .
Each entry of the state-dependent nonlinearity is a function of a linear combination of the states where . It satisfies a certain slope-restricted condition where . The models in David and Friston [4], Goodfellow et al. [6], Jansen and Rit [2], and Wendling et al. [3] can all be expressed in the form of (1). Let us construct the following observer for plant (1): where is the estimate of the state, is the disturbance of the input, and , are the observer matrices to be designed. Defining the observer error as , its dynamics are governed by where , , and . Note from (3) that each entry of the nonlinearity satisfies The observer design for (1) consists in finding observer matrices and such that the observer error satisfies the following property for all : where scalars , , and . The disturbance gains from and to are and . Theorem 1. Consider plant (1) and observer (4). Under the slope restrictions (3), if there exist a matrix , a diagonal matrix , matrices and , nonsingular matrices and with appropriate dimensions, and scalar constants , , and such that then the observer error satisfies (7) for all , where , , and . Proof. The inequality (8) can be written as where By using the well-known projection lemma in the LMI method [16], (9) can be transformed into where The derivative of is given by Applying (11), we have from which it follows that Hence, (7) results from , , and . Theorem 1 shows that the observer design for (1) consists in finding observer matrices and to satisfy (8) with a symmetric matrix , a diagonal matrix , nonsingular matrices , , and scalar constants , , and . A feasible solution of (8) can be obtained by solving the following optimization problem: Efficient numerical tools such as YALMIP in MATLAB are available for this task. Once the values of and are computed, the disturbance gains and can also be derived. When no input uncertainty and measurement noise are taken into account, Theorem 1 is simplified as follows. Theorem 2.
Consider plant (1) and observer (4) with and . Under the slope restrictions (3), if there exist a matrix , a diagonal matrix , matrices , , nonsingular matrices , , and scalar constants , such that where , , and are defined as in Theorem 1, then the origin of the observer error system (5) is globally exponentially stable. Let us consider a neural mass model developed by Jansen and Rit [2]. This type of single cortical column model with altered parameters is able to generate realistic patterns such as alpha rhythms and epileptiform spikes in EEG. It can be formulated in the form of (1) with the state vector , where are the mean membrane postsynaptic potentials and are their time derivatives. The input is the afferent influence from neighbouring or more distant columns and is modeled by Gaussian white noise with mean value 90 and standard deviation 30. The output is the EEG measurement available to the observer. The system matrices are as follows: The function satisfies (3) with . All constants in the model are set on the basis of physiological interpretation, as described in [2]. The standard values of these constants are given anatomically as We design the robust nonlinear observer (4) for the neural mass model. The performance of the observer obtained from Theorem 1 is presented in what follows. Input disturbance and measurement noise are introduced in the design of the robust nonlinear observer. For the robust nonlinear observer, we solve the optimization problem (16) to obtain and . The computed disturbance gains and are derived by using the YALMIP toolbox in MATLAB. They are much smaller than the values given in [15]. In the following simulations, the initial states of the neural mass model and the observer are chosen as and , respectively. Figure 1 presents the time evolutions of the states – (black lines) and their estimates, that is, the states of observer (4) proposed in this study (red lines) and in [15] (blue line).
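The LMI-based design itself is not reproduced here, but the underlying state-estimation idea can be illustrated with a much simpler sketch: a linear Luenberger observer that reconstructs an unmeasured state from a single measured output. The plant matrices below are hypothetical toy values, not the Jansen–Rit model:

```python
# Simulate a 2-state linear plant x' = A x + B u, y = C x, together with a
# Luenberger observer xhat' = A xhat + B u + L (y - C xhat), via forward Euler.

def simulate(A, B, C, L, x0, xhat0, u, dt=0.001, steps=20000):
    x, xhat = list(x0), list(xhat0)
    for _ in range(steps):
        y = C[0] * x[0] + C[1] * x[1]            # measured output
        yhat = C[0] * xhat[0] + C[1] * xhat[1]   # predicted output
        dx = [A[0][0] * x[0] + A[0][1] * x[1] + B[0] * u,
              A[1][0] * x[0] + A[1][1] * x[1] + B[1] * u]
        dxh = [A[0][0] * xhat[0] + A[0][1] * xhat[1] + B[0] * u + L[0] * (y - yhat),
               A[1][0] * xhat[0] + A[1][1] * xhat[1] + B[1] * u + L[1] * (y - yhat)]
        x = [x[i] + dt * dx[i] for i in range(2)]
        xhat = [xhat[i] + dt * dxh[i] for i in range(2)]
    return x, xhat

# A stable second-order plant with made-up parameters.
A = [[0.0, 1.0], [-2.0, -3.0]]
B = [0.0, 1.0]
C = [1.0, 0.0]   # only the first state is measured, like a single EEG channel
L = [5.0, 6.0]   # observer gain, chosen so that (A - L C) is Hurwitz

x, xhat = simulate(A, B, C, L, x0=[1.0, 0.0], xhat0=[0.0, 0.0], u=1.0)
err = abs(x[0] - xhat[0]) + abs(x[1] - xhat[1])
print(err)  # the observer error decays toward zero
```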
Insets show zoomed-in views of the data. Figure 1 shows that the states of observer (4) obtained from Theorem 1 do converge to a neighbourhood of the states of the neural mass model. It also shows that the observer proposed in this study performs better than that proposed in [15]. Figure 1: The time evolutions of the states – and their estimations. We have designed a robust nonlinear observer for a class of neural mass models by using the Lur’e system theory and the projection lemma. The resulting observer is robust to input uncertainty and measurement noise. We apply this observer to the neural mass model that generates alpha rhythms to estimate the mean membrane potential of neural populations from the EEG measurement. We show that the proposed observer performs better than some existing ones. The proposed method can also be applied to other types of neural models that have the typical structure of Lur’e systems. F. H. Lopes da Silva, A. Hoeks, H. Smits, and L. H. Zetterberg, “Model of brain rhythmic activity,” Biological Cybernetics, vol. 15, no. 1, pp. 27–37, 1974. B. H. Jansen and V. G. Rit, “Electroencephalogram and visual evoked potential generation in a mathematical model of coupled cortical columns,” Biological Cybernetics, vol. 73, no. 4, pp. 357–366, 1995. A. Babajani and H. Soltanian-Zadeh, “Integrated MEG/EEG and fMRI model based on neural masses,” IEEE Transactions on Biomedical Engineering, vol. 53, no. 9, pp. 1794–1801, 2006. M. Goodfellow, K. Schindler, and G. Baier, “Self-organised transients in a neural mass model of epileptogenic tissue dynamics,” NeuroImage, vol. 59, no. 3, pp. 2644–2660, 2012. X. Liu, H. J. Liu, Y. G. Tang, and Q. Gao, “Fuzzy PID control of epileptiform spikes in a neural mass model,” Nonlinear Dynamics, vol. 71, no. 1-2, pp. 13–23, 2013. X. Liu, Q. Gao, B. W. Ma, J.
J. Du, and W. J. Ren, “Analysis and control of epileptiform spikes in a class of neural mass models,” Journal of Applied Mathematics, vol. 2013, Article ID 792507, 11 pages, 2013. X. Liu and Q. Gao, “Parameter estimation and control for a neural mass model based on the unscented Kalman filter,” Physical Review E, vol. 88, no. 4, Article ID 042905, 2013. S. J. Schiff, Neural Control Engineering: The Emerging Intersection Between Control Theory and Neuroscience, Computational Neuroscience, The MIT Press, London, UK, 2011. G. A. Leonov, D. V. Ponomarenko, and V. B. Smirnova, Frequency-Domain Methods for Nonlinear Analysis: Theory and Applications, World Scientific, Singapore, 1996. M. Arcak and P. Kokotović, “Nonlinear observers: a circle criterion design and robustness analysis,” Automatica, vol. 37, no. 12, pp. 1923–1930, 2001. X. Fan and M. Arcak, “Observer design for systems with multivariable monotone nonlinearities,” Systems and Control Letters, vol. 50, no. 4, pp. 319–330, 2003. A. Zemouche and M. Boutayeb, “A unified $H_\infty$ adaptive observer synthesis method for a class of systems with both Lipschitz and monotone nonlinearities,” Systems and Control Letters, vol. 58, no. 4, pp. 282–288, 2009. M. Chong, R. Postoyan, D. Nešić, L. Kuhlmann, and A. Varsavsky, “A robust circle criterion observer with application to neural mass models,” Automatica, vol. 48, no. 11, pp. 2986–2989, 2012. S. Boyd, L. E. Ghaoui, E. Feron, and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, SIAM, Philadelphia, USA, 1994. Copyright © 2014 Xian Liu et al.
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
A Modified Mann Iteration by Boundary Point Method for Finding Minimum-Norm Fixed Point of Nonexpansive Mappings Songnian He, Wenlong Zhu, "A Modified Mann Iteration by Boundary Point Method for Finding Minimum-Norm Fixed Point of Nonexpansive Mappings", Abstract and Applied Analysis, vol. 2013, Article ID 768595, 6 pages, 2013. https://doi.org/10.1155/2013/768595 Songnian He1,2 and Wenlong Zhu1,2 Academic Editor: Satit Saejung Let be a real Hilbert space and a closed convex subset. Let be a nonexpansive mapping with the nonempty set of fixed points . Kim and Xu (2005) introduced a modified Mann iteration , , , where is an arbitrary (but fixed) element, and and are two sequences in . In the case where , the minimum-norm fixed point of can be obtained by taking . But in the case where , this iteration process becomes invalid because may not belong to . In order to overcome this weakness, we introduce a new modified Mann iteration by the boundary point method (see Section 3 for details) for finding the minimum-norm fixed point of and prove its strong convergence under some assumptions. Since our algorithm does not involve the computation of the metric projection , which is often used to guarantee strong convergence, it is easily implementable. Our results improve and extend the results of Kim and Xu and of some others. Let be a subset of a real Hilbert space with inner product and induced norm denoted by and , respectively. A mapping is called nonexpansive if for all . A point is called a fixed point of if . Denote by the set of fixed points of . Throughout this paper, is always assumed to be nonempty. Iterative approximation processes for nonexpansive mappings have been extensively investigated by many authors (see [1–12] and the references therein). A classical iterative scheme was introduced by Mann [13], which is defined as follows. Take an initial guess arbitrarily and define recursively by where is a sequence in the interval .
It is well known that under certain conditions the sequence generated by (1) converges weakly to a fixed point of , but the Mann iteration may fail to converge strongly even in the setting of infinite-dimensional Hilbert spaces [14]. Some attempts have been made to modify the Mann iteration method (1) so that strong convergence is guaranteed. Nakajo and Takahashi [1] proposed the following modification of the Mann iteration method (1): where denotes the metric projection from onto a closed convex subset of . They proved that if the sequence is bounded away from one, then defined by (2) converges strongly to . But, at each iteration step, an additional projection must be computed, which is not easy in general. To overcome this weakness, Kim and Xu [15] proposed a simpler modification of Mann's iteration scheme, which generates the iteration sequence via the following formula: where is an arbitrary (but fixed) element in , and and are two sequences in . In the setting of Banach spaces, Kim and Xu proved that the sequence generated by (3) converges strongly to the fixed point of under certain appropriate assumptions on the sequences and . In many practical problems, such as optimization problems, finding the minimum-norm fixed point of nonexpansive mappings is quite important. In the case where , taking in (3), the sequence generated by (3) converges strongly to the minimum-norm fixed point of [15]. But, in the case where , the iteration scheme (3) becomes invalid because may not belong to . To overcome this weakness, a natural way to modify algorithm (3) is to adopt the metric projection so that the iteration sequence belongs to ; that is, one may consider the following scheme: However, since the computation of a projection onto a closed convex subset is generally difficult, algorithm (4) may not be a good choice.
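For intuition, the basic Mann scheme (1) is easy to run in one dimension. A sketch with the nonexpansive map $T = \cos$ on the real line and a constant coefficient (this is only an illustration; the paper's setting is a general Hilbert space with variable coefficients):

```python
import math

def mann_iteration(T, x0, alpha=0.5, n_iters=200):
    """Mann iteration: x_{n+1} = (1 - alpha) * x_n + alpha * T(x_n)."""
    x = x0
    for _ in range(n_iters):
        x = (1 - alpha) * x + alpha * T(x)
    return x

# cos is nonexpansive on the real line (its derivative is bounded by 1 in
# absolute value), and its unique fixed point is the Dottie number, ~0.739085.
fp = mann_iteration(math.cos, x0=3.0)
print(fp)
```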
The main purpose of this paper is to introduce a new modified Mann iteration for finding the minimum-norm fixed point of , which not only converges strongly under some assumptions but also involves no projection operators. At each iteration step, a point in (the boundary of ) is determined in a particular way, so our modification is called the boundary point method (see Section 3 for details). Moreover, since our algorithm does not involve the computation of the metric projection, it is very easily implementable. The rest of this paper is organized as follows. Some useful lemmas are listed in the next section. In the last section, a function defined on is given first, which is important for constructing our algorithm; then our algorithm is introduced and the strong convergence theorem is proved. Throughout this paper, we adopt the following notations:(1) converges strongly to ;(2) converges weakly to ;(3) denotes the set of cluster points of (i.e., such that );(4) denotes the boundary of . We need some lemmas and facts listed as follows. Lemma 1 (see [16]). Let be a closed convex subset of a real Hilbert space and let be the metric (nearest point) projection from onto (i.e., for , is the only point in such that ). Given and . Then if and only if the following relation holds: Since is a closed convex subset of a real Hilbert space , the metric projection is well defined and thus there exists a unique element, denoted by , in such that ; that is, . is called the minimum-norm fixed point of . Lemma 2 (see [17]). Let be a real Hilbert space. Then the following well-known results hold:(G1) for all ;(G2) for all . We give a definition in order to introduce the next lemma. A set is weakly closed if for any sequence such that , there holds . Lemma 3 (see [18, 19]). If is convex, then is weakly closed if and only if is closed.
Assume is weakly closed; a function is called weakly lower semicontinuous at if for any sequence such that , there holds . Generally, we call weakly lower semicontinuous over if it is weakly lower semicontinuous at each point of . Lemma 4 (see [18, 19]). Let be a subset of a real Hilbert space and let be a real function; then is weakly lower semicontinuous over if and only if the set is a weakly closed subset of , for any . Lemma 5 (see [20]). Let be a closed convex subset of a real Hilbert space and let be a nonexpansive mapping such that . If a sequence in is such that and , then . The following is a sufficient condition for a real sequence to converge to zero. Lemma 6 (see [21, 22]). Let be a nonnegative real sequence satisfying If , and satisfy the conditions: (A1);(A2)either or ;(A3);then . Let be a closed convex subset of a real Hilbert space . In order to give our main results, we first introduce a function by the following definition: Since is closed and convex, it is easy to see that is well defined. Obviously, for all in the case where . In the case where , it is also easy to see that and for every (otherwise, ; we have ; this is a contradiction). An important property of is given as follows. Lemma 7. is weakly lower semicontinuous over . Proof. If , then for all and the conclusion is clear. For the case , using Lemma 4, in order to show that is weakly lower semicontinuous, it suffices to verify that is a weakly closed subset of for every ; that is, if such that , then (i.e., ). Without loss of generality, we assume that (otherwise, there hold for and for , resp., and the conclusion holds obviously). Noting that is convex, we have from the definition of that for each , holds for all . Clearly, . Using Lemma 3, then . This implies that Consequently, and this completes the proof. Since the function will be important for constructing the algorithm of this paper below, it is necessary to explain how to calculate for any given in actual computing programs.
In fact, in practical problems, is often a level set of a convex function ; that is, is of the form , where is a real constant. Without loss of generality, we assume that and . Then it is easy to see that, for a given , we have Thus, in order to get the value , we only need to solve an algebraic equation in a single variable , which can be solved easily by many methods, for example, the dichotomy (bisection) method on the interval . In general, solving the algebraic equation above is much easier than calculating the metric projection . To illustrate this viewpoint, we give the following simple example. Example 8. Let be a strongly positive linear bounded operator with coefficient ; that is, there is a constant with the property , for all . Define a convex function by where is a given point in and is the only solution of the equation . (Notice that is one-to-one.) Setting , it is easy to show that is a nonempty closed convex subset of such that . (Note that and .) For a given , we have . In order to get , let , where is an unknown number. Thus we obtain an algebraic equation Consequently, we have that is, Now we give a new modified Mann iteration by the boundary point method. Algorithm 9. Define in the following way: where and , . Since is closed and convex, we assert by the definition of that, for any given , holds for every , and then is guaranteed, where is generated by Algorithm 9. Obviously, for all if . If , calculating the value implies determining , a boundary point of , and thus our algorithm is called the boundary point method. Theorem 10. Assume that and satisfy the following conditions:(D1) ;(D2) ;(D3). Then generated by (17) converges strongly to . Proof. We first show that is bounded. Taking arbitrarily, we have By induction, Thus, is bounded and so are and .
As a result, we obtain by condition (D1) that We next show that It suffices to show that Using (17), it follows by direct calculation that Substituting (24) into (23), we obtain Note that (since is monotone increasing) and conditions (D1)–(D3); we conclude by Lemma 6 that . Noting (20) and (25), we obtain Using Lemma 5, we deduce that . Then we show that Indeed, take a subsequence of such that Without loss of generality, we may assume that . Noticing , we obtain from and Lemma 1 that Finally, we show that . Using Lemma 2 and (17), it is easy to verify that Hence, where It is not hard to prove that , by conditions (D1) and (D2), and by (29). By Lemma 6, we conclude that , and the proof is finished. Finally, we point out that a more general algorithm can be given for calculating the fixed point for any given . In fact, it suffices to modify the definition of the function to the following form: Algorithm 11. Define in the following way: where and , where is defined by (33). By an argument similar to the proof of Theorem 10, it is easy to obtain the result below. Theorem 12. Assume that , and and satisfy the same conditions as in Theorem 10; then generated by (34) converges strongly to . This work was supported in part by the Fundamental Research Funds for the Central Universities (ZXH2012K001) and in part by the Foundation of Tianjin Key Lab for Advanced Signal Processing. J. S. Jung, “Iterative approaches to common fixed points of nonexpansive mappings in Banach spaces,” Journal of Mathematical Analysis and Applications, vol. 302, no. 2, pp. 509–520, 2005. S. S. Chang, “Viscosity approximation methods for a finite family of nonexpansive mappings in Banach spaces,” Journal of Mathematical Analysis and Applications, vol. 323, no. 2, pp. 1402–1416, 2006. G. Marino and H. K.
Xu, “Weak and strong convergence theorems for strict pseudo-contractions in Hilbert spaces,” Journal of Mathematical Analysis and Applications, vol. 329, no. 1, pp. 336–346, 2007. G. Marino and H. K. Xu, “Convergence of generalized proximal point algorithms,” Communications on Pure and Applied Analysis, vol. 3, no. 4, pp. 791–808, 2004. A. Genel and J. Lindenstrauss, “An example concerning fixed points,” Israel Journal of Mathematics, vol. 22, no. 1, pp. 81–86, 1975. T. H. Kim and H. K. Xu, “Strong convergence of modified Mann iterations,” Nonlinear Analysis: Theory, Methods & Applications, vol. 61, no. 1-2, pp. 51–60, 2005. C. Martinez-Yanes and H. K. Xu, “Strong convergence of the CQ method for fixed point iteration processes,” Nonlinear Analysis: Theory, Methods & Applications, vol. 64, no. 11, pp. 2400–2411, 2006. M. Li and Y. Yao, “Strong convergence of an iterative algorithm for λ-strictly pseudo-contractive mappings in Hilbert spaces,” Analele Stiintifice ale Universitatii Ovidius Constanta, vol. 18, no. 1, pp. 219–228, 2010. B. Beauzamy, Introduction to Banach Spaces and Their Geometry, vol. 68 of North-Holland Mathematics Studies, North-Holland, Amsterdam, The Netherlands, 1982. J. Diestel, Geometry of Banach Spaces—Selected Topics, vol. 485 of Lecture Notes in Mathematics, Springer, Berlin, Germany, 1975. F. Wang and H. K. Xu, “Approximating curve and strong convergence of the CQ algorithm for the split feasibility problem,” Journal of Inequalities and Applications, vol. 2010, Article ID 102085, 13 pages, 2010.
Copyright © 2013 Songnian He and Wenlong Zhu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Chirality - Course Hero If a molecule does not have a plane of symmetry and its isomers cannot be rotated or reflected to match, it is called a chiral molecule. Chirality is a form of spatial isomerism. Two molecules that have the same molecular formula and bond structures but are not superimposable are enantiomers. An enantiomer is a stereoisomer whose mirror image is not superimposable on itself. Carbon atoms with sp3 hybridization and four unique substituents are likely to be chiral. CH2 and CH3 carbons are never chiral because they bear identical groups (H atoms). A carbon in a C≡C triple bond can never be chiral because it is not sp3-hybridized. A carbon atom that has four unique substituents is a stereogenic center, also called a chiral center. The R- and S-enantiomers of an organic molecule are shown. The black carbon atom is a chiral center because it has four different substituents: a chlorine (Cl) atom, a hydrogen (H) atom, a fluorine (F) atom, and a bromine (Br) atom. The two structures are mirror images of each other that cannot be superimposed. When describing enantiomers, they are differentiated by the prefixes R- and S-, based on a system of prioritization. For a given chiral center, identify the four atoms directly attached to the chiral center and rank them according to their atomic number, with the highest atomic number ranked first and the lowest ranked last. Since the asymmetry of a chiral center sometimes occurs a number of atoms away from the chiral center, apply the ranking rule to the second atom from the chiral center, then the third atom, and so forth, until a ranking is complete. Treat multiple bonds as two (or three) single bonds to the same atom. Once the ranking is made, orient the molecule so that the lowest-ranking substituent is pointing away from the observer. Then draw an arrow from the highest-ranking substituent to the second-highest and then to the third-highest.
If this arrow points clockwise, the enantiomer is R- (for the Latin rectus, which means "right"). If the arrow points counterclockwise, the enantiomer is S- (for the Latin sinister, which means "left"). Enantiomers were discovered because of their ability to rotate, or polarize, light. It was discovered that some isomers of a compound rotate light to the left; these are called levorotatory, or L-. Other isomers were found to rotate light to the right; these are called dextrorotatory, or D-. There is no correlation between the R- and S-configurations and the direction in which a molecule will rotate light. Chirality and the ability to polarize light are important because many biochemical molecules are enantiomers. The amino acids used to build proteins, for example, are chiral, and all the amino acids found in nature are L-. Proteins have many functions that rely on their physical shapes, so D-amino acids would make proteins unable to fold into the correct shape and, therefore, unable to perform their functions.
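The geometric content of the R/S assignment can be sketched numerically: given 3D positions for the four substituents in priority order, the sign of the scalar triple product of the vectors to substituents 1–3 distinguishes the two mirror-image arrangements. The coordinates and function names below are illustrative assumptions, and the mapping of sign to R versus S must be calibrated against a known structure; what the sketch does show is that reflecting the molecule flips the sign.

```python
# Hedged sketch: distinguish mirror-image arrangements of four ranked
# substituents around a chiral center via a signed triple product.
# The tetrahedral coordinates below are idealized, not real bond data.

def triple_product(a, b, c):
    # a . (b x c)
    bx = (b[1]*c[2] - b[2]*c[1],
          b[2]*c[0] - b[0]*c[2],
          b[0]*c[1] - b[1]*c[0])
    return a[0]*bx[0] + a[1]*bx[1] + a[2]*bx[2]

def chirality_sign(center, subs):
    """subs: positions of the four substituents, highest priority first.
    Returns +1 or -1; enantiomers give opposite signs."""
    v = [tuple(s[k] - center[k] for k in range(3)) for s in subs[:3]]
    return 1 if triple_product(*v) > 0 else -1

# Idealized tetrahedral placement (priority 1..4 around the origin):
center = (0.0, 0.0, 0.0)
subs = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]
mirror = [(-x, y, z) for (x, y, z) in subs]  # reflect through the yz-plane

print(chirality_sign(center, subs), chirality_sign(center, mirror))
```

Reflection reverses the handedness, so the two calls always return opposite signs, which is exactly the superimposability test described above.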
For each function, find its slope function f′(x).
1. f(x) = (2/5)x⁻² − 4x. Use the Power Rule.
2. f(x) = −2√x. Before you differentiate, rewrite f(x) with a fractional exponent.
3. f(x) = 6 cos x. THINK: Where will f′(x) be positive or negative?
4. f(x) = (4x² + 4)/(x² + 1). Before you differentiate, factor the numerator and simplify. THINK: Will the derivative need a restricted domain? Why or why not?
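The power-rule answers to the first two exercises can be checked against a central-difference approximation; the derivatives in the comments are hand-derived, and the check below is an added illustration rather than part of the worksheet.

```python
# Check power-rule derivatives numerically with a central difference.

def num_deriv(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

f1 = lambda x: (2/5) * x**-2 - 4*x   # power rule: f1'(x) = -(4/5) x^-3 - 4
f2 = lambda x: -2 * x**0.5           # rewrite as -2 x^(1/2): f2'(x) = -x^(-1/2)

print(num_deriv(f1, 2.0), -(4/5) / 8 - 4)   # both about -4.1
print(num_deriv(f2, 4.0), -0.5)             # both about -0.5
```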
TGTPGTCS: August 2021 — TGT/PGT Computer Science practice questions (DSSSB, KVS, HTET)

If a transaction has obtained a __________ lock, it can read but cannot write on the item.
a) Shared mode  b) Exclusive mode  c) Read only mode  d) Write only mode
If a transaction has obtained a ________ lock, it can both read and write on the item.
A transaction can proceed only after the concurrency control manager ________ the lock to the transaction.
If a transaction can be granted a lock on an item immediately in spite of the presence of another mode, then the two modes are said to be ________.
b) Incompatible  d) Equivalent
A transaction is made to wait until all ________ locks held on the item are released.
The situation where no transaction can proceed with normal execution is known as ________.
a) Road block  c) Execution halt
The protocol that indicates when a transaction may lock and unlock each of the data items is called a __________.
a) Locking protocol  b) Unlocking protocol  c) Granting protocol  d) Conflict protocol
The two-phase locking protocol consists of which of the following phases?
a) Growing phase  b) Shrinking phase
If a transaction may obtain locks but may not release any locks, then it is in the _______ phase.
c) Deadlock phase  d) Starved phase
If a transaction may release locks but may not obtain any locks, it is said to be in the ______ phase.
Which of the following cannot be used to implement a timestamp?
b) Logical counter  c) External time counter
A logical counter is _________ after a new timestamp has been assigned.
a) Incremented  b) Decremented
The _________ requires that each transaction executes in two or three different phases in its lifetime.
a) Validation protocol  b) Timestamp protocol  c) Deadlock protocol  d) View protocol
During the __________ phase, the system reads data and stores them in variables local to the transaction.
a) Read phase  b) Validation phase  c) Write phase
During the _________ phase, the validation test is applied to the transaction.
During the _______ phase, the local variables that hold the write operations are copied to the database.
Read-only operations omit the _______ phase.

Conflict Serializability Assignment - S P Sharma Classes
Question 1: Consider the following schedules involving two transactions. Which one of the following statements is true?
S1: r1(x) r1(y) w2(x) w1(x) r2(y)
S2: r1(x) r3(y) w1(x) w2(y) r3(x) w2(x)
Question 3: Consider the following schedules involving three transactions. Which one of the following statements is true?
(A) Only S1 is conflict-serializable.
(B) Only S2 is conflict-serializable.
(C) Both S1 and S2 are conflict-serializable.
(D) Neither S1 nor S2 is conflict-serializable.
Question 4: Let S be the following schedule of operations of three transactions in a relational database system. Consider the statements P and Q:
P: S is conflict-serializable.
Q: If … commits before … finishes, then S is recoverable.
(a) Both P and Q are true
(b) P is true and Q is false
(c) P is false and Q is true
(d) Both P and Q are false
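The standard test for conflict serializability builds a precedence graph: draw an edge Ti → Tj whenever an operation of Ti conflicts with a later operation of Tj (same item, different transactions, at least one of them a write); the schedule is conflict-serializable exactly when this graph is acyclic. A minimal sketch for the two schedules above (the encoding of a schedule as (transaction, operation, item) triples is my own convention):

```python
# Conflict-serializability via precedence-graph cycle detection.
# A schedule is a list of (transaction, operation, item) triples.

def precedence_edges(schedule):
    edges = set()
    for i, (ti, oi, xi) in enumerate(schedule):
        for tj, oj, xj in schedule[i + 1:]:
            if ti != tj and xi == xj and 'w' in (oi, oj):
                edges.add((ti, tj))      # Ti must precede Tj
    return edges

def has_cycle(edges):
    graph = {}
    for u, v in edges:
        graph.setdefault(u, set()).add(v)
    seen, stack = set(), set()
    def dfs(u):
        if u in stack:
            return True                  # back edge: cycle found
        if u in seen:
            return False
        seen.add(u); stack.add(u)
        if any(dfs(v) for v in graph.get(u, ())):
            return True
        stack.discard(u)
        return False
    return any(dfs(u) for u in graph)

S1 = [(1, 'r', 'x'), (1, 'r', 'y'), (2, 'w', 'x'), (1, 'w', 'x'), (2, 'r', 'y')]
S2 = [(1, 'r', 'x'), (3, 'r', 'y'), (1, 'w', 'x'),
      (2, 'w', 'y'), (3, 'r', 'x'), (2, 'w', 'x')]

print("S1 conflict-serializable:", not has_cycle(precedence_edges(S1)))  # False
print("S2 conflict-serializable:", not has_cycle(precedence_edges(S2)))  # True
```

S1 produces the edges T1 → T2 (r1(x) before w2(x)) and T2 → T1 (w2(x) before w1(x)), a cycle, so it is not conflict-serializable; S2's graph (T1 → T2, T1 → T3, T3 → T2) is acyclic.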
The second operation, called scalar multiplication, assigns to any scalar a in F and any vector v in V another vector in V, which is denoted av.[nb 2]

Identity element of vector addition: there exists an element 0 ∈ V, called the zero vector, such that v + 0 = v for all v ∈ V.
Inverse elements of vector addition: for every v ∈ V, there exists an element −v ∈ V, called the additive inverse of v, such that v + (−v) = 0. Subtraction is then defined by v − w = v + (−w).

Direct consequences of the axioms include that, for every s ∈ F and v ∈ V, one has 0v = 0, s0 = 0, (−1)v = −v, and sv = 0 implies s = 0 or v = 0.

Related concepts and properties
A linear combination is an expression a₁g₁ + a₂g₂ + ⋯ + a_k g_k, where a₁, …, a_k ∈ F and g₁, …, g_k ∈ V; the scalars a₁, …, a_k are called the coefficients of the linear combination. Consider a basis (b₁, b₂, …, b_n) of a vector space V of dimension n over a field F. The definition of a basis implies that every v ∈ V may be written v = a₁b₁ + ⋯ + a_n b_n with a₁, …, a_n in F, and that this decomposition is unique.
The scalars a₁, …, a_n are called the coordinates of v on the basis, and also the coefficients of the decomposition of v on the basis. One says also that the n-tuple of the coordinates is the coordinate vector of v on the basis, since the set Fⁿ of n-tuples of elements of F is itself a vector space for componentwise addition and scalar multiplication, whose dimension is n.

Main article: Examples of vector spaces

Arrows in the plane
Second example: ordered pairs of numbers
Pairs of numbers add componentwise and scale entrywise:
(x₁, y₁) + (x₂, y₂) = (x₁ + x₂, y₁ + y₂) and a(x, y) = (ax, ay).

Coordinate space
Complex numbers and other field extensions
More generally, field extensions provide another class of examples of vector spaces, particularly in algebra and algebraic number theory: a field F containing a smaller field E is an E-vector space, by the given multiplication and addition operations of F.[11] For example, the complex numbers are a vector space over R, and the field extension Q(i√5) is a vector space over Q.

Function spaces
Addition of functions: the sum of the sine and the exponential function is sin + exp : ℝ → ℝ, defined by (sin + exp)(x) = sin(x) + exp(x).

Linear equations
Main articles: Linear equation, Linear differential equation, and Systems of linear equations
Here
A = [ 1  3  1 ]
    [ 4  2  2 ]
is the matrix containing the coefficients of the given equations, x is the vector (a, b, c), Ax denotes the matrix product, and 0 = (0, 0) is the zero vector. In a similar vein, the solutions of homogeneous linear differential equations form vector spaces.
For example, the solutions of
a₀f + a₁ df/dx + a₂ d²f/dx² + ⋯ + a_n dⁿf/dxⁿ = 0,
where the coefficients a_i are functions in x, too, form such a vector space.

Linear maps and matrices
A linear map satisfies f(v + w) = f(v) + f(w) and f(a·v) = a·f(v) for all v and w in V and all a in F.[14]
Main articles: Matrix and Determinant
An m×n matrix (a_ij) gives rise to the linear map
x = (x₁, x₂, …, x_n) ↦ (Σ_j a₁ⱼ x_j, Σ_j a₂ⱼ x_j, …, Σ_j a_mⱼ x_j),
where Σ denotes summation.

Basic constructions
Subspaces and quotient spaces
Main articles: Linear subspace and Quotient vector space
The counterpart to subspaces are quotient vector spaces.[29] Given any subspace W ⊂ V, the quotient space V/W ("V modulo W") is defined as follows: as a set, it consists of v + W = {v + w : w ∈ W}, where v is an arbitrary vector in V. The sum of two such elements v₁ + W and v₂ + W is (v₁ + v₂) + W, and scalar multiplication is given by a·(v + W) = (a·v) + W. The key point in this definition is that v₁ + W = v₂ + W if and only if the difference of v₁ and v₂ lies in W.[nb 7] This way, the quotient space "forgets" information that is contained in the subspace W.

The kernel ker(f) of a linear map f : V → W consists of vectors v that are mapped to 0 in W.[30] The kernel and the image im(f) = {f(v) : v ∈ V} are subspaces of V and W, respectively.[31] The existence of kernels and images is part of the statement that the category of vector spaces (over a fixed field F) is an abelian category, that is, a corpus of mathematical objects and structure-preserving maps between them (a category) that behaves much like the category of abelian groups.[32] Because of this, many statements carry over, such as the first isomorphism theorem (also called the rank–nullity theorem in matrix-related terms).
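The rank–nullity statement can be checked concretely for the coefficient matrix A from the linear-equations example above: the dimension of the image plus the dimension of the kernel equals the number of columns. This small numpy sketch is an added illustration, not part of the original article.

```python
import numpy as np

# Rank-nullity check for the 2x3 coefficient matrix A:
# rank(A) + nullity(A) = number of columns.
A = np.array([[1, 3, 1],
              [4, 2, 2]])

rank = np.linalg.matrix_rank(A)      # dimension of the image of x -> Ax
nullity = A.shape[1] - rank          # dimension of the kernel
print(rank, nullity)                 # 2 1

# An explicit kernel vector: x + 3y + z = 0 and 4x + 2y + 2z = 0
# are solved by (x, y, z) = (2, 1, -5), up to scaling.
v = np.array([2, 1, -5])
print(A @ v)                         # [0 0]
```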
The differential operator f ↦ D(f) = Σ_{i=0}^{n} a_i dⁱf/dxⁱ defines such a linear map between function spaces.

Direct product and direct sum
Main articles: Direct product and Direct sum of modules
The direct product ∏_{i∈I} V_i of a family of vector spaces V_i consists of the set of all tuples (v_i)_{i∈I}, which specify for each index i in some index set I an element v_i of V_i.[33] Addition and scalar multiplication are performed componentwise. A variant of this construction is the direct sum ⊕_{i∈I} V_i (also called coproduct and denoted ∐_{i∈I} V_i), where only tuples with finitely many nonzero vectors are allowed. If the index set I is finite, the two constructions agree, but in general they are different.

Tensor product
Main article: Tensor product of vector spaces
Elements of the tensor product are finite sums v₁ ⊗ w₁ + v₂ ⊗ w₂ + ⋯ + v_n ⊗ w_n, subject to rules such as a·(v ⊗ w) = (a·v) ⊗ w = v ⊗ (a·w), where a is a scalar.

Vector spaces with additional structure
From the point of view of linear algebra, vector spaces are completely understood insofar as any vector space is characterized, up to isomorphism, by its dimension. However, vector spaces per se do not offer a framework to deal with the question, crucial to analysis, of whether a sequence of functions converges to another function. Likewise, linear algebra is not adapted to deal with infinite series, since the addition operation allows only finitely many terms to be added. Therefore, the needs of functional analysis require considering additional structures. Any real-valued function can be written f = f⁺ − f⁻, where f⁺ denotes the positive part of f and f⁻ the negative part.[37]

Normed vector spaces and inner product spaces
Main articles: Normed vector space and Inner product space
"Measuring" vectors is done by specifying a norm, a datum which measures lengths of vectors, or by an inner product, which measures angles between vectors.
Norms and inner products are denoted |v| and ⟨v, w⟩, respectively. The datum of an inner product entails that lengths of vectors can be defined too, by defining the associated norm |v| := √⟨v, v⟩. Vector spaces endowed with such data are known as normed vector spaces and inner product spaces, respectively.[38]

The standard dot product on Rⁿ is ⟨x, y⟩ = x·y = x₁y₁ + ⋯ + x_n y_n. It encodes angles via x·y = cos(∠(x, y))·|x|·|y|. Because of this, two vectors satisfying ⟨x, y⟩ = 0 are called orthogonal. The Minkowski form ⟨x|y⟩ = x₁y₁ + x₂y₂ + x₃y₃ − x₄y₄, in contrast to the standard dot product, is not positive definite: ⟨x|x⟩ also takes negative values, for example for x = (0, 0, 0, 1).

Topological vector spaces
In a topological vector space one can make sense of infinite series such as Σ_{i=0}^{∞} f_i, since convergence is defined.

[Figure] Unit "spheres" in R² consist of plane vectors of norm 1. Depicted are the unit spheres in different p-norms, for p = 1, 2, and ∞. The bigger diamond depicts points of 1-norm equal to 2.
Convergence of a sequence (v_n) to v means lim_{n→∞} |v_n − v| = 0.

Main article: Banach space
A first example is the vector space ℓᵖ consisting of infinite vectors with real entries x = (x₁, x₂, …, x_n, …) whose p-norm (1 ≤ p ≤ ∞) is finite, where
‖x‖_p := (Σ_i |x_i|ᵖ)^{1/p} for p < ∞, and ‖x‖_∞ := sup_i |x_i|.
The topologies on the infinite-dimensional space ℓᵖ are inequivalent for different p. For example, the sequence of vectors x_n = (2⁻ⁿ, 2⁻ⁿ, …, 2⁻ⁿ, 0, 0, …), in which the first 2ⁿ components are 2⁻ⁿ and the following ones are 0, converges to the zero vector for p = ∞, but does not for p = 1:
‖x_n‖_∞ = sup(2⁻ⁿ, 0) = 2⁻ⁿ → 0, but ‖x_n‖₁ = Σ_{i=1}^{2ⁿ} 2⁻ⁿ = 2ⁿ·2⁻ⁿ = 1.
More generally than sequences of real numbers, functions f : Ω → ℝ are endowed with the norm ‖f‖_p := (∫_Ω |f(x)|ᵖ dμ(x))^{1/p}. The spaces of integrable functions on a given domain Ω (for example an interval) satisfying ‖f‖_p < ∞, equipped with this norm, are called Lebesgue spaces, denoted Lᵖ(Ω). These spaces are complete.[48] (If one uses the Riemann integral instead, the space is not complete, which may be seen as a justification for Lebesgue's integration theory.[nb 10]) Concretely this means that for
any sequence of Lebesgue-integrable functions f₁, f₂, …, f_n, … with ‖f_n‖_p < ∞, satisfying
lim_{k,n→∞} ∫_Ω |f_k(x) − f_n(x)|ᵖ dμ(x) = 0,
there exists a function f(x) belonging to the vector space Lᵖ(Ω) such that
lim_{k→∞} ∫_Ω |f(x) − f_k(x)|ᵖ dμ(x) = 0.

The inner product ⟨f, g⟩ = ∫_Ω f(x)·conj(g(x)) dx, where conj(g(x)) denotes the complex conjugate of g(x),[51][nb 11] is a key case.

Algebras over fields
Main articles: Algebra over a field and Lie algebra
[Figure] A hyperbola, given by the equation x·y = 1. The coordinate ring of functions on this hyperbola is given by R[x, y]/(x·y − 1), an infinite-dimensional vector space over R.
Lie algebras satisfy [x, y] = −[y, x] (anticommutativity), and tensor algebras contain elements v₁ ⊗ v₂ ⊗ ⋯ ⊗ v_n, where the degree n varies.

Vector bundles
Main articles: Vector bundle and Tangent bundle
A vector bundle is specified by a projection π : E → X.

Affine and projective spaces
Main articles: Affine space and Projective space

Zero vector (sometimes also called null vector and denoted by 0): the additive identity in a vector space. In a normed vector space, it is the unique vector of norm zero. In a Euclidean vector space, it is the unique vector of length zero.[72]
Coordinate vector: the n-tuple of the coordinates of a vector on a basis of n elements. For a vector space over a field F, these n-tuples form the vector space Fⁿ (where the operations are pointwise addition and scalar multiplication).
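Returning to the ℓᵖ example above: the inequivalence of the norms can be checked numerically. For x_n with 2ⁿ leading entries equal to 2⁻ⁿ, the sup-norm shrinks to zero while the 1-norm stays at 1 (a plain-Python sketch added for illustration):

```python
# x_n has 2^n leading entries equal to 2^(-n), then zeros:
# sup-norm = 2^(-n) -> 0, while 1-norm = 2^n * 2^(-n) = 1.

def norms(n):
    x = [2.0 ** -n] * (2 ** n)   # the nonzero entries; the zero tail
    sup_norm = max(x)            # contributes nothing to either norm
    one_norm = sum(x)
    return sup_norm, one_norm

for n in (1, 5, 10):
    sup_norm, one_norm = norms(n)
    print(n, sup_norm, one_norm)  # sup-norm shrinks, 1-norm stays 1.0
```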
^ It is also common, especially in physics, to denote vectors with an arrow on top: v⃗. It is also common, especially in higher mathematics, to not use any typographical method for distinguishing vectors from other mathematical objects.
^ See also Jordan–Chevalley decomposition.
^ The triangle inequality for ‖·‖_p reads ‖f + g‖_p ≤ ‖f‖_p + ‖g‖_p.
^ "Many functions in L² of Lebesgue measure, being unbounded, cannot be integrated with the classical Riemann integral. So spaces of Riemann integrable functions would not be complete in the L² norm, and the orthogonal decomposition would not apply to them. This shows one of the advantages of Lebesgue integration." Dudley 1989, §5.3, p. 125.
^ For p ≠ 2, Lᵖ(Ω) is not a Hilbert space.
^ That is, there is a homeomorphism from π⁻¹(U) to V × U which restricts to linear isomorphisms between fibers.
^ Bourbaki 1969, ch. "Algèbre linéaire et algèbre multilinéaire", pp. 78–91.
^ a b Weisstein, Eric W. "Vector". mathworld.wolfram.com. Retrieved 2020-08-19.
Blass, Andreas (1984), "Existence of bases implies the axiom of choice" (PDF), Axiomatic Set Theory (Boulder, Colorado, 1983), Contemporary Mathematics, vol. 31, Providence, R.I.: American Mathematical Society, pp. 31–33, MR 0763890.
Mac Lane, Saunders (1999), Algebra (3rd ed.), pp. 193–222, ISBN 978-0-8218-1646-2.
Loomis, Lynn H. (1953), An Introduction to Abstract Harmonic Analysis, The University Series in Higher Mathematics, Toronto–New York–London: D. Van Nostrand Company, pp. x+190, hdl:2027/uc1.b4250788.
Banach, Stefan (1922), "Sur les opérations dans les ensembles abstraits et leur application aux équations intégrales" [On operations in abstract sets and their application to integral equations] (PDF), Fundamenta Mathematicae (in French), 3: 133–181, doi:10.4064/fm-3-1-133-181, ISSN 0016-2736.
Bellavitis, Giusto (1833), "Sopra alcune applicazioni di un nuovo metodo di geometria analitica", Il poligrafo giornale di scienze, lettere ed arti, Verona, 13: 53–61.
Dorier, Jean-Luc (1995), "A general outline of the genesis of vector space theory", Historia Mathematica, 22 (3): 227–261, doi:10.1006/hmat.1995.1024, MR 1347828.
Moore, Gregory H. (1995), "The axiomatization of linear algebra: 1875–1940", Historia Mathematica, 22 (3): 262–303, doi:10.1006/hmat.1995.1025.
Eisenberg, Murray; Guy, Robert (1979), "A proof of the hairy ball theorem", The American Mathematical Monthly, 86 (7): 572–574, doi:10.2307/2320587, JSTOR 2320587.
Halpern, James D. (Jun 1966), "Bases in vector spaces and the axiom of choice", Proceedings of the American Mathematical Society, 17 (3): 670–673, doi:10.2307/2035388, JSTOR 2035388.
Schönhage, A.; Strassen, Volker (1971), "Schnelle Multiplikation großer Zahlen" [Fast multiplication of big numbers], Computing (in German), 7 (3–4): 281–292, doi:10.1007/bf02242355, ISSN 0010-485X, S2CID 9738629.
Wallace, G. K. (Feb 1992), "The JPEG still picture compression standard" (PDF), IEEE Transactions on Consumer Electronics, 38 (1): xviii–xxxiv, CiteSeerX 10.1.1.318.4292, doi:10.1109/30.125072, ISSN 0098-3063, archived from the original (PDF) on 2007-01-13, retrieved 2017-10-25.
LinearAlgebra[Modular][MatBasis] example: compute a basis for the row space and the nullspace of a matrix mod p.

with(LinearAlgebra[Modular]):
p := 2741;
A := Mod(p, Matrix(5, 5, (i, j) -> rand()), integer[]):
Fill(p, A, 4..5):
A;

    [2543 1568  127  356  581]
    [ 430 1549 2376 1511 1839]
    [ 164 1946  211   49 2418]
    [   0    0    0    0    0]
    [   0    0    0    0    0]

r := MatBasis(p, A, 3, true):
A, r;

    [  1    0    0 1972 1878]
    [  0    1    0 1578 1735]
    [  0    0    1 1724 2166]
    [769 1163 1017    1    0]
    [863 1006  575    0    1],  3

The first r rows are the basis of the input vectors, and the remaining rows are the basis of the nullspace. Check that these are orthogonal:

Multiply(p, A, 1..r, A, r+1..5, 'transpose');

    [0 0]
    [0 0]
    [0 0]

The same computation in a hardware float datatype, with columns 4 and 5 made dependent on columns 1 and 3:

A := Mod(p, Matrix(5, 5, (i, j) -> rand()), float[8]):
for i to 5 do
    A[i, 4] := modp(trunc(p - A[i, 1]), p);
    A[i, 5] := modp(2*trunc(p - A[i, 3]), p)
end do:
A;

    [2635.  353. 2657.  106.  168.]
    [2587. 1857.  827.  154. 1087.]
    [1720. 1181.  493. 1021. 1755.]
    [2209.  884. 1207.  532.  327.]
    [  26. 2325.  518. 2715. 1705.]

r := MatBasis(p, A, 5, true):
A, r;

    [1. 0. 0. 2740.    0.]
    [0. 1. 0.    0.    0.]
    [0. 0. 1.    0. 2739.]
    [1. 0. 0.    1.    0.]
    [0. 0. 2.    0.    1.],  3

Multiply(p, A, 1..r, A, r+1..5, 'transpose');

    [0. 0.]
    [0. 0.]
    [0. 0.]
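The same computation can be sketched outside Maple: Gaussian elimination over GF(p), with inverses computed via Fermat's little theorem, yields a reduced row echelon form whose free columns give a nullspace basis; every row of A is then orthogonal mod p to every basis vector. This is an illustrative re-implementation, not the Modular package's own algorithm, applied to the first three rows of the matrix above.

```python
# Hedged sketch: row-reduce an integer matrix over GF(p), p prime,
# and read off a nullspace basis from the free columns.

def nullspace_mod(A, p):
    R = [[x % p for x in row] for row in A]
    rows, cols = len(R), len(R[0])
    pivots, r = [], 0
    for c in range(cols):
        if r == rows:
            break
        pr = next((i for i in range(r, rows) if R[i][c]), None)
        if pr is None:
            continue                      # no pivot in this column
        R[r], R[pr] = R[pr], R[r]
        inv = pow(R[r][c], p - 2, p)      # modular inverse via Fermat
        R[r] = [x * inv % p for x in R[r]]
        for i in range(rows):
            if i != r and R[i][c]:
                f = R[i][c]
                R[i] = [(x - f * y) % p for x, y in zip(R[i], R[r])]
        pivots.append(c)
        r += 1
    basis = []
    for fc in (c for c in range(cols) if c not in pivots):
        v = [0] * cols
        v[fc] = 1                         # free column set to 1
        for i, c in enumerate(pivots):
            v[c] = -R[i][fc] % p
        basis.append(v)
    return basis

p = 2741
A = [[2543, 1568,  127,  356,  581],
     [ 430, 1549, 2376, 1511, 1839],
     [ 164, 1946,  211,   49, 2418]]

basis = nullspace_mod(A, p)
print(len(basis))   # 2 nullspace vectors for this rank-3, 5-column matrix
for v in basis:     # each residue below is 0: rows of A are orthogonal to v mod p
    print([sum(a * b for a, b in zip(row, v)) % p for row in A])
```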
XGBoost
Scalable implementation of the gradient-boosted tree machine learning algorithm, developed by the XGBoost Contributors.

XGBoost[2] (eXtreme Gradient Boosting) is an open-source software library which provides a regularizing gradient boosting framework for C++, Java, Python,[3] R,[4] Julia,[5] Perl,[6] and Scala. It works on Linux, Windows,[7] and macOS.[8] From the project description, it aims to provide a "Scalable, Portable and Distributed Gradient Boosting (GBM, GBRT, GBDT) Library". It runs on a single machine, as well as on the distributed processing frameworks Apache Hadoop, Apache Spark, Apache Flink, and Dask.[9][10] It has gained much popularity and attention recently as the algorithm of choice for many winning teams of machine learning competitions.[11] XGBoost initially started as a research project by Tianqi Chen[12] as part of the Distributed (Deep) Machine Learning Community (DMLC) group. It began as a terminal application which could be configured using a libsvm configuration file. It became well known in ML competition circles after its use in the winning solution of the Higgs Machine Learning Challenge. Soon after, the Python and R packages were built, and XGBoost now has package implementations for Java, Scala, Julia, Perl, and other languages. This brought the library to more developers and contributed to its popularity among the Kaggle community, where it has been used for a large number of competitions.[11] It was soon integrated with a number of other packages, making it easier to use in their respective communities. It has now been integrated with scikit-learn for Python users and with the caret package for R users.
It can also be integrated into data-flow frameworks like Apache Spark, Apache Hadoop, and Apache Flink using the abstracted Rabit[13] and XGBoost4J.[14] XGBoost is also available on OpenCL for FPGAs.[15] An efficient, scalable implementation of XGBoost has been published by Tianqi Chen and Carlos Guestrin.[16]

While an XGBoost model often achieves higher accuracy than a single decision tree, it sacrifices the intrinsic interpretability of decision trees. For example, following the path that a decision tree takes to make its decision is trivial and self-explanatory, but following the paths of hundreds or thousands of trees is much harder. To achieve both performance and interpretability, some model compression techniques allow transforming an XGBoost model into a single "born-again" decision tree that approximates the same decision function.[17]

Salient features of XGBoost which distinguish it from other gradient boosting algorithms include:[18][19][20]
- Clever penalization of trees
- A proportional shrinking of leaf nodes
- Newton boosting
- An extra randomization parameter

XGBoost works as Newton–Raphson in function space, unlike gradient boosting, which works as gradient descent in function space; a second-order Taylor approximation is used in the loss function to make the connection to the Newton–Raphson method.
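The Newton-boosting idea can be made concrete with a minimal sketch: for squared loss the hessian is identically 1, and each round fits a base learner to the Newton step −g/h with hessian sample weights. Everything here (the hand-rolled stump learner, function names, parameters) is illustrative scaffolding, not XGBoost's actual API or implementation.

```python
import numpy as np

def fit_stump(x, z, w):
    """Weighted least-squares regression stump: the base (weak) learner."""
    best = None
    for t in np.unique(x)[:-1]:                 # candidate split thresholds
        left = x <= t
        cl = np.average(z[left], weights=w[left])
        cr = np.average(z[~left], weights=w[~left])
        err = np.sum(w * (z - np.where(left, cl, cr)) ** 2)
        if best is None or err < best[0]:
            best = (err, t, cl, cr)
    _, t, cl, cr = best
    return lambda q: np.where(q <= t, cl, cr)

def newton_boost(x, y, M=50, lr=0.3):
    """Newton boosting for squared loss L = (y - f)^2 / 2,
    so the gradient is g = f - y and the hessian is h = 1."""
    f0 = y.mean()                               # constant initial model
    pred = np.full_like(y, f0, dtype=float)
    stumps = []
    for _ in range(M):
        g = pred - y                            # per-example gradients
        h = np.ones_like(y)                     # per-example hessians
        phi = fit_stump(x, -g / h, h)           # fit stump to the Newton step
        stumps.append(phi)
        pred = pred + lr * phi(x)               # shrink by the learning rate
    return lambda q: f0 + lr * sum(phi(q) for phi in stumps)

# Fit a step function; the boosted stumps recover it almost exactly.
x = np.linspace(0.0, 1.0, 50)
y = (x > 0.5).astype(float)
model = newton_boost(x, y)
print(float(np.mean((model(x) - y) ** 2)) < 1e-3)  # True
```

For general losses the gradients and hessians change but the loop is identical, which is precisely the second-order Taylor connection described above.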
A generic unregularized XGBoost algorithm is:

Input: training set {(x_i, y_i)}_{i=1}^N, a differentiable loss function L(y, F(x)), a number of weak learners M, and a learning rate α.

1. Initialize the model with a constant value:
   f̂_(0)(x) = argmin_θ Σ_{i=1}^N L(y_i, θ).
2. For m = 1 to M:
   (a) Compute the "gradients" and "hessians":
       ĝ_m(x_i) = [∂L(y_i, f(x_i)) / ∂f(x_i)] evaluated at f(x) = f̂_(m−1)(x),
       ĥ_m(x_i) = [∂²L(y_i, f(x_i)) / ∂f(x_i)²] evaluated at f(x) = f̂_(m−1)(x).
   (b) Fit a base learner (or weak learner, e.g. a tree) using the training set {x_i, −ĝ_m(x_i)/ĥ_m(x_i)}_{i=1}^N by solving the optimization problem
       φ̂_m = argmin_{φ∈Φ} Σ_{i=1}^N ½ ĥ_m(x_i) [−ĝ_m(x_i)/ĥ_m(x_i) − φ(x_i)]²,
       then set f̂_m(x) = α φ̂_m(x).
   (c) Update the model: f̂_(m)(x) = f̂_(m−1)(x) + f̂_m(x).
3. Output f̂(x) = f̂_(M)(x) = Σ_{m=0}^M f̂_m(x).

Awards
John Chambers Award (2016)[21]
High Energy Physics meets Machine Learning award (HEP meets ML) (2016)[22]

^ https://github.com/dmlc/xgboost/releases/tag/v1.6.0; retrieved: 17 May 2022.
^ "GitHub project webpage".
^ "Python Package Index PYPI: xgboost". Retrieved 2016-08-01.
^ "CRAN package xgboost". Retrieved 2016-08-01.
^ "Julia package listing xgboost". Retrieved 2016-08-01.
^ "CPAN module AI::XGBoost". Retrieved 2020-02-09.
^ "Installing XGBoost for Anaconda in Windows". Retrieved 2016-08-01.
^ "Installing XGBoost on Mac OSX". Retrieved 2016-08-01.
^ "Dask Homepage".
^ "Distributed XGBoost with Dask — xgboost 1.5.0-dev documentation". xgboost.readthedocs.io. Retrieved 2021-07-15.
^ a b "XGBoost - ML winning solutions (incomplete list)". Retrieved 2016-08-01.
^ "Story and Lessons behind the evolution of XGBoost". Retrieved 2016-08-01.
^ "Rabit - Reliable Allreduce and Broadcast Interface". Retrieved 2016-08-01.
^ "XGBoost4J". Retrieved 2016-08-01.
^ "XGBoost on FPGAs". Retrieved 2019-08-01.
^ Chen, Tianqi; Guestrin, Carlos (2016). "XGBoost: A Scalable Tree Boosting System". In Krishnapuram, Balaji; Shah, Mohak; Smola, Alexander J.; Aggarwal, Charu C.; Shen, Dou; Rastogi, Rajeev (eds.). Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, August 13–17, 2016. ACM. pp. 785–794. arXiv:1603.02754. doi:10.1145/2939672.2939785.
^ Sagi, Omer; Rokach, Lior (2021). "Approximating XGBoost with an interpretable decision tree". Information Sciences. 572: 522–542. doi:10.1016/j.ins.2021.05.055.
^ Gandhi, Rohith (2019-05-24). "Gradient Boosting and XGBoost". Medium. Retrieved 2020-01-04.
^ "Boosting algorithm: XGBoost". Towards Data Science. 2017-05-14. Retrieved 2020-01-04.
^ "Tree Boosting With XGBoost – Why Does XGBoost Win "Every" Machine Learning Competition?". Synced. 2017-10-22. Retrieved 2020-01-04.
^ "John Chambers Award Previous Winners". Retrieved 2016-08-01.
^ "HEP meets ML Award". Retrieved 2016-08-01.
Least Common Multiple - Maple Help

The least common multiple (LCM) of a and b is the smallest positive integer which is a multiple of both a and b. Consider two balls bouncing in steps of length a and b respectively, and watch them bounce until they both land on the ground together (after starting together). The total (horizontal) distance they have traveled is the LCM of a and b. Change the parameters of the least common multiple by dragging the sliders. Click Animate to watch the balls bounce until they find the LCM.
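The bouncing-ball picture corresponds to the usual computation lcm(a, b) = a·b / gcd(a, b); a minimal sketch:

```python
from math import gcd

def lcm(a: int, b: int) -> int:
    """Least common multiple via the identity lcm(a, b) * gcd(a, b) = a * b."""
    return a * b // gcd(a, b)

# A ball stepping by a and a ball stepping by b first land together at lcm(a, b).
print(lcm(4, 6))  # -> 12
```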
Use differentials to find dy given y = x² − 6x, x = 4, dx = −0.5.

Solution: The differentials dy and dx are related by the derivative: since

f'(x) = dy/dx,

we have

dy = f'(x) · dx,

where we use the given values of x and dx. Here

f'(x) = 2x − 6,

and at x = 4,

f'(4) = 2(4) − 6 = 2.

Hence

dy = f'(x) · dx = 2(−0.5) = −1.

Final answer: dy = −1.
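As a quick numerical check (illustrative, not part of the original page), the differential dy = f'(x)·dx approximates the actual change Δy = f(x + dx) − f(x):

```python
def f(x):
    return x**2 - 6*x

x, dx = 4.0, -0.5
dy = (2*x - 6) * dx            # f'(x) = 2x - 6, so dy = f'(x) * dx
delta_y = f(x + dx) - f(x)     # actual change in y

print(dy)       # -> -1.0
print(delta_y)  # -> -0.75, differing from dy by the second-order term dx^2 = 0.25
```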
Today's mystery function has these properties: f''(−3) = f'(−3) = f(−3) = 0. What do you know about the graph? What don't you know?

When a 1st derivative is equal to zero at x = a, then x = a is a CANDIDATE for a local max or a local min. When a 2nd derivative is equal to zero at x = a, then x = a is also a CANDIDATE for a point of inflection. Obviously, the candidate cannot hold all three positions... so, how do we determine whether x = a is a local max, a local min, or a POI?
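One way to see why the standard tests are inconclusive here (an illustrative check, not part of the original question): both f(x) = (x + 3)⁴ and f(x) = (x + 3)³ satisfy f(−3) = f'(−3) = f''(−3) = 0, yet the first has a local min at x = −3 while the second has a point of inflection there, so a sign chart (or higher derivatives) is needed:

```python
def numderiv(f, x, h=1e-5):
    """Central-difference estimate of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

quartic = lambda x: (x + 3) ** 4   # local min at x = -3
cubic = lambda x: (x + 3) ** 3     # point of inflection at x = -3

# f' changes sign - to + across x = -3 for the quartic (a local min) ...
print(numderiv(quartic, -3.1) < 0 < numderiv(quartic, -2.9))  # -> True
# ... but keeps the same sign for the cubic (an inflection, not an extremum).
print(numderiv(cubic, -3.1) > 0 and numderiv(cubic, -2.9) > 0)  # -> True
```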
Roots of Unity - Maple Help

A root of unity, also known as a de Moivre number, is a complex number z which satisfies z^n = 1, for some positive integer n.

Solving for the n-th Roots of Unity

Note that Maple uses the uppercase letter I, rather than the lowercase letter i, to denote the imaginary unit: I² = −1.

Since z^n − 1 is a polynomial with complex coefficients and a degree of n, it must have exactly n complex roots according to the Fundamental Theorem of Algebra. To solve for all the n-th roots of unity, we will use de Moivre's Theorem:

(cos(x) + I sin(x))^n = cos(n x) + I sin(n x),

where x is any complex number and n is any integer (in this particular case x will be any real number and n will be any positive integer).

First, convert the complex number z to its polar form:

z = |z| (cos(θ) + I sin(θ)),

where |z| is the modulus of z and θ is the angle between the positive real axis (Re) and the line segment joining the point z to the origin on the complex plane.

Since z^n = 1, we have |z| = 1, and so the previous equation simply becomes

z = cos(θ) + I sin(θ).

Also, converting the real number 1 = 1 + 0·I to polar form, we get

1 = cos(2πk) + I sin(2πk), for any integer k.

Thus

z^n = (cos(θ) + I sin(θ))^n = cos(2πk) + I sin(2πk) = 1,

and so, using de Moivre's Theorem, this equation becomes

z^n = cos(nθ) + I sin(nθ) = cos(2πk) + I sin(2πk) = 1.

From this form of the equation, we can see that nθ = 2πk, that is, θ = 2πk/n. Hence the n-th roots of unity can be expressed using the formula

z_k = cos(2πk/n) + I sin(2πk/n), for k = 0, 1, 2, ..., n − 1.

Using Euler's formula, e^{Iθ} = cos(θ) + I sin(θ), we can write this formula for the n-th roots of unity in its most common form:

z_k = e^{I(2πk/n)}, for k = 0, 1, 2, ..., n − 1.

When the n-th roots of unity are plotted on the complex plane (with the real part [Re] on the horizontal axis and the imaginary part [Im] on the vertical axis), we can see that they all lie on the unit circle and form the vertices of a regular polygon with n sides and a circumradius of 1.
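The closed form z_k = e^{I·2πk/n} translates directly into code (a Python sketch for illustration; in Maple itself one would solve z^n = 1 directly):

```python
import cmath

def roots_of_unity(n: int) -> list:
    """All n-th roots of unity, z_k = exp(2*pi*I*k/n) for k = 0 .. n-1."""
    return [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]

roots = roots_of_unity(4)
for z in roots:
    # Every root satisfies z**n = 1 (up to floating-point error) and |z| = 1,
    # i.e. all roots lie on the unit circle.
    assert abs(z**4 - 1) < 1e-12 and abs(abs(z) - 1) < 1e-12
```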
verify/RootOf - Maple Help

verify equality of expressions involving RootOf

Calling Sequence
verify(expr1, expr2, RootOf)
verify(expr1, expr2, 'RootOf'(ver))

Description

Verification after normalizing RootOf subexpressions. The verify(expr1, expr2, RootOf) calling sequence performs the following steps. It finds all subexpressions of expr1 and expr2 that are of the form RootOf(...), and replaces each such subexpression ro by the result of calling convert(ro, RootOf, form = index). It then compares the expressions resulting from doing this conversion on expr1 and expr2, and returns true if these results are the same, and false if they are different. If ver is specified, then that verification is used to compare the resulting expressions, rather than simple equality testing. This verification is symmetric if ver is not specified, or if ver is specified and it is symmetric. Since RootOf is a Maple function, it must be enclosed in single quotes to prevent evaluation.

Examples

Suppose you want to compare two expressions involving RootOf calls that are specified with intervals. They do not compare as equal if the intervals are not specified in the same way, even if they refer to the same root. Consider the example below: this polynomial has three roots, all real and between -2 and 2.

plot(x^3 - 3*x + 1, x = -2 .. 2)

The root between 0 and 1 is the same as the root between 0.34 and 0.35. However, plain equality testing does not see that.

verify(2 + RootOf(_Z^3 - 3*_Z + 1, 0 .. 1), 2 + RootOf(_Z^3 - 3*_Z + 1, 0.34 .. 0.35))
    false

The RootOf verification solves this problem.

verify(2 + RootOf(_Z^3 - 3*_Z + 1, 0 .. 1), 2 + RootOf(_Z^3 - 3*_Z + 1, 0.34 .. 0.35), RootOf)
    true

We can still distinguish the two positive roots.

verify(2 + RootOf(_Z^3 - 3*_Z + 1, 0 .. 1), 2 + RootOf(_Z^3 - 3*_Z + 1, 1 .. 2), RootOf)
    false

If you want to use a different verification than simple equality after normalizing the RootOf subexpressions, you can use the second calling sequence. For example, suppose we want to see whether a given expression is equivalent to either of the following RootOf calls. We can do that by using the second calling sequence and specifying member verification for ver.

accepted_answers := {RootOf(_Z^3 - 3*_Z + 1, 0 .. 1), RootOf(_Z^3 + _Z - 3, 1 .. 2)}

to_test := RootOf(_Z^3 - 3*_Z + 1, 0.3)

verify(to_test, accepted_answers, 'RootOf'('member'))
    true

Coincidentally, we can specify an equivalent verification by specifying RootOf verification as the parameter to member verification, instead of the other way around. This calling sequence is explained on the verify/member help page.

verify(to_test, accepted_answers, 'member'('RootOf'))
    true
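The idea of "normalize, then compare" can be mimicked outside Maple (an illustrative Python sketch, not Maple's implementation): identify each interval-specified root with the numeric root it brackets, then compare those.

```python
# Illustrative analogue of verify(..., RootOf): two interval specifications
# denote the same root of p iff the root of p inside each interval is the same.

def bisect_root(p, lo, hi, tol=1e-12):
    """Locate a sign-change root of p in [lo, hi] by bisection."""
    flo = p(lo)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if (p(mid) < 0) == (flo < 0):
            lo, flo = mid, p(mid)
        else:
            hi = mid
    return (lo + hi) / 2

p = lambda z: z**3 - 3*z + 1

# Same root, described by two different intervals -> "true".
r1 = bisect_root(p, 0.0, 1.0)
r2 = bisect_root(p, 0.34, 0.35)
print(abs(r1 - r2) < 1e-9)   # -> True

# Different roots of the same polynomial -> "false".
r3 = bisect_root(p, 1.0, 2.0)
print(abs(r1 - r3) < 1e-9)   # -> False
```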
Mass in thermal systems - MATLAB

The Thermal Mass block represents a thermal mass, which reflects the ability of a material or a combination of materials to store internal energy. The property is characterized by the mass of the material and its specific heat. The thermal mass is described with the following equation:

Q = c · m · dT/dt

where:
- Q is the heat flow;
- c is the specific heat of the mass material;
- m is the mass;
- T is the temperature;
- t is time.

The block has one thermal conserving port. The block positive direction is from its port towards the block. This means that the heat flow is positive if it flows into the block.

Parameters:
- Mass — mass of the material. The default value is 1 kg.
- Specific heat — specific heat of the material. The default value is 447 J/kg/K.

Ports: The block has one thermal conserving port, associated with the mass connection to the system.
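The block equation Q = c·m·dT/dt rearranges to a simple temperature update, dT = Q/(c·m)·dt; a minimal forward-Euler sketch using the block's default parameters (the constant heat flow Q and initial temperature below are illustrative, and this is not Simscape's solver):

```python
# Heating a thermal mass: Q = c * m * dT/dt  =>  dT = Q / (c * m) * dt.
m = 1.0      # mass, kg (block default)
c = 447.0    # specific heat, J/(kg*K) (block default)
Q = 100.0    # constant heat flow into the block, W (illustrative)
T = 293.15   # initial temperature, K (illustrative)

dt, t_end = 0.1, 60.0
for _ in range(int(t_end / dt)):
    T += Q / (c * m) * dt

# For constant Q the rise is exactly Q * t / (c * m) = 6000 / 447 ~ 13.42 K.
print(round(T - 293.15, 2))  # -> 13.42
```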
Spline interpolation - MATLAB spapi

Bivariate Spline Interpolant and Osculatory Interpolation to Gridded Data

Syntax
spline = spapi(knots,x,y)
spapi(k,x,y)
spapi({knork1,...,knorkm},{x1,...,xm},y)
spapi(...,'noderiv')

Description

spline = spapi(knots,x,y) returns the spline f (if any) of order k = length(knots) - length(x) with knot sequence knots for which

(*) f(x(j)) = y(:,j), all j.

If some of the entries of x are the same, then this is interpreted in the osculatory sense:

D^{m(j)} f(x(j)) = y(:,j),

where m(j) = #{i < j : x(i) = x(j)} counts the earlier occurrences of the site x(j), and D^m f is the m-th derivative of f. In this case, the r-fold repetition of a site z in x corresponds to the prescribing of the value and the first r – 1 derivatives of f at z. To match the average of all data values with the same data site instead, call spapi with an additional fourth argument. The data values, y(:,j), can be scalars, vectors, matrices, or ND-arrays.

spapi(k,x,y), with k a positive integer, specifies the desired spline order, k. In this case the spapi function calls the aptknt function to determine a workable, but not necessarily optimal, knot sequence for the given sites x. In other words, the command spapi(k,x,y) has the same effect as the more explicit command spapi(aptknt(x,k),x,y).

spapi({knork1,...,knorkm},{x1,...,xm},y) returns the B-form of a tensor-product spline interpolant to gridded data. Here, each knorki is either a knot sequence, or a positive integer specifying the polynomial order used in the i-th variable. The spapi function then provides a corresponding knot sequence for the i-th variable. Further, y must be an (r+m)-dimensional array, with y(:,i1,...,im) the datum to fit at the site [x{1}(i1),...,x{m}(im)], for all i1, ..., im. In contrast to the univariate case, if the spline is scalar-valued, then y can be an m-dimensional array.

spapi(...,'noderiv'), with the character vector or string scalar 'noderiv' as a fourth argument, has the same effect as spapi(...)
except that data values sharing the same site are interpreted differently. With the fourth argument present, the average of the data values with the same data site is interpolated at such a site. Without it, data values with the same data site are interpreted as values of successive derivatives to be matched at such a site, as described above, in the first paragraph of this Description.

Examples

The function

spapi([0 0 0 0 1 2 2 2 2],[0 1 1 1 2],[2 0 1 2 -1])

produces the unique cubic spline f on the interval [0..2] with exactly one interior knot, at 1, that satisfies the five conditions

f(0+) = 2, f(1) = 0, Df(1) = 1, D²f(1) = 2, f(2−) = −1.

These include 3-fold matching at 1, i.e., matching there to prescribed values of the function and its first two derivatives.

Here is an example of osculatory interpolation, to values y and slopes s at the sites x by a quintic spline:

sp = spapi(augknt(x,6,2),[x,x,min(x),max(x)],[y,s,ddy0,ddy1]);

with ddy0 and ddy1 values for the second derivative at the endpoints.

As a related example, if you want to interpolate the sin(x) function at the distinct data sites x by a cubic spline, and to match its slope at a subsequence x(s), then call the spapi function with these arguments:

sp = spapi(4,[x x(s)],[sin(x) cos(x(s))]);

The aptknt function will provide a suitable knot sequence. If you want to interpolate the same data by quintic splines, then simply change the value 4 to 6.

As a bivariate example, here is a bivariate interpolant.

x = -2:.5:2; y = -1:.25:1;
[xx, yy] = ndgrid(x,y);
z = exp(-(xx.^2+yy.^2));
sp = spapi({3,4},{x,y},z);
fnplt(sp)

As an illustration of osculatory interpolation to gridded data, here is complete bicubic interpolation, with the data explicitly derived from the bicubic polynomial g(u,v) = u³v³.
This is helpful to see exactly where the slopes, and slopes of slopes (the cross derivatives), must be placed in the data values supplied. Since g is a bicubic polynomial, its interpolant, f, must be g itself. Test this:

sites = {[0,1],[0,2]}; coefs = zeros(4,4); coefs(1,1) = 1;
g = ppmak(sites,coefs);
Dxg = fnval(fnder(g,[1,0]),sites);
Dyg = fnval(fnder(g,[0,1]),sites);
Dxyg = fnval(fnder(g,[1,1]),sites);
f = spapi({4,4}, {sites{1}([1,2,1,2]),sites{2}([1,2,1,2])}, ...
          [fnval(g,sites), Dyg ; ...
           Dxg.' , Dxyg]);
if any( squeeze( fnbrk(fn2fm(f,'pp'), 'c') ) - coefs )
  'something went wrong', end

Input Arguments
- knots — Knot sequence of the spline, specified as a nondecreasing vector.
- k — Spline order, specified as a positive integer.

Output Arguments
- spline — Spline, returned as a structure with these fields:
  - Form — form of the spline, returned as 'pp' or 'B-'. 'pp' indicates that the spline is given in piecewise polynomial form; 'B-' indicates it is given in B-form.
  - Knots — knot locations of the spline.
  - Number — number of polynomial pieces.

Algorithms

The given (univariate) knots and sites must satisfy the Schoenberg-Whitney conditions for the interpolant to be defined: if the site sequence x is nondecreasing, then

knots(j) < x(j) < knots(j+k), all j,

with equality possible at knots(1) and knots(end). In the multivariate case, these conditions must hold in each variable separately.

The function calls spcol to provide the almost-block-diagonal collocation matrix (B_{j,k}(x)) (with repeats in x denoting derivatives, as described above), and slvblk solves the linear system (*), using a block QR factorization. The function fits gridded data, in tensor-product fashion, one variable at a time, taking advantage of the fact that a univariate spline fit depends linearly on the values being fitted.

See Also
csapi | spap2 | spaps | spline
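The basic use of spapi — interpolating values at sites with a spline of a given order — has a rough Python analogue in SciPy (an illustrative sketch only: SciPy's `make_interp_spline` takes the polynomial degree k−1 rather than the order k, and does not support spapi's repeated-site derivative matching):

```python
import numpy as np
from scipy.interpolate import make_interp_spline

# Rough analogue of sp = spapi(4, x, sin(x)): a cubic (order-4, degree-3)
# spline interpolant through the data sites.
x = np.linspace(0, 2 * np.pi, 9)
y = np.sin(x)
spl = make_interp_spline(x, y, k=3)   # degree 3 = spline order 4

# The spline reproduces the data at the sites ...
assert np.allclose(spl(x), y)
# ... and approximates sin between them.
assert abs(spl(1.0) - np.sin(1.0)) < 0.05
```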
Pseudo-Euclidean space - Wikipedia

In mathematics and theoretical physics, a pseudo-Euclidean space is a finite-dimensional real n-space together with a non-degenerate quadratic form q. Such a quadratic form can, given a suitable choice of basis (e1, …, en), be applied to a vector x = x1e1 + ⋯ + xnen, giving

$$q(x) = \left(x_1^2 + \dots + x_k^2\right) - \left(x_{k+1}^2 + \dots + x_n^2\right)$$

which is called the scalar square of the vector x.[1]: 3  For Euclidean spaces, k = n, implying that the quadratic form is positive-definite.[2] When 0 < k < n, q is an isotropic quadratic form; otherwise it is anisotropic. Note that if 1 ≤ i ≤ k < j ≤ n, then q(ei + ej) = 0, so that ei + ej is a null vector. In a pseudo-Euclidean space with k < n, unlike in a Euclidean space, there exist vectors with negative scalar square.

As with the term Euclidean space, the term pseudo-Euclidean space may be used to refer to an affine space or a vector space depending on the author, with the latter alternatively being referred to as a pseudo-Euclidean vector space[3] (see point–vector distinction).

The geometry of a pseudo-Euclidean space is consistent despite some properties of Euclidean space not applying, most notably that it is not a metric space, as explained below. The affine structure is unchanged, and thus also the concepts line, plane and, generally, of an affine subspace (flat), as well as line segments.

Positive, zero, and negative scalar squares

(Figure: the null cone for n = 3; k is either 1 or 2 depending on the choice of sign of q.)

A null vector is a vector for which the quadratic form is zero. Unlike in a Euclidean space, such a vector can be non-zero, in which case it is self-orthogonal.
If the quadratic form is indefinite, a pseudo-Euclidean space has a linear cone of null vectors given by { x : q(x) = 0 }. When the pseudo-Euclidean space provides a model for spacetime (see below), the null cone is called the light cone of the origin. The null cone separates two open sets,[4] respectively for which q(x) > 0 and q(x) < 0. If k ≥ 2, then the set of vectors for which q(x) > 0 is connected. If k = 1, then it consists of two disjoint parts, one with x1 > 0 and another with x1 < 0. Similar statements can be made for vectors for which q(x) < 0 if k is replaced with n − k.

The quadratic form q corresponds to the square of a vector in the Euclidean case. To define the vector norm (and distance) in an invariant manner, one has to take square roots of scalar squares, which leads to possibly imaginary distances; see square root of negative numbers. But even for a triangle with positive scalar squares of all three sides (whose square roots are real and positive), the triangle inequality does not hold in general. Hence the terms norm and distance are avoided in pseudo-Euclidean geometry; they may be replaced with scalar square and interval respectively. However, for a curve whose tangent vectors all have scalar squares of the same sign, the arc length is defined. It has important applications: see proper time, for example.

Rotations and spheres

The rotation group of such a space is the indefinite orthogonal group O(q), also denoted as O(k, n − k) without a reference to a particular quadratic form.[5] Such "rotations" preserve the form q and, hence, the scalar square of each vector, including whether it is positive, zero, or negative. Whereas Euclidean space has a unit sphere, pseudo-Euclidean space has the hypersurfaces { x : q(x) = 1 } and { x : q(x) = −1 }. Such a hypersurface, called a quasi-sphere, is preserved by the appropriate indefinite orthogonal group.
Symmetric bilinear form

The quadratic form q gives rise to a symmetric bilinear form defined as follows:

$$\langle x, y\rangle = \tfrac{1}{2}\left[q(x+y) - q(x) - q(y)\right] = \left(x_1 y_1 + \ldots + x_k y_k\right) - \left(x_{k+1} y_{k+1} + \ldots + x_n y_n\right).$$

The quadratic form can be expressed in terms of the bilinear form: q(x) = ⟨x, x⟩. When ⟨x, y⟩ = 0, then x and y are orthogonal vectors of the pseudo-Euclidean space. This bilinear form is often referred to as the scalar product, and sometimes as "inner product" or "dot product", but it does not define an inner product space and it does not have the properties of the dot product of Euclidean vectors. If x and y are orthogonal and q(x)q(y) < 0, then x is hyperbolic-orthogonal to y. The standard basis of the real n-space is orthogonal. There are no orthonormal bases in a pseudo-Euclidean space for which the bilinear form is indefinite, because it cannot be used to define a vector norm.

Subspaces and orthogonality

For a (positive-dimensional) subspace[6] U of a pseudo-Euclidean space, when the quadratic form q is restricted to U, the following three cases are possible:

1. q|U is either positive or negative definite. Then, U is essentially Euclidean (up to the sign of q).
2. q|U is indefinite, but non-degenerate. Then, U is itself pseudo-Euclidean. This is possible only if dim U ≥ 2; if dim U = 2, which means that U is a plane, then it is called a hyperbolic plane.
3. q|U is degenerate.

One of the most jarring properties (for a Euclidean intuition) of pseudo-Euclidean vectors and flats is their orthogonality. When two non-zero Euclidean vectors are orthogonal, they are not collinear. The intersection of any Euclidean linear subspace with its orthogonal complement is the {0} subspace. But the definition from the previous subsection immediately implies that any vector ν of zero scalar square is orthogonal to itself.
Hence, the isotropic line N = ⟨ν⟩ generated by a null vector ν is a subset of its orthogonal complement N⊥. The formal definition of the orthogonal complement of a vector subspace in a pseudo-Euclidean space gives a perfectly well-defined result, which satisfies the equality dim U + dim U⊥ = n due to the quadratic form's non-degeneracy. It is just the condition U ∩ U⊥ = {0} or, equivalently, U + U⊥ = all space, which can be broken if the subspace U contains a null direction.[7] While subspaces form a lattice, as in any vector space, this ⊥ operation is not an orthocomplementation, in contrast to inner product spaces. For a subspace N composed entirely of null vectors (which means that the scalar square q, restricted to N, equals 0), it always holds that N ⊂ N⊥ or, equivalently, N ∩ N⊥ = N. Such a subspace can have up to min(k, n − k) dimensions.[8] For a (positive) Euclidean k-subspace, its orthogonal complement is an (n − k)-dimensional negative "Euclidean" subspace, and vice versa. Generally, for a (d+ + d− + d0)-dimensional subspace U consisting of d+ positive and d− negative dimensions (see Sylvester's law of inertia for clarification), its orthogonal "complement" U⊥ has (k − d+ − d0) positive and (n − k − d− − d0) negative dimensions, while the remaining d0 dimensions are degenerate and form the U ∩ U⊥ intersection.
Parallelogram law and Pythagorean theorem

The parallelogram law takes the form

$$q(x) + q(y) = \tfrac{1}{2}\left(q(x+y) + q(x-y)\right).$$

Using the square of the sum identity, for an arbitrary triangle one can express the scalar square of the third side from the scalar squares of two sides and their bilinear form product:

$$q(x+y) = q(x) + q(y) + 2\langle x, y\rangle.$$

This demonstrates that, for orthogonal vectors, a pseudo-Euclidean analog of the Pythagorean theorem holds:

$$\langle x, y\rangle = 0 \Rightarrow q(x) + q(y) = q(x+y).$$

Generally, the absolute value |⟨x, y⟩| of the bilinear form on two vectors may be greater than √|q(x)q(y)|, equal to it, or less. This causes similar problems with the definition of angle (see Dot product § Geometric definition) as appeared above for distances. If k = 1 (only one positive term in q), then for vectors of positive scalar square:

$$|\langle x, y\rangle| \geq \sqrt{q(x) q(y)},$$

which permits the definition of the hyperbolic angle, an analog of the angle between these vectors, through the inverse hyperbolic cosine:[9]

$$\operatorname{arcosh} \frac{|\langle x, y\rangle|}{\sqrt{q(x) q(y)}}.$$

It corresponds to the distance on an (n − 1)-dimensional hyperbolic space. This is known as rapidity in the context of the theory of relativity discussed below. Unlike the Euclidean angle, it takes values from [0, +∞) and equals 0 for antiparallel vectors.

Algebra and tensor calculus

Like Euclidean spaces, every pseudo-Euclidean vector space generates a Clifford algebra. Unlike the properties above, where replacement of q by −q changed numbers but not geometry, the sign reversal of the quadratic form results in a distinct Clifford algebra, so for example Cl1,2(R) and Cl2,1(R) are not isomorphic. Just like over any vector space, there are pseudo-Euclidean tensors.
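The signature-(k, n − k) scalar square and bilinear form defined above, together with the parallelogram law and the pseudo-Pythagorean identity, can be checked numerically (an illustrative sketch; the sign convention follows the article's q):

```python
def q(x, k):
    """Scalar square with signature (k, n-k): sum of first k squares minus the rest."""
    return sum(v * v for v in x[:k]) - sum(v * v for v in x[k:])

def bilin(x, y, k):
    """Symmetric bilinear form <x, y> = (q(x+y) - q(x) - q(y)) / 2."""
    s = [a + b for a, b in zip(x, y)]
    return (q(s, k) - q(x, k) - q(y, k)) / 2

k, x, y = 1, (3.0, 1.0, 0.0), (1.0, 0.0, 2.0)

# Parallelogram law: q(x) + q(y) = (q(x+y) + q(x-y)) / 2.
s = [a + b for a, b in zip(x, y)]
d = [a - b for a, b in zip(x, y)]
assert q(x, k) + q(y, k) == (q(s, k) + q(d, k)) / 2

# The null vector e1 + e2 (one positive, one negative coordinate) is
# non-zero yet self-orthogonal, exactly as the article describes.
n = (1.0, 1.0, 0.0)
assert q(n, k) == 0 and bilin(n, n, k) == 0
```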
As with a Euclidean structure, there are raising and lowering indices operators but, unlike the case with Euclidean tensors, there are no bases in which these operations leave the values of components unchanged. If there is a vector vβ, the corresponding covariant vector is

$$v_{\alpha} = q_{\alpha\beta} v^{\beta},$$

and with the standard-form

$$q_{\alpha\beta} = \begin{pmatrix} I_{k\times k} & 0 \\ 0 & -I_{(n-k)\times(n-k)} \end{pmatrix}$$

the first k components of vα are numerically the same as those of vβ, but the remaining n − k have opposite signs. The correspondence between contravariant and covariant tensors makes a tensor calculus on pseudo-Riemannian manifolds a generalization of the one on Riemannian manifolds.

A very important pseudo-Euclidean space is Minkowski space, which is the mathematical setting in which Albert Einstein's theory of special relativity is formulated. For Minkowski space, n = 4 and k = 3,[10] so that

$$q(x) = x_1^2 + x_2^2 + x_3^2 - x_4^2.$$

The geometry associated with this pseudo-metric was investigated by Poincaré.[11][12] Its rotation group is the Lorentz group. The Poincaré group also includes translations and plays the same role as the Euclidean groups of ordinary Euclidean spaces.

Another pseudo-Euclidean space is the plane z = x + yj consisting of split-complex numbers, equipped with the quadratic form

$$\lVert z\rVert = z z^{*} = z^{*} z = x^2 - y^2.$$

This is the simplest case of an indefinite pseudo-Euclidean space (n = 2, k = 1) and the only one where the null cone dissects the space into four open sets. The group SO+(1, 1) consists of the so-called hyperbolic rotations.

References

1. Élie Cartan (1981), The Theory of Spinors, Dover Publications, ISBN 0-486-64070-1.
2. Euclidean spaces are regarded as pseudo-Euclidean spaces – see for example Rafal Ablamowicz; P. Lounesto (2013), Clifford Algebras and Spinor Structures, Springer Science & Business Media, p. 32.
3. Rafal Ablamowicz; P.
Lounesto (2013), Clifford Algebras and Spinor Structures, Springer Science & Business Media, p. 32.
4. The standard topology on Rn is assumed.
5. What the "rotations group" is depends on the exact definition of a rotation. "O" groups contain improper rotations. Transforms that preserve orientation form the group SO(q), or SO(k, n − k), but it also is not connected if both k and n − k are positive. The group SO+(q), which preserves orientation on positive and negative scalar square parts separately, is a (connected) analog of the Euclidean rotation group SO(n). Indeed, all these groups are Lie groups of dimension ½n(n − 1).
6. A linear subspace is assumed, but the same conclusions are true for an affine flat, with the only complication that the quadratic form is always defined on vectors, not points.
7. Actually, U ∩ U⊥ is not zero only if the quadratic form q restricted to U is degenerate.
8. Thomas E. Cecil (1992), Lie Sphere Geometry, page 24, Universitext, Springer, ISBN 0-387-97747-3.
9. Note that cos(i arcosh s) = s, so for s > 0 these can be understood as imaginary angles.
10. Another well-established representation uses k = 1 and coordinate indices starting from 0 (thence q(x) = x0² − x1² − x2² − x3²), but they are equivalent up to the sign of q. See Sign convention § Metric signature.
11. H. Poincaré (1906), On the Dynamics of the Electron, Rendiconti del Circolo Matematico di Palermo.
12. B. A. Rosenfeld (1988), A History of Non-Euclidean Geometry, page 266, Studies in the History of Mathematics and the Physical Sciences #12, Springer, ISBN 0-387-96458-4.

Further reading

- Cartan, Élie (1981) [1938], The Theory of Spinors, New York: Dover Publications, p. 3, ISBN 978-0-486-64070-9, MR 0631850.
- Werner Greub (1963), Linear Algebra, 2nd edition, §12.4 Pseudo-Euclidean Spaces, pp. 237–249, Springer-Verlag.
- Walter Noll (1964), "Euclidean geometry and Minkowskian chronometry", American Mathematical Monthly 71: 129–144.
- Novikov, S. P.; Fomenko, A. T. (1990), translated from the Russian by M. Tsaplina,
Basic Elements of Differential Geometry and Topology. Dordrecht; Boston: Kluwer Academic Publishers. ISBN 0-7923-1009-8.
- Szekeres, Peter (2004). A Course in Modern Mathematical Physics: Groups, Hilbert Space, and Differential Geometry. Cambridge University Press. ISBN 0-521-82960-7.
- D. D. Sokolov (originator), "Pseudo-Euclidean space", Encyclopedia of Mathematics.

Retrieved from "https://en.wikipedia.org/w/index.php?title=Pseudo-Euclidean_space&oldid=1039545068"
Evaluate the following integrals without a calculator. Then write a statement about the connection between them. Check your answer with a calculator.

(a) $\int_{2}^{9} 8x \, dx$ — Note that y = 8x is a linear function, and the area between x = 2 and x = 9 is trapezoidal.

(b) $\int_{2}^{9} (8x + 5) \, dx$ — What effect does the +5 have on the bases of the trapezoid?

(c) $\int_{2}^{9} 5 \, dx$ — Notice that the trapezoid below the function being integrated in part (b) and above the function being integrated here is equivalent to the trapezoid you found in part (a).
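A quick numeric check of the connection the exercise is driving at, namely ∫(8x + 5)dx = ∫8x dx + ∫5 dx (illustrative; the exercise intends this to be done by hand with trapezoid areas):

```python
# Exact antiderivatives: ∫8x dx = 4x², ∫(8x+5) dx = 4x² + 5x, ∫5 dx = 5x.
a = 4 * 9**2 - 4 * 2**2                        # (a) = 4(81 - 4)      = 308
b = (4 * 9**2 + 5 * 9) - (4 * 2**2 + 5 * 2)    # (b) = 369 - 26       = 343
c = 5 * 9 - 5 * 2                              # (c) = 5 * (9 - 2)    = 35

print(a, b, c)     # -> 308 343 35
assert b == a + c  # the +5 simply adds the area of a 7-by-5 rectangle
```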
Mantle convection - Wikipedia

Mantle convection is the very slow creeping motion of Earth's solid silicate mantle caused by convection currents carrying heat from the interior to the planet's surface.[1][2]

The Earth's surface lithosphere rides atop the asthenosphere, and the two form the components of the upper mantle. The lithosphere is divided into a number of tectonic plates that are continuously being created or consumed at plate boundaries. Accretion occurs as mantle is added to the growing edges of a plate, associated with seafloor spreading. This hot added material cools down by conduction and convection of heat. At the consumption edges of the plate, the material has thermally contracted to become dense, and it sinks under its own weight in the process of subduction, usually at an ocean trench.[3] This subducted material sinks through the Earth's interior. Some subducted material appears to reach the lower mantle,[4] while in other regions this material is impeded from sinking further, possibly due to a phase transition from spinel to silicate perovskite and magnesiowustite, an endothermic reaction.[5] The subducted oceanic crust triggers volcanism, although the basic mechanisms are varied. Volcanism may occur due to processes that add buoyancy to partially melted mantle, which would cause upward flow of the partial melt due to the decrease in its density. Secondary convection may cause surface volcanism as a consequence of intraplate extension[6] and mantle plumes.[7] In 1993 it was suggested that inhomogeneities in the D" layer have some impact on mantle convection.[8] Mantle convection causes tectonic plates to move around the Earth's surface.[9]

Types of convection

(Figure: Earth cross-section showing location of upper (3) and lower (5) mantle.)
(Figure: Earth's temperature vs depth. Dashed curve: layered mantle convection.
Solid curve: whole-mantle convection.[7] A superplume generated by cooling processes in the mantle.[10] During the late 20th century, there was significant debate within the geophysics community as to whether convection is likely to be "layered" or "whole".[11][12] Although elements of this debate still continue, results from seismic tomography, numerical simulations of mantle convection and examination of Earth's gravitational field are all beginning to suggest the existence of 'whole' mantle convection, at least at the present time. In this model, cold, subducting oceanic lithosphere descends all the way from the surface to the core–mantle boundary (CMB) and hot plumes rise from the CMB all the way to the surface.[13] This picture is strongly based on the results of global seismic tomography models, which typically show slab and plume-like anomalies crossing the mantle transition zone. Although it is now well accepted that subducting slabs cross the mantle transition zone and descend into the lower mantle, debate about the existence and continuity of plumes persists, with important implications for the style of mantle convection. This debate is linked to the controversy regarding whether intraplate volcanism is caused by shallow, upper-mantle processes or by plumes from the lower mantle.[6] Many geochemistry studies have argued that the lavas erupted in intraplate areas are different in composition from shallow-derived mid-ocean ridge basalts (MORB). Specifically, they typically have elevated Helium-3 – Helium-4 ratios. Being a primordial nuclide, Helium-3 is not naturally produced on earth. It also quickly escapes from earth's atmosphere when erupted. The elevated He-3/He-4 ratio of Ocean Island Basalts (OIBs) suggest that they must be sources from a part of the earth that has not previously been melted and reprocessed in the same way as MORB source has been. 
This has been interpreted as their originating from a different, less well-mixed region, suggested to be the lower mantle. Others, however, have pointed out that geochemical differences could indicate the inclusion of a small component of near-surface material from the lithosphere.

Planform and vigour of convection

See also: Heat transfer § Convection vs. conduction

On Earth, the Rayleigh number for convection within Earth's mantle is estimated to be of order 10^7, which indicates vigorous convection. This value corresponds to whole-mantle convection (i.e., convection extending from the Earth's surface to the border with the core). On a global scale, the surface expression of this convection is the tectonic plate motions, which therefore have speeds of a few cm per year.[14][15][16] Speeds can be faster for small-scale convection occurring in low-viscosity regions beneath the lithosphere, and slower in the lowermost mantle, where viscosities are larger. A single shallow convection cycle takes on the order of 50 million years, though deeper convection can be closer to 200 million years.[17] Currently, whole-mantle convection is thought to include broad-scale downwelling beneath the Americas and the western Pacific, both regions with a long history of subduction, and upwelling flow beneath the central Pacific and Africa, both of which exhibit dynamic topography consistent with upwelling.[18] This broad-scale pattern of flow is also consistent with the tectonic plate motions, which are the surface expression of convection in the Earth's mantle and currently indicate degree-2 convergence toward the western Pacific and the Americas, and divergence away from the central Pacific and Africa.[19] The persistence of net tectonic divergence away from Africa and the Pacific for the past 250 Myr indicates the long-term stability of this general mantle flow pattern,[19] and is consistent with other studies[20][21][22] that suggest long-term stability of the LLSVP regions of the
lowermost mantle that form the base of these upwellings.

Creep in the mantle

Due to the varying temperatures and pressures between the lower and upper mantle, a variety of creep processes can occur, with dislocation creep dominating in the lower mantle and diffusional creep occasionally dominating in the upper mantle. However, there is a large transition region in creep processes between the upper and lower mantle, and even within each section creep properties can change strongly with location, and thus with temperature and pressure. In the power-law creep regions, the creep equation fitted to data with n = 3–4 is standard.[23] Since the upper mantle is primarily composed of olivine ((Mg,Fe)2SiO4), the rheological characteristics of the upper mantle are largely those of olivine. The strength of olivine not only scales with its melting temperature but is also very sensitive to water and silica content. The solidus depression by impurities (primarily Ca, Al, and Na) and by pressure affects creep behavior and thus contributes to the change in creep mechanisms with location. While creep behavior is generally plotted as homologous temperature versus stress, in the case of the mantle it is often more useful to look at the pressure dependence of stress. Though stress is simply force over area, defining the area is difficult in geology. Equation 1 demonstrates the pressure dependence of stress.
Since it is very difficult to simulate the high pressures in the mantle (1 MPa at 300–400 km), the low-pressure laboratory data are usually extrapolated to high pressures by applying creep concepts from metallurgy:[24]

\[ \left({\frac {\partial \ln \sigma }{\partial P}}\right)_{T,{\dot {\epsilon }}}=\left({\frac {1}{TT_{m}}}\right)\times \left({\frac {\partial \ln \sigma }{\partial (1/T)}}\right)_{P,{\dot {\epsilon }}}\times {\frac {dT_{m}}{dP}} \]

Most of the mantle has homologous temperatures of 0.65–0.75 and experiences strain rates of 10^{-14}–10^{-16} per second. Stresses in the mantle depend on density, gravity, thermal expansion coefficients, the temperature differences driving convection, and the distance over which convection occurs, all of which give stresses around a fraction of 3–30 MPa. Due to the large grain sizes (at low stresses as high as several mm), it is unlikely that Nabarro–Herring (NH) creep truly dominates; given the large grain sizes, dislocation creep tends to dominate instead. 14 MPa is the stress below which diffusional creep dominates and above which power-law creep dominates at 0.5 T_m of olivine. Thus, even for relatively low temperatures, the stress at which diffusional creep would operate is too low for realistic conditions. Though the power-law creep rate increases with increasing water content due to weakening (reducing the activation energy of diffusion and thus increasing the NH creep rate), NH creep is generally still not large enough to dominate. Nevertheless, diffusional creep can dominate in very cold or deep parts of the upper mantle. Additional deformation in the mantle can be attributed to transformation-enhanced ductility: below 400 km, olivine undergoes a pressure-induced phase transformation, which can cause more deformation due to the increased ductility.[24] Further evidence for the dominance of power-law creep comes from preferred lattice orientations as a result of deformation.
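The power-law (dislocation) creep relation referenced above, a strain rate proportional to the stress raised to the exponent n ≈ 3–4 times an Arrhenius factor, can be sketched numerically. All material constants below are illustrative placeholders chosen for this sketch, not fitted olivine values:

```python
import math

# Power-law (dislocation) creep: strain_rate = A * sigma^n * exp(-(E + P*V) / (R*T)).
# A, E, and V below are illustrative placeholders, not fitted olivine data.
A = 1.0e5        # pre-exponential factor, 1/(s * MPa^n)  (assumed)
n = 3.5          # stress exponent, within the n = 3-4 range quoted above
E = 530e3        # activation energy, J/mol               (assumed)
V = 14e-6        # activation volume, m^3/mol             (assumed)
R = 8.314        # gas constant, J/(mol K)

def strain_rate(sigma_mpa, temp_k, pressure_pa):
    """Power-law creep strain rate (1/s) for stress in MPa."""
    return A * sigma_mpa ** n * math.exp(-(E + pressure_pa * V) / (R * temp_k))

# Higher stress or temperature -> faster creep; higher pressure -> slower creep.
slow = strain_rate(3.0, 1600.0, 10e9)    # with these placeholders, ~1e-16 to 1e-15 1/s
fast = strain_rate(30.0, 1600.0, 10e9)   # 10x the stress -> 10^3.5 times faster
```

With these (assumed) constants the low-stress result happens to land near the 10^-14–10^-16 per second strain rates quoted in the text, which is the sanity check the sketch is meant to illustrate.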
Under dislocation creep, crystal structures reorient into lower-stress orientations. This does not happen under diffusional creep; thus the observation of preferred orientations in samples lends credence to the dominance of dislocation creep.[25]

Mantle convection in other celestial bodies

A similar process of slow convection probably occurs (or occurred) in the interiors of other planets (e.g., Venus, Mars) and some satellites (e.g., Io, Europa, Enceladus).

See also: Compatibility (geochemistry) - distribution of trace elements in melt

References
1. Kobes, Randy. "Mantle Convection". Physics Department, University of Winnipeg. Archived from the original on 9 June 2011. Retrieved 26 February 2020.
2. Ricard, Y. (2009). "Physics of Mantle Convection". In David Bercovici and Gerald Schubert (eds.), Treatise on Geophysics: Mantle Dynamics, Vol. 7. Elsevier Science. ISBN 9780444535801.
3. Schubert, Gerald; Turcotte, Donald Lawson; Olson, Peter (2001). "Chapter 2: Plate tectonics". Mantle Convection in the Earth and Planets. Cambridge University Press. pp. 16 ff. ISBN 978-0-521-79836-5.
4. Fukao, Yoshio; Obayashi, Masayuki; Nakakuki, Tomoeki; Deep Slab Project Group (2009). "Stagnant Slab: A Review". Annual Review of Earth and Planetary Sciences 37 (1): 19–46. doi:10.1146/annurev.earth.36.031207.124224.
5. Schubert, Gerald; Turcotte, Donald Lawson; Olson, Peter (2001). "§2.5.3: Fate of descending slabs". Cited work. pp. 35 ff.
6. Foulger, G.R. (2010). Plates vs. Plumes: A Geological Controversy. Wiley-Blackwell. ISBN 978-1-4051-6148-0.
7. Condie, Kent C. (1997). Plate Tectonics and Crustal Evolution (4th ed.). Butterworth-Heinemann. p. 5. ISBN 978-0-7506-3386-4.
8. Czechowski, L. (1993). "The Origin of Hotspots and the D″ Layer". In Geodesy and Physics of the Earth, pp. 392–395.
9. Moresi, Louis; Solomatov, Viatcheslav (1998). "Mantle convection with a brittle lithosphere: thoughts on the global tectonic styles of the Earth and Venus". Geophysical Journal International 133 (3): 669–682. doi:10.1046/j.1365-246X.1998.00521.x.
10. Matyska, Ctirad; Yuen, David A. (2007). "Lower-mantle material properties and convection models of multiscale plumes" (Figure 17). In Plates, Plumes, and Planetary Processes. Geological Society of America. p. 159. ISBN 978-0-8137-2430-0.
11. Turcotte, Donald Lawson; Schubert, Gerald (2002). Geodynamics (2nd ed.). Cambridge University Press. ISBN 978-0-521-66624-4.
12. Schubert, Gerald; Turcotte, Donald Lawson; Olson, Peter (2001). Cited work. p. 616.
13. Montelli, R.; Nolet, G.; Dahlen, F.A.; Masters, G.; Engdahl, E.R.; Hung, S.H. (2004). "Finite-frequency tomography reveals a variety of plumes in the mantle". Science 303 (5656): 338–343. doi:10.1126/science.1092485. PMID 14657505.
14. "Small-scale convection in the upper mantle beneath the Chinese Tian Shan Mountains". http://www.vlab.msi.umn.edu/reports/allpublications/files/2007-pap79.pdf (archived 2013-05-30 at the Wayback Machine).
15. "Polar Wandering and Mantle Convection". http://articles.adsabs.harvard.edu/cgi-bin/nph-iarticle_query?bibcode=1972IAUS...48..212T&db_key=AST&page_ind=0&data_type=GIF&type=SCREEN_VIEW&classic=YES
16. Picture showing convection with velocities indicated (archived copy, 2011-09-28; retrieved 2011-08-29).
17. "Thermal Convection with a Freely Moving Top Boundary", Section IV: Discussion and Conclusions. http://physics.nyu.edu/jz11/publications/ConvecA.pdf
18. Lithgow-Bertelloni, Carolina; Silver, Paul G. (1998). "Dynamic topography, plate driving forces and the African superswell". Nature 395 (6699): 269–272. doi:10.1038/26212.
19. Conrad, Clinton P.; Steinberger, Bernhard; Torsvik, Trond H. (2013). "Stability of active mantle upwelling revealed by net characteristics of plate tectonics". Nature 498 (7455): 479–482. doi:10.1038/nature12203. PMID 23803848.
20. Torsvik, Trond H.; Smethurst, Mark A.; Burke, Kevin; Steinberger, Bernhard (2006). "Large igneous provinces generated from the margins of the large low-velocity provinces in the deep mantle". Geophysical Journal International 167 (3): 1447–1460. doi:10.1111/j.1365-246x.2006.03158.x.
21. Torsvik, Trond H.; Steinberger, Bernhard; Ashwal, Lewis D.; Doubrovine, Pavel V.; Trønnes, Reidar G. (2016). "Earth evolution and dynamics—a tribute to Kevin Burke". Canadian Journal of Earth Sciences 53 (11): 1073–1087. doi:10.1139/cjes-2015-0228.
22. Dziewonski, Adam M.; Lekic, Vedran; Romanowicz, Barbara A. (2010). "Mantle Anchor Structure: An argument for bottom up tectonics". Earth and Planetary Science Letters 299 (1–2): 69–79. doi:10.1016/j.epsl.2010.08.013.
23. Weertman, J.; White, S.; Cook, Alan H. (1978). "Creep Laws for the Mantle of the Earth [and Discussion]". Philosophical Transactions of the Royal Society of London A 288 (1350): 9–26. doi:10.1098/rsta.1978.0003.
24. Borch, Robert S.; Green, Harry W. (1987). "Dependence of creep in olivine on homologous temperature and its implications for flow in the mantle". Nature 330 (6146): 345–348. doi:10.1038/330345a0.
25. Karato, Shun-ichiro; Wu, Patrick (1993). "Rheology of the Upper Mantle: A Synthesis". Science 260 (5109): 771–778. doi:10.1126/science.260.5109.771. PMID 17746109.
Definition 110.26.2 (tag 027Y) — The Stacks project, Section 110.26: Hilbert functions

Definition 110.26.2. A graded module $M$ over a ring $A$ is an $A$-module $M$ endowed with a direct sum decomposition $\bigoplus \nolimits _{n \in {\mathbf Z}} M_ n$ into $A$-submodules. We will say that $M$ is locally finite if all of the $M_ n$ are finite $A$-modules. Suppose that $A$ is a Noetherian ring and that $\varphi $ is a Euler-Poincaré function on finite $A$-modules. This means that for every finitely generated $A$-module $M$ we are given an integer $\varphi (M) \in {\mathbf Z}$ and for every short exact sequence \[ 0 \longrightarrow M' \longrightarrow M \longrightarrow M'' \longrightarrow 0 \] we have $\varphi (M) = \varphi (M') + \varphi (M'')$. The Hilbert function of a locally finite graded module $M$ (with respect to $\varphi $) is the function $\chi _\varphi (M, n) = \varphi (M_ n)$. We say that $M$ has a Hilbert polynomial if there is some numerical polynomial $P_\varphi $ such that $\chi _\varphi (M, n) = P_\varphi (n)$ for all sufficiently large integers $n$.

Comment #6663 by bryce on October 27, 2021 at 18:15: I think there is a typo: "we have $\varphi (M) = \varphi (M') + \varphi (M')$" instead of $\varphi (M) = \varphi (M') + \varphi (M'')$.
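As a concrete sanity check (my own example, not part of the Stacks text): take $A = k$ a field and $M = k[x, y]$ graded by total degree, with $\varphi = \dim _ k$, which is an Euler-Poincaré function. Then $\chi _\varphi (M, n)$ counts the monomials of total degree $n$, so the Hilbert polynomial is $P(n) = n + 1$:

```python
from itertools import combinations_with_replacement

def hilbert_function(num_vars, n):
    """dim_k of the degree-n graded piece of k[x_1, ..., x_num_vars]:
    the number of monomials of total degree n."""
    if n < 0:
        return 0
    return sum(1 for _ in combinations_with_replacement(range(num_vars), n))

# For k[x, y], the Hilbert function agrees with the polynomial P(n) = n + 1.
values = [hilbert_function(2, n) for n in range(6)]  # [1, 2, 3, 4, 5, 6]
```

For more variables the same count gives the binomial coefficient C(n + num_vars - 1, num_vars - 1), again a numerical polynomial in n.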
Power bandwidth - MATLAB powerbw - MathWorks France

Examples: 3-dB Bandwidth of Chirps; 3-dB Bandwidth of Sinusoids; Bandwidth of Bandlimited Signals

bw = powerbw(x)
bw = powerbw(x,fs)
bw = powerbw(pxx,f)
bw = powerbw(sxx,f,rbw)
bw = powerbw(___,freqlims,r)
[bw,flo,fhi,power] = powerbw(___)
powerbw(___)

bw = powerbw(x) returns the 3-dB (half-power) bandwidth, bw, of the input signal, x.

bw = powerbw(x,fs) returns the 3-dB bandwidth in terms of the sample rate, fs.

bw = powerbw(pxx,f) returns the 3-dB bandwidth of the power spectral density (PSD) estimate, pxx. The frequencies, f, correspond to the estimates in pxx.

bw = powerbw(sxx,f,rbw) computes the 3-dB bandwidth of the power spectrum estimate, sxx. The frequencies, f, correspond to the estimates in sxx. rbw is the resolution bandwidth used to integrate each power estimate.

bw = powerbw(___,freqlims,r) specifies the frequency interval over which to compute the reference level. This syntax can include any combination of input arguments from previous syntaxes, as long as the second input argument is either fs or f. If the second input is passed as empty, normalized frequency is assumed. freqlims must lie within the target band. If you also specify r, the function computes the difference in frequency between the points where the spectrum drops below the reference level by r dB or reaches an endpoint.

[bw,flo,fhi,power] = powerbw(___) also returns the lower and upper bounds of the power bandwidth and the power within those bounds.

powerbw(___) with no output arguments plots the PSD or power spectrum in the current figure window and annotates the bandwidth.

Generate 1024 samples of a chirp sampled at 1024 kHz. The chirp has an initial frequency of 50 kHz and reaches 100 kHz at the end of the sampling. Add white Gaussian noise such that the signal-to-noise ratio is 40 dB. Estimate the 3-dB bandwidth of the signal and annotate it on a plot of the power spectral density (PSD).
powerbw(x,Fs)

Concatenate the chirps to produce a two-channel signal. Estimate the 3-dB bandwidth of each channel.

y = powerbw([x x2],Fs)

Annotate the 3-dB bandwidths of the two channels on a plot of the PSDs.

powerbw([x x2],Fs);

Add the two channels to form a new signal. Plot the PSD and annotate the 3-dB bandwidth.

powerbw(x+x2,Fs)

Use the periodogram function to compute the power spectral density (PSD) of the signal. Specify a Kaiser window with the same length as the signal and a shape factor of 38. Estimate the 3-dB bandwidth of the signal and annotate it on a plot of the PSD.

powerbw(Pxx,f);

Generate another sinusoid, this one with a frequency of 257.321 kHz and an amplitude that is twice that of the first sinusoid. Add white Gaussian noise. Concatenate the sinusoids to produce a two-channel signal. Estimate the PSD of each channel and use the result to determine the 3-dB bandwidth.

y = powerbw(Pyy,f)

powerbw(Pyy,f);

Add the two channels to form a new signal. Estimate the PSD and annotate the 3-dB bandwidth.

powerbw(Pzz,f);

Generate a signal whose spectrum occupies the band from 0.25π to 0.45π rad/sample. Compute the 3-dB occupied bandwidth of the signal. Specify as a reference level the average power in the band between 0.2π and 0.6π rad/sample. Plot the PSD and annotate the bandwidth.

powerbw(d,[],[0.2 0.6]*pi,3);

Output the bandwidth, its lower and upper bounds, and the band power, specifying a sample rate of 2π so that the results are expressed in rad/sample.

[bw,flo,fhi,power] = powerbw(d,2*pi,[0.2 0.6]*pi);
fprintf('bw = %.3f*pi, flo = %.3f*pi, fhi = %.3f*pi \n', ...
    [bw flo fhi]/pi)

bw = 0.200*pi, flo = 0.250*pi, fhi = 0.450*pi

fprintf('power = %.1f%% of total',power/bandpower(d)*100)

power = 96.9% of total

Add a second channel whose spectrum occupies the band from 0.5π to 0.8π rad/sample. Compute the 6-dB bandwidth of the two-channel signal. Specify as a reference level the maximum power level of the spectrum.

powerbw(d,[],[],6);

Output the 6-dB bandwidth of each channel and the lower and upper bounds.
[bw,flo,fhi] = powerbw(d,[],[],6);
bds = [bw;flo;fhi];
fprintf('One: bw = %.3f*pi, flo = %.3f*pi, fhi = %.3f*pi \n',bds(:,1)/pi)

One: bw = 0.198*pi, flo = 0.252*pi, fhi = 0.450*pi

fprintf('Two: bw = %.3f*pi, flo = %.3f*pi, fhi = %.3f*pi \n',bds(:,2)/pi)

Two: bw = 0.294*pi, flo = 0.503*pi, fhi = 0.797*pi

x — Input signal
Input signal, specified as a vector or matrix. If x is a vector, it is treated as a single channel. If x is a matrix, then powerbw computes the power bandwidth independently for each column. x must be finite-valued.

pxx — PSD estimate
Power spectral density (PSD) estimate, specified as a vector or matrix. If pxx is a one-sided estimate, then it must correspond to a real signal. If pxx is a matrix, then powerbw computes the bandwidth of each column of pxx independently.

f — Frequencies
Frequencies, specified as a vector. If the first element of f is 0, then powerbw assumes that the spectrum is a one-sided spectrum of a real signal. In other words, the function doubles the power value in the zero-frequency bin as it seeks the 3-dB point.

sxx — Power spectrum estimate
Power spectrum estimate, specified as a vector or matrix. If sxx is a matrix, then powerbw computes the bandwidth of each column of sxx independently.

freqlims — Frequency limits
Frequency limits, specified as a two-element vector of real values. If you specify freqlims, then the reference level is the average power level in the reference band. If you do not specify freqlims, then the reference level is the maximum power level of the spectrum.

r — Power level drop
10 log10(2) (default) | positive real scalar
Power level drop, specified as a positive real scalar expressed in dB.

bw — Power bandwidth
Power bandwidth, returned as a scalar or vector. If you specify a sample rate, then bw has the same units as fs. If you do not specify a sample rate, then bw has units of rad/sample.

flo, fhi — Bandwidth frequency bounds
Bandwidth frequency bounds, returned as scalars.

power — Power stored in bandwidth
Power stored in bandwidth, returned as a scalar or vector.
To determine the 3-dB bandwidth, powerbw computes a periodogram power spectrum estimate using a rectangular window and takes the maximum of the estimate as a reference level. The bandwidth is the difference in frequency between the points where the spectrum drops at least 3 dB below the reference level. If the spectrum reaches one of its endpoints before dropping by 3 dB, then powerbw uses the endpoint to compute the difference.

See also: bandpower | obw | periodogram | plomb | pwelch
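The algorithm just described can be sketched in a few lines of NumPy. This is an illustrative reimplementation, not MathWorks code: it ignores periodogram scaling and does not interpolate between frequency bins, so it only approximates what powerbw reports:

```python
import numpy as np

def power_bandwidth(x, fs, drop_db=10 * np.log10(2)):
    """Rough sketch of the algorithm above: periodogram with a rectangular
    window, maximum taken as the reference level, bandwidth = frequency span
    of the bins within drop_db of that reference (no bin interpolation)."""
    n = len(x)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2                 # periodogram up to scaling
    above = psd >= psd.max() * 10 ** (-drop_db / 10)  # within drop_db of the peak
    idx = np.flatnonzero(above)
    flo, fhi = freqs[idx[0]], freqs[idx[-1]]
    return fhi - flo, flo, fhi

# Flat band of equal-amplitude, on-bin sinusoids from 100 Hz to 200 Hz:
fs, n = 1000.0, 2000
t = np.arange(n) / fs
x = sum(np.sin(2 * np.pi * f * t) for f in range(100, 201))
bw, flo, fhi = power_bandwidth(x, fs)   # ≈ (100.0, 100.0, 200.0)
```

Because every component sits exactly on a DFT bin with equal power, the sketch recovers the band edges exactly; for off-bin or noisy signals, the missing interpolation step is what powerbw's real implementation adds.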
Samuel J. Li — Asymptotic Expansion of the Error Function
Published May 5, 2021 | 4 minute read

This post documents the implementation of the error function \mathrm{erf}(z) used in my complex function plotter. For small z, I use the asymptotic series by Abramowitz & Stegun. For large z, I use a custom expansion around the 45° line. Define \omega \defeq e^{i \pi/4}, and parameterize the complex plane via z = r \omega + s \, i \omega. Consider the piecewise linear contour traveling from 0 to r \omega, and then from r \omega to r \omega + s \, i \omega. Using this contour, we compute

\begin{aligned} \frac{\sqrt{\pi}}{2} \, \mathrm{erf}(z) &= \int_0^z e^{-x^2} \dd{x} \\ &= \int_0^r e^{-(t \omega)^2} \omega \dd{t} + \int_0^s e^{-(r \omega + t i \omega)^2} i \omega \dd{t} \\ &= \omega \, [C(r) - i \, S(r)] + i \omega e^{-ir^2} \int_0^s e^{2rt + it^2} \dd{t}, \end{aligned}

where S(x) and C(x) are the Fresnel integrals. From Wikipedia, we have the asymptotic expansion

C(x) - i \, S(x) = \frac{\sqrt{\pi}}{2} \overline{\omega} - [1 + \O(x^{-4})] \, \frac{\cos{x^2} - i \sin{x^2}}{2x} \, \left[\frac{1}{2x^2} - i\right]

for large x. The second term is more annoying. For large a > 0, consider the integral

\begin{aligned} I_a(x) &\defeq e^{-ax} \int_{-\infty}^x e^{at} e^{it^2} \dd{t} \\ &= a \int_{-\infty}^x \int_x^\infty e^{a \, (t-s)} e^{it^2} \dd{s} \dd{t}. \end{aligned}

Substituting (u, v) \defeq (a \, (s-t), x-t) gives

\begin{aligned} I_a(x) &= \int_0^\infty e^{-u} \int_0^{u/a} e^{i \, (x-v)^2} \dd{v} \dd{u}. \end{aligned}

The inner integral can be evaluated as a sum of four Fresnel integrals, and is therefore bounded by a constant. Then for large a, it is fruitful to perform the Taylor expansion

\begin{aligned} \int_0^{u/a} e^{i \, (x-v)^2} \dd{v} &= e^{ix^2} \int_0^{u/a} e^{-2ixv + iv^2} \dd{v} \\ &= e^{ix^2} \int_0^{u/a} [1 - 2ixv + iv^2 - 2x^2v^2 + \O(v^3)] \dd{v} \\ &= e^{ix^2} \, \left[\frac{u}{a} - \frac{ixu^2}{a^2} + \frac{(i - 2x^2) \, u^3}{3a^3} + \O(a^{-4})\right].
\end{aligned}

Evaluating the standard gamma integrals, we thus obtain

\begin{aligned} I_a(x) &= e^{ix^2} \left[a^{-1} - 2ixa^{-2} + (2i - 4x^2) \, a^{-3} + \O(a^{-4})\right]. \end{aligned}

We were a bit handwavy with the justification for Fubini. Indeed, the above expansion works well for x > 0, but fails catastrophically for x < 0. From the defining integral for I_a(x), it is evident that small errors in the integral are exponentially amplified for negative x. Empirically, however, we find that the approximation

\begin{aligned} \int_0^x e^{at} e^{it^2} \dd{t} &= e^{ax} \, I_a(x) - I_a(0) \\ &\approx e^{ax+ix^2} \left[a^{-1} - 2ixa^{-2} + (2i - 4x^2) \, a^{-3}\right] - a^{-1} - 2i a^{-3} \end{aligned}

works very well, and could probably be derived rigorously with more effort. Combining all the above, we have

\begin{aligned} \mathrm{erf}(z) &= \frac{2}{\sqrt{\pi}} \omega \, [C(r) - i \, S(r)] + \frac{2}{\sqrt{\pi}} i \omega e^{-ir^2} [e^{2rs} I_{2r}(s) - I_{2r}(0)] \\ &\approx 1 - \frac{2}{\sqrt{\pi}} \, \omega \, \frac{\cos{r^2} - i \sin{r^2}}{2r} \left[\frac{1}{2r^2} - i\right] \\ &+ \frac{i \omega}{2 \sqrt{\pi}} e^{-ir^2} \left[e^{2rs+is^2} \left(2r^{-1} - 2isr^{-2} + (i - 2s^2) \, r^{-3} \right) - 2r^{-1} - i r^{-3}\right], \end{aligned}

which is the implementation I use in my complex function plotter. (I go up to fourth order in the app.)
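The empirical claim above, that the truncated expansion for \int_0^x e^{at} e^{it^2} \dd{t} works well for x > 0 and large a, can be spot-checked numerically. The check below (my own, pure Python) compares the approximation against direct composite-Simpson quadrature of the integral:

```python
import cmath

def lhs(a, x, n=4000):
    """Integral of e^{a t} e^{i t^2} dt from 0 to x, by composite Simpson (n even)."""
    h = x / n
    f = lambda t: cmath.exp(a * t + 1j * t * t)
    s = f(0.0) + f(x)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(k * h)
    return s * h / 3

def rhs(a, x):
    """Truncated asymptotic approximation derived in the post:
    e^{ax+ix^2} [a^-1 - 2ix a^-2 + (2i - 4x^2) a^-3] - a^-1 - 2i a^-3."""
    series = 1 / a - 2j * x / a ** 2 + (2j - 4 * x * x) / a ** 3
    return cmath.exp(a * x + 1j * x * x) * series - (1 / a + 2j / a ** 3)

a, x = 20.0, 0.5
err = abs(lhs(a, x) - rhs(a, x)) / abs(lhs(a, x))   # small relative error for large a
```

Doubling a should shrink the relative error by roughly a factor of a, since the first neglected term is O(a^{-4}).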
Section 86.33 (tag 0AQE): Maps out of affine formal schemes — The Stacks project

86.33 Maps out of affine formal schemes

We prove a few results that will be useful later. In the paper [Bhatt-Algebraize] the reader can find very general results of a similar nature.

Lemma 86.33.1. Let $S$ be a scheme. Let $A$ be a weakly admissible topological $S$-algebra. Let $X$ be an affine scheme over $S$. Then the natural map \[ \mathop{\mathrm{Mor}}\nolimits _ S(\mathop{\mathrm{Spec}}(A), X) \longrightarrow \mathop{\mathrm{Mor}}\nolimits _ S(\text{Spf}(A), X) \] is bijective.

Proof. If $X$ is affine, say $X = \mathop{\mathrm{Spec}}(B)$, then we see from Lemma 86.9.10 that morphisms $\text{Spf}(A) \to \mathop{\mathrm{Spec}}(B)$ correspond to continuous $S$-algebra maps $B \to A$ where $B$ has the discrete topology. These are just $S$-algebra maps, which correspond to morphisms $\mathop{\mathrm{Spec}}(A) \to \mathop{\mathrm{Spec}}(B)$. $\square$

Lemma 86.33.2. Let $S$ be a scheme. Let $A$ be a weakly admissible topological $S$-algebra such that $A/I$ is a local ring for some weak ideal of definition $I \subset A$. Let $X$ be a scheme over $S$. Then the natural map \[ \mathop{\mathrm{Mor}}\nolimits _ S(\mathop{\mathrm{Spec}}(A), X) \longrightarrow \mathop{\mathrm{Mor}}\nolimits _ S(\text{Spf}(A), X) \] is bijective.

Proof. Let $\varphi : \text{Spf}(A) \to X$ be a morphism. Since $\mathop{\mathrm{Spec}}(A/I)$ is local we see that $\varphi $ maps $\mathop{\mathrm{Spec}}(A/I)$ into an affine open $U \subset X$. However, this then implies that $\mathop{\mathrm{Spec}}(A/J)$ maps into $U$ for every ideal of definition $J$. Hence we may apply Lemma 86.33.1 to see that $\varphi $ comes from a morphism $\mathop{\mathrm{Spec}}(A) \to X$. This proves surjectivity of the map. We omit the proof of injectivity. $\square$

Lemma 86.33.3. Let $S$ be a scheme. Let $R$ be a complete local Noetherian $S$-algebra. Let $X$ be an algebraic space over $S$.
Then the natural map \[ \mathop{\mathrm{Mor}}\nolimits _ S(\mathop{\mathrm{Spec}}(R), X) \longrightarrow \mathop{\mathrm{Mor}}\nolimits _ S(\text{Spf}(R), X) \] is bijective.

Proof. Let $\mathfrak m$ be the maximal ideal of $R$. We have to show that \[ \mathop{\mathrm{Mor}}\nolimits _ S(\mathop{\mathrm{Spec}}(R), X) \longrightarrow \mathop{\mathrm{lim}}\nolimits \mathop{\mathrm{Mor}}\nolimits _ S(\mathop{\mathrm{Spec}}(R/\mathfrak m^ n), X) \] is bijective for $R$ as above. Injectivity: Let $x, x' : \mathop{\mathrm{Spec}}(R) \to X$ be two morphisms mapping to the same element in the right hand side. Consider the fibre product \[ T = \mathop{\mathrm{Spec}}(R) \times _{(x, x'), X \times _ S X, \Delta } X \] Then $T$ is a scheme and $T \to \mathop{\mathrm{Spec}}(R)$ is locally of finite type, a monomorphism, separated, and locally quasi-finite, see Morphisms of Spaces, Lemma 66.4.1. In particular $T$ is locally Noetherian, see Morphisms, Lemma 29.15.6. Let $t \in T$ be the unique point mapping to the closed point of $\mathop{\mathrm{Spec}}(R)$, which exists as $x$ and $x'$ agree over $R/\mathfrak m$. Then $R \to \mathcal{O}_{T, t}$ is a local ring map of Noetherian rings such that $R/\mathfrak m^ n \to \mathcal{O}_{T, t}/\mathfrak m^ n\mathcal{O}_{T, t}$ is an isomorphism for all $n$ (because $x$ and $x'$ agree over $\mathop{\mathrm{Spec}}(R/\mathfrak m^ n)$ for all $n$). Since $\mathcal{O}_{T, t}$ maps injectively into its completion (see Algebra, Lemma 10.51.4) we conclude that $R = \mathcal{O}_{T, t}$. Hence $x$ and $x'$ agree over $R$. Surjectivity: Let $(x_ n)$ be an element of the right hand side. Choose a scheme $U$ and a surjective étale morphism $U \to X$. Denote $x_0 : \mathop{\mathrm{Spec}}(k) \to X$ the morphism induced on the residue field $k = R/\mathfrak m$. The morphism of schemes $U \times _{X, x_0} \mathop{\mathrm{Spec}}(k) \to \mathop{\mathrm{Spec}}(k)$ is surjective étale. Thus $U \times _{X, x_0} \mathop{\mathrm{Spec}}(k)$ is a nonempty disjoint union of spectra of finite separable field extensions of $k$, see Morphisms, Lemma 29.36.7.
Hence we can find a finite separable field extension $k'/k$ and a $k'$-point $u_0 : \mathop{\mathrm{Spec}}(k') \to U$ such that \[ \xymatrix{ \mathop{\mathrm{Spec}}(k') \ar[d] \ar[r]_-{u_0} & U \ar[d] \\ \mathop{\mathrm{Spec}}(k) \ar[r]^-{x_0} & X } \] commutes. Let $R \subset R'$ be the finite étale extension of Noetherian complete local rings which induces $k'/k$ on residue fields (see Algebra, Lemmas 10.153.7 and 10.153.9). Denote $x'_ n$ the restriction of $x_ n$ to $\mathop{\mathrm{Spec}}(R'/\mathfrak m^ nR')$. By More on Morphisms of Spaces, Lemma 75.16.8 we can find an element $(u'_ n) \in \mathop{\mathrm{lim}}\nolimits \mathop{\mathrm{Mor}}\nolimits _ S(\mathop{\mathrm{Spec}}(R'/\mathfrak m^ nR'), U)$ mapping to $(x'_ n)$. By Lemma 86.33.2 the family $(u'_ n)$ comes from a unique morphism $u' : \mathop{\mathrm{Spec}}(R') \to U$. Denote $x' : \mathop{\mathrm{Spec}}(R') \to X$ the composition. Note that $R' \otimes _ R R'$ is a finite product of Noetherian complete local rings to which our current discussion applies. Hence the diagram \[ \xymatrix{ \mathop{\mathrm{Spec}}(R' \otimes _ R R') \ar[r] \ar[d] & \mathop{\mathrm{Spec}}(R') \ar[d]^{x'} \\ \mathop{\mathrm{Spec}}(R') \ar[r]^{x'} & X } \] is commutative by the injectivity shown above and the fact that $x'_ n$ is the restriction of $x_ n$ which is defined over $R/\mathfrak m^ n$. Since $\{ \mathop{\mathrm{Spec}}(R') \to \mathop{\mathrm{Spec}}(R)\} $ is an fppf covering we conclude that $x'$ descends to a morphism $x : \mathop{\mathrm{Spec}}(R) \to X$. We omit the proof that $x_ n$ is the restriction of $x$ to $\mathop{\mathrm{Spec}}(R/\mathfrak m^ n)$. $\square$ Lemma 86.33.4. Let $S$ be a scheme. Let $X$ be an algebraic space over $S$. Let $T \subset |X|$ be a closed subset such that $X \setminus T \to X$ is quasi-compact. Let $R$ be a complete local Noetherian $S$-algebra.
Then an adic morphism $p : \text{Spf}(R) \to X_{/T}$ corresponds to a unique morphism $g : \mathop{\mathrm{Spec}}(R) \to X$ such that $g^{-1}(T) = \{ \mathfrak m_ R\} $.

Proof. The statement makes sense because $X_{/T}$ is adic* by Lemma 86.20.8 (and hence we're allowed to use the terminology adic for morphisms, see Definition 86.23.2). Let $p$ be given. By Lemma 86.33.3 we get a unique morphism $g : \mathop{\mathrm{Spec}}(R) \to X$ corresponding to the composition $\text{Spf}(R) \to X_{/T} \to X$. Let $Z \subset X$ be the reduced induced closed subspace structure on $T$. The inclusion morphism $Z \to X$ corresponds to a morphism $Z \to X_{/T}$. Since $p$ is adic it is representable by algebraic spaces and we find \[ \text{Spf}(R) \times _{X_{/T}} Z = \text{Spf}(R) \times _ X Z \] is an algebraic space endowed with a closed immersion to $\text{Spf}(R)$. (Equality holds because $X_{/T} \to X$ is a monomorphism.) Thus this fibre product is equal to $\mathop{\mathrm{Spec}}(R/J)$ for some ideal $J \subset R$ which contains $\mathfrak m_ R^{n_0}$ for some $n_0 \geq 1$. This implies that $\mathop{\mathrm{Spec}}(R) \times _ X Z$ is a closed subscheme of $\mathop{\mathrm{Spec}}(R)$, say $\mathop{\mathrm{Spec}}(R) \times _ X Z = \mathop{\mathrm{Spec}}(R/I)$, whose intersection with $\mathop{\mathrm{Spec}}(R/\mathfrak m_ R^ n)$ for $n \geq n_0$ is equal to $\mathop{\mathrm{Spec}}(R/J)$. In algebraic terms this says $I + \mathfrak m_ R^ n = J + \mathfrak m_ R^ n = J$ for all $n \geq n_0$. By Krull's intersection theorem this implies $I = J$ and we conclude. $\square$

Comment #1058 by Matthieu Romagny on October 04, 2014 at 09:53: In the statement of Lemma 65.23.3 (0AQH), it would be better to write: Let $\text{Spec}(R)$ be an $S$-scheme, with $R$ a complete local Noetherian ring.

Yes, thank you! Fixed here.
f(x) = \left\{ \begin{array}{ll} 2x^2 - 4 & \text{for } x \leq 3 \\ -2x - 5 & \text{for } x > 3 \end{array} \right.

(a) \lim\limits_{x \rightarrow 3^+} f(x). Notice that the boundary point is x = 3. Which piece is to the right of that boundary point?

(b) \lim\limits_{x \rightarrow 3^-} f(x). Which piece of f(x) is to the left of the boundary point?

(c) What do your results above tell you about the continuity of f(x) at x = 3? Do not claim continuity (or discontinuity) without accounting for all three conditions.
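The two one-sided limits can be checked numerically by evaluating each piece just to the left and right of the boundary point (this check reveals the answers, so try the exercise by hand first):

```python
# Piecewise function from the exercise.
def f(x):
    return 2 * x**2 - 4 if x <= 3 else -2 * x - 5

# Approach the boundary point x = 3 from each side.
left = f(3 - 1e-9)    # left piece: approaches 2(3)^2 - 4 = 14
right = f(3 + 1e-9)   # right piece: approaches -2(3) - 5 = -11
```

Since the one-sided limits disagree (14 vs. -11), the two-sided limit does not exist, which settles the continuity question at x = 3.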
ERROR: type should be string, got "https://stats.libretexts.org/@app/auth/3/login?returnto=https%3A%2F%2Fstats.libretexts.org%2FBookshelves%2FIntroductory_Statistics%2FBook%253A_Statistical_Thinking_for_the_21st_Century_(Poldrack)%2F08%253A_Fitting_Models_to_Data%2F8.10%253A_Z-scores\nHaving characterized a distribution in terms of its central tendency and variability, it is often useful to express the individual scores in terms of where they sit with respect to the overall distribution. Let’s say that we are interested in characterizing the relative level of crimes across different states, in order to determine whether California is a particularly dangerous place. We can ask this question using data for 2014 from the FBI’s Uniform Crime Reporting site. The left panel of Figure 8.8 shows a histogram of the number of violent crimes per state, highlighting the value for California. Looking at these data, it seems like California is terribly dangerous, with 153709 crimes in that year.\nFigure 8.8: Left: Histogram of the number of violent crimes. The value for CA is plotted in blue. Right: A map of the same data, with number of crimes plotted for each state in color.\nWith R it’s also easy to generate a map showing the distribution of a variable across states, which is presented in the right panel of Figure 8.8.\nIt may have occurred to you, however, that CA also has the largest population of any state in the US, so it’s reasonable that it will also have a larger number of crimes. If we plot the two against one another (see left panel of Figure 8.9), we see that there is a direct relationship between population and the number of crimes.\nFigure 8.9: Left: A plot of number of crimes versus population by state. 
Right: A histogram of per capita crime rates, expressed as crimes per 100,000 people.
Instead of using the raw numbers of crimes, we should instead use the per-capita violent crime rate, which we obtain by dividing the number of crimes by the population of the state. The dataset from the FBI already includes this value (expressed as rate per 100,000 people). Looking at the right panel of Figure 8.9, we see that California is not so dangerous after all – its crime rate of 396.10 per 100,000 people is a bit above the mean across states of 346.81, but well within the range of many other states. But what if we want to get a clearer view of how far it is from the rest of the distribution?
The Z-score allows us to express data in a way that provides more insight into each data point’s relationship to the overall distribution. The formula to compute a Z-score for a data point $x$, given that we know the population mean $\mu$ and standard deviation $\sigma$, is:
$$Z = \frac{x - \mu}{\sigma}$$
Intuitively, you can think of a Z-score as telling you how far away from the mean any data point is, in units of standard deviation. We can compute this for the crime rate data, as shown in Figure 8.10.
## [1] "mean of Z-scored data: 1.4658413372004e-16"
## [1] "std deviation of Z-scored data: 1"
The scatterplot shows us that the process of Z-scoring doesn’t change the relative distribution of the data points (visible in the fact that the original data and Z-scored data fall on a straight line when plotted against each other) – it just shifts them to have a mean of zero and a standard deviation of one. However, if you look closely, you will see that the mean isn’t exactly zero – it’s just very small.
What is going on here is that the computer represents numbers with a certain amount of numerical precision - which means that there are numbers that are not exactly zero, but are small enough that R considers them to be zero.
Figure 8.11 shows the Z-scored crime data using the geographical view.
The “Z” in “Z-score” comes from the fact that the standard normal distribution (that is, a normal distribution with a mean of zero and a standard deviation of 1) is often referred to as the “Z” distribution. We can use the standard normal distribution to help us understand what specific Z scores tell us about where a data point sits with respect to the rest of the distribution.
The upper panel in Figure 8.12 shows that we expect about 16% of values to fall in $Z \ge 1$, and the same proportion to fall in $Z \le -1$.
Figure 8.13 shows the same plot for two standard deviations. Here we see that only about 2.3% of values fall in $Z \le -2$ and the same in $Z \ge 2$. Thus, if we know the Z-score for a particular data point, we can estimate how likely or unlikely we would be to find a value at least as extreme as that value, which lets us put values into better context.
One useful application of Z-scores is to compare distributions of different variables. Let’s say that we want to compare the distributions of violent crimes and property crimes across states. In the left panel of Figure 8.15 we plot those against one another, with CA plotted in blue. As you can see the raw rates of property crimes are far higher than the raw rates of violent crimes, so we can’t just compare the numbers directly. However, we can plot the Z-scores for these data against one another (right panel of Figure 8.15) – here again we see that the distribution of the data does not change.
Having put the data into Z-scores for each variable makes them comparable, and lets us see that California is actually right in the middle of the distribution in terms of both violent crime and property crime.\nBecause Z-scores are directly comparable, we can also compute a “Violence difference” score that expresses the relative rate of violent to non-violent (property) crimes across states. We can then plot those scores against population (see right panel of Figure 8.16). This shows how we can use Z-scores to bring different variables together on a common scale.\nIt is worth noting that the smallest states appear to have the largest differences in both directions. While it might be tempting to look at each state and try to determine why it has a high or low difference score, this probably reflects the fact that the estimates obtained from smaller samples are necessarily going to be more variable, as we will discuss in the later chapter on Sampling.\n8.10: Z-scores is shared under a not declared license and was authored, remixed, and/or curated by Russell A. Poldrack via source content that was edited to conform to the style and standards of the LibreTexts platform; a detailed edit history is available upon request."
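As an illustration of the ideas in this section, here is a short Python sketch (the chapter itself uses R); it z-scores a small made-up vector of per-capita rates and checks the standard-normal tail areas quoted above. The sample values are hypothetical, not the FBI data:

```python
import numpy as np
from math import erfc, sqrt

# Hypothetical per-capita rates (crimes per 100,000 people); the chapter
# itself uses 2014 FBI state-level data, not these made-up values.
rates = np.array([396.1, 346.8, 280.5, 420.3, 639.4, 121.2])

# Z = (x - mu) / sigma
z = (rates - rates.mean()) / rates.std()
print(z.mean(), z.std())   # mean ~0 (up to floating point), std dev 1

# Standard-normal upper-tail area, P(Z >= t) = erfc(t / sqrt(2)) / 2
def norm_sf(t):
    return erfc(t / sqrt(2)) / 2

print(round(norm_sf(1), 3))   # 0.159 -> ~16% beyond one SD in each tail
print(round(norm_sf(2), 3))   # 0.023 -> ~2.3% beyond two SDs in each tail
```

Note that, exactly as the chapter observes for R, the printed mean is a tiny floating-point number rather than an exact zero.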
|We learn a lot about the shape of a function's graph from its derivatives. When a first derivative is positive, the function is increasing (heading uphill). When the first derivative is negative, it is decreasing (heading downhill). Of particular interest is when the first derivative at a point is zero. If ''f'' '(''z'') = 0 at a point ''z'', and the sign of the first derivative splits around it (either ''f'' '(x) < 0 for ''x'' < ''z'' and ''f'' '(x) > 0 for ''x'' > ''z'', or ''f'' '(x) > 0 for ''x'' < ''z'' and ''f'' '(x) < 0 for ''x'' > ''z''), then the point (''z'', ''f''(''z'')) is a '''local minimum''' or '''maximum''', respectively. |If the second derivative is negative, then the first derivative is decreasing. This means we are turning right as we move in the direction of increasing ''x''. This is called "concave down". The inverted parabola ''y'' = -''x''<span style="font-size:85%"><sup>2</sup></span> is an example of a purely concave down graph. |A point ''z'' where the second derivative is zero, and the sign of the second derivative splits around it (either ''f'' "(x) < 0 for ''x'' < ''z'' and ''f'' "(x) > 0 for ''x'' > ''z'', or ''f'' "(x) > 0 for ''x'' < ''z'' and ''f'' "(x) < 0 for ''x'' > ''z''), gives an inflection point (''z'', ''f''(''z'')). |<br>Of course, there are tests we use to find local extrema (maxima and minima, which are the plurals of maximum and minimum).
We are assuming the function ''f'' is continuous and differentiable in an interval containing the point ''x''<span style="font-size:85%"><sub>0</sub></span>.

Worked example. For <math>h(x)={\frac {x^{3}}{3}}-2x^{2}-5x+{\frac {35}{3}}</math> we compute <math>h'(x)=x^{2}-4x-5=(x-5)(x+1)</math> and <math>h''(x)=2x-4.</math> Testing a point in each interval gives <math>h'(-10)=(-)(-)=(+),\quad h'(0)=(-)(+)=(-),\quad h'(10)=(+)(+)=(+).</math>

Sign chart for the first derivative:
:<math>x<-1</math>: <math>h'(x)=(+)</math>
:<math>x=-1</math>: <math>h'(x)=0</math>
:<math>-1<x<5</math>: <math>h'(x)=(-)</math>
:<math>x=5</math>: <math>h'(x)=0</math>
:<math>x>5</math>: <math>h'(x)=(+)</math>

(a) The function is increasing on <math>(-\infty,-1)</math> and <math>(5,\infty)</math>, and decreasing on <math>(-1,5)</math>.

(b) Since <math>h(-1)=-{\frac {1}{3}}-2+5+{\frac {35}{3}}=14\,{\frac {1}{3}}</math>, the point <math>\left(-1,14\,{\frac {1}{3}}\right)</math> is a local maximum; since <math>h(5)={\frac {125}{3}}-50-25+{\frac {35}{3}}={\frac {160}{3}}-75=-{\frac {65}{3}}</math>, the point <math>\left(5,-{\frac {65}{3}}\right)</math> is a local minimum.

Sign chart for the second derivative:
:<math>x<2</math>: <math>h''(x)=(-)</math>
:<math>x=2</math>: <math>h''(x)=0</math>
:<math>x>2</math>: <math>h''(x)=(+)</math>

(c) The function is concave downward on <math>(-\infty,2)</math> and concave upward on <math>(2,\infty)</math>.

(d) The function has an inflection point at <math>(2,h(2))=\left(2,-{\frac {11}{3}}\right).</math>
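The worked example above can be checked mechanically; here is a short SymPy sketch (not part of the original page) that recovers the critical points, the extreme values, and the inflection point:

```python
import sympy as sp

x = sp.symbols('x')
h = x**3/3 - 2*x**2 - 5*x + sp.Rational(35, 3)

h1 = sp.diff(h, x)       # first derivative: x**2 - 4*x - 5
h2 = sp.diff(h, x, 2)    # second derivative: 2*x - 4

crit = sp.solve(h1, x)   # critical points: -1 and 5
infl = sp.solve(h2, x)   # inflection candidate: 2

print(sp.factor(h1))                        # (x - 5)*(x + 1)
print([(c, h.subs(x, c)) for c in crit])    # function values at critical points
print(infl[0], h.subs(x, 2))                # inflection point (2, -11/3)
```

Evaluating h at -1 and 5 returns 43/3 (that is, 14 1/3) and -65/3, confirming the hand arithmetic.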
Differentiate the following functions. Determine if the function is differentiable for all reals. y =\operatorname{sin}(x-3) Compare the graph of y =\operatorname{sin}(x-3) with its parent y =\operatorname{sin}x . y =\operatorname{sin}(x-3) looks just like y =\operatorname{sin}x . Their periods are the same. Their amplitudes are the same. Their maximum and minimum y -values are the same. Their SLOPES are the same. The only difference is their horizontal locations. The slopes will shift with the graph: y =\operatorname{sin}x → y =\operatorname{sin}(x-3) , so y^\prime =\operatorname{cos}x → y^\prime =\operatorname{cos}(x - 3) . f ( x ) = \left\{ \begin{array} { c c } { 4 - x ^ { 2 } } & { \text { for } x < 1 } \\ { ( x - 1 ) ^ { 3 } + 3 } & { \text { for } x \geq 1 } \end{array} \right. The derivative will be a piecewise function as well. Differentiate each piece separately. Consider the boundary point of the derivative. Examine the two pieces at x=1 . Is the derivative continuous?
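A quick SymPy check (not part of the worksheet) of the piecewise problem above, comparing the one-sided slopes at the boundary x = 1:

```python
import sympy as sp

x = sp.symbols('x')
left = 4 - x**2           # piece for x < 1
right = (x - 1)**3 + 3    # piece for x >= 1

# The function itself is continuous at the boundary x = 1: both sides give 3
print(sp.limit(left, x, 1, dir='-'), right.subs(x, 1))

# Differentiate each piece and compare the one-sided slopes at x = 1
dleft = sp.diff(left, x)     # -2*x
dright = sp.diff(right, x)   # 3*(x - 1)**2
print(dleft.subs(x, 1), dright.subs(x, 1))   # -2 vs 0: a corner
```

The pieces meet (both equal 3 at x = 1), but the slopes disagree (-2 from the left, 0 from the right), so the derivative is not continuous there and f is not differentiable for all reals.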
EPG 640.1.2.1 applies to roadway design frequency only. For details regarding design frequency on bridges, see EPG 751.10.3 Bridge Deck Drainage – Slab Drains. For major roads, choose a storm frequency in the range of 10 to 50 years. For minor roads, choose a storm frequency in the range of 10 to 25 years. Typical Locations - The higher frequency (i.e., 10-year) criterion will apply to design of most inlets, including those inlets located at low points in grade. Critical Locations - The lower frequency (i.e., 50-year) should be used for design of critical low points, such as underpasses or high traffic volume "at grade" intersections. Non-Typical Locations – For locations with non-typical features such as superelevation transitions, entrances with slotted drains and non-standard shoulders, a frequency in between the high and low frequencies may be appropriate. Consideration is also given to AADT and importance of route when choosing a design storm frequency.

Design storm frequency:
:Major – Typical: 10-year
:Major – Non-Typical: 10-year to 50-year
:Major – Critical: 50-year
:Minor – Typical: 10-year

The duration of the design storm is set equal to the time of concentration of the drainage area or five minutes, whichever is greater. The peak rates of runoff produced by the design storm are computed by the Rational Method. EPG 640.1.2.2 applies to roadway gutter spread only. For details regarding gutter spread on bridges, see EPG 751.10.3 Bridge Deck Drainage – Slab Drains. Spread is defined as the width of gutter flow, measured laterally from the face of the curb. Allowable spread depends on the classification of the roadway and importance of route to the community as determined by the Core Team. Gutter spreads shown below are maximum widths; other design factors including ADA requirements should be considered when determining the allowable gutter spread.
For all interstates, gutter spread shall not encroach upon the traveled lanes. Gutter spread shall be limited to the shoulder and shall not exceed 12 ft. during the design runoff event. For all major roads with a posted speed of 45 mph or greater, gutter spread shall not encroach upon the traveled lanes. Gutter spread shall be limited to the shoulder and/or parking lane and shall not exceed 12 ft. during the design runoff event. For major roads with a posted speed of less than 45 mph: a) For 2-lane facilities or when the cross slope only extends across 1 lane, gutter spread may encroach up to 3 ft. into the driving lane. b) On multilane facilities, when 2 or more lanes in one direction have their cross slopes running in a single direction, the gutter spread may encroach up to ½ of the lane nearest the gutter section for those lanes. For minor roads with a posted speed of 45 mph or greater, gutter spread may encroach up to 3 ft. into the driving lane on 2-lane facilities and up to ½ of the lane nearest the gutter on multilane facilities. For minor roads with a posted speed of less than 45 mph, a gutter spread of up to ½ lane may be used. For one-lane facilities, such as ramps, a minimum of 8 ft. of the traveled lane shall remain unflooded.

Design Spread Guide:
:Interstate – All: up to the shoulder width, with a 12 ft. max.
:Major – ≥ 45 mph: up to the shoulder and/or parking lane width, with a 12 ft. max.
:Major – 2-lane roads < 45 mph: shoulder + 3 ft.
:Major – *Multilane roads < 45 mph: shoulder + ½ lane
:Minor – 2-lane roads ≥ 45 mph: shoulder + 3 ft.
:Minor – *Multilane roads ≥ 45 mph: shoulder + ½ lane
:Minor – < 45 mph: shoulder + ½ lane
:Single lane – All: minimum of 8 ft. unflooded
:* 2 or more lanes in one direction that have their cross slope in the same direction.

Water flowing across an intersection may cause a certain amount of danger to cross traffic and should be limited. Therefore, inlets placed at the upstream side of intersections are sized so that no more than 2.0 ft3/s (0.05 m3/s) is allowed to flow into the intersection during the design event.
{\displaystyle Q={\frac {K_{1}}{nS_{x}}}S^{\frac {1}{2}}d^{\frac {8}{3}}={\frac {K_{1}}{n}}S^{\frac {1}{2}}S_{x}^{\frac {5}{3}}T^{\frac {8}{3}}} {\displaystyle d=1.24\left({\frac {QnS_{x}}{S^{\frac {1}{2}}}}\right)^{0.375}} {\displaystyle T={\frac {d}{S_{x}}}\,} {\displaystyle V={\frac {Q}{A}}\,} {\displaystyle A={\frac {1}{2}}dT={\frac {1}{2}}{\frac {d^{2}}{S_{x}}}} {\displaystyle Q=Q_{ABE}-Q_{DCE}+Q_{DCF}\,} {\displaystyle Q_{ABE}={\frac {K_{1}}{n{\frac {a}{W}}}}S^{\frac {1}{2}}d^{\frac {8}{3}}} {\displaystyle Q_{DCE}={\frac {K_{1}}{n{\frac {a}{W}}}}S^{\frac {1}{2}}(d-a)^{\frac {8}{3}}} {\displaystyle Q_{DCF}={\frac {K_{1}}{nS_{x}}}S^{\frac {1}{2}}(d-a)^{\frac {8}{3}}} {\displaystyle T=W+{\frac {(d-a)}{S_{x}}}} {\displaystyle A=\left(d-{\frac {a}{2}}\right)W+{\frac {1}{2}}(d-a)(T-W)=\left(d-{\frac {a}{2}}\right)W+{\frac {(d-a)^{2}}{2S_{x}}}} Standard curved vane grate inlets are 2 ft. (610 mm) long (measured in the direction of flow), and either 2 or 4 ft. (610 or 1219 mm) wide (measured perpendicular to the direction of flow). Only 2 ft. (610 mm) wide inlets may be used in curb and gutter sections; whereas 2 or 4 ft. widths may be used in conjunction with integral curb (triangular) sections. Two types of inlet grates are shown in the standard plans. The curved vane grate is used in roadway and shoulder applications. The parallel bar grate may be used only in areas outside the roadway and shoulders, such as in grassy medians or other unpaved areas. See Standard Plan details of inlet grates (Std. Plans 614.10 and 614.11) and gutter cross sections (Std. Plan 609.00). Curb opening inlets consist of a longitudinal opening located in the face of the curb. Details of the curb opening inlets may be found in Std. Plan 731.10. The Type T inlet has a minimum length of 2.5 ft. (750 mm) and may be increased in length in 2.5 ft. (750 mm) increments as needed. The Type T inlet has a local depression of 3 in. (75 mm) below the normal gutter flow line.
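The triangular-gutter relations above can be exercised numerically. The sketch below assumes English units with K1 = 0.56 (the value consistent with the 1.24 constant in the depth equation, since 0.56^(-3/8) ≈ 1.24); the example values of n, the slopes, and the spread are illustrative only:

```python
# Triangular-gutter flow equations from the text, assuming English units
# with K1 = 0.56 (Q in cfs, d and T in ft); n is Manning's roughness,
# S the longitudinal slope, Sx the cross slope.
K1 = 0.56

def gutter_capacity(n, S, Sx, T):
    """Flow carried at spread T: Q = (K1/n) * S**0.5 * Sx**(5/3) * T**(8/3)."""
    return (K1 / n) * S**0.5 * Sx**(5.0 / 3.0) * T**(8.0 / 3.0)

def gutter_depth(Q, n, S, Sx):
    """Depth at the curb face: d = 1.24 * (Q*n*Sx / S**0.5)**0.375."""
    return 1.24 * (Q * n * Sx / S**0.5) ** 0.375

def gutter_spread(Q, n, S, Sx):
    """Spread T = d / Sx."""
    return gutter_depth(Q, n, S, Sx) / Sx

# Example: n = 0.016, 1% grade, 2% cross slope, 10 ft spread
Q = gutter_capacity(0.016, 0.01, 0.02, 10.0)
# Inverting for the spread should recover ~10 ft
print(round(Q, 2), round(gutter_spread(Q, 0.016, 0.01, 0.02), 1))
```

The capacity and depth equations are inverses of one another, so the round trip recovers the assumed spread, which is a convenient sanity check on the units.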
Curb opening inlets are recommended for use at all low points. In locations where heavy debris is expected, consideration should be given to doubling the calculated length of curb opening inlet to provide a safety factor against clogging. {\displaystyle R_{f}=1-C(V-V_{0})\,} {\displaystyle R_{s}=\left(1+{\frac {CV^{1.8}}{S_{x}L^{2.3}}}\right)^{-1}} {\displaystyle Q_{i}=R_{f}Q_{f}+R_{s}Q_{s}\,} {\displaystyle Q_{i}=C_{w}pd^{1.5}\,} {\displaystyle p=2w+l\,} {\displaystyle p=2(w+l)\,} {\displaystyle Q_{i}=C_{o}A(2gd)^{0.5}\,} {\displaystyle L_{T}=KQ^{0.42}S^{0.3}\left({\frac {1}{nS_{x}}}\right)^{0.6}} {\displaystyle E=1-\left(1-{\frac {L}{L_{T}}}\right)^{1.8}} {\displaystyle Q_{i}=EQ\,} {\displaystyle S_{e}=S_{x}+S_{w}^{'}E_{o}} {\displaystyle Q_{i}=C_{w}(L+1.8W)d^{1.5}\,} {\displaystyle Q_{i}=C_{o}hL\left[2g\left(d_{i}-{\frac {h}{2}}\right)\right]^{0.5}}
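Similarly, the curb-opening inlet relations quoted in this section can be sketched in code. This assumes English units with K = 0.6 in the L_T equation; the flow and geometry values in the example are made up for illustration:

```python
# Curb-opening inlet equations from the text, assuming English units
# with K = 0.6 (Q in cfs, lengths in ft).
K = 0.6

def curb_opening_length(Q, S, n, Sx):
    """Length L_T needed to intercept 100% of the gutter flow:
    L_T = K * Q**0.42 * S**0.3 * (1/(n*Sx))**0.6"""
    return K * Q**0.42 * S**0.3 * (1.0 / (n * Sx)) ** 0.6

def interception_efficiency(L, LT):
    """E = 1 - (1 - L/L_T)**1.8 for an inlet shorter than L_T."""
    if L >= LT:
        return 1.0
    return 1.0 - (1.0 - L / LT) ** 1.8

# Example: 4 cfs gutter flow, 1% grade, n = 0.016, 2% cross slope
LT = curb_opening_length(4.0, 0.01, 0.016, 0.02)
E = interception_efficiency(5.0, LT)   # a 5 ft (two-increment) Type T opening
Qi = E * 4.0                           # intercepted flow, Q_i = E * Q
print(round(LT, 1), round(E, 2), round(Qi, 2))
```

Note how quickly efficiency drops when the installed length L is a small fraction of L_T; this is the quantitative reason behind the advice above to lengthen curb openings where clogging is a concern.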
facint - Maple Help: test for factored integer form type(expr, facint) This function will return true if expr is an expression of the form returned by the function ifactor, and false otherwise. The integers 0, 1, and -1 are considered to be of type facint, but all other integers and rationals must be passed through ifactor before type/facint will return true when applied to them. a≔\mathrm{ifactor}⁡\left(2520\right) \textcolor[rgb]{0,0,1}{a}\textcolor[rgb]{0,0,1}{≔}{\left(\textcolor[rgb]{0,0,1}{2}\right)}^{\textcolor[rgb]{0,0,1}{3}}\textcolor[rgb]{0,0,1}{⁢}{\left(\textcolor[rgb]{0,0,1}{3}\right)}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{⁢}\left(\textcolor[rgb]{0,0,1}{5}\right)\textcolor[rgb]{0,0,1}{⁢}\left(\textcolor[rgb]{0,0,1}{7}\right) \mathrm{type}⁡\left(a,\mathrm{facint}\right) \textcolor[rgb]{0,0,1}{\mathrm{true}} b≔\mathrm{ifactor}⁡\left(\frac{81}{8}\right) \textcolor[rgb]{0,0,1}{b}\textcolor[rgb]{0,0,1}{≔}\frac{{\left(\textcolor[rgb]{0,0,1}{3}\right)}^{\textcolor[rgb]{0,0,1}{4}}}{{\left(\textcolor[rgb]{0,0,1}{2}\right)}^{\textcolor[rgb]{0,0,1}{3}}} \mathrm{type}⁡\left(b,\mathrm{facint}\right) \textcolor[rgb]{0,0,1}{\mathrm{true}} \mathrm{type}⁡\left(1,\mathrm{facint}\right) \textcolor[rgb]{0,0,1}{\mathrm{true}} \mathrm{type}⁡\left(5,\mathrm{facint}\right) \textcolor[rgb]{0,0,1}{\mathrm{false}} \mathrm{type}⁡\left(\mathrm{ifactor}⁡\left(5\right),\mathrm{facint}\right) \textcolor[rgb]{0,0,1}{\mathrm{true}}
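For readers without Maple, SymPy's factorint offers a rough Python analogue of ifactor (it returns a {prime: exponent} dict rather than an inert product, so there is no direct counterpart of the type/facint check):

```python
from sympy import Rational, factorint

# factorint plays the role of Maple's ifactor, returning {prime: exponent}
print(factorint(2520))   # 2520 = 2**3 * 3**2 * 5 * 7

# For a rational like 81/8, factor the numerator and denominator separately
q = Rational(81, 8)
print(factorint(q.p), factorint(q.q))   # numerator 81 = 3**4, denominator 8 = 2**3
```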
Prediction of Flutter of Turbine Blades in a Transonic Annular Cascade | J. Fluids Eng. McBean, I., Hourigan, K., Thompson, M., and Liu, F. (May 29, 2005). "Prediction of Flutter of Turbine Blades in a Transonic Annular Cascade." ASME. J. Fluids Eng. November 2005; 127(6): 1053–1058. https://doi.org/10.1115/1.2060731 A parallel multiblock Navier-Stokes solver with the k‐ω turbulence model is used to solve the unsteady flow through an annular turbine cascade, the transonic Standard Test Case 4, Test 628. Computations are performed on a two- and three-dimensional model of the blade row with either the Euler or the Navier-Stokes flow models. Results are compared to the experimental measurements. Comparisons of the unsteady surface pressure and the aerodynamic damping are made between the three-dimensional, two-dimensional, inviscid, viscous simulations, and experimental data. Differences are found between the stability predictions by the two- and three-dimensional computations, and the Euler and Navier-Stokes computations due to three-dimensionality of the cascade model and the presence of a boundary layer separation, respectively. turbines, blades, transonic flow, Navier-Stokes equations, boundary layer turbulence, flow instability, confined flow, aerodynamics, damping, flow simulation, flow separation Blades, Cascades (Fluid dynamics), Damping, Flow (Dynamics), Pressure, Simulation, Flutter (Aerodynamics), Turbine blades, Navier-Stokes equations, Shock (Mechanics), Suction, Separation (Technology)
Section 10.20 (07RC): Nakayama's lemma — The Stacks project

10.20 Nakayama's lemma

We quote from [MatCA]: "This simple but important lemma is due to T. Nakayama, G. Azumaya and W. Krull. Priority is obscure, and although it is usually called the Lemma of Nakayama, late Prof. Nakayama did not like the name."

Lemma 10.20.1 (Nakayama's lemma). Let $R$ be a ring with Jacobson radical $\text{rad}(R)$. Let $M$ be an $R$-module. Let $I \subset R$ be an ideal.

(1) If $IM = M$ and $M$ is finite, then there exists an $f \in 1 + I$ such that $fM = 0$.
(2) If $IM = M$, $M$ is finite, and $I \subset \text{rad}(R)$, then $M = 0$.
(3) If $N, N' \subset M$, $M = N + IN'$, and $N'$ is finite, then there exists an $f \in 1 + I$ such that $fM \subset N$ and $M_ f = N_ f$.
(4) If $N, N' \subset M$, $M = N + IN'$, $N'$ is finite, and $I \subset \text{rad}(R)$, then $M = N$.
(5) If $N \to M$ is a module map, $N/IN \to M/IM$ is surjective, and $M$ is finite, then there exists an $f \in 1 + I$ such that $N_ f \to M_ f$ is surjective.
(6) If $N \to M$ is a module map, $N/IN \to M/IM$ is surjective, $M$ is finite, and $I \subset \text{rad}(R)$, then $N \to M$ is surjective.
(7) If $x_1, \ldots , x_ n \in M$ generate $M/IM$ and $M$ is finite, then there exists an $f \in 1 + I$ such that $x_1, \ldots , x_ n$ generate $M_ f$ over $R_ f$.
(8) If $x_1, \ldots , x_ n \in M$ generate $M/IM$, $M$ is finite, and $I \subset \text{rad}(R)$, then $M$ is generated by $x_1, \ldots , x_ n$.
(9) If $IM = M$ and $I$ is nilpotent, then $M = 0$.
(10) If $N, N' \subset M$, $M = N + IN'$, and $I$ is nilpotent, then $M = N$.
(11) If $N \to M$ is a module map, $I$ is nilpotent, and $N/IN \to M/IM$ is surjective, then $N \to M$ is surjective.
(12) If $\{ x_\alpha \} _{\alpha \in A}$ is a set of elements of $M$ which generate $M/IM$ and $I$ is nilpotent, then $M$ is generated by the $x_\alpha $.

Proof. Proof of (1). Choose generators $y_1, \ldots , y_ m$ of $M$ over $R$.
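A standard illustration (not part of the Stacks text) of why the finiteness hypothesis cannot be dropped:

```latex
\textbf{Example.} Take the local ring $R = \mathbf{Z}_{(p)}$ with maximal
ideal $\mathfrak m = p\mathbf{Z}_{(p)}$, and let $M = \mathbf{Q}$. Every
rational number is $p$ times another, so
\[
\mathfrak m M = pM = M ,
\]
yet $M \neq 0$. There is no contradiction with the lemma: $\mathbf{Q}$ is
not a finite $\mathbf{Z}_{(p)}$-module, and indeed every
$f \in 1 + \mathfrak m$ is a nonzero rational number, hence acts
invertibly on $\mathbf{Q}$ rather than annihilating it.
```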
For each $i$ we can write $y_ i = \sum z_{ij} y_ j$ with $z_{ij} \in I$ (since $M = IM$). In other words $\sum _ j (\delta _{ij} - z_{ij})y_ j = 0$. Let $f$ be the determinant of the $m \times m$ matrix $A = (\delta _{ij} - z_{ij})$. Note that $f \in 1 + I$ (since the matrix $A$ is entrywise congruent to the $m \times m$ identity matrix modulo $I$). By Lemma 10.15.5 (1), there exists an $m \times m$ matrix $B$ such that $BA = f 1_{m \times m}$. Writing out we see that $\sum _{i} b_{hi} a_{ij} = f \delta _{hj}$ for all $h$ and $j$; hence, $\sum _{i, j} b_{hi} a_{ij} y_ j = \sum _{j} f \delta _{hj} y_ j = f y_ h$ for every $h$. In other words, $0 = f y_ h$ for every $h$ (since each $i$ satisfies $\sum _ j a_{ij} y_ j = 0$). This implies that $f$ annihilates $M$. By Lemma 10.19.1 an element of $1 + \text{rad}(R)$ is an invertible element of $R$. Hence we see that (1) implies (2). We obtain (3) by applying (1) to $M/N$ which is finite as $N'$ is finite. We obtain (4) by applying (2) to $M/N$ which is finite as $N'$ is finite. We obtain (5) by applying (3) to $M$ and the submodules $\mathop{\mathrm{Im}}(N \to M)$ and $M$. We obtain (6) by applying (4) to $M$ and the submodules $\mathop{\mathrm{Im}}(N \to M)$ and $M$. We obtain (7) by applying (5) to the map $R^{\oplus n} \to M$, $(a_1, \ldots , a_ n) \mapsto a_1x_1 + \ldots + a_ nx_ n$. We obtain (8) by applying (6) to the map $R^{\oplus n} \to M$, $(a_1, \ldots , a_ n) \mapsto a_1x_1 + \ldots + a_ nx_ n$. Part (9) holds because if $M = IM$ then $M = I^ nM$ for all $n \geq 0$ and $I$ being nilpotent means $I^ n = 0$ for some $n \gg 0$. Parts (10), (11), and (12) follow from (9) by the arguments used above. $\square$ Lemma 10.20.2. Let $R$ be a ring, let $S \subset R$ be a multiplicative subset, let $I \subset R$ be an ideal, and let $M$ be a finite $R$-module.
If $x_1, \ldots , x_ r \in M$ generate $S^{-1}(M/IM)$ as an $S^{-1}(R/I)$-module, then there exists an $f \in S + I$ such that $x_1, \ldots , x_ r$ generate $M_ f$ as an $R_ f$-module.[1] Proof. Special case $I = 0$. Let $y_1, \ldots , y_ s$ be generators for $M$ over $R$. Since $S^{-1}M$ is generated by $x_1, \ldots , x_ r$, for each $i$ we can write $y_ i = \sum (a_{ij}/s_{ij})x_ j$ for some $a_{ij} \in R$ and $s_{ij} \in S$. Let $s \in S$ be the product of all of the $s_{ij}$. Then we see that $y_ i$ is contained in the $R_ s$-submodule of $M_ s$ generated by $x_1, \ldots , x_ r$. Hence $x_1, \ldots , x_ r$ generates $M_ s$. General case. By the special case, we can find an $s \in S$ such that $x_1, \ldots , x_ r$ generate $(M/IM)_ s$ over $(R/I)_ s$. By Lemma 10.20.1 we can find a $g \in 1 + I_ s \subset R_ s$ such that $x_1, \ldots , x_ r$ generate $(M_ s)_ g$ over $(R_ s)_ g$. Write $g = 1 + i/s'$. Then $f = ss' + is$ works; details omitted. $\square$ Lemma 10.20.3. Let $A \to B$ be a local homomorphism of local rings. Assume: $B$ is finite as an $A$-module, $\mathfrak m_ B$ is a finitely generated ideal, $A \to B$ induces an isomorphism on residue fields, and $\mathfrak m_ A/\mathfrak m_ A^2 \to \mathfrak m_ B/\mathfrak m_ B^2$ is surjective. Then $A \to B$ is surjective. Proof. To show that $A \to B$ is surjective, we view it as a map of $A$-modules and apply Lemma 10.20.1 (6). We conclude it suffices to show that $A/\mathfrak m_ A \to B/\mathfrak m_ AB$ is surjective. As $A/\mathfrak m_ A = B/\mathfrak m_ B$ it suffices to show that $\mathfrak m_ AB \to \mathfrak m_ B$ is surjective. View $\mathfrak m_ AB \to \mathfrak m_ B$ as a map of $B$-modules and apply Lemma 10.20.1 (6). We conclude it suffices to see that $\mathfrak m_ AB/\mathfrak m_ A\mathfrak m_ B \to \mathfrak m_ B/\mathfrak m_ B^2$ is surjective. This follows from assumption (4). $\square$ [1] Special cases: (I) $I = 0$. The lemma says if $x_1, \ldots , x_ r$ generate $S^{-1}M$, then $x_1, \ldots , x_ r$ generate $M_ f$ for some $f \in S$.
(II) $I = \mathfrak p$ is a prime ideal and $S = R \setminus \mathfrak p$. The lemma says if $x_1, \ldots , x_ r$ generate $M \otimes _ R \kappa (\mathfrak p)$ then $x_1, \ldots , x_ r$ generate $M_ f$ for some $f \in R$, $f \not\in \mathfrak p$. Comment #3850 by Lucy on December 26, 2018 at 23:55 In the proof of Nakayama's Lemma, lemma 10.16.2 is referred to, but it seems that lemma 10.18.1 is a better choice. Comment #4236 by Aolong on June 03, 2019 at 00:37 In statements (7) and (8) of Nakayama's lemma, it would be better to point out that $x_1, \ldots, x_n$ generate $M/IM$ as an $R$-module, since $M/IM$ is naturally an $R/I$-module as well. (Whether they generate $M/IM$ as an $R$-module or as an $R/I$-module is the same thing.) Comment #4580 by Xavier on October 01, 2019 at 12:38 There is a somewhat shorter proof of Lemma 10.19.1 (1): applying Lemma 10.15.3 to the map $\mathrm{id} : M \to M$, we get a polynomial $p$ with $p(1) \in 1 + I$ such that $p(\mathrm{id}) = p(1) \cdot \mathrm{id}$ is the zero map on $M$; then $f = p(1)$ annihilates $M$. @#4580. Yes, this is fine but it isn't really shorter as it uses Cayley-Hamilton.
Creating Quizzes for Descriptive Statistics - Maple Help Student Statistics: Creating Quizzes for Descriptive Statistics Using the Quiz command from the Grading package, it is possible to generate interactive quizzes that ask students to answer questions on topics in Statistics, such as finding the mean, median or standard deviation of a sample of data, or specifying the interquartile range directly from the plot of a sample. Defining a randomly generated data sample To begin with, declare a procedure that generates a random sample of data:

InitializeData := proc()
    local data;
    randomize();
    data := Vector[row](5, rand(1 .. 10));
    Grading:-Quiz:-Set('data' = data);
end proc:

Two key commands are used to randomly generate the data: randomize, which resets the seed for random number generation, and the rand(1 .. 10) constructor, which returns a random number generator, generating numbers between 1 and 10 in this case. In order to use this data sample in a quiz, use the Grading[Quiz][Set] command to send the generated data sample to the quiz creation module. Once this procedure is created, it can be called to return a randomly generated sample: \left[\begin{array}{ccccc}\textcolor[rgb]{0,0,1}{10}& \textcolor[rgb]{0,0,1}{4}& \textcolor[rgb]{0,0,1}{2}& \textcolor[rgb]{0,0,1}{9}& \textcolor[rgb]{0,0,1}{9}\end{array}\right] Quiz: Calculate the Mean Creating the grading procedure:

GradeMean := proc()
    uses Grading:-Quiz;
    evalb( Student:-Statistics:-Mean( Get('data') ) = Get(`$RESPONSE`) );
end proc:

In this procedure, the known mean of the sample, computed with the Student[Statistics][Mean] command, is compared with the response entered by the student. The response is retrieved using the Grading[Quiz][Get] command.
Use the Quiz command to generate the quiz by first specifying the question, then the grading procedure, and finally any initializing procedures used to generate the data: Grading:-Quiz( "Find the mean of this sample: $data", GradeMean, InitializeData ); Find the mean of this sample: By entering a value in the above text box and clicking 'Check Answer', students will receive a notification on whether or not their response is correct. It is also important to note that all of the created quizzes can be regenerated using the 'Try Another' button in the created quiz. Quiz: Calculate the Median Similar to the example for finding the Mean, since a procedure for initializing the data has already been created, start by creating the grading procedure for comparing the known Median value with the entered response. The evalb command is used to determine if these two match, returning either a true or false answer.

GradeMedian := proc()
    uses Grading:-Quiz;
    evalb( Student:-Statistics:-Median( Get('data') ) = Get(`$RESPONSE`) );
end proc:

Grading:-Quiz( "Find the median of this sample: $data", GradeMedian, InitializeData ); Find the median of this sample: Quiz: Calculate the Standard Deviation

GradeStandardDeviation := proc()
    uses Grading:-Quiz;
    evalb( Student:-Statistics:-StandardDeviation( Get('data') ) = Get(`$RESPONSE`) );
end proc:

Grading:-Quiz( "Find the standard deviation of this sample: $data", GradeStandardDeviation, InitializeData ); Find the standard deviation of this sample: Quiz: Find the Interquartile Range The Quiz command can also be used to create questions using plots. In the following example, a new initialization procedure is created that generates a random sample of data and, in addition, a plot of this data. Both of these are set to variable names that are then sent to the quiz module using the Quiz[Set] command.
InitPlot := proc()
    local data, qplot;
    randomize();
    data := Vector[row](5, rand(1 .. 10));
    qplot := Student:-Statistics:-InterquartileRange(data, output = plot);
    Grading:-Quiz:-Set(`$data` = data);
    Grading:-Quiz:-Set(`$plot` = qplot);
end proc:

Similar to the examples above, the grading procedure for the plot question compares a response with the calculated value from routines in Student[Statistics]:

GradeIQR := proc()
    uses Grading:-Quiz;
    evalb( Student:-Statistics:-InterquartileRange( Get(`$data`) ) = Get(`$RESPONSE`) );
end proc:

In this case, you can also pass more options to the Quiz command, such as specifying the size of the plot:

Grading:-Quiz( "Using the following graph, calculate the interquartile range: $plot", GradeIQR, InitPlot, plotsize=[400,400] );

Using the following graph, calculate the interquartile range:

See Also: Student, Student[Statistics], Grading[Quiz], Student[Statistics][Mean], Student[Statistics][Median], Student[Statistics][StandardDeviation], Student[Statistics][InterquartileRange], examples,Quiz
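The interquartile range being graded here can also be checked with Python's standard library; note that `statistics.quantiles` with `method="inclusive"` is one common quartile convention and may differ from Maple's default:

```python
from statistics import quantiles

def interquartile_range(data):
    # Quartiles Q1, Q2, Q3 split the sorted sample into four parts;
    # the interquartile range is Q3 - Q1.
    q1, _, q3 = quantiles(data, n=4, method="inclusive")
    return q3 - q1

# For the sample [10, 4, 2, 9, 9] (sorted: [2, 4, 9, 9, 10]),
# this convention gives Q1 = 4, Q3 = 9, so the IQR is 5.
print(interquartile_range([10, 4, 2, 9, 9]))
```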
Laboratoire de Chimie Agro-industrielle (LCA), Université de Toulouse, INRA, INPT, Toulouse, France. Abstract: Before proposing an innovative process for the coproduction of ethyl and butyl acetates, the individual syntheses of ethyl acetate and butyl acetate by two different routes were first studied. These syntheses involved the reaction of ethanol or n-butanol with acetic acid or acetic anhydride in the presence of ion exchange resins: Amberlyst 15, Amberlyst 16, Amberlyst 36 and Dowex 50WX8. Kinetic and thermodynamic studies were performed with all resins. The lowest activation energy (Ea) value was obtained with Dowex 50WX8, which was identified as the best-performing resin and could be reused in at least four runs without regeneration. The presence of water azeotropes during the synthesis of ethyl acetate makes its purification difficult. A new strategy was therefore adopted here, using ethanol and acetic anhydride as starting materials. In order to minimize the acetic acid co-produced by this reaction, a novel two-step process for the coproduction of ethyl and butyl acetates was developed. The first step involves the production of ethyl acetate and its purification. Butyl acetate was produced in the second step: n-butanol was added to the mixture of acetic acid and resin remaining after the first-step distillation. This process yields ethyl acetate and butyl acetate at high purity and shows an environmental benefit over the independent syntheses, as assessed by green metrics calculations and life cycle assessment.
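The resin comparison rests on Arrhenius activation energies. As a quick illustration of how an Ea is extracted from rate constants measured at two temperatures (the rate constants and temperatures below are made-up illustrative numbers, not values from the study):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def activation_energy(k1, T1, k2, T2):
    # Two-point Arrhenius estimate:
    #   ln(k2/k1) = -(Ea/R) * (1/T2 - 1/T1)
    # solved for Ea:
    #   Ea = R * ln(k2/k1) / (1/T1 - 1/T2)
    return R * math.log(k2 / k1) / (1.0 / T1 - 1.0 / T2)

# Illustrative: a rate constant that doubles between 50 C and 70 C.
Ea = activation_energy(1.0e-3, 323.15, 2.0e-3, 343.15)
print(Ea / 1000, "kJ/mol")
```

A lower Ea at fixed conditions means a weaker temperature sensitivity of the rate, which is how the resins can be ranked from kinetic runs at a few temperatures.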
Keywords: Ion Exchange Resins, Esterification, Ethyl Acetate, Butyl Acetate, Coproduction, Life Cycle Assessment

\text{CH}_3\text{COOH} + \text{CH}_3\text{CH}_2\text{OH} \rightleftharpoons \text{CH}_3\text{COOCH}_2\text{CH}_3 + \text{H}_2\text{O}

(\text{CH}_3\text{CO})_2\text{O} + 2\,\text{CH}_3\text{CH}_2\text{OH} \rightleftharpoons 2\,\text{CH}_3\text{COOCH}_2\text{CH}_3 + \text{H}_2\text{O}

(\text{CH}_3\text{CO})_2\text{O} + \text{CH}_3\text{CH}_2\text{OH} \rightleftharpoons \text{CH}_3\text{COOCH}_2\text{CH}_3 + \text{CH}_3\text{COOH}

\text{CH}_3\text{COOH} + \text{CH}_3\text{CH}_2\text{OH} \rightleftharpoons \text{CH}_3\text{COOCH}_2\text{CH}_3 + \text{H}_2\text{O}

(\text{CH}_3\text{CO})_2\text{O} + \text{H}_2\text{O} \rightleftharpoons 2\,\text{CH}_3\text{COOH}

\text{CH}_3(\text{CH}_2)_3\text{OH} + \text{CH}_3\text{COOH} \rightleftharpoons \text{CH}_3\text{COO}(\text{CH}_2)_3\text{CH}_3 + \text{H}_2\text{O}

2\,\text{CH}_3(\text{CH}_2)_3\text{OH} + (\text{CH}_3\text{CO})_2\text{O} \rightleftharpoons 2\,\text{CH}_3\text{COO}(\text{CH}_2)_3\text{CH}_3 + \text{H}_2\text{O}

Cite this paper: Barrera, N., Bories, C., Peydecastaing, J., Sablayrolles, C., Vedrenne, E., Vaca-Garcia, C. and Thiebaud-Roux, S. (2018) A Novel Process Using Ion Exchange Resins for the Coproduction of Ethyl and Butyl Acetates. Green and Sustainable Chemistry, 8, 221-246. doi: 10.4236/gsc.2018.83016.
Component: acacetin

The drug-block factors scale IKr, IKs and Ito by a Hill relation in the acacetin concentration conc:

f_X = 1 - \frac{1}{1 + \left(\mathrm{Aca\_IC50}_X / \mathrm{conc}\right)^{\mathrm{Aca\_hill}_X}}, \qquad X \in \{Kr, Ks, to\}

Component: calcium dynamics

Ordinary differential equations for Ca_SRi, Ca_SRss, Ca_i and Ca_ss, each scaled by a rapid-buffering factor beta (CSQN buffering in the SR; BCa, SLlow and SLhigh buffering in the cytosol and subspace), with SR leak fluxes JSRCaleaki and JSRCaleakss, diffusion between the junctional and non-junctional compartments Jj_nj = 2.5 DCa (Aj_nj / xj_nj)(Ca_ss - Ca_i), SERCA uptake, ryanodine-receptor release, and compartment volumes VSRi, VSRss, Vnj and Vss.

Component: membrane currents

Background currents IbCa = Cm gbCa (V - ECa) and IbNa = Cm gbNa (V - ENa); the L-type calcium current ICaL with gating variables d, f and fca; the sarcolemmal calcium pump ICap = Cm icapbar Ca_ss / (Ca_ss + kmcap); the inward rectifier IK1; the delayed rectifiers IKr (gate xr) and IKs (gate xs); the ultra-rapid current IKur with state-dependent drug binding (closed- and open-blocked fractions BC and BO); the fast sodium current INa with gates m, h and j and drug-bound states BA and BI; the Na/Ca exchanger INaCa; the Na/K pump INaK; and the transient outward current Ito with gates r and s, scaled by fto.

Component: membrane potential

\frac{dV}{dt} = -\frac{i_{ion} + i_{stim}}{C_m}, \qquad i_{ion} = I_{Na} + I_{to} + I_{Kur} + I_{Kr} + I_{Ks} + I_{CaL} + I_{K1} + I_{bNa} + I_{bCa} + I_{NaCa} + I_{NaK} + I_{Cap}

Component: reversal potentials and ion balances

E_{Ca} = 13.35 \ln(Ca_o/Ca_i), \qquad E_K = 26.71 \ln(K_o/K_i), \qquad E_{Na} = 26.71 \ln(Na_o/Na_i)

with K_i held constant (dK_i/dt = 0) and intracellular sodium evolving from the sodium-carrying currents INaK, INaCa, IbNa and INa.

Component: RyR and SERCA

Ryanodine-receptor release Jrel = nu · o · c · RyRSRCa · (Ca_SR - Ca) in both the bulk and subspace compartments, with activation, open and closed gating variables (a, o, c) relaxing to sigmoidal steady states; SERCA uptake described by forward and backward fluxes (rate constants k1..k4, pump density cpumps) acting on bound-pump states SERCACa and SERCACass.

Component: stimulus

i_stim = amplitude · pace, with amplitude = -8000, and pace = 1 during the stimulus window (offset, period, duration) of each pacing cycle and 0 otherwise.

(The individual rate expressions and remaining parameter values of the exported equation listing are abridged here.)
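The acacetin component of this listing applies a Hill-type block factor f = 1 - 1/(1 + (IC50/conc)^hill) to IKr, IKs and Ito. A small self-contained Python check of that formula (the concentration and IC50 values below are illustrative, not parameters from the model):

```python
def block_factor(conc, ic50, hill):
    # Fraction of channel current remaining under drug block:
    #   f = 1 - 1 / (1 + (IC50 / conc)^hill)
    # With no drug (conc -> 0) the factor is 1 (no block); at very high
    # concentration it tends to 0 (complete block).
    if conc == 0:
        return 1.0
    return 1.0 - 1.0 / (1.0 + (ic50 / conc) ** hill)

# At conc == IC50, exactly half of the current is blocked,
# regardless of the Hill coefficient.
print(block_factor(1.0, 1.0, 1.5))
```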
Admissible Solutions of the Schwarzian Type Difference Equation

Baoqin Chen, Sheng Li, "Admissible Solutions of the Schwarzian Type Difference Equation", Abstract and Applied Analysis, vol. 2014, Article ID 306360, 5 pages, 2014. https://doi.org/10.1155/2014/306360

This paper investigates the Schwarzian type difference equation where is a rational function in with polynomial coefficients, and , respectively , are two irreducible polynomials in of degree , respectively . The relationship between and is studied in some special cases. Denote . Let be an admissible solution of such that ; then for (≥2) distinct complex constants , In particular, if , then Throughout this paper, a meromorphic function always means a function meromorphic in the whole complex plane, and always means a nonzero constant. For a meromorphic function , we define its shift by and define its difference operators by In particular, for the case . We use standard notations of the Nevanlinna theory of meromorphic functions such as , , and , as stated in [1–3]. For a constant , we define the Nevanlinna deficiency by Recently, a number of papers (see, e.g., [4–12]) have been devoted to complex difference equations and difference analogues of Nevanlinna theory. Following an idea of [13], we consider the admissible solutions of the Schwarzian type difference equation: where is a rational function in with polynomial coefficients, and , respectively , are two irreducible polynomials in of degree , respectively . Here and in the following, "admissible" always means "transcendental," and we denote from now on. For the existence of solutions of (3), we give some examples below. Examples.
(1) is an admissible solution of the Schwarzian type difference equation: (2) is an admissible solution of the Schwarzian type difference equation: (3) Let ; then solves the Schwarzian type difference equation: This example shows that (3) may admit polynomial solutions. Considering the relationship between and in the examples above, we prove the following result. Theorem 1. For the Schwarzian type difference equation (3) with polynomial coefficients, note the following. (i) If it admits an admissible solution such that , then In particular, if , then . (ii) If its coefficients are all constants and it admits a polynomial solution with degree , then and . Remark 2. From examples (1) and (2), we conjecture that in Theorem 1(i). However, we cannot prove this at present. From example (3) above, we see that the restriction on the coefficients in Theorem 1(ii) cannot be omitted. For the Schwarzian differential equation, where , , and are as stated before, Ishizaki [13] proved the following result (see also Theorem in [2]). Theorem A (see [2, 13]). Let be an admissible solution of (8) with polynomial coefficients, and let be ≥2 distinct complex constants. Then For the Schwarzian type difference equation (3), we prove the following result. Theorem 3. Let be an admissible solution of (3) with polynomial coefficients such that , and let be ≥2 distinct complex constants. Then In particular, if , then Remark 4. From Theorem 1, under the condition in Theorem 3, we have in (11). The behavior of the zeros and the poles of in the difference case is essentially different from that in the differential case. We wonder whether the restriction can be omitted or not. The following lemma plays a very important role in the theory of complex differential equations and difference equations. It can be found in Mohon'ko [14] and Valiron [15] (see also Theorem in the book of Laine and Yang [2]). Lemma 5 (see [14, 15]). Let be a meromorphic function.
Then, for all irreducible rational functions in , with meromorphic coefficients such that and the characteristic function of satisfies where . The following two results can be found in [10]. In fact, Lemma 6 is a special case of Lemma 8.3 in [10]. Lemma 6 (see [10]). Let be a meromorphic function of hyper order , and . Then possibly outside of a set of with finite logarithmic measure. From Lemma 7, we can easily get the following conclusion. Lemma 8. Let be a meromorphic function of hyper order , and . Then possibly outside of a set of with finite logarithmic measure. Lemma 9. Let be an admissible solution of (3) with polynomial coefficients. Then, using the notation , In particular, if , then Proof. We use an idea of Ishizaki [13] (see also [2]) to prove Lemma 9. It follows from Lemma 8 that From this and Lemma 5, we get and hence If , since all coefficients of and are polynomials, there are at most finitely many poles of , neither the poles of nor the zeros of . Therefore, we see that We obtain (18) from this and (22) immediately. If , there are at most finitely many poles of , not the zeros of , then Now (18) follows from (22) and (24). Notice that if , then (24) always holds. This finishes the proof of Lemma 9. Case 1. Equation (3) admits an admissible solution such that . Since all coefficients of and are polynomials, there are at most finitely many poles of that are not the poles of and . This implies that We can deduce from (3), (25), (26), and Lemma 8 that It follows from this that Moreover, if , then we obtain from (28) that Case 2. The coefficients of (3) are all constants and it admits a polynomial solution with degree . Set then where From (29) and (30), we obtain that If , then , which yields that . This contradicts our assumption. Thus, . If , then , , and . Now from (3), we get Considering degrees of both sides of the equation above, we can see that . If , we can deduce similarly that where are polynomials such that .
Rewrite (3) as follows: From (34), we find that the leading coefficient of is Considering degrees of both sides of (35), we prove that . Firstly, we consider the general case. As mentioned in Remark 1 in [13], due to Jank and Volkmann [16], if (3) admits an admissible solution, then there are at most common zeros of and . Since all coefficients of are polynomials, there are at most finitely many poles of that are the zeros of . Therefore, from (3), we have Combining this and Lemma 9, applying the second main theorem, we get Thus, we prove that (10) holds. Secondly, we consider the case that . From (3) and Lemma 8, we similarly get that From this and applying Lemma 9 with (19), arguing as before, we can prove that (11) holds. The authors would like to thank the referees for their valuable suggestions. This work is supported by the NNSFC (nos. 11226091 and 11301091), the Guangdong Natural Science Foundation (no. S2013040014347), and the Foundation for Distinguished Young Talents in Higher Education of Guangdong (no. 2013LYM_0037). C.-C. Yang and H.-X. Yi, Uniqueness Theory of Meromorphic Functions, vol. 557 of Mathematics and Its Applications, Kluwer Academic Publishers Group, Dordrecht, The Netherlands, 2003. M. J. Ablowitz, R. Halburd, and B. Herbst, "On the extension of the Painlevé property to difference equations," Nonlinearity, vol. 13, no. 3, pp. 889–905, 2000. Y.-M. Chiang and S.-J. Feng, "On the growth of logarithmic differences, difference quotients and logarithmic derivatives of meromorphic functions," Transactions of the American Mathematical Society, vol. 361, no. 7, pp. 3767–3791, 2009. R. G. Halburd and R. J. Korhonen, "Existence of finite-order meromorphic solutions as a detector of integrability in difference equations," Physica D: Nonlinear Phenomena, vol.
218, no. 2, pp. 191–203, 2006. R. G. Halburd, R. J. Korhonen, and K. Tohge, "Holomorphic curves with shift-invariant hyperplane preimages," submitted to Transactions of the American Mathematical Society, http://arxiv.org/abs/0903.3236. J. Heittokangas, R. Korhonen, I. Laine, J. Rieppo, and K. Tohge, "Complex difference equations of Malmquist type," Computational Methods and Function Theory, vol. 1, no. 1, pp. 27–39, 2001. K. Ishizaki, "Admissible solutions of the Schwarzian differential equation," Journal of the Australian Mathematical Society, Series A: Pure Mathematics and Statistics, vol. 50, no. 2, pp. 258–278, 1991. A. Z. Mohon'ko, "The Nevanlinna characteristics of certain meromorphic functions," Teorija Funkciĭ, Funkcional'nyĭ Analiz i ih Priloženija, no. 14, pp. 83–87, 1971 (Russian). G. Valiron, "Sur la dérivée des fonctions algébroïdes," Bulletin de la Société Mathématique de France, vol. 59, pp. 17–39, 1931. G. Jank and L. Volkmann, Meromorphe Funktionen und Differentialgleichungen, Birkhäuser Verlag, Basel, Switzerland, 1985. Copyright © 2014 Baoqin Chen and Sheng Li. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Samuel J. Li — On Yoneda’s Lemma On Yoneda’s Lemma Published March 20, 2021 | 2 minute read Today is the day I understood Yoneda’s lemma. Of course I’ve proven it several times before; in a sense the proof is trivial, since there’s only one thing you can do at each stage. But I never felt that I really understood what the lemma was saying, or why it should be true, until something clicked today. I came upon this realization while reading the book ‘Category Theory for Programmers.’ Although the book’s clarity definitely helped, I have a feeling that my understanding was a gradual process. It’s been a long time in the making. (And as usual in math, obvious in hindsight.) The fundamental reason the lemma works, and why the hom functor -^a should be in a sense ‘universal,’ is that the elements of each image x^a are themselves morphisms f : a \to x . As such, each point f \in x^a contains precisely the information necessary to determine where it should be mapped: namely, apply the corresponding map F(f) to an arbitrary base point p \in F(a) . The fact that this map is forced is aesthetically obvious, and enforced by naturality. This is the same logic used to identify the vector space \mathbb{R}^2 with the 2D plane (a set of points), once a basepoint is fixed. Each vector \vec{v} acts naturally on the plane by translation, so it’s natural to identify it with the image of the basepoint. From this point of view, it’s quite clear why the 2D plane can be identified with \mathbb{R}^2 , and why there is exactly one identification corresponding to each possible basepoint in the plane. [In fact, if you represent the vector space’s group structure as the hom-set of a one-object category, and identify the plane P with the functor * \mapsto P mapping vectors (automorphisms on * ) to translations (automorphisms on P ), this is Yoneda’s lemma.]
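A concrete toy version of this (in Python, so merely suggestive rather than typed): take F to be the list functor and fix a base point p \in F(a). The natural transformation determined by p sends each morphism f : a \to x to F(f)(p), and naturality is exactly the statement that this assignment commutes with post-composition:

```python
def fmap(f):
    # The list functor on morphisms: F(f) applies f elementwise.
    return lambda xs: [f(x) for x in xs]

def yoneda(p):
    # The natural transformation alpha determined by the point p in F(a):
    # alpha_x(f) = F(f)(p) for each morphism f : a -> x.
    return lambda f: fmap(f)(p)

p = [1, 2, 3]            # a point of F(int), where F is the list functor
alpha = yoneda(p)

# alpha sends the identity back to p, and everything else is forced:
print(alpha(lambda n: n))   # [1, 2, 3]
print(alpha(str))           # ['1', '2', '3']

# Naturality square: alpha(g . f) == F(g)(alpha(f)) for any g : x -> y.
f, g = str, len
print(alpha(lambda n: g(f(n))) == fmap(g)(alpha(f)))  # True
```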
cAssets, the synthetics - cypher docs cAssets are the synthetic assets that are minted via the cypher protocol. These synthetic assets allow retail and institutional participants to gain exposure to the underlying asset without having to actually own it. cAssets provide access to opportunities that retail participants have been excluded from, as well as allowing more traditional trading strategies to be deployed in pre-IPO markets. cAssets are expiratory token contracts which have a dynamic execution date. A token will expire and financially settle upon an execution triggering event; for these initial pre-IPO derivatives, execution happens at the close of trading on listing (IPO) day. Having the execution happen upon the last tick price will allow arbitrage opportunities to take place and enable convergence of the synthetic asset price to the market price on the stock exchange that the company listed on. In some cases, a company on which a cAsset is based may not go public. To anticipate this, the token will be programmed with a number of alternative execution triggering events as well, such as the implied share price based on value when acquired by a strategic buyer, or the then-current price after a set number of months since contract creation. While cAssets are denominated on a price-per-share basis, they imply a valuation of the company on which they are based. This presents a challenge because the number of shares outstanding is not always publicly available prior to the filing of a registration document as part of preparing to go public. As such, many cAssets will undergo a price adjustment once the actual number of shares is publicly solidified. This price adjustment will not impact the profit or loss of a participant’s positions, but will result in a new trading price so that the valuation implied by the cAssets aligns with the public market valuation once applicable.
This calculation ensures the price the oracle is pulling as the cAsset approaches settlement is apples-to-apples in terms of valuation with the price the trades had been placed at. P_{adj}=\frac{P_{unadj}\, S_{unadj}}{S_{adj}} Since there are no real-time price feeds for the private capital markets, cAssets will use a time-weighted average price (TWAP) of their own price in order to maintain a proper collateralization ratio. The protocol will take the max of two TWAP calculations, one with a shorter time horizon and one with a longer horizon. This implementation will help ensure safety in a more volatile market. If the market is in a downtrend, the TWAP with the longer time horizon will be utilized for setting optimal collateralization, and if the market is in an uptrend, the TWAP with the shorter time horizon will be utilized. This is an extension of the methodology used for collateralizing the Ethereum gas future developed by uLABS. The time horizons for the TWAP calculation will be set at 2 and 4 hours (this will be up for governance in the future). This time frame was chosen after seeing the success of the Ethereum gas future implementation by uLABS and Yam Labs. max\left(TWAP(t_{1_{long}},t_{2_{long}}),\,TWAP(t_{1_{short}},t_{2_{short}})\right),\qquad TWAP(t_1,t_2)=\frac{\sum_{t=t_1}^{t_2}P_{asset}(t)\,V_{asset}(t)}{\sum_{t=t_1}^{t_2}V_{asset}(t)} The synthetic assets tradable via the platform will be over-collateralized by minters to ensure safety, though with healthy market dynamics, traders taking short and long positions should cover each other’s positions. The protocol will only need to leverage collateral from minters as an insurance pool if the longs and shorts cannot cover each other’s positions.
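The two formulas above can be sketched in a few lines (a minimal illustration with made-up numbers; the function and variable names are mine, not the protocol’s):

```python
# Volume-weighted TWAP over a window of (price, volume) ticks, the
# max-of-two-horizons rule, and the share-count price adjustment.

def twap(samples):
    """samples: list of (price, volume) ticks inside the window."""
    num = sum(p * v for p, v in samples)
    den = sum(v for _, v in samples)
    return num / den

def collateral_price(short_window, long_window):
    """Take the max of the short- and long-horizon TWAPs."""
    return max(twap(short_window), twap(long_window))

def adjusted_price(p_unadj, shares_unadj, shares_adj):
    """Share-count adjustment: P_adj = P_unadj * S_unadj / S_adj."""
    return p_unadj * shares_unadj / shares_adj

# Example: if the assumed share count doubles once the real float is
# known, the per-share price halves while the implied valuation of the
# company is unchanged.
assert adjusted_price(100.0, 1_000_000, 2_000_000) == 50.0
```

In a downtrend the longer window lags above the spot price, so taking the max keeps collateral requirements conservative, matching the behavior described above.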
Sensitivity Goal - MATLAB & Simulink Sensitivity Goal limits the sensitivity of a feedback loop to disturbances. You specify the maximum sensitivity as a function of frequency. Constrain the sensitivity to be smaller than one at frequencies where you need good disturbance rejection. To specify a Sensitivity Goal, you specify one or more locations at which to limit sensitivity. You also provide the frequency-dependent maximum sensitivity as a numeric LTI model whose magnitude represents the desired sensitivity as a function of frequency. When you create a tuning goal in Control System Tuner, a tuning-goal plot is generated. The dotted line shows the gain profile you specify. The shaded area on the plot represents the region in the frequency domain where the tuning goal is not satisfied. If you prefer to specify disturbance attenuation at a particular location, rather than sensitivity to disturbance, you can use Disturbance Rejection Goal. In the Tuning tab of Control System Tuner, select New Goal > Sensitivity of feedback loops to create a Sensitivity Goal. When tuning control systems at the command line, use TuningGoal.Sensitivity to specify a disturbance rejection goal. Use this section of the dialog box to specify the signal locations at which to compute the sensitivity to disturbance. You can also specify loop-opening locations for evaluating the tuning goal. Measure sensitivity at the following locations Select one or more signal locations in your model at which to measure the sensitivity to disturbance. To constrain a SISO response, select a single-valued location. For example, to limit sensitivity at a location named 'y', click Add signal to list and select 'y'. To constrain a MIMO response, select multiple signals or a vector-valued signal. Specify the maximum sensitivity as a function of frequency. Enter a SISO numeric LTI model whose magnitude represents the desired sensitivity bound as a function of frequency.
For example, you can specify a smooth transfer function (tf, zpk, or ss model). Alternatively, you can sketch a piecewise maximum sensitivity using an frd model. When you do so, the software automatically maps the profile to a smooth transfer function that approximates the desired sensitivity. For example, to specify a sensitivity that rolls up at 20 dB per decade and levels off at unity above 1 rad/s, enter frd([0.01 1 1],[0.001 0.1 100]). If you are tuning in discrete time, you can specify the maximum sensitivity profile as a discrete-time model with the same sampling time as you use for tuning. If you specify the sensitivity profile in continuous time, the tuning software discretizes it. Specifying the profile in discrete time gives you more control over the profile near the Nyquist frequency. Use this section of the dialog box to specify additional characteristics of the sensitivity goal. For multiloop or MIMO sensitivity requirements, the feedback channels are automatically rescaled to equalize the off-diagonal (loop interaction) terms in the open-loop transfer function. Select Off to disable such scaling and shape the unscaled open-loop response. For Sensitivity Goal, f(x) is given by: f(x)=\left\|W_{S}(s)\,S(s,x)\right\|_{\infty}, or its discrete-time equivalent. Here, S(s,x) is the closed-loop sensitivity function measured at the location specified in the tuning goal. \|\cdot\|_{\infty} denotes the H∞ norm (see norm). WS is a frequency weighting function derived from the sensitivity profile you specify. The gain of WS roughly matches the inverse of the specified profile for gain values ranging from –20 dB to 60 dB. For numerical reasons, the weighting function levels off outside this range, unless the specified gain profile changes slope outside this range. This adjustment is called regularization.
Because poles of WS close to s = 0 or s = Inf might lead to poor numeric conditioning for tuning, it is not recommended to specify sensitivity profiles with very low-frequency or very high-frequency dynamics. For more information about regularization and its effects, see Visualize Tuning Goals.
Less than or Equal to - APL Wiki Less than or Equal to (≤) is a comparison function which tests whether the left argument is tolerantly less than or equal to the right argument, returning 1 if this is the case and 0 otherwise. It is the negation of Greater than (>), and in fact was called Not greater in APL\360. When the arguments to Less than or Equal to are Boolean, it is the material implication Boolean function, also known as the IMPLY gate: in the context of logic, it can be read as implies. Less than or Equal to Scan is an occasionally used pattern related to Less than Scan. While <\ changes all 1s after the first to 0, ≤\ changes all 0s after the first to 1. Thus ≤\A {\displaystyle \Leftrightarrow } ~<\~A. ≤\ 1 0 1 0 0 0 1 0 gives 1 0 1 1 1 1 1 1. ≤\ appears in the FinnAPL idiom library as entry 350, "Not first zero".
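For readers without an APL interpreter handy, here is a small Python model of ≤\ (my own sketch; note that APL reduces each prefix right-to-left, which matters because ≤ is not associative):

```python
def leq_scan(bits):
    """Model of APL's <=\ scan: the i-th result is <=/ applied to the
    first i items, reduced right-to-left as APL reduction does."""
    out = []
    for i in range(1, len(bits) + 1):
        prefix = bits[:i]
        acc = prefix[-1]
        for x in reversed(prefix[:-1]):
            acc = int(x <= acc)
        out.append(acc)
    return out

# "Not first zero": every 0 after the first becomes 1.
assert leq_scan([1, 0, 1, 0, 0, 0, 1, 0]) == [1, 0, 1, 1, 1, 1, 1, 1]
```

A naive left-to-right accumulation would give a different (wrong) answer here, which is why the model re-reduces each prefix from the right.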
Simplify the following without a calculator. \int_{1}^{8} \sqrt[3]{27u}\, du Before integrating, rewrite the integrand with a fractional exponent. Note that both the 27 and the u are beneath the radical sign! \frac{d}{dx}\left(\int_{4}^{18} (6x-3)\, dx\right) The Fundamental Theorem of Calculus, part 1 concerns the derivative of an integral with a variable limit, which gives back the original function. But this is the derivative of a DEFINITE integral with constant limits, which is a constant, so the answer is 0. \int f''(x)\, dx = \int \frac{d}{dx} f'(x)\, dx = \underline{\ \ \ \ \ \ \ \ \ \ \ \ } + C Think: Fundamental Theorem of Calculus, part 1.
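The first two answers can be checked numerically (a quick sketch of mine, using only the power rule worked out by hand plus a Riemann-sum cross-check):

```python
# 1) Rewrite the cube root with a fractional exponent and integrate:
#    ∫ (27u)^(1/3) du = 3 ∫ u^(1/3) du = (9/4) u^(4/3),
#    so ∫ from 1 to 8 is (9/4)(8^(4/3) - 1) = (9/4)(16 - 1) = 33.75.
exact = 9 / 4 * (8 ** (4 / 3) - 1)
assert abs(exact - 33.75) < 1e-9

# Cross-check with a midpoint Riemann sum of the original integrand.
n = 100_000
h = (8 - 1) / n
riemann = sum((27 * (1 + (i + 0.5) * h)) ** (1 / 3) for i in range(n)) * h
assert abs(riemann - exact) < 1e-6

# 2) ∫_4^18 (6x - 3) dx is just a number, so d/dx of it is 0.
# 3) ∫ f''(x) dx = f'(x) + C: the blank is filled by f'(x).
```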
by Nicola F. Müller and Tim Vaughan Structured-coalescent.pdf h3n2_2deme.fna MTT.xml MTT.h3n2_2deme.trees MTT.h3n2_2deme.map.trees MTT.h3n2_2deme.typedNode.MC... MTT.log MTT.h3n2_2deme.typedNode.trees 12 Jul 2017 - Update programs used section 09 Feb 2017 - Fix quotation marks 07 Feb 2017 - Add heading 07 Feb 2017 - Merge 07 Feb 2017 - Update latex tutorial Population dynamics can influence the shape of a tree. Another thing that has a strong influence on the shape of the tree is structure in a population. This is the case as soon as sequences do not mix well, i.e. they cluster together. One cause of this clustering is geography. Samples may not have been taken from the same geographic region, leading to clustering of samples from the same region. This clustering of samples can bias the estimation of parameters. The structured coalescent extends the classic coalescent to account for this, by allowing individual regions to have distinct coalescent rates and by allowing migration between those regions. Practical: MultiTypeTree In this tutorial we will estimate migration rates, effective population sizes and locations of internal nodes using the structured coalescent implemented in BEAST2, MultiTypeTree (Vaughan et al., 2014). Learn how to infer structure from trees with sampling location Get to know how to choose the set-up of such an analysis Get to know the advantages and disadvantages of working with structured trees The dataset consists of 60 Influenza A/H3N2 sequences sampled in Hong Kong and in New Zealand between 2001 and 2005. South-East Asia has been hypothesized to be a global source location of seasonal Influenza, while more temperate regions such as New Zealand are assumed to be global sinks (missing reference), meaning that Influenza strains are more likely to migrate from the tropics to the temperate regions than vice versa.
We want to see if we can infer this source-sink dynamic from sequence data using the structured coalescent. We will use BEAUti to generate the input XML for BEAST2 from the sequence alignment. Install BEAST 2 Plug-Ins The MultiTypeTree package is not part of the BEAST core and has to be installed (Figure 1). Figure 1: Install MultiTypeTree. To be able to make .xml’s for MultiTypeTree, we have to load the MultiTypeTree template via File > Template > MultiTypeTree. This template allows you to specify additional things, such as sampling location, which you cannot specify using the standard interface, as well as parameters such as the migration rates. After setting the template, we can load the alignment of the H3N2 data via File > Add Alignment. Since the sequences were sampled through time, we have to specify the sampling dates. These are included in the sequence names. To set the sampling dates, go to Tip Dates, guess them by splitting after the “_” and then choose the last group. There are two different ways in which BEAST can interpret sampling dates. They are labeled as Since some time in the past and Before the present. The easiest way to check if you have used the correct one is by checking Height. If the setup is correct, the sequences sampled most recently (i.e. 2005.66) should have a Height of 0, while all other tips should be larger than 0 (Figure 2). Figure 2: Sampling dates. The main difference in the setup from previous analyses is that we include additional information about the sampling location of sequences. Sequences were taken from patients in Hong Kong and New Zealand. We can specify these sampling locations by going to Tip Locations in BEAUti and guessing the locations. Use here the second group after splitting the names on the character “_”. After guessing the tip locations, the column Location should contain the entries Hong Kong and New Zealand (Figure 3). Figure 3: Sampling locations. For this analysis, we will be using the HKY model.
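The two “guess” steps above amount to splitting each sequence name on “_”. A short sketch of what BEAUti’s guess dialogs do (the exact name layout below is my assumption, with the sampling location as the second field and the decimal date as the last field; the accession-style prefixes are hypothetical):

```python
# Hypothetical sequence names of the form accession_location_date.
names = [
    "SEQ001_HongKong_2005.66",
    "SEQ002_NewZealand_2004.32",
]

def tip_date(name):
    """Last group after splitting on '_' -> decimal sampling date."""
    return float(name.split("_")[-1])

def tip_location(name):
    """Second group after splitting on '_' -> sampling location."""
    return name.split("_")[1]

dates = {n: tip_date(n) for n in names}
# With "Before the present", Height = most_recent_date - tip_date,
# so the most recently sampled tip gets a Height of 0.
most_recent = max(dates.values())
heights = {n: most_recent - d for n, d in dates.items()}
assert heights["SEQ001_HongKong_2005.66"] == 0.0
```

This mirrors the Height sanity check described in the tutorial: only the most recent tip (2005.66) should sit at height 0.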
The HKY model infers different rates for transitions and transversions. Transitions are changes within the purines (A and G) or within the pyrimidines (T and C), and transversions are changes between those two groups. We also want to allow for rate heterogeneity between sites, which we can do by setting the Gamma Category Count to a value greater than 0 (normally between 4 and 6) and ticking the estimate box for the shape parameter (Figure 4). Figure 4: Setup of the site model. To speed up convergence, we leave the branch model on the Strict Clock model and set a different value for the clock rate (default is 1). A value of 0.005 substitutions per site per year is closer to the truth. Since we have more than one deme (Hong Kong and New Zealand), we can estimate the effective population size of those two demes separately. Additionally, these demes are connected, so we can (or need to) allow for migration between them. By default, the migration rates and population sizes are not estimated. To change this, we have to go to the Priors setting. There, we have to check the two estimate boxes for the population sizes and the migration rates. After checking those two boxes, two new fields will appear, where we can set the priors for the population sizes and the migration rates. Since we know the time scale of our data (a few years), we can choose a proper prior for the migration rates, like an exponential prior distribution with mean 1. Figure 5 shows the final setup for the priors. Figure 5: Check the boxes that say estimate for the population size and the migration rates and change it to an exponential prior with mean 1. The rest of the settings we can leave as they are. After saving, we get an *.xml, which we can use in BEAST2. The run will take a bit of time. If the MultiTypeTree run consumes too much CPU power, you can just close it and then use the “pre-cooked” *.log and *.trees files later instead. Analysis of the MultiTypeTree run Load the *.log files into Tracer.
First, we should check that the run has converged by looking at the ESS values. If all ESS values are above 200, we should be on the safe side. Next, we can have a look at the estimates of the effective population sizes (Figure 6). Figure 6: Estimated effective population sizes. Hong Kong (~7 million inhabitants) is inferred to have a larger effective population size than New Zealand (~4.5 million inhabitants). Keep in mind though that the effective population size depends not only on the population size itself, but also on e.g. transmission rates or contact rates. Differences in the real population size are therefore not necessarily reflected in the effective population size. However, they can still act as a sanity check. Next, we want to have a look at the inferred migration rates (Figure 7). Figure 7: Estimated migration rates. As we stated at the beginning, it is assumed that South-East Asia acts as a global source of influenza, while e.g. Oceania acts as a sink. Do the inferred migration rates agree with this hypothesis? After having looked at the inferred parameters, we can look at the inferred trees. We can either look at all the sampled trees individually or use a summary of all trees. This can be done using the program TreeAnnotator. We need to specify at least four settings there: first, the burn-in percentage; then the input tree file; the output tree file, for which we have to specify a file name; and the Node Heights (set it to Mean Heights). MultiTypeTree logs 3 different tree files: The first one is the *.map.trees file. It contains the maximum posterior tree for the analysis up to a sample of the MCMC. This tree is the tree with the highest posterior probability that has been visited so far. The second one is the *.trees file. It contains the logged trees with so-called single-child nodes. In a normal tree, all nodes have two (or sometimes more) children. The nodes there are coalescent events.
The single-child nodes, on the other hand, are migration events (in this case a migration event between Hong Kong and New Zealand). MultiTypeTree infers the timing of those migration events on a branch and logs them in the *.trees file. The third tree file logged is the *.typedNode.trees. This file does not use single-child nodes. Instead, every coalescent event (here always a node with two daughter lineages) has a location (or state or color or type) where it was inferred to take place (see Figure 8). Figure 8: An example of a tree where the migration events are logged as single-child nodes (left) and of the same tree where only the location of a coalescent event is logged (right). To summarize all the trees, we will need the *.typedNode.trees files, since TreeAnnotator cannot handle single-child nodes. We will also need to specify the burn-in percentage, which we can guess from looking at the traces of the parameter estimates in Tracer (10% should be more than enough). Next, we also need to specify where the output will be saved and under what name. After that, we can run the analysis. When TreeAnnotator is finished, we can visualize the summarized MultiTypeTree run with FigTree. Open the program and go to File > Open, and open the output tree file from TreeAnnotator. To color the tree, go to Appearance and change Colour by and Width by. To get the coloring by inferred location, one has to set it to type. The expressions type, state, location and color are often used interchangeably. The color gives the estimated (most likely) location of a node (red for Hong Kong and blue for New Zealand), and the width gives the certainty of that estimate. The more certain the estimate is, the wider the branch above (towards the root of) the node. You can also change the Line Weight to better see differences in the width of branches. Figure 9: Color tree according to the inferred location. Next, we would like to know how certain we are about the node heights.
This can be visualized by going to Node Bars and displaying the 95% HPD of node heights, which gives the 95% credibility interval of node heights (the mean is indicated by the shown node height). Figure 10: Set the 95% HPD for the node heights. Some considerations for using the structured coalescent (or any structured method) Inferring structure on a tree is hard and requires a lot of assumptions, e.g. that the migration rates don’t change over time or that population sizes are constant. The number of migration events on a tree might be very low, and inferring a rate from such a low number of events can be very hard. In general, it is easier to infer rates within a region (here the effective population size) than it is to infer rates between regions, as there are simply more events within than between regions. Despite considering structure in a tree, there might still be states or locations etc. that were not sampled. Even if a node is inferred to be in a location with high certainty, the results could look completely different if samples from other locations were considered as well. The content of this tutorial is based on the MultiTypeTree tutorial by Tim Vaughan. Vaughan, T. G., Kühnert, D., Popinga, A., Welch, D., & Drummond, A. J. (2014). Efficient Bayesian inference under the structured coalescent. Bioinformatics (Oxford, England), 30(16), 2272–2279. https://doi.org/10.1093/bioinformatics/btu201
12. The probabilities that John and James pass an examination are \(\frac {3}{4}\) and \(\frac {3}{5}\) respectively. Find the probability of both boys failing the examination. A.\(\frac {1}{10}\) B.\(\frac {2}{10}\) C.\(\frac {9}{20}\) D.\(\frac {11}{20}\) 13. An arc of a circle of radius 14cm subtends an angle of 300° at the centre. Find the perimeter of the sector formed by the arc (take \(\pi = \frac{22}{7}\)) 14. Simplify \(\frac {25^{\frac{2}{3}} \div 25^{\frac{1}{6}}} {(\frac{1}{5})^{\frac{7}{6}} \div (\frac{1}{5})^{\frac{1}{6}}}\) B.\(\frac{1}{5}\) 15. \(\text{P} = \left[\frac{\mathrm{Q(R – T)}}{15}\right]^\frac{1}{3}\) Make T the subject of the relation. A. T = \(\frac{R + P^3}{15Q}\) B. T = \(\frac{R – 15P^3}{Q}\) C. T = R – \(\frac{15P^3}{Q}\) D. T = \(\frac{15R + Q}{P^3}\) 16. What is the place value of 9 in the number 3.0492? B. \(\frac{9}{1000}\) 17. If the simple interest on a sum of money invested at 3% per annum for 2\(\frac{1}{2}\) years is N123, 18. A machine valued at N20,000 depreciates by 10% every year. What will be the value of the machine at the end of two years? 19. The table shown gives the marks scored by a group of students in a test. Use the table to answer the question given. What is the probability of selecting a student from the group that scored 2 or 3? A.\(\frac{1}{11}\) B.\(\frac{5}{22}\) D.\(\frac{6}{11}\)
1. (a) Define strain. (b) A rubber band is stretched to twice its original length. Calculate the strain on the rubber band. 2. State three materials used for making optical fibres. 3. Name three classes of magnetic materials. 4. (a) What is an intrinsic semiconductor? (b) Distinguish between the p-type and n-type semiconductors. 5. A missile is projected so as to attain its maximum range. Calculate the maximum height attained if the initial velocity of projection is 200 ms⁻¹. [g = 10 ms⁻²] 6. A black body radiates maximum energy when its surface temperature T and the corresponding wavelength λmax are related by the equation \lambda_{max} T = \text{constant}. Given the values of the constant and surface temperature as 2.9 × 10⁻³ mK and 57°C respectively, calculate the frequency of the energy radiated. 7. (a) What does the acronym LASER stand for? (b) What is a laser? 8. (a) Define uniform acceleration. (b) Forces act on a car in motion. List the (i) horizontal forces and their directions; (ii) vertical forces and their directions. (c) A car starts from rest and accelerates uniformly for 20 s to attain a speed of 25 ms⁻¹. It maintains this speed for 30 s before decelerating uniformly to rest. The total time for the journey is 60 s. (i) Sketch a velocity-time graph for the motion. (ii) Use the graph to determine the (\alpha) total distance travelled by the car (\beta) deceleration of the car. The figure here illustrates a force-extension graph for a stretched spiral spring. Determine the work done on the spring.
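The numerical answers to Q5 and Q6 can be checked in a few lines (a sketch of mine; the rounding tolerances are arbitrary):

```python
import math

# Q5: maximum range means a 45° launch angle, so the maximum height is
#     H = (u sin 45°)^2 / (2g) = 200^2 * 0.5 / (2 * 10) = 1000 m.
u, g = 200.0, 10.0
H = (u * math.sin(math.radians(45))) ** 2 / (2 * g)
assert abs(H - 1000.0) < 1e-6

# Q6: Wien's law, lambda_max * T = 2.9e-3 m K with T = 57 °C = 330 K,
#     gives lambda_max ≈ 8.79e-6 m and f = c / lambda_max ≈ 3.41e13 Hz.
T = 57 + 273          # kelvin
lam = 2.9e-3 / T      # metres
f = 3.0e8 / lam       # hertz
assert abs(f - 3.41e13) < 0.01e13
```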
Introduction to -Triple Systems Guy Roger Biyogmam, "Introduction to -Triple Systems", International Scholarly Research Notices, vol. 2014, Article ID 738154, 5 pages, 2014. https://doi.org/10.1155/2014/738154 Guy Roger Biyogmam1 1Department of Mathematics, Southwestern Oklahoma State University, 100 Campus Drive, Weatherford, OK 73096, USA Academic Editor: H. Li This paper introduces the category of -triple systems and studies some of their algebraic properties. Also provided is a functor from this category to the category of Leibniz algebras. A triple system is a vector space over a field together with a -trilinear map . Among the many examples known in the literature, one may mention -Lie algebras [1] and Lie triple systems [2], which are generalizations of Lie algebras to ternary algebras; Jordan triple systems [2], which are generalizations of Jordan algebras; and Leibniz 3-algebras [3] and Leibniz triple systems [4], which are generalizations of Leibniz algebras [5]. In this paper we enrich the family of triple systems by introducing the concept of -triple systems, presented as another generalization of Leibniz algebras with the particularity that, for all , the map , defined by , is a derivation of , a property of great importance in Nambu mechanics. We investigate some of their algebraic properties and provide a functorial connection with Leibniz algebras and Lie algebras. For the remainder of this paper, we assume that is a field of characteristic different from 2 and that all tensor products are taken over . Definition 1. A -triple system is a -vector space equipped with a trilinear operation Example 2. Let be an -dimensional vector space with basis . Define on the bracket by for fixed . It is easy to check that the identity (2) is satisfied. So is a -triple system when endowed with the operation .
Because of the resemblance between the identity (2) and the generalized Leibniz identity [3], it is worth mentioning that, in general, Leibniz 3-algebras do not coincide with -triple systems. The following example provides a Leibniz 3-algebra that is not a -triple system. Example 3. The two-dimensional complex Leibniz 3-algebra (see [6, Theorem 2.14]) with basis , , and brackets with , is not a -triple system. It is easy to check that its bracket does not satisfy the identity (2). Definition 4. Let be -triple systems. A function is said to be a homomorphism of -triple systems if We may thus form the category -TS of -triple systems and -triple system homomorphisms. Recall that if is a vector space endowed with a trilinear operation , then a map is called a derivation with respect to if Lemma 5. Let be a -triple system and . Then the map defined on by is a derivation with respect to the bracket of . Proof. By setting and using the identity (2), we have Definition 6. A subspace of a -triple system is a subalgebra of if is a -triple system when endowed with the trilinear operation of . Definition 7. A subalgebra of a -triple system is called ideal (resp., left ideal, resp., right ideal) of if it satisfies the condition (resp., , resp., ). If satisfies the three conditions, then is called a 3-sided ideal. Note that none of these three conditions implies the others as in the case of Lie triple systems. Example 8. In Example 2, the subspace with basis is an ideal of . However the subspace with basis is not an ideal of , since, for , we have . Definition 9. Given a -triple system , one defines the center of and the derived algebra of , respectively, by Lemma 10. For a -triple system , and are ideals of . Proof. Clearly, . So is an ideal of . That is an ideal follows from the fact that is closed under the operation . The following theorem classifies a subfamily of two-dimensional complex -triple systems. This result was obtained by Camacho et al. 
in [6] for Leibniz 3-algebras. Theorem 11. Up to isomorphism, there are seven two-dimensional complex -triple systems with one-dimensional derived algebra. Proof. The proof is similar to [6, Theorem 2.14]. Let be a -triple system with basis , and assume that . Then write , . Then, using the identity (2), the only possible nonzero coefficients yield the system of equations whose solution provides the following -triple systems with bracket operations: with . Definition 12. Given a -triple system , one defines the left center and the right center of , respectively, by Lemma 13. The left center and the right center are 3-sided ideals of . Proof. To show that is an ideal of , let and let . Then, for every , we have, by the identity (2), So . The proof that is both a left ideal and a right ideal is similar, as is the case for . Definition 14. Given a -triple system , we define the left and right centralizers of a subalgebra in by respectively. Lemma 15. Let be an ideal of a -triple system . Then and are also ideals of . Proof. To show that is an ideal of , let , , and . Then, by the identity (2), So . The proof for is similar. Definition 16. For a -triple system and a subalgebra of , we define the left normalizer of in by and the right normalizer of in by Lemma 17. Let be a subalgebra of a -triple system . Then and are also subalgebras of . Proof. To show that is a subalgebra of , let , , and . Then, by the identity (2), we have So . The proof for is similar. Remark 18. If is an ideal, then . 2. From -Triple Systems to Leibniz Algebras Recall that a Leibniz algebra (sometimes called a Loday algebra, named after Jean-Louis Loday) is a vector space with a bilinear product satisfying the Leibniz identity Proposition 19. Let be a -triple system. Define on the bracket operation by Then satisfies the Leibniz identity. Proof. On one hand, we have Also, On the other hand, One checks using the identity (2) that the equality holds. Corollary 20.
Let be a -triple system; then endowed with the bilinear map has a Leibniz algebra structure. Proof. This is a consequence of Proposition 19. Corollary 21. Let be a -triple system; then has a Leibniz algebra structure, when endowed with the bilinear map defined by These determine two functors from the category -TS of -triple systems to the category of Leibniz algebras. Definition 22. Let be a -triple system and a Leibniz algebra. The action of on is a map satisfying for all and . Proposition 23. Let be a -triple system; then the Leibniz algebra acts on via the map defined by . Proof. The first condition of Definition 22 follows by (2). To show (28), we have Now let and consider the map defined by , . Clearly, this map is a derivation of as it is induced by the action (Proposition 23) defined above. Proposition 24. For a -triple system , the subspace is a Lie algebra with respect to the product More precisely, it is an ideal of the Lie algebra of derivations of . Proof. To show that is a Lie subalgebra of , let . Then, for all , So is closed under the bracket of . Also, for any derivation , we have, for all , Hence . V. T. Filippov, “n-Lie algebras,” Sibirskiĭ Matematicheskiĭ Zhurnal, vol. 26, no. 6, pp. 126–140, 1985. N. Jacobson, “Lie and Jordan triple systems,” American Journal of Mathematics, vol. 71, pp. 149–170, 1949. J. M. Casas, J.-L. Loday, and T. Pirashvili, “Leibniz n-algebras,” Forum Mathematicum, vol. 14, no. 2, pp. 189–207, 2002. M. R. Bremner and J. Sánchez-Ortega, “Leibniz triple systems,” Communications in Contemporary Mathematics, vol. 14, pp. 189–207, 2013. L. M. Camacho, J. M. Casas, J. R. Gómez, M. Ladra, and B. A. Omirov, “On nilpotent Leibniz n-algebras,” Journal of Algebra and its Applications, vol. 11, no.
3, Article ID 1250062, 17 pages, 2012. Copyright © 2014 Guy Roger Biyogmam. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
107 (one hundred [and] seven) is the natural number following 106 and preceding 108. 107 is the 28th prime number. The next prime is 109, with which it comprises a twin prime, making 107 a Chen prime.[1] Plugged into the equation {\displaystyle 2^{p}-1}, 107 yields 162259276829213363391578010288127, a Mersenne prime.[2] 107 is itself a safe prime.[3] It is the fourth Busy Beaver number, the maximum number of steps that any Turing machine with 2 symbols and 4 states can make before eventually halting.[4] As "one hundred and seven", it is the smallest positive integer requiring six syllables in English (without the "and" it only has five syllables, and seventy-seven is a smaller 5-syllable number). The atomic number of bohrium. The emergency telephone number in Argentina and Cape Town. The telephone number of the police in Hungary. A common designation for the fair use exception in copyright law (from 17 U.S.C. 107). The Peugeot 107 model of car. The 107% rule, a Formula One Sporting Regulation in operation from 1996 to 2002 and from 2011 onwards. The number 107 is also associated with the Timbers Army supporters group of the Portland Timbers soccer team, in reference to the stadium seating section where the group originally congregated. ^ "Sloane's A109611 : Chen primes". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2016-05-27. ^ "Sloane's A000043 : Mersenne exponents". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2016-05-27. ^ "Sloane's A060843 : Busy Beaver problem: number of steps before halting". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2021-09-24.
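The prime-related claims above are easy to verify in code (a sketch of mine using a Miller–Rabin test with a fixed set of small bases; this base set has no known composite counterexample at these sizes, but it is a probabilistic shortcut, not a proof):

```python
def is_prime(n, bases=(2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)):
    """Miller-Rabin primality test with fixed small bases."""
    if n < 2:
        return False
    for p in bases:
        if n % p == 0:
            return n == p
    # Write n - 1 = 2^s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

assert is_prime(107)                    # 28th prime
assert is_prime(109)                    # twin prime partner
assert is_prime((107 - 1) // 2)         # 53 is prime, so 107 is a safe prime
m107 = 2 ** 107 - 1
assert m107 == 162259276829213363391578010288127
assert is_prime(m107)                   # the Mersenne prime M107
```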
Two Earthquake Sequences Nearly a Century Apart Reveal a Conjugate Seismogenic System in Central Taiwan | Seismological Research Letters | GeoScienceWorld. Ming-Che Hsieh (Disaster Prevention Technology Research Center, Sinotech Engineering Consultants Inc., Taipei, Taiwan; Department of Earth Sciences, National Central University, Taoyuan, Taiwan); Yen-Yu Lin (Earthquake-Disaster & Risk Evaluation and Management Center, National Central University, Taoyuan, Taiwan; corresponding author: yenyulin@cc.ncu.edu.tw); Kuo-Fong Ma; Li Zhao; Yi-Wun Liao. Seismological Research Letters 2020; 91 (3): 1469–1481. doi: https://doi.org/10.1785/0220190335. Seismically active central Taiwan is considered part of an orogenic wedge, with low-angle east-dipping active faults above a detachment surface and an active mountain-building process. In 2013, two moderate reverse-faulting earthquakes of magnitudes ML 6.2 and 6.5 occurred in Nantou. They brought to mind the historically damaging sequence of four earthquakes in the same area that claimed a total of 71 lives in 1916. The 2013 earthquake sequence provides a good opportunity to study the 1916 sequence. We compared the historical Omori record of the main event in the 1916 sequence, discovered in the Seismogram Archives at the Earthquake Research Institute, University of Tokyo, with the corresponding simulated Omori records of the 2013 events. Our comparison shows significant similarity among the earthquakes, although they are separated by nearly 100 yr. To understand the seismogenic structure associated with these earthquake sequences, we further studied the source rupture properties of earthquakes in this region since 1999, using local broadband records to determine the rupture fault planes.
Results show that all events have similar focal mechanisms, with one low-angle east-dipping nodal plane and one high-angle west-dipping nodal plane. Rupture-plane determination indicates that whereas events at shallow depths (<20 km) ruptured on the low-angle east-dipping plane, events at greater depths (>20 km) slipped on the high-angle west-dipping plane in a conjugate fault system. The comparison also suggests that the 1916 sequence occurred on the low-angle east-dipping plane of this conjugate fault system in the orogenic wedge, as part of a mountain-building process. Given the active mountain-building process in central Taiwan, occurrences of this type of earthquake must be addressed in seismic hazard mitigation efforts.
Low-rank and Rank Structured Matrices - Bobbie's Blog When you begin to research a field of study, you often get overwhelmed by the amount of existing knowledge you have to learn before you can go further. One useful way to bootstrap yourself is to learn only a minimal amount of basic ideas that are enough for you to "survive" in the field. Such basic ideas are the Minimal Actionable Knowledge & Experience (MAKE) of the field. Here, I will try to present the "MAKE" of the field of fast matrix computations. Low-rank matrices and important information An $m\times n$ matrix $\mathbf{A}$ is low-rank if its rank, $k\equiv\mathrm{rank}\,\mathbf{A}$, is far less than $m$ and $n$. Then $\mathbf{A}$ has a factorization $\mathbf{A}=\mathbf{E}\mathbf{F}$, where $\mathbf{E}$ is a tall-skinny matrix with $k$ columns and $\mathbf{F}$ a short-fat matrix with $k$ rows. For example, a $3\times3$ matrix whose rows are all scalar multiples of a single row has rank $1$ only. Given a matrix $\mathbf{A}$, there are many ways to find $\mathrm{rank}\,\mathbf{A}$. One way is to compute the SVD $\mathbf{A}=\mathbf{U}\mathbf{\Sigma}\mathbf{V}^*$, where $\mathbf{\Sigma}=\mathrm{diag}(\sigma_1,\sigma_2,\dots)$ is an $m\times n$ diagonal matrix whose diagonal elements are called the singular values of $\mathbf{A}$. Then $\mathrm{rank}\,\mathbf{A}$ is the number of nonzero singular values. The SVD tells you the most important information about a matrix: the Eckart-Young theorem says that the best rank-$k$ approximation of $\mathbf{A}=\mathbf{U}\mathbf{\Sigma}\mathbf{V}^*$ can be obtained by keeping only the first $k$ singular values and zeroing out the rest in $\mathbf{\Sigma}$. When the singular values decay quickly, such a low-rank approximation can be very accurate. This is particularly important in practice when we want to solve problems efficiently by ignoring the unimportant information. An interesting example is the $n\times n$ Hilbert matrix $\mathbf{H}_n$, whose $(i,j)$ entry is defined to be $\frac{1}{i+j-1}$.
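These ideas are easy to try numerically before reading on. Below is a hedged sketch (assuming NumPy and SciPy are available; `scipy.linalg.hilbert` builds $\mathbf{H}_n$, and the variable names are our own) that checks a rank-$1$ factorization, the numerical rank of a Hilbert matrix, and the Eckart-Young error:

```python
import numpy as np
from scipy.linalg import hilbert

# A rank-1 matrix: any outer product E @ F with one column/row has rank 1
E = np.array([[1.0], [2.0], [3.0]])   # tall-skinny, k = 1 column
F = np.array([[4.0, 5.0, 6.0]])       # short-fat, k = 1 row
A = E @ F
print(np.linalg.matrix_rank(A))       # 1

# The Hilbert matrix: full-rank in exact arithmetic, numerically low-rank
n = 50
H = hilbert(n)                        # H[i, j] = 1/(i + j + 1) with 0-based indices
U, s, Vt = np.linalg.svd(H)
numrank = int(np.sum(s > 1e-15 * s[0]))
print(numrank)                        # far smaller than n = 50

# Eckart-Young: keeping the first k singular triplets gives the best
# rank-k approximation; its 2-norm error equals sigma_(k+1)
k = 10
Hk = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
err = np.linalg.norm(H - Hk, 2)
print(err, s[k])                      # the two values agree to machine precision
```

The same truncation idea underlies essentially every low-rank compression scheme mentioned below.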
$\mathbf{H}_n$ is full-rank for any size $n$, but it is numerically low-rank, meaning that its singular values decay rapidly, such that given any small threshold $\epsilon$, only a few singular values are above $\epsilon$. For example, with $\epsilon=10^{-15}$, the $1000\times1000$ Hilbert matrix has numerical rank $28$. Other examples of (numerically) low-rank matrices include the Vandermonde, Cauchy, Hankel, and Toeplitz matrices, as well as matrices constructed from smooth data or smooth functions. As it turns out, a lot of the matrices we encounter in practice are numerically low-rank. So finding low-rank approximations (e.g. in the form $\mathbf{A}=\mathbf{EF}$ from the beginning of this post) is one of the most important and fundamental subjects in applied math nowadays. Data sparsity and rank structured matrices Matrix sizes have been growing with technological advancements. Many common matrix algorithms scale cubically with the matrix size, meaning that even if your computing power grows 1000 times, you can only afford to solve problems that are 10 times bigger than before. These common algorithms include matrix multiplication, matrix inversion, and matrix factorizations (e.g. LU, QR, SVD). Therefore, it is important to speed up these matrix computations in order to fully exploit the ever-growing computing power. One major strategy for accelerating the computations is to exploit the data sparsity of a matrix. Data sparsity is a deliberately vague concept which broadly refers to the kind of internal structure in a matrix that can help make computations faster. Following are some common examples of data-sparse matrices. The most classical data-sparse matrices are the sparse matrices, ones with a large number of zero entries. Using compressed data formats, you can save a lot of memory by storing only the nonzero entries (together with their positions).
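That storage saving is easy to demonstrate. A sketch assuming SciPy's sparse module (the matrix size and its two nonzero entries are arbitrary illustrations):

```python
import numpy as np
from scipy.sparse import csr_matrix

# A 1000 x 1000 matrix with only two nonzero entries
A = np.zeros((1000, 1000))
A[10, 20] = 1.5
A[500, 3] = -2.0

# Compressed sparse row (CSR) keeps just the values plus their positions
S = csr_matrix(A)
print(A.nbytes)                                            # 8,000,000 bytes dense
print(S.data.nbytes + S.indices.nbytes + S.indptr.nbytes)  # a few kilobytes

# Operations touch only the stored entries but give the same result
x = np.ones(1000)
print(np.allclose(S @ x, A @ x))
```

The `indptr` array here is what makes row slicing and sparse matrix-vector products cheap: each row's nonzeros occupy a contiguous slice of `data`.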
You can greatly reduce computation time by operating only on the nonzero entries while maintaining the sparsity of the matrices (i.e., avoiding the introduction of more nonzero entries). Common sparse matrices include diagonal/block-diagonal matrices, banded matrices, permutations, adjacency matrices of graphs, etc. Low-rank matrices, as we have introduced above, are ones that admit low-rank factorizations. Commonly used factorizations include the reduced SVDs, interpolative decomposition, CUR decomposition, rank-revealing QR factorizations, etc. Normally the SVD algorithms are more expensive, so the other algorithms are preferred when possible; for very large matrices, all the factorization algorithms can be further accelerated by randomization techniques. Rank structured matrices. These matrices are not necessarily low-rank, but can be split into a relatively small number of submatrices, each of which is low-rank. For example, the picture below shows the structure of a Hierarchically Off-diagonal Low Rank (HODLR) matrix, where all the off-diagonal blocks, big or small, have similar ranks. Such structure can for example arise from gravitational or electrostatic interactions, where the diagonal blocks represent the local interactions and the off-diagonal blocks represent the far interactions; the far interactions are low-rank because they are much smoother than the local interactions. Other rank structured matrices include the Hierarchically Semi-separable (HSS) matrices, the inverses of banded matrices, and the more general $\mathcal{H}$-matrices and $\mathcal{H}^2$-matrices. Rank structured matrices can be efficiently handled using tree structures. Matrix algorithms designed for these matrices can be very fast, with computation time scaling linearly or log-linearly with the matrix size. Complementary low-rank matrices are a special type of rank structured matrices that can be decomposed by the butterfly factorization (BF).
The BF is inspired by ideas from the FFT algorithm (divide-and-conquer and permutations), which can be explained using the butterfly diagram. Butterfly algorithms were initially motivated by solving oscillatory problems such as wave scattering. With the ideas above, plus a little coding experience with some simple rank structured matrices (a good place to start is with the first two of these tutorial codes), you are equipped with the "MAKE" that gets you ready for an adventure into fast matrix computations. All the details and other more advanced topics can be learned later once you dig far enough. Posted by Bowei "Bobbie" Wu, 03-21-21 15:33. Tags: math, yq
Aspect ratio (aeronautics) In aeronautics, the aspect ratio of a wing is the ratio of its span to its mean chord. It is equal to the square of the wingspan divided by the wing area. Thus, a long, narrow wing has a high aspect ratio, whereas a short, wide wing has a low aspect ratio. Aspect ratio and other features of the planform are often used to predict the aerodynamic efficiency of a wing, because the lift-to-drag ratio increases with aspect ratio, improving fuel economy in aircraft. For a wing of span $b$ and area $S$, with standard mean chord $\text{SMC}=S/b$, the aspect ratio is defined as $\text{AR}\equiv \frac{b^{2}}{S}=\frac{b}{\text{SMC}}$. Roughly speaking, an airplane in flight can be imagined to affect a circular cylinder of air with a diameter equal to the wingspan.[6] A large wingspan affects a large cylinder of air, and a small wingspan affects a small cylinder of air. A small air cylinder must be pushed down with a greater power (energy change per unit time) than a large cylinder in order to produce an equal upward force (momentum change per unit time). This is because giving the same momentum change to a smaller mass of air requires giving it a greater velocity change, and a much greater energy change, because energy is proportional to the square of the velocity while momentum is only linearly proportional to the velocity. The aft-leaning component of this change in velocity is proportional to the induced drag, which is the force needed to take up that power at that airspeed.
Although a long, narrow wing with a high aspect ratio has aerodynamic advantages like a better lift-to-drag ratio (see also details below), there are several reasons why not all aircraft have high-aspect-ratio wings: Maneuverability: a low-aspect-ratio wing will have a higher roll angular acceleration than one of high aspect ratio, because a high-aspect-ratio wing has a higher moment of inertia to overcome. In a steady roll, the longer wing gives a higher roll moment because of the longer moment arm of the aileron. Low-aspect-ratio wings are usually used on fighter aircraft, not only for the higher roll rates, but especially for the longer chord and thinner airfoils involved in supersonic flight. Parasitic drag: While high-aspect-ratio wings create less induced drag, they have greater parasitic drag (drag due to shape, frontal area, and surface friction). This is because, for an equal wing area, the average chord (length in the direction of wind travel over the wing) is smaller. Due to the effects of Reynolds number, the value of the section drag coefficient $c_{d}$ is an inverse logarithmic function of the characteristic length of the surface, which means that, even if two wings of the same area are flying at equal speeds and equal angles of attack, the section drag coefficient is slightly higher on the wing with the smaller chord: $c_{d}\varpropto \frac{1}{(\text{chord})^{0.129}}$. However, this variation is very small when compared to the variation in induced drag with changing wingspan. Airfield size: Airfields, hangars and other ground equipment define a maximum wingspan which cannot be exceeded, and to generate enough lift at the given wingspan, the aircraft designer has to lower the aspect ratio and increase the total wing area.
This limits the Airbus A380 to 80 m wide with an aspect ratio of 7.8, while the Boeing 787 and Airbus A350 have an aspect ratio of 9.5, influencing flight economy.[9] By varying the sweep, the wing can be optimised for the current flight speed. However, the extra weight and complexity of a moveable wing mean that this approach is not often used. In summary: for a constant-chord wing of chord $c$, $AR = b/c$; in general, $AR = b^{2}/S$ and $SMC = S/b = b/AR$. The drag polar is $C_{D}=C_{D0}+\frac{(C_{L})^{2}}{\pi e\,AR}$, where $C_{D}$ is the total drag coefficient, $C_{D0}$ the zero-lift drag coefficient, $C_{L}$ the lift coefficient, and $e$ the Oswald efficiency number. The wetted aspect ratio instead uses the wetted surface area $S_{w}$: ${\mathit{AR}}_{\mathrm{wet}}=b^{2}/S_{w}$.
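The aspect-ratio definition and the drag polar translate directly into code. A hedged sketch in Python (function names, the Oswald-efficiency default, and the illustrative span/area figures are our own assumptions, not manufacturer data):

```python
import math

def aspect_ratio(span_m: float, area_m2: float) -> float:
    """AR = b^2 / S (equivalently b / SMC, since SMC = S / b)."""
    return span_m ** 2 / area_m2

def drag_coefficient(cd0: float, cl: float, ar: float, e: float = 0.85) -> float:
    """Drag polar C_D = C_D0 + C_L^2 / (pi * e * AR)."""
    return cd0 + cl ** 2 / (math.pi * e * ar)

# Illustrative numbers: an 80 m span over an 820 m^2 wing gives AR close to 7.8
ar = aspect_ratio(80.0, 820.0)
print(round(ar, 2))

# At a fixed lift coefficient, a higher AR shrinks the induced-drag term
cd_low_ar = drag_coefficient(0.02, 0.5, ar)
cd_high_ar = drag_coefficient(0.02, 0.5, 9.5)
print(cd_low_ar > cd_high_ar)
```

Note that only the second, lift-dependent term varies with AR, which is why the benefit of a slender wing is greatest at high lift coefficients (slow flight, climb).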
A rainscreen is an exterior wall detail where the siding (wall cladding) stands off from the moisture-resistant surface of an air/water barrier applied to the sheathing, to create a capillary break and to allow drainage and evaporation. The rainscreen is the cladding or siding itself,[1] but the term rainscreen implies a system of building. Ideally the rainscreen prevents the wall air/water barrier from getting wet, but because of cladding attachments and penetrations (such as windows and doors) water is likely to reach this point, and hence materials are selected to be moisture-tolerant and integrated with flashing. In some cases a rainscreen wall is called a pressure-equalized rainscreen wall, where the ventilation openings are large enough for the air pressure to nearly equalize on both sides of the rain screen,[2] but this name has been criticized as redundant[3] and as useful only to scientists and engineers. A screen in general terms is a barrier.[4] The rainscreen in a wall is sometimes defined as the first layer of material on the wall, the siding itself.[2] Also, rainscreen is defined as the entire system of the siding, drainage plane and a moisture/air barrier.[5][6] A veneer that does not stand off from the wall sheathing to create a cavity is not a rainscreen. However, a masonry veneer can be a rainscreen wall if it is ventilated.[7] Many terms have been applied to rain screen walls, including basic, open, conventional, pressure-equalized, and pressure-moderated rainscreen systems or assemblies. These terms have caused confusion as to what a rain screen is, but all reflect the rainscreen principle of a primary and secondary line of defense. One technical difference is between a plane, a gap of 3⁄8 inch (9.5 mm) or less, and a channel, a gap of more than 3⁄8 inch (9.5 mm).
In general terms a rainscreen wall may be called a cavity or drained wall.[8] The two other basic types of exterior walls in terms of water resistance are barrier walls, which rely on the one exterior surface to prevent ingress, and mass walls, which allow but absorb some leakage.[8] In the early 1960s research was conducted in Norway on rain penetration of windows and walls, and Øivind Birkeland published a treatise referring to a "rain barrier". In 1963 the Canadian National Research Council published a pamphlet titled "Rain Penetration and its Control" using the term "open rain screen".[9] Rainscreen cladding Rainscreen cladding is a kind of double-wall construction that utilizes an outer surface to help keep the rain out, as well as an inner layer to offer thermal insulation, prevent excessive air leakage and carry wind loading. The outer surface breathes like a skin while the inner layer reduces energy losses.[10] The rainscreen system For water to enter a wall, the water must first get onto the wall, and the wall must have openings. Water can then enter the wall by capillary action, gravity, momentum, and air pressure (wind).[2] The rainscreen system provides two lines of defense against water intrusion into the walls: the rainscreen and a means to dissipate leakage,[11] often referred to as a channel. In a rainscreen the air gap allows the circulation of air on the moisture barrier. (These may or may not serve as a vapour barrier, which can be installed on the interior or exterior side of the insulation depending on the climate.) This helps direct water away from the main exterior wall, which in many climates is insulated. Keeping the insulation dry helps prevent problems such as mold formation and water leakage. The vapour-permeable air/weather barrier prevents water molecules from entering the insulated cavity but allows the passage of vapour, thus reducing the trapping of moisture within the main wall assembly.
The air gap (or cavity) can be created in several ways. One method is to use furring (battens, strapping) fastened vertically to the wall. Ventilation openings are made at the bottom and top of the wall so air can naturally rise through the cavity. Wall penetrations including windows and doors require special care to maintain the ventilation. In the pressure-equalized system the ventilation openings must be large enough to allow air-flow to equalize the pressure on both sides of the cladding. A ratio of 10:1 cladding leakage area to ventilation area has been suggested.[2] A water/air resistant membrane is placed between the furring and the sheathing to prevent rain water from entering the wall structure. The membrane directs water away and toward special drip edge flashings which protect other parts of the building. Insulation may be provided beneath the membrane. The thickness of insulation is determined by building code requirements as well as performance requirements set out by the architect. The system is a form of double-wall construction that uses an outer layer to keep out the rain and an inner layer to provide thermal insulation, prevent excessive air leakage and carry wind loading. The outer layer breathes like a skin while the inner layer reduces energy losses. The structural frame of the building is kept absolutely dry, as water never reaches it or the thermal insulation. Evaporation and drainage in the cavity removes water that penetrates between panel joints. Water droplets are not driven through the panel joints or openings because the rainscreen principle means that wind pressure acting on the outer face of the panel is equalized in the cavity. Therefore, there is no significant pressure differential to drive the rain through joints. During extreme weather, a minimal amount of water may penetrate the outer cladding. This, however, will run as droplets down the back of the cladding sheets and be dissipated through evaporation and drainage. 
The rainscreen drainage plane Typical layers in a wall system with rainscreen drainage plane A rainscreen drainage plane is the air gap together with the water-resistant barrier of a rainscreen. Together they provide a predictable, unobstructed drainage path for liquid moisture, from a high point of the wall detail (where it enters) to a low point (where it exits). The drainage plane must move the water out of the wall system quickly to prevent absorption and consequent rot, mold, and structural degradation. It is designed to shed bulk rainwater and/or condensation downward and outward in a manner that will prevent uncontrolled water penetration into the conditioned spaces of a building or structure. In a barrier wall system, the exterior cladding also serves as the principal drainage plane and primary line of defense against bulk rainwater penetration. In cavity wall construction, however, the principal drainage plane and primary line of defense against bulk rainwater penetration is located inside the wall cavity, generally on the inboard side of the air space (either directly applied to the outboard surface of the exterior sheathing layer or, in the case of insulated cavity walls, on the outboard surface of the rigid or otherwise moisture-impervious insulation layer).[12] A predictable pressure equalization plane Air pressure difference is one of the forces driving rainwater into wall systems, but gravity is more often the cause of practical problems.[13] A rainscreen drainage plane that works as a predictable pressure equalization plane creates a separation (an air chamber) between the backside of a rainscreen and the exterior surface of the weather-resistant barrier installed on the exterior sheathing of the structural back-up wall. This separation allows air contaminated with water vapor from all points in that wall system to exit the interior of the wall system.
Moisture-laden air that is allowed to pressurize will attempt to move to a lower-pressure area that may be deeper in the interior of a wall detail. To prevent bridging due to capillary action, Building Science Consulting recommends the drainage plane maintain a cavity of 3/8" or greater, though smaller cavities with hydrophobic materials can also provide the capillary break.[14] Independently verified testing by manufacturer Masonry Technology Inc. demonstrates that a 3/16" depth is sufficient for drainage and airflow as well.[15] Ensure that the drainage plane does not compress when installed, so that it maintains an acceptable air space. Similarly, ensure that the drainage plane is not plugged by debris, which is commonly present in the form of mortar squeezings or excess stucco. Some mechanical drainage planes include measures to prevent clogging. Ensure that the drainage plane creates a compartmentalized pressure equalization plane to prevent pressure-driven moisture intrusion.[13] Details at the top and bottom terminations of a wall system should accommodate moisture drainage (often termed "weeping") and air flow to properly dry out the wall. ASTM International standards include a standard test for drainage plane systems in EIFS systems under code ASTM E2273,[16] and the International Code Council features a more general "Evaluation guideline for a moisture drainage system used with exterior wall veneers" under code ICC-ES EG356. Inappropriate rainscreen materials may also introduce a risk of fast-spreading external fires.[17] Insects, and possibly also rodents and bats,[18] should be prevented from entering the air gap at intake or exhaust ventilation openings, for example with metal mesh.[19] Recommended aperture sizes for insect meshes are 3 to 4 millimeters;[20] effectiveness dwindles rapidly with larger apertures, while smaller ones tend to clog quickly.
Entrapped moisture risks Once moisture has penetrated deep into a wall system, through the weather-resistant barrier and into the exterior sheathing, the wall is wet at depth. The air flow that exists in most wall systems is a slight draft that will not dry this condition out in a timely manner. The result is a compromised wall system with rot, rust, and mold potential. The structural integrity of the wall is at stake, as is the health of the occupants. The longer the wall remains wet, the greater the risk. 50% of homes suffer from mold problems.[21] Billions of dollars are spent annually on litigation involving mold and rot problems stemming from entrapped moisture; this has created an entire industry centered around construction litigation. Such litigation has caused insurance premiums for contractors to increase significantly and has made it difficult for contractors involved in moisture-related lawsuits to obtain insurance at all.[22] An effective rainscreen drainage plane system mitigates this risk. Danger levels Wood Moisture Equivalent Graph Dampness levels in construction are measured in wood moisture equivalent (WME) percentages, calculated as follows: $\text{WME} = \frac{\text{wet sample weight}-\text{dry sample weight}}{\text{dry sample weight}}\times 100$ A normal range is 8–13% WME, with fungal growth beginning at the 16% threshold. A WME of 20% is enough to promote wood rot.[24] It logically follows that the more time a part of a wall system spends above one of these thresholds, the greater the chance of damage from fungal growth or rot. ^ Micheal J. Lough and David Altenhofen, "The Rain Screen Principle" Archived 2014-03-22 at the Wayback Machine ^ a b c d Brown, W. C., Rousseau, M. Z., and Dalgliesh, W. A., "Field Testing of Pressure-Equalized Rain Screen Walls," in Donaldson, Barry, ed., Exterior Wall Systems: Glass and Concrete Technology, Design, and Construction. Philadelphia, PA: ASTM, 1991. 59. Print.
^ Rousseau, M.Z., "Facts and Fictions of Rain-Screen Walls", Construction Canada, 1990. ^ "Screen" def. 2. Oxford English Dictionary Second Edition on CD-ROM (v. 4.0) © Oxford University Press 2009 ^ Pressure Equalization in Rainscreen Wall Systems, National Research Council of Canada. Retrieved 2013-12-01 ^ The Rainscreen Principle in Design, National Research Council of Canada. Retrieved 2013-12-01 ^ Technical Note 27, Brick Masonry Rain Screen Walls (pdf file) Brick Industry Association. Retrieved 4 October 2017. ^ a b "Building Envelope Design Guide - Wall Systems" in Whole Building Design Guide ^ Garden, G.K. "Rain penetration and its control". nrc-publications.canada.ca. National Research Council of Canada. Retrieved 22 February 2020. ^ "Rainscreen Cladding". American Fiber Cement Corporation. 2015. Retrieved October 24, 2016. ^ "Building Envelope Design Guide - Wall Systems". Whole Building Design Guide. January 2007. Retrieved March 1, 2009. ^ a b Pressure Equalization in Rainscreen Wall Systems (July 1998). In Construction Technology Update. Retrieved March 1, 2009 from "Archived copy". Archived from the original on 2009-02-28. Retrieved 2014-03-22. ^ BSD-013: Rain Control in Buildings (September 2008). Building Science Consulting. Retrieved March 1, 2009 from http://www.buildingscience.com/documents/digests/bsd-013-rain-control-in-buildings/?full_view=1 ^ It's About Time Video Presentation (July 2006). Masonry Technology Incorporated. Retrieved March 1, 2009 from http://www.mtidry.com/testing/about_time.php ^ "Standard Test Method for Determining the Drainage Efficiency of Exterior Insulation and Finish Systems (EIFS) Clad Wall Assemblies". ASTM International. Retrieved 14 June 2017. ^ "Fire Risks From External Cladding Panels – A Perspective From The UK". Retrieved 14 June 2017. ^ Hygnstrom, Scott (1994). Prevention and control of wildlife damage.
Lincoln Washington, DC Nebraska: University of Nebraska Cooperative Extension, Institute of Agriculture and Natural Resources, University of Nebraska--Lincoln U.S. Department of Agriculture, Animal and Plant Health Inspection Service, Animal Damage Control Great Plains Agricultural Council, Wildlife Committee. p. D-20. ISBN 978-0-9613015-1-4. OCLC 32081842. ^ Guertin, Mike (2018-05-18). "Put a Rainscreen Intake Vent Over Windows and Doors". Fine Homebuilding. Retrieved 2019-04-11. ^ Barritt, C. M. H. (1995). The Building Acts and Regulations applied. Harlow: Longman Scientific & Technical. p. 95. ISBN 0-582-27449-4. OCLC 60282122. ^ Mold Occurrence Influenced By Building Inspection Practice (January 2005) Dr. Richard A. Wolfe. Construction News & Articles. Retrieved March 1, 2009 from http://www.greatpossibilities.com/articles/publish/mold.shtml ^ http://www.rics.org/NR/rdonlyres/81485882-20E6-4408-A4D0-61FC8D6C1D3A/0/Grosskopf.pdf[permanent dead link] Identifying the Causes of Moisture-Related Defect Litigation in U.S. Building Construction, Grosskopf & Lucas ^ FAQS: Moisture Measurement. Humitest. Retrieved March 1, 2009 from http://www.domosystem.fr/en/faq/moisture-measurement-1/wood-moisture-equivalent-hbe-2 ^ Moisture Testing. Built Environments. Retrieved March 1, 2009 from http://www.built-environments.com/moisture.htm
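The wood-moisture-equivalent formula and thresholds from the Danger levels section above can be wrapped in a small helper. A hedged sketch in Python (function names are our own; the thresholds are the ones quoted in the text):

```python
def wood_moisture_equivalent(wet_g: float, dry_g: float) -> float:
    """WME (%) = (wet sample weight - dry sample weight) / dry sample weight * 100."""
    return (wet_g - dry_g) / dry_g * 100.0

def risk_level(wme: float) -> str:
    """Classify a WME reading against the thresholds quoted in the text:
    8-13% is the normal range, fungal growth begins at 16%, rot at 20%."""
    if wme >= 20.0:
        return "wood rot likely"
    if wme >= 16.0:
        return "fungal growth possible"
    if 8.0 <= wme <= 13.0:
        return "normal"
    return "outside the normal range"

# A 118 g wet sample that dries to 100 g reads 18% WME
reading = wood_moisture_equivalent(118.0, 100.0)
print(reading, risk_level(reading))   # 18.0 falls in the fungal-growth range
```

A moisture meter in WME mode reports this percentage directly; the helper simply reproduces the gravimetric definition.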
Decentralized Control for Large-Scale Systems with Uncertain Missing Measurements Probabilities Ying Zhou, Qiang Zang, Chunxia Fan, "Decentralized Control for Large-Scale Systems with Uncertain Missing Measurements Probabilities", Mathematical Problems in Engineering, vol. 2015, Article ID 379390, 10 pages, 2015. https://doi.org/10.1155/2015/379390 Ying Zhou,1 Qiang Zang,2 and Chunxia Fan1 1College of Automation, Nanjing University of Posts & Telecommunications, Nanjing 210003, China 2School of Information and Control Engineering, Nanjing University of Information Science & Technology, Nanjing 210044, China For large-scale systems modeled as interconnections of networked control systems with uncertain missing-measurement probabilities, a decentralized state feedback controller design is considered in this paper. The occurrence of missing measurements is assumed to be a Bernoulli random binary switching sequence with an unknown conditional probability distribution in an interval. A state feedback controller is designed in terms of linear matrix inequalities to make the closed-loop system exponentially mean-square stable and to guarantee a prescribed performance. Sufficient conditions are derived for the existence of such a controller. A numerical example is also provided to demonstrate the validity of the proposed design approach. With the advances in network technology, more and more control systems have appeared whose feedback control loop is based on a network. This kind of control system is called a networked control system (NCS) [1–4]. Owing to data communication errors in the network and temporarily disabled sensors, missing measurements and transmission time delays usually occur, which can degrade the system performance and even make the system unstable. There have been significant research efforts on the design of controllers and filters for systems with missing measurements. There are two main approaches to handling missing measurements.
One approach is to replace the missing measurements with an estimated value [5]; the other is to view missing measurements as "zero" [6]. The occurrence of missing measurements is commonly modeled by Markov chains [7] or a Bernoulli binary switching sequence [8–13]. Fault detection is considered for NCSs with known missing-measurement probabilities in [8], and further for NCSs with both delays and missing measurements in [9]. In [10], the robust control problem is investigated for stochastic uncertain discrete time-delay systems with missing measurements. In [11], an observer-based controller is designed for NCSs with missing measurements, where the missing measurements are assumed to obey the Bernoulli random binary distribution. The controlled systems in references [8–11] are linear discrete systems, and the missing-measurement probabilities are known constants. A robust fault detection method is proposed for NCSs with uncertain missing-measurement probabilities in [12]. In most existing results, the controlled NCS is usually treated as an isolated one, and the missing-measurement probability is known [13–18]. However, on one hand, in practice the missing-measurement probability usually keeps varying and cannot be measured exactly. On the other hand, in many practical applications, controlled systems are large-scale systems composed of discrete-time NCSs. Each discrete-time NCS is influenced not only by missing measurements, but also by interconnection terms generated by the other NCSs. At the same time, due to the dispersion of some large-scale systems such as power systems, it is impossible to feed back all states of the whole large-scale system to design the controller. So a decentralized controller that feeds back only local information is more practical.
In [19], for large-scale systems composed of discrete-time NCSs with missing measurements, where the missing measurements are modeled by a Bernoulli distribution with a known conditional probability, the control problem is considered using the linear matrix inequality (LMI) method. In summary, studying decentralized control for large-scale systems composed of discrete-time NCSs with uncertain missing measurements probabilities is of significant importance; but, as far as the authors know, such results are seldom found. In this paper, the decentralized control problem is studied for linear discrete-time large-scale systems composed of discrete-time NCSs with missing measurements, where the occurrence of missing measurements is assumed to be a Bernoulli random binary switching sequence with an unknown conditional probability lying in a known interval. A decentralized stabilizing controller design is proposed for such systems. Sufficient conditions are established by means of LMIs, which can be solved conveniently with the MATLAB LMI toolbox. Consider the linear large-scale system composed of discrete-time NCSs with missing measurements. The th NCS is assumed to be of the form where , , , , and denote the state vector, the control input, the controlled output, the measured output, and the disturbance of the th subsystem, respectively; ; , and are known real matrices with appropriate dimensions; is the interconnection between the th subsystem and the th subsystem. The measurements with packet loss are described by where is the actual measured state and is a Bernoulli distributed white sequence taking the values 0 and 1 with a certain probability; the unknown positive scalar means the occurrence probability of the missing measurements. Without loss of generality, we assume that and are the upper and lower limits of the probability, respectively. Choosing and , we can obtain another expression for as follows: Remark 1.
The missing measurements probability usually keeps varying and cannot be measured exactly. However, it can be estimated by a value region as in (4), which is much more practical. In (5), means that no measurement is lost and means that measurements are lost completely. For system (1), the control input can be chosen as where , are gain matrices to be designed. Substituting (7) into (1), we obtain the following closed-loop system: Definition 2 (see [11]). Closed-loop system (8) with is said to be exponentially mean-square stable if there exist constants and such that where . The objective of this paper is to design the state feedback controller (7) for system (1) such that closed-loop system (8) satisfies the following requirements: (1) When , closed-loop system (8) is exponentially mean-square stable. (2) Under the zero-initial condition, the controlled output satisfies where , , and is a prescribed scalar. We first give the following two useful lemmas. Lemma 3 (see [20]). Let be a Lyapunov functional. If there exist real scalars , , , and such that then the sequence satisfies Lemma 4 (see [21]). For any parameter and matrices , , and with appropriate dimensions, if , then First, for the case of system (1) without disturbance, that is, , we have the following two theorems. Theorem 5. Closed-loop system (8) with is exponentially mean-square stable if there exist positive definite matrices and controller gain matrices satisfying where is an arbitrary given constant. Proof. Consider the following Lyapunov functional: when , we have By virtue of Lemma 4 and and , we have where . By the Schur complement, (14) implies and we obtain where . Define ; we get where . By Definition 2 and Lemma 3, closed-loop system (8) is exponentially mean-square stable. This completes the proof. It should be noted that matrix inequality (14) is not a linear matrix inequality and is difficult to solve. To address this, we give the following Theorem 6. Theorem 6.
Closed-loop system (8) with is exponentially mean-square stable if there exist a positive definite matrix and a gain matrix satisfying the following linear matrix inequality: where is an arbitrary given constant. Proof. Through left-and-right multiplication of (14) by we can get which is equivalent to LMI (21). By solving (21), we can obtain the matrices and ; furthermore, from (21), we can get the matrices and . This completes the proof. For the case of system (1) with disturbance, that is, , we have the following two theorems. Theorem 7. Closed-loop system (8) is exponentially mean-square stable and achieves the prescribed performance if there exist a positive definite matrix and a gain matrix satisfying the following LMI: where is a given parameter, is an arbitrary given constant, and , , and , , , , , and are the same as in (14). Proof. When , inequality (25) is equivalent to (14), so by Theorem 5 closed-loop system (8) is exponentially mean-square stable. When , choose the Lyapunov functional as ; then we have where . Based on the Schur complement, inequality (25) implies , and then we get . Now summing (29) from to with respect to yields . Since system (8) is exponentially mean-square stable, under the zero-initial condition it is straightforward to see that . This completes the proof. Theorem 8. Closed-loop system (8) is exponentially mean-square stable and achieves the prescribed performance if there exist a positive definite matrix and a gain matrix satisfying the following LMI: where is a given parameter and , , , , , , , , and are the same as in (21). Proof. Through left-and-right multiplication of (25) by we have . Then matrix inequality (32) is equivalent to (25). From Theorem 7, we conclude that closed-loop system (8) is exponentially mean-square stable and achieves the prescribed performance. This completes the proof. Consider a linear discrete-time large-scale system composed of two NCSs as follows: Assume that .
We can obtain the Lyapunov function solution matrices and controller parameters as follows: Choose the disturbance input . The initial state values are and . The simulation results are shown in Figure 1, and the closed-loop systems are stable. Figure 1: closed-loop system with certain missing measurements probabilities (). When , the simulation results are shown in Figure 2 and the closed-loop systems are unstable. From Figures 1 and 2, we can conclude that closed-loop stability cannot be guaranteed when the missing measurements probabilities are large enough. Owing to space limitations, the detailed design procedure is omitted here. When are uncertain and , we can get the following parameters in Theorem 8 by using the YALMIP toolbox in MATLAB: According to and , we have the Lyapunov function solution matrices and controller parameters as follows: The simulation results are shown in Figure 3, and the closed-loop systems are stable. It can be verified that . Figure 3: closed-loop system with uncertain missing measurements probabilities. In summary, closed-loop stability cannot be guaranteed by a method that treats the missing measurements probability as exactly known. However, when the probability varies within a given interval, closed-loop stability can be guaranteed by the controller designed with the method proposed in this paper. In this paper, a decentralized controller has been designed for a class of large-scale systems with uncertain missing measurements probabilities. The random missing measurements are modeled as a stochastic variable satisfying a Bernoulli distribution with uncertain probability. Sufficient conditions for the existence of a stabilizing controller are presented via LMIs, and the designed controller makes the closed-loop system exponentially mean-square stable while achieving the prescribed H∞ performance. This work is supported by the National Natural Science Foundation of China (nos. 61104103, 61302155, and 61304089). W.
Zhang, M. S. Branicky, and S. M. Phillips, "Stability of networked control systems," IEEE Control Systems Magazine, vol. 21, no. 1, pp. 84–99, 2001.
Y.-M. Chen and H.-C. Huang, "Multisensor data fusion for manoeuvring target tracking," International Journal of Systems Science, vol. 32, no. 2, pp. 205–214, 2001.
W. Wang, F.-W. Yang, and Y.-Q. Zhan, "Robust H2 state estimation for stochastic uncertain discrete-time systems with missing measurements," Control Theory and Applications, vol. 25, no. 3, pp. 439–445, 2008.
B.-F. Wang and G. Guo, "State estimation for discrete-time systems with Markovian time-delay and packet loss," Control Theory and Applications, vol. 26, no. 12, pp. 1331–1336, 2009.
Y.-B. Ruan, W. Wang, and F.-W. Yang, "Fault detection filter for networked systems with missing measurements," Control Theory and Applications, vol. 26, no. 3, pp. 291–295, 2009.
J. Zhang, Y. M. Bo, and M. Lv, "Fault detection for networked control systems with delays and data packet dropout," Control and Decision, vol. 26, no. 6, pp. 933–939, 2011.
F. Yang, Z. D. Wang, D. W. C. Ho, and M. Gani, "Robust H∞ control with missing measurements and time delays," IEEE Transactions on Automatic Control, vol. 52, no. 9, pp. 1666–1672, 2007.
Z. D. Wang, F. W. Yang, D. W. C. Ho, and X. H. Liu, "Robust H∞ control for networked systems with random packet losses," IEEE Transactions on Systems, Man, and Cybernetics Part B: Cybernetics, vol. 37, no. 4, pp. 916–924, 2007.
Y. B. Ruan, F. W. Yang, and W. Wang, "Robust fault detection for networked systems with uncertain missing measurements probabilities," Control and Decision, vol. 23, no. 8, pp. 894–900, 2008.
M.
Basin, P. Shi, and D. Calderon-Alvarez, "Central suboptimal H∞ filter design for linear time-varying systems with state and measurement delays," International Journal of Systems Science, vol. 41, no. 4, pp. 411–421, 2010.
H. Dong, Z. Wang, D. W. Ho, and H. Gao, "Variance-constrained H∞
H. Dong, Z. Wang, and H. Gao, "Robust filtering for a class of nonlinear networked systems with multiple stochastic communication delays and packet dropouts," IEEE Transactions on Signal Processing, vol. 58, no. 4, pp. 1957–1966, 2010.
J. Hu, Z. Wang, H. Gao, and L. K. Stergioulas, "Extended Kalman filtering with stochastic nonlinearities and multiple missing measurements," Automatica, vol. 48, no. 9, pp. 2007–2015, 2012.
Z. Wang, D. W. Ho, Y. Liu, and X. Liu, "Robust H∞ control for a class of nonlinear discrete time-delay stochastic systems with missing measurements," Automatica, vol. 45, no. 3, pp. 684–691, 2009.
Z. D. Wang, F. W. Yang, D. W. C. Ho, and X. H. Liu, "Robust finite-horizon filtering for stochastic systems with missing measurements," IEEE Signal Processing Letters, vol. 12, no. 6, pp. 437–440, 2005.
Y. Zhou, S. M. Yang, and Q. Zang, "H∞ filter design for large-scale systems with missing measurements," Mathematical Problems in Engineering, vol. 2013, Article ID 945705, 7 pages, 2013.
C. Cornelis, M. de Cock, and E. E. Kerre, "Intuitionistic fuzzy rough sets: at the crossroads of imperfect knowledge," Expert Systems, vol. 20, no. 5, pp. 260–270, 2003.
Copyright © 2015 Ying Zhou et al.
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Trigonometric and Hyperbolic Functions in the Simple Units Environment

In the Simple Units environment, the argument of a trigonometric or hyperbolic function must be unit-free; with the approach used in this environment, this includes plane angles. An error is returned if the dimension of the argument is not unit-free. One degree corresponds to the unit-free value π/180.

with(Units[Simple]):
sin(3*Unit('deg'))
    sin(π/60)
sin(180*Unit('deg'))
    0
cos(172.*Unit('deg') + 40.*Unit('arcmin') + 10.32*Unit('arcsec'))
    -0.9918267366
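The last evaluation is easy to check outside Maple. A quick Python sketch (our own cross-check, not part of the help page) reduces the mixed degree/arcminute/arcsecond angle to a unit-free radian value before calling the trigonometric function, just as the Simple Units environment does:

```python
import math

# 172 degrees + 40 arcminutes + 10.32 arcseconds, as decimal degrees
angle_deg = 172 + 40 / 60 + 10.32 / 3600

# Plane angles must be reduced to a unit-free (radian) value before
# calling the trigonometric function.
result = math.cos(math.radians(angle_deg))   # ≈ -0.99182674
```

The value agrees with Maple's -0.9918267366 to the displayed precision.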
An Approximate Transfer Function Model for a Double-Pipe Counter-Flow Heat Exchanger (Energies)

The abstract relates the exact transfer functions G(s) of the exchanger to their approximations Ĝ(s) and to the channel transfer functions Gₙ(s), n = 1, 2, …, N. The presented results show: (a) the advantage of the counter-flow regime over the parallel-flow one; (b) better approximation quality for the transfer function channels with dominating heat conduction effects, as compared to the channels characterized by the transport delay associated with the heat convection.

Bartecki, K. An Approximate Transfer Function Model for a Double-Pipe Counter-Flow Heat Exchanger. Energies 2021, 14, 4174. https://doi.org/10.3390/en14144174
1 Department of Chemistry, LMU University of Munich, Munich, Germany. 2 Institute for Chemical Technology of Organic Materials, Johannes Kepler University Linz, Linz, Austria.

Abstract: A significant impact of this work on the use of polymers is expected, because the developed organo-nano particles (ONP), mixed into standard polymers, will make them unique and traceable. The doping of polymers with non-migrating ONP was demonstrated, and applications for the recycling of plastics were discussed. Thus, perylene derivatives were linked to polymerisable vinyl groups and copolymerized under RAFT conditions (Reversible Addition Fragmentation chain Transfer) with styrene and methyl methacrylate, respectively, to obtain fluorescent ONP with sizes of 40 nm or less and, in most cases, narrow molecular weight distributions with polydispersities PD of 1.1 and lower.

Keywords: Organic Nano Particles (ONP), Reversible Addition Fragmentation Chain Transfer (RAFT), Fluorescence Spectroscopy, Polymers, Recycling

Cite this paper: Langhals, H., Zgela, D., Haffner, A., Koschnick, C., Gottschling, K. and Paulik, C. (2018) Functional Organo-Nano Particles by RAFT Copolymerisation. Green and Sustainable Chemistry, 8, 247-274. doi: 10.4236/gsc.2018.83017.
Apollonius - Maple Help

find the Apollonius circles of three given circles

Apollonius(c1, c2, c3)

The problem of constructing, in a given plane, a circle tangent to three given circles. A circle solving this problem is known as an Apollonius circle; the problem is named after Apollonius of Perge (3rd century B.C.). The routine returns a list of Apollonius circles; in general, there are eight. Note that the coordinates of the centers and the radii of the given circles must be numeric. The command with(geometry,Apollonius) allows the use of the abbreviated form of this command.

with(geometry):
circle(c1, (x+3)^2 + y^2 = 4, [x, y]):
circle(c2, [point(O1, 6, 0), 3], [x, y]):
circle(c3, x^2 + (y-7)^2 = 1, [x, y]):
A := Apollonius(c1, c2, c3):
draw(A)
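As a cross-check on what the routine computes, the tangency conditions can be solved directly. Below is a Python sketch (our own illustrative code, not Maple's implementation; the function name and sign convention are ours) that finds one circle externally tangent to the three circles of the example above:

```python
import math

def apollonius_circle(c1, c2, c3, s1=1, s2=1, s3=1):
    """Find a circle (x, y, r) tangent to three circles (xi, yi, ri).
    s_i = +1 asks for external tangency to circle i, s_i = -1 for internal.
    Subtracting the squared tangency equations pairwise gives two equations
    linear in x and y with r as a parameter; substituting back yields a
    quadratic in r. Assumes the three centers are not collinear."""
    (x1, y1, r1), (x2, y2, r2), (x3, y3, r3) = c1, c2, c3
    # Linear system: a11*x + a12*y = k1 + b1*r,  a21*x + a22*y = k2 + b2*r
    a11, a12 = 2 * (x1 - x2), 2 * (y1 - y2)
    a21, a22 = 2 * (x1 - x3), 2 * (y1 - y3)
    b1, b2 = 2 * (s2 * r2 - s1 * r1), 2 * (s3 * r3 - s1 * r1)
    k1 = (r2**2 - x2**2 - y2**2) - (r1**2 - x1**2 - y1**2)
    k2 = (r3**2 - x3**2 - y3**2) - (r1**2 - x1**2 - y1**2)
    det = a11 * a22 - a12 * a21
    # Express x = A + B*r and y = C + D*r by Cramer's rule.
    A, B = (k1 * a22 - k2 * a12) / det, (b1 * a22 - b2 * a12) / det
    C, D = (a11 * k2 - a21 * k1) / det, (a11 * b2 - a21 * b1) / det
    # Substitute into (x - x1)^2 + (y - y1)^2 = (r + s1*r1)^2.
    p = B**2 + D**2 - 1
    q = 2 * (B * (A - x1) + D * (C - y1) - s1 * r1)
    c0 = (A - x1)**2 + (C - y1)**2 - r1**2
    disc = q**2 - 4 * p * c0
    if disc < 0:
        return None
    for r in [(-q + math.sqrt(disc)) / (2 * p), (-q - math.sqrt(disc)) / (2 * p)]:
        if r <= 0:
            continue
        x, y = A + B * r, C + D * r
        # Accept the root only if all three tangencies actually hold.
        if all(abs(math.hypot(x - xi, y - yi) - (r + si * ri)) < 1e-9
               for (xi, yi, ri), si in [(c1, s1), (c2, s2), (c3, s3)]):
            return (x, y, r)
    return None

# The three circles of the Maple example: (x+3)^2 + y^2 = 4, etc.
sol = apollonius_circle((-3, 0, 2), (6, 0, 3), (0, 7, 1))
```

Running the other seven sign combinations (s_i = ±1) recovers, when they exist, the remaining Apollonius circles that Maple's routine lists.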
Triokinase

In enzymology, a triokinase (EC 2.7.1.28) is an enzyme that catalyzes the chemical reaction

ATP + D-glyceraldehyde ⇌ ADP + D-glyceraldehyde 3-phosphate

Thus, the two substrates of this enzyme are ATP and D-glyceraldehyde, whereas its two products are ADP and D-glyceraldehyde 3-phosphate. This enzyme belongs to the family of transferases, specifically those transferring phosphorus-containing groups (phosphotransferases) with an alcohol group as acceptor. The systematic name of this enzyme class is ATP:D-glyceraldehyde 3-phosphotransferase. This enzyme is also called triose kinase, and it participates in fructose metabolism.

Glycolysis (from glycose, an older term for glucose, + -lysis, degradation) is the metabolic pathway that converts glucose (C6H12O6) into pyruvate (CH3COCOO−, pyruvic acid) and a hydrogen ion (H+). The free energy released in this process is used to form the high-energy molecules ATP (adenosine triphosphate) and NADH (reduced nicotinamide adenine dinucleotide). Glycolysis is a sequence of ten enzyme-catalyzed reactions. Most monosaccharides, such as fructose and galactose, can be converted to one of these intermediates. The intermediates may also be directly useful rather than just serving as steps in the overall reaction. For example, the intermediate dihydroxyacetone phosphate (DHAP) is a source of the glycerol that combines with fatty acids to form fat.

In chemistry, phosphorylation of a molecule is the attachment of a phosphoryl group. This process and its inverse, dephosphorylation, are critical for many cellular processes in biology. Protein phosphorylation is especially important for protein function; for example, this modification activates almost half of the enzymes present in Saccharomyces cerevisiae, thereby regulating their function. Many proteins are phosphorylated temporarily, as are many sugars, lipids, and other biologically relevant molecules.
Phosphofructokinase-1 (PFK-1) is one of the most important regulatory enzymes of glycolysis. It is an allosteric enzyme made of four subunits and controlled by many activators and inhibitors. PFK-1 catalyzes the important "committed" step of glycolysis: the conversion of fructose 6-phosphate and ATP to fructose 1,6-bisphosphate and ADP. Glycolysis is the foundation for respiration, both anaerobic and aerobic. Because phosphofructokinase (PFK) catalyzes the ATP-dependent phosphorylation that converts fructose 6-phosphate into fructose 1,6-bisphosphate and ADP, it is one of the key regulatory steps of glycolysis. PFK is able to regulate glycolysis through allosteric inhibition, and in this way the cell can increase or decrease the rate of glycolysis in response to its energy requirements. For example, a high ratio of ATP to ADP will inhibit PFK and glycolysis. The key difference between the regulation of PFK in eukaryotes and prokaryotes is that in eukaryotes PFK is activated by fructose 2,6-bisphosphate. The purpose of fructose 2,6-bisphosphate is to supersede ATP inhibition, thus allowing eukaryotes greater sensitivity to regulation by hormones like glucagon and insulin.

The Calvin cycle, also called the light-independent reactions, the biosynthetic phase, the dark reactions, or the photosynthetic carbon reduction (PCR) cycle, comprises the chemical reactions of photosynthesis that convert carbon dioxide and other compounds into glucose. These reactions occur in the stroma, the fluid-filled area of a chloroplast outside the thylakoid membranes. They take the products of the light-dependent reactions and perform further chemical processes on them. The Calvin cycle uses ATP and the reducing power NADPH from the light-dependent reactions to produce sugars for the plant to use. These substrates are used in a series of reduction-oxidation reactions to produce sugars in a step-wise process.
There is no direct reaction that converts CO2 to a sugar, because all of the energy would be lost to heat. There are three phases to the light-independent reactions, collectively called the Calvin cycle: carbon fixation, reduction reactions, and ribulose 1,5-bisphosphate (RuBP) regeneration.

Aldolase B, also known as fructose-bisphosphate aldolase B or liver-type aldolase, is one of three isoenzymes of the class I fructose 1,6-bisphosphate aldolase enzyme and plays a key role in both glycolysis and gluconeogenesis. The generic fructose 1,6-bisphosphate aldolase enzyme catalyzes the reversible cleavage of fructose 1,6-bisphosphate (FBP) into glyceraldehyde 3-phosphate and dihydroxyacetone phosphate (DHAP), as well as the reversible cleavage of fructose 1-phosphate (F1P) into glyceraldehyde and dihydroxyacetone phosphate. In mammals, aldolase B is preferentially expressed in the liver, while aldolase A is expressed in muscle and erythrocytes and aldolase C is expressed in the brain. Slight differences in isozyme structure result in different activities for the two substrate molecules, FBP and fructose 1-phosphate: aldolase B exhibits no preference and thus catalyzes both reactions, while aldolases A and C prefer FBP.

Fructokinase, also known as D-fructokinase or D-fructose (D-mannose) kinase, is an enzyme of the liver, intestine, and kidney cortex. Fructokinase belongs to a family of enzymes called transferases, meaning that it transfers functional groups; it is also considered a phosphotransferase, since it specifically transfers a phosphate group. Fructokinase specifically catalyzes the transfer of a phosphate group from adenosine triphosphate to fructose as the initial step in its utilization. The main role of fructokinase is in carbohydrate metabolism, more specifically sucrose and fructose metabolism.
The reaction equation is as follows:

ATP + D-fructose → ADP + D-fructose 1-phosphate

A number of related kinases are described in enzymology in the same way, each catalyzing an analogous ATP-dependent phosphotransfer reaction: 1-phosphofructokinase (PFK1), fucokinase, glycerate kinase, guanylate kinase, inositol-tetrakisphosphate 1-kinase, L-fuculokinase, mannokinase, N-acylmannosamine kinase, rhamnulokinase, and xylulokinase.

Fructolysis refers to the metabolism of fructose from dietary sources. Though the metabolism of glucose through glycolysis uses many of the same enzymes and intermediate structures as fructolysis, the two sugars have very different metabolic fates in human metabolism. Unlike glucose, which is metabolized widely throughout the body, fructose is almost entirely metabolized in the liver in humans, where it is directed toward replenishment of liver glycogen and triglyceride synthesis. Under one percent of ingested fructose is directly converted to plasma triglyceride. 29%–54% of fructose is converted in the liver to glucose, about a quarter is converted to lactate, and 15%–18% is converted to glycogen. Glucose and lactate are then used normally as energy to fuel cells all over the body.
Incidence (epidemiology)

In epidemiology, incidence is a measure of the probability of occurrence of a given medical condition in a population within a specified period of time. Although sometimes loosely expressed simply as the number of new cases during some time period, it is better expressed as a proportion or a rate [1] with a denominator.

Figure: evolution of weekly incidence rates of Dengue fever in Cambodia from January 2002 to December 2008.

Incidence proportion

Incidence proportion (IP), also known as cumulative incidence, is defined as the probability that a particular event, such as occurrence of a particular disease, has occurred before a given time. [2] It is calculated by dividing the number of new cases during a given period by the number of subjects initially at risk in the population at the beginning of the study. Where the period of time considered is an entire lifetime, the incidence proportion is called lifetime risk. [3] For example, if a population initially contains 1,000 persons and 28 develop a condition between the first occurrence of the disease and two years later, the cumulative incidence proportion is 28 cases per 1,000 persons, i.e. 2.8%. IP is related to the incidence rate (IR) and the duration of exposure (D) as follows: [4]

IP(t) = 1 − e^(−IR(t)·D)

Incidence rate

The incidence rate is a measure of the frequency with which a disease or other incident occurs over a specified time period. [5][6] It is also known as the incidence density rate or person-time incidence rate [7] when the denominator is the combined person-time of the population at risk (the sum of the time duration of exposure across all persons exposed). [8] In the same example as above, the incidence rate is 14 cases per 1,000 person-years, because the incidence proportion (28 per 1,000) is divided by the number of years (two).
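The worked example and the exponential relation between IP and IR can be reproduced in a few lines. An illustrative Python sketch (the function names are ours):

```python
import math

def incidence_proportion(new_cases, at_risk):
    """Cumulative incidence: new cases divided by the population initially at risk."""
    return new_cases / at_risk

def ip_from_rate(ir, duration):
    """IP(t) = 1 - exp(-IR * D), the relation quoted in the text."""
    return 1 - math.exp(-ir * duration)

def rate_from_ip(ip, duration):
    """Invert the exponential relation to recover the incidence rate."""
    return -math.log(1 - ip) / duration

ip = incidence_proportion(28, 1000)  # 0.028 over two years, as in the example
ir = rate_from_ip(ip, 2)             # ~0.0142 per person-year
simple_rate = ip / 2                 # the article's 14 per 1,000 person-years
```

For a rare outcome the simple division used in the article (IP divided by the observation time) and the exact exponential relation nearly coincide, which is why 14 per 1,000 person-years is quoted.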
Using person-time rather than just time handles situations where the amount of observation time differs between people, or when the population at risk varies with time.[9] Use of this measure implies the assumption that the incidence rate is constant over different periods of time, such that for an incidence rate of 14 per 1,000 person-years, 14 cases would be expected for 1,000 persons observed for 1 year or 50 persons observed for 20 years.[10] When this assumption is substantially violated, such as in describing survival after diagnosis of metastatic cancer, it may be more useful to present incidence data in a plot of cumulative incidence over time, taking into account loss to follow-up, using a Kaplan-Meier plot.

Incidence vs. prevalence

Incidence should not be confused with prevalence, which is the proportion of cases in the population at a given time rather than the rate of occurrence of new cases. Thus, incidence conveys information about the risk of contracting the disease, whereas prevalence indicates how widespread the disease is. Prevalence is the proportion of the total number of cases to the total population and is more a measure of the burden of the disease on society, with no regard to time at risk or to when subjects may have been exposed to a possible risk factor. Prevalence can also be measured with respect to a specific subgroup of a population (see: denominator data). Incidence is usually more useful than prevalence in understanding the disease etiology: for example, if the incidence rate of a disease in a population increases, then there is a risk factor that promotes the incidence. For example, consider a disease that takes a long time to cure and was widespread in 2002 but dissipated in 2003.
This disease will have both high incidence and high prevalence in 2002, but in 2003 it will have a low incidence yet will continue to have a high prevalence (because it takes a long time to cure, so the fraction of individuals that are affected remains high). In contrast, a disease that has a short duration may have a low prevalence and a high incidence. When the incidence is approximately constant for the duration of the disease, prevalence is approximately the product of disease incidence and average disease duration, so prevalence = incidence × duration. The importance of this equation is in the relation between prevalence and incidence; for example, when the incidence increases, then the prevalence must also increase. Note that this relation does not hold for age-specific prevalence and incidence, where the relation becomes more complicated.[11] Consider the following example. Say you are looking at a sample population of 225 people, and want to determine the incidence rate of developing HIV over a 10-year period: At the beginning of the study (t=0) you find 25 cases of existing HIV. These people are not counted as they cannot develop HIV a second time. A follow-up at 5 years (t=5 years) finds 20 new cases of HIV. A second follow-up at the end of the study (t=10 years) finds 30 new cases. If you were to measure prevalence you would simply take the total number of cases (25 + 20 + 30 = 75) and divide by your sample population (225). So prevalence would be 75/225 = 0.33 or 33% (by the end of the study). This tells you how widespread HIV is in your sample population, but little about the actual risk of developing HIV for any person over a coming year. To measure incidence you must take into account how many years each person contributed to the study, and when they developed HIV. 
When it is not known exactly when a person develops the disease in question, epidemiologists frequently use the actuarial method and assume it was developed at a half-way point between follow-ups. In this calculation: At 5 years you found 20 new cases, so you assume they developed HIV at 2.5 years, thus contributing (20 × 2.5) = 50 person-years of disease-free life. At 10 years you found 30 new cases. These people did not have HIV at 5 years, but did at 10, so you assume they were infected at 7.5 years, thus contributing (30 × 7.5) = 225 person-years of disease-free life. That is a total of (225 + 50) = 275 person-years so far. You also want to account for the 150 people who never had or developed HIV over the 10-year period, contributing (150 × 10) = 1,500 person-years of disease-free life. That is a total of (1,500 + 275) = 1,775 person-years of life. Now take the 50 new cases of HIV, and divide by 1,775 to get 0.028, or 28 cases of HIV per 1,000 person-years. In other words, if you were to follow 1,000 people for one year, you would see 28 new cases of HIV. This is a much more accurate measure of risk than prevalence. ^ "INCIDENCE - Epidemiology". Encyclopaedia Britannica. Retrieved 3 April 2020. ^ Rychetnik L, Hawe P, Waters E, Barratt A, Frommer M (July 2004). "A glossary for evidence based public health". J Epidemiol Community Health. 58 (7): 538–45. doi:10.1136/jech.2003.011585. PMC 1732833. PMID 15194712. ^ Bouyer, Jean; Hémon, Denis; Cordier, Sylvaine; Derriennic, Francis; Stücker, Isabelle; Stengel, Bénédicte; Clavel, Jacqueline (2009). Épidemiologie principes et méthodes quantitatives. Paris: Lavoisier. ^ Hargrave, Marshall. "What Does the Incidence Rate Measure?". Investopedia. Retrieved 2019-09-30. ^ Monson, Richard R. (1990-04-25). Occupational Epidemiology, Second Edition. CRC Press. p. 27. ISBN 978-0-8493-4927-0. ^ Last, John M., ed. (2001). A Dictionary of Epidemiology (4 ed.). New York, NY: Oxford University Press.
ISBN 978-0-19-514169-6. ^ "Principles of Epidemiology - Lesson 3 - Section 2". Centers for Disease Control and Prevention. 2012-05-18. Retrieved 2021-01-13. ^ Coggon D, Rose G, Barker DJ (1997). "Quantifying diseases in populations". Epidemiology for the Uninitiated (4th ed.). BMJ. ISBN 978-0-7279-1102-5. ^ Dunn, Olive Jean; Clark, Virginia A. (2009). Basic statistics: a primer for the biomedical sciences (4th ed.). Hoboken, N.J.: John Wiley & Sons. pp. 3–5. ISBN 9780470496855. Retrieved 9 May 2016. ^ Brinks R (2011). "A new method for deriving incidence rates from prevalence data and its application to dementia in Germany". arXiv:1112.2720.
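The worked HIV example above (actuarial midpoint assumption, follow-ups at 5 and 10 years) can be reproduced with a short stdlib-only Python sketch; the variable names are mine:

```python
# Actuarial (midpoint) person-time calculation for the worked HIV example.
# Cases found at each follow-up are assumed to have converted at the
# midpoint of the preceding interval.

new_cases = [(20, 2.5), (30, 7.5)]  # (count, assumed disease-free years each)
never_infected = 150                 # people HIV-free for the full 10 years

person_years = sum(n * t for n, t in new_cases) + never_infected * 10
total_cases = sum(n for n, _ in new_cases)

rate = total_cases / person_years    # cases per person-year
print(person_years)                  # 1775.0 person-years
print(round(rate * 1000))            # 28 cases per 1,000 person-years
```

Note that the 25 prevalent cases at t = 0 are excluded entirely, exactly as in the text, since they contribute no disease-free person-time.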
$$\lim_{x \to 9} \frac{\sqrt{x} - 3}{x - 9}$$

You could multiply the numerator and denominator by the conjugate of the numerator. Or you could recognize that this is Ana's Definition of the Derivative,

$$f'(a) = \lim_{x \to a} \frac{f(x) - f(a)}{x - a},$$

with $f(x) = \sqrt{x}$ and $a = 9$.

$$\lim_{h \to 0} \frac{\sqrt{2 + h} - \sqrt{2}}{h}$$

This is Hana's Definition of the Derivative,

$$f'(a) = \lim_{h \to 0} \frac{f(a + h) - f(a)}{h},$$

with $f(x) = \sqrt{x}$ and $a = 2$.

$$\lim_{x \to \infty} \frac{2\sqrt{x} + 1}{5 - \sqrt{x}}$$

This limit is approaching infinity. What is the end behavior? Is there a horizontal asymptote? Compare the highest-power terms in the numerator and denominator, and pay attention to coefficients:

$$\lim_{x \to \infty} \frac{2\sqrt{x} + 1}{5 - \sqrt{x}} = -2$$

$$\lim_{x \to \infty} \cos x$$

Visualize the graph of $y = \cos x$. It oscillates as $x \to \infty$, so the limit does not exist.
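As a sketch of the conjugate approach for the first limit:

```latex
\lim_{x \to 9} \frac{\sqrt{x} - 3}{x - 9}
  = \lim_{x \to 9} \frac{(\sqrt{x} - 3)(\sqrt{x} + 3)}{(x - 9)(\sqrt{x} + 3)}
  = \lim_{x \to 9} \frac{x - 9}{(x - 9)(\sqrt{x} + 3)}
  = \lim_{x \to 9} \frac{1}{\sqrt{x} + 3}
  = \frac{1}{6}
```

As a check, the derivative route gives the same answer: for $f(x) = \sqrt{x}$, $f'(x) = \frac{1}{2\sqrt{x}}$, so $f'(9) = \frac{1}{6}$.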
Set up the equation to solve. You only need to plug in the numbers, not solve for the particular values! What is the present value...

Compound interest is governed by

$$A = P\left(1 + \frac{r}{n}\right)^{nt},$$

where $A$ is the final amount, $P$ is the principal, $r$ is the annual interest rate, and $n$ is the number of compounding periods per year: $n = 365$ for compounding daily, $n = 52$ for compounding weekly, and $n = 12$ for compounding monthly. As a result, the exponent $nt$ counts the total number of compounding periods over $t$ years, and the rate per period is $r/n$. For example, with $t = 7$ years at $r = 6\%$ compounded monthly, $nt = 12 \cdot 7 = 84$ and $r/n = 0.06/12 = 0.005$.

For continuous compounding, $A = Pe^{rt}$, where $t = 0$ corresponds to the initial deposit $P$.

For $P = \$3000$ at $4.5\%$ over $6$ years with $12$ compounding periods per year:

$$A = P\left(1 + \frac{r}{n}\right)^{nt} = 3000\left(1 + \frac{0.045}{12}\right)^{12 \cdot 6}.$$

With continuous compounding instead:

$$A = Pe^{rt} = 3000e^{0.045 \cdot 6}.$$
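The two set-ups above can be evaluated with a small Python sketch (the function names are mine, not part of the original exercise):

```python
import math

def compound(P, r, n, t):
    """Amount after t years at annual rate r, compounded n times per year."""
    return P * (1 + r / n) ** (n * t)

def compound_continuous(P, r, t):
    """Amount after t years at annual rate r with continuous compounding."""
    return P * math.exp(r * t)

# $3000 at 4.5% for 6 years, monthly vs. continuous compounding:
monthly = compound(3000, 0.045, 12, 6)
continuous = compound_continuous(3000, 0.045, 6)
print(round(monthly, 2), round(continuous, 2))
```

Continuous compounding always yields slightly more than monthly compounding at the same nominal rate, since $e^{rt}$ is the limit of $(1 + r/n)^{nt}$ as $n \to \infty$.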
Analysis of parallel algorithms - Wikipedia

In computer science, the analysis of parallel algorithms is the process of finding the computational complexity of algorithms executed in parallel – the amount of time, storage, or other resources needed to execute them. In many respects, analysis of parallel algorithms is similar to the analysis of sequential algorithms, but is generally more involved because one must reason about the behavior of multiple cooperating threads of execution. One of the primary goals of parallel analysis is to understand how a parallel algorithm's use of resources (speed, space, etc.) changes as the number of processors is changed.

A so-called work-time (WT) (sometimes called work-depth, or work-span) framework was originally introduced by Shiloach and Vishkin[1] for conceptualizing and describing parallel algorithms. In the WT framework, a parallel algorithm is first described in terms of parallel rounds. For each round, the operations to be performed are characterized, but several issues can be suppressed. For example, the number of operations at each round need not be clear, processors need not be mentioned and any information that may help with the assignment of processors to jobs need not be accounted for. Second, the suppressed information is provided. The inclusion of the suppressed information is guided by the proof of a scheduling theorem due to Brent,[2] which is explained later in this article. The WT framework is useful since while it can greatly simplify the initial description of a parallel algorithm, inserting the details suppressed by that initial description is often not very difficult.
For example, the WT framework was adopted as the basic presentation framework in the parallel algorithms books (for the parallel random-access machine (PRAM) model)[3][4] as well as in the class notes.[5] The overview below explains how the WT framework can be used for analyzing more general parallel algorithms, even when their description is not available within the WT framework.

The depth or span is the length of the longest series of operations that have to be performed sequentially due to data dependencies (the critical path). The depth may also be called the critical path length of the computation.[7] Minimizing the depth/span is important in designing parallel algorithms, because the depth/span determines the shortest possible execution time.[8] Alternatively, the span can be defined as the time T∞ spent computing using an idealized machine with an infinite number of processors.[9]

Speedup is the gain in speed made by parallel execution compared to sequential execution: Sp = T1 / Tp. When the speedup is Ω(n) for input size n (using big O notation), the speedup is linear, which is optimal in simple models of computation because the work law implies that T1 / Tp ≤ p (super-linear speedup can occur in practice due to memory hierarchy effects). The situation T1 / Tp = p is called perfect linear speedup.[9] An algorithm that exhibits linear speedup is said to be scalable.[6] Efficiency is the speedup per processor, Sp / p.[6] Parallelism is the ratio T1 / T∞. It represents the maximum possible speedup on any number of processors. By the span law, the parallelism bounds the speedup: if p > T1 / T∞, then

$$\frac{T_1}{T_p} \leq \frac{T_1}{T_\infty} < p.$$

The slackness is T1 / (pT∞).
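The quantities just defined (speedup, efficiency, parallelism, slackness) can be computed directly; the numbers below are a hypothetical workload chosen to satisfy the work and span laws, not measurements from any real algorithm:

```python
def parallel_metrics(T1, Tp, Tinf, p):
    """Work-span metrics: T1 = work (sequential time), Tp = time on p
    processors, Tinf = span (critical path length)."""
    speedup = T1 / Tp            # Sp = T1 / Tp
    efficiency = speedup / p     # Sp / p, at most 1 by the work law
    parallelism = T1 / Tinf      # max possible speedup on any processor count
    slackness = T1 / (p * Tinf)  # < 1 rules out perfect linear speedup
    return speedup, efficiency, parallelism, slackness

# Hypothetical: work 1000, span 10, measured time 150 on 8 processors.
s, e, par, sl = parallel_metrics(1000, 150, 10, 8)
print(s, e, par, sl)
```

Note that the inputs respect the laws quoted in the text: Tp = 150 ≥ T1/p = 125 (work law) and Tp ≥ T∞ = 10 (span law), so the resulting efficiency is below 1.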
A slackness less than one implies (by the span law) that perfect linear speedup is impossible on p processors.[9]

Execution on a limited number of processors

Analysis of parallel algorithms is usually carried out under the assumption that an unbounded number of processors is available. This is unrealistic, but not a problem, since any computation that can run in parallel on N processors can be executed on p < N processors by letting each processor execute multiple units of work. A result called Brent's law states that one can perform such a "simulation" in time Tp, bounded by[10]

$$T_p \leq T_N + \frac{T_1 - T_N}{p},$$

or, less precisely,

$$T_p = O\left(T_N + \frac{T_1}{p}\right).$$

In terms of the work T1 and span T∞, this gives

$$\frac{T_1}{p} \leq T_p \leq \frac{T_1}{p} + T_\infty.$$

^ Shiloach, Yossi; Vishkin, Uzi (1982). "An O(n2 log n) parallel max-flow algorithm". Journal of Algorithms. 3 (2): 128–146. doi:10.1016/0196-6774(82)90013-X. ^ a b Brent, Richard P. (1974-04-01). "The Parallel Evaluation of General Arithmetic Expressions". Journal of the ACM. 21 (2): 201–206. CiteSeerX 10.1.1.100.9361. doi:10.1145/321812.321815. ISSN 0004-5411. S2CID 16416106. ^ JaJa, Joseph (1992). An Introduction to Parallel Algorithms. Addison-Wesley. ISBN 978-0-201-54856-3. ^ Keller, Jorg; Kessler, Cristoph W.; Traeff, Jesper L. (2001). Practical PRAM Programming. Wiley-Interscience. ISBN 978-0-471-35351-5. ^ Vishkin, Uzi (2009). Thinking in Parallel: Some Basic Data-Parallel Algorithms and Techniques, 104 pages (PDF). Class notes of courses on parallel algorithms taught since 1992 at the University of Maryland, College Park, Tel Aviv University and the Technion. ^ a b c d e f Casanova, Henri; Legrand, Arnaud; Robert, Yves (2008). Parallel Algorithms. CRC Press. p. 10. CiteSeerX 10.1.1.466.8142. ^ Blelloch, Guy (1996). "Programming Parallel Algorithms" (PDF). Communications of the ACM. 39 (3): 85–97. CiteSeerX 10.1.1.141.5884. doi:10.1145/227234.227246. S2CID 12118850.
^ Michael McCool; James Reinders; Arch Robison (2013). Structured Parallel Programming: Patterns for Efficient Computation. Elsevier. pp. 4–5. ^ a b c d e f Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2009) [1990]. Introduction to Algorithms (3rd ed.). MIT Press and McGraw-Hill. pp. 779–784. ISBN 0-262-03384-4. ^ Gustafson, John L. (2011). "Brent's Theorem". Encyclopedia of Parallel Computing. pp. 182–185. doi:10.1007/978-0-387-09766-4_80. ISBN 978-0-387-09765-7.
Restricted Boltzmann machine - Wikipedia

Class of artificial neural network. (Figure: Diagram of a restricted Boltzmann machine with three visible units and four hidden units (no bias units).)

RBMs were initially invented under the name Harmonium by Paul Smolensky in 1986,[1] and rose to prominence after Geoffrey Hinton and collaborators invented fast learning algorithms for them in the mid-2000s. RBMs have found applications in dimensionality reduction,[2] classification,[3] collaborative filtering,[4] feature learning,[5] topic modelling[6] and even many-body quantum mechanics.[7][8] They can be trained in either supervised or unsupervised ways, depending on the task.

As their name implies, RBMs are a variant of Boltzmann machines, with the restriction that their neurons must form a bipartite graph: a pair of nodes from each of the two groups of units (commonly referred to as the "visible" and "hidden" units respectively) may have a symmetric connection between them; and there are no connections between nodes within a group. By contrast, "unrestricted" Boltzmann machines may have connections between hidden units. This restriction allows for more efficient training algorithms than are available for the general class of Boltzmann machines, in particular the gradient-based contrastive divergence algorithm.[9]

Restricted Boltzmann machines can also be used in deep learning networks. In particular, deep belief networks can be formed by "stacking" RBMs and optionally fine-tuning the resulting deep network with gradient descent and backpropagation.[10]

The standard type of RBM has binary-valued (Boolean) hidden and visible units, and consists of a matrix of weights $W$ of size $m \times n$. Each weight element $w_{i,j}$ of the matrix is associated with the connection between the visible (input) unit $v_i$ and the hidden unit $h_j$.
In addition, there are bias weights (offsets) $a_i$ for the visible units and $b_j$ for the hidden units. Given the weights and biases, the energy of a configuration (pair of Boolean vectors) (v, h) is defined as

$$E(v,h) = -\sum_i a_i v_i - \sum_j b_j h_j - \sum_i \sum_j v_i w_{i,j} h_j,$$

or, in matrix notation,

$$E(v,h) = -a^{\mathrm{T}} v - b^{\mathrm{T}} h - v^{\mathrm{T}} W h.$$

This energy function is analogous to that of a Hopfield network. As with general Boltzmann machines, the joint probability distribution for the visible and hidden vectors is defined in terms of the energy function as follows,[11]

$$P(v,h) = \frac{1}{Z} e^{-E(v,h)},$$

where $Z$ is a partition function defined as the sum of $e^{-E(v,h)}$ over all possible configurations; it can be interpreted as a normalizing constant ensuring that the probabilities sum to 1. The marginal probability of a visible vector is the sum of $P(v,h)$ over all possible hidden layer configurations,[11]

$$P(v) = \frac{1}{Z} \sum_{\{h\}} e^{-E(v,h)},$$

and vice versa. Since the underlying graph structure of the RBM is bipartite (meaning there are no intra-layer connections), the hidden unit activations are mutually independent given the visible unit activations.
Conversely, the visible unit activations are mutually independent given the hidden unit activations.[9] That is, for m visible units and n hidden units, the conditional probability of a configuration of the visible units v, given a configuration of the hidden units h, is

$$P(v|h) = \prod_{i=1}^{m} P(v_i|h).$$

Conversely, the conditional probability of h given v is

$$P(h|v) = \prod_{j=1}^{n} P(h_j|v).$$

The individual activation probabilities are given by

$$P(h_j = 1|v) = \sigma\left(b_j + \sum_{i=1}^{m} w_{i,j} v_i\right) \quad \text{and} \quad P(v_i = 1|h) = \sigma\left(a_i + \sum_{j=1}^{n} w_{i,j} h_j\right),$$

where $\sigma$ denotes the logistic sigmoid.

The visible units of a restricted Boltzmann machine can be multinomial, while the hidden units remain Bernoulli. In this case, the logistic function for visible units is replaced by the softmax function

$$P(v_i^k = 1|h) = \frac{\exp\left(a_i^k + \sum_j W_{ij}^k h_j\right)}{\sum_{k'=1}^{K} \exp\left(a_i^{k'} + \sum_j W_{ij}^{k'} h_j\right)},$$

where K is the number of discrete values that the visible values have.
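The factorized conditional probabilities above can be sketched in plain Python. The weights and layer sizes below are a hypothetical toy example, chosen only to exercise the formulas:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def hidden_probs(v, W, b):
    """P(h_j = 1 | v) = sigma(b_j + sum_i w_ij * v_i) for each hidden unit j."""
    return [sigmoid(b[j] + sum(W[i][j] * v[i] for i in range(len(v))))
            for j in range(len(b))]

def visible_probs(h, W, a):
    """P(v_i = 1 | h) = sigma(a_i + sum_j w_ij * h_j) for each visible unit i."""
    return [sigmoid(a[i] + sum(W[i][j] * h[j] for j in range(len(h))))
            for i in range(len(a))]

def sample(probs, rng=random):
    """Draw a Boolean vector from independent Bernoulli probabilities."""
    return [1 if rng.random() < p else 0 for p in probs]

# Toy RBM: 3 visible units, 2 hidden units, hypothetical weights W[i][j].
W = [[0.5, -0.2], [0.1, 0.3], [-0.4, 0.8]]
a, b = [0.0, 0.0, 0.0], [0.1, -0.1]
print(hidden_probs([1, 0, 1], W, b))
```

Because the graph is bipartite, each list comprehension treats the units of one layer as independent given the other layer, which is exactly the conditional-independence property stated in the text.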
They are applied in topic modeling[6] and recommender systems.[4]

Relation to other models

Restricted Boltzmann machines are a special case of Boltzmann machines and Markov random fields.[12][13] Their graphical model corresponds to that of factor analysis.[14]

Training algorithm

Restricted Boltzmann machines are trained to maximize the product of probabilities assigned to some training set $V$ (a matrix, each row of which is treated as a visible vector $v$):

$$\arg\max_W \prod_{v \in V} P(v),$$

or equivalently, to maximize the expected log probability of a training sample $v$ selected randomly from $V$:

$$\arg\max_W \mathbb{E}\left[\log P(v)\right].$$

The algorithm most often used to train RBMs, that is, to optimize the weight matrix $W$, is the contrastive divergence (CD) algorithm due to Hinton, originally developed to train PoE (product of experts) models.[15][16] The algorithm performs Gibbs sampling and is used inside a gradient descent procedure (similar to the way backpropagation is used inside such a procedure when training feedforward neural nets) to compute the weight update. The basic, single-step contrastive divergence (CD-1) procedure for a single sample can be summarized as follows:

Take a training sample v, compute the probabilities of the hidden units and sample a hidden activation vector h from this probability distribution.
Compute the outer product of v and h and call this the positive gradient.
From h, sample a reconstruction v' of the visible units, then resample the hidden activations h' from this (Gibbs sampling step).
Compute the outer product of v' and h' and call this the negative gradient.
Let the update to the weight matrix $W$ be the positive gradient minus the negative gradient, times some learning rate:

$$\Delta W = \epsilon (vh^{\mathsf{T}} - v'h'^{\mathsf{T}}).$$

Update the biases a and b analogously:

$$\Delta a = \epsilon (v - v'), \qquad \Delta b = \epsilon (h - h').$$

A Practical Guide to Training RBMs written by Hinton can be found on his homepage.[11]

Stacked Restricted Boltzmann Machine

The difference between a stacked Boltzmann machine and an RBM is that in an RBM, lateral connections within a layer are prohibited to make analysis tractable. A stacked Boltzmann machine, on the other hand, combines an unsupervised three-layer network with symmetric weights and a supervised, fine-tuned top layer for recognizing three classes. Stacked Boltzmann machines are used to understand natural language, retrieve documents, generate images, and classify; these functions are trained with unsupervised pre-training and/or supervised fine-tuning. Unlike the undirected, symmetric top layer, the RBM's connection is a two-way, asymmetric layer; its connections form three layers with asymmetric weights, and two networks are combined into one. The stacked Boltzmann machine does share similarities with the RBM: its neuron is a stochastic binary Hopfield neuron, the same as in the restricted Boltzmann machine. The energy of both the stacked Boltzmann machine and the RBM is given by Gibbs' probability measure:

$$E = -\frac{1}{2}\sum_{i,j} w_{ij} s_i s_j + \sum_i \theta_i s_i.$$

The training process of the stacked machine is similar to that of an RBM: it trains one layer at a time and approximates the equilibrium state with a 3-segment pass, not performing backpropagation. It uses both supervised and unsupervised training on different RBMs for pre-training for classification and recognition.
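The CD-1 procedure above (positive phase, one Gibbs step, negative phase, gradient update) can be sketched end-to-end in plain Python. This is a minimal toy illustration, not Hinton's reference implementation; following a common practical variant, it uses hidden probabilities rather than binary samples in the gradient terms:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def hidden_probs(v, W, b):
    return [sigmoid(b[j] + sum(W[i][j] * v[i] for i in range(len(v))))
            for j in range(len(b))]

def visible_probs(h, W, a):
    return [sigmoid(a[i] + sum(W[i][j] * h[j] for j in range(len(h))))
            for i in range(len(a))]

def bernoulli(ps):
    return [1 if random.random() < p else 0 for p in ps]

def cd1_step(v, W, a, b, lr=0.1):
    """One CD-1 update: positive phase, one Gibbs step, negative phase."""
    ph = hidden_probs(v, W, b)              # positive phase
    h = bernoulli(ph)
    v_recon = bernoulli(visible_probs(h, W, a))  # Gibbs step: reconstruct v'
    ph_recon = hidden_probs(v_recon, W, b)       # negative phase: h' probs
    # Delta W = lr * (v h^T - v' h'^T), with probabilities in place of samples
    for i in range(len(v)):
        for j in range(len(b)):
            W[i][j] += lr * (v[i] * ph[j] - v_recon[i] * ph_recon[j])
    for i in range(len(v)):                 # Delta a = lr * (v - v')
        a[i] += lr * (v[i] - v_recon[i])
    for j in range(len(b)):                 # Delta b = lr * (h - h') in probs
        b[j] += lr * (ph[j] - ph_recon[j])

# Toy RBM: 3 visible, 2 hidden, zero-initialized parameters.
W = [[0.0, 0.0], [0.0, 0.0], [0.0, 0.0]]
a, b = [0.0, 0.0, 0.0], [0.0, 0.0]
cd1_step([1, 0, 1], W, a, b)
```

With all-zero initial weights every hidden probability is exactly 0.5 in both phases, so the first hidden-bias update is zero; the weight and visible-bias updates depend only on the sampled reconstruction.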
The training uses contrastive divergence with Gibbs sampling:

$$\Delta w_{ij} = \epsilon (p_{ij} - p'_{ij}).$$

The stacked machine's strength is that it performs a non-linear transformation, so it is easy to expand and can give a hierarchical layer of features. Its weakness is the complicated calculations required for integer and real-valued neurons. It does not follow the gradient of any function, so the approximation of contrastive divergence to maximum likelihood is improvised.[11]

Fischer, Asja; Igel, Christian (2012), "An Introduction to Restricted Boltzmann Machines", Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications, Berlin, Heidelberg: Springer Berlin Heidelberg, pp. 14–36, retrieved 2021-09-19 ^ Smolensky, Paul (1986). "Chapter 6: Information Processing in Dynamical Systems: Foundations of Harmony Theory" (PDF). In Rumelhart, David E.; McLelland, James L. (eds.). Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume 1: Foundations. MIT Press. pp. 194–281. ISBN 0-262-68053-X. ^ Hinton, G. E.; Salakhutdinov, R. R. (2006). "Reducing the Dimensionality of Data with Neural Networks" (PDF). Science. 313 (5786): 504–507. Bibcode:2006Sci...313..504H. doi:10.1126/science.1127647. PMID 16873662. S2CID 1658773. ^ Larochelle, H.; Bengio, Y. (2008). Classification using discriminative restricted Boltzmann machines (PDF). Proceedings of the 25th international conference on Machine learning - ICML '08. p. 536. doi:10.1145/1390156.1390224. ISBN 9781605582054. ^ a b Salakhutdinov, R.; Mnih, A.; Hinton, G. (2007). Restricted Boltzmann machines for collaborative filtering. Proceedings of the 24th international conference on Machine learning - ICML '07. p. 791. doi:10.1145/1273496.1273596. ISBN 9781595937933. ^ Coates, Adam; Lee, Honglak; Ng, Andrew Y. (2011). An analysis of single-layer networks in unsupervised feature learning (PDF). International Conference on Artificial Intelligence and Statistics (AISTATS).
^ a b Ruslan Salakhutdinov and Geoffrey Hinton (2010). Replicated softmax: an undirected topic model. Neural Information Processing Systems 23. ^ Carleo, Giuseppe; Troyer, Matthias (2017-02-10). "Solving the quantum many-body problem with artificial neural networks". Science. 355 (6325): 602–606. arXiv:1606.02318. Bibcode:2017Sci...355..602C. doi:10.1126/science.aag2302. ISSN 0036-8075. PMID 28183973. S2CID 206651104. ^ Melko, Roger G.; Carleo, Giuseppe; Carrasquilla, Juan; Cirac, J. Ignacio (September 2019). "Restricted Boltzmann machines in quantum physics". Nature Physics. 15 (9): 887–892. Bibcode:2019NatPh..15..887M. doi:10.1038/s41567-019-0545-1. ISSN 1745-2481. ^ a b Miguel Á. Carreira-Perpiñán and Geoffrey Hinton (2005). On contrastive divergence learning. Artificial Intelligence and Statistics. ^ Hinton, G. (2009). "Deep belief networks". Scholarpedia. 4 (5): 5947. Bibcode:2009SchpJ...4.5947H. doi:10.4249/scholarpedia.5947. ^ a b c d Geoffrey Hinton (2010). A Practical Guide to Training Restricted Boltzmann Machines. UTML TR 2010–003, University of Toronto. ^ a b Sutskever, Ilya; Tieleman, Tijmen (2010). "On the convergence properties of contrastive divergence" (PDF). Proc. 13th Int'l Conf. On AI and Statistics (AISTATS). Archived from the original (PDF) on 2015-06-10. ^ a b Asja Fischer and Christian Igel. Training Restricted Boltzmann Machines: An Introduction Archived 2015-06-10 at the Wayback Machine. Pattern Recognition 47, pp. 25-39, 2014 ^ María Angélica Cueto; Jason Morton; Bernd Sturmfels (2010). "Geometry of the restricted Boltzmann machine". Algebraic Methods in Statistics and Probability. American Mathematical Society. 516. arXiv:0908.4425. Bibcode:2009arXiv0908.4425A. ^ Geoffrey Hinton (1999). Products of Experts. ICANN 1999. ^ Hinton, G. E. (2002). "Training Products of Experts by Minimizing Contrastive Divergence" (PDF). Neural Computation. 14 (8): 1771–1800. doi:10.1162/089976602760128018. PMID 12180402. S2CID 207596505. 
Introduction to Restricted Boltzmann Machines. Edwin Chen's blog, July 18, 2011. "A Beginner's Guide to Restricted Boltzmann Machines". Archived from the original on February 11, 2017. Retrieved November 15, 2018. Deeplearning4j Documentation. "Understanding RBMs". Archived from the original on September 20, 2016. Retrieved December 29, 2014. Deeplearning4j Documentation. Python implementation of Bernoulli RBM and tutorial. SimpleRBM is a very small RBM code (24kB) useful for learning about how RBMs learn and work. Julia implementation of restricted Boltzmann machines: https://github.com/cossio/RestrictedBoltzmannMachines.jl
Tryptophanase - WikiMili, The Free Encyclopedia

(Figure: Tryptophanase tetramer, E. coli.)

In enzymology, a tryptophanase (EC 4.1.99.1) is an enzyme that catalyzes the chemical reaction

L-tryptophan + H2O ⇌ indole + pyruvate + NH3

Thus, the two substrates of this enzyme are L-tryptophan and H2O, whereas its 3 products are indole, pyruvate, and NH3.

The Enzyme Commission number is a numerical classification scheme for enzymes, based on the chemical reactions they catalyze. As a system of enzyme nomenclature, every EC number is associated with a recommended name for the respective enzyme. A chemical reaction is a process that leads to the chemical transformation of one set of chemical substances to another. Classically, chemical reactions encompass changes that only involve the positions of electrons in the forming and breaking of chemical bonds between atoms, with no change to the nuclei, and can often be described by a chemical equation. Nuclear chemistry is a sub-discipline of chemistry that involves the chemical reactions of unstable and radioactive elements where both electronic and nuclear changes can occur.

Products are the species formed from chemical reactions. During a chemical reaction, reactants are transformed into products after passing through a high-energy transition state. This process results in the consumption of the reactants. It can be a spontaneous reaction or mediated by catalysts, which lower the energy of the transition state, and by solvents, which provide the chemical environment necessary for the reaction to take place. When represented in chemical equations, products are by convention drawn on the right-hand side, even in the case of reversible reactions. The properties of products, such as their energies, help determine several characteristics of a chemical reaction, such as whether the reaction is exergonic or endergonic.
Additionally, the properties of a product can make it easier to extract and purify following a chemical reaction, especially if the product has a different state of matter than the reactants. Reactants are the molecular materials consumed to drive a chemical reaction; atoms are neither created nor destroyed, but are rearranged during the reaction. For example, CH4 + O2 are reactants in combustion, whereas CO2 + H2O (the products) and "energy" are not.

Indole is an aromatic heterocyclic organic compound with formula C8H7N. It has a bicyclic structure, consisting of a six-membered benzene ring fused to a five-membered pyrrole ring. Indole is widely distributed in the natural environment and can be produced by a variety of bacteria. As an intercellular signal molecule, indole regulates various aspects of bacterial physiology, including spore formation, plasmid stability, resistance to drugs, biofilm formation, and virulence. The amino acid tryptophan is an indole derivative and the precursor of the neurotransmitter serotonin.

This enzyme belongs to the family of lyases, specifically the "catch-all" class of carbon-carbon lyases. The systematic name of this enzyme class is L-tryptophan indole-lyase (deaminating; pyruvate-forming). Other names in common use include L-tryptophanase and L-tryptophan indole-lyase (deaminating). This enzyme participates in tryptophan metabolism and nitrogen metabolism. It has two cofactors: pyridoxal phosphate and potassium.

In biochemistry, a lyase is an enzyme that catalyzes the breaking of various chemical bonds by means other than hydrolysis and oxidation, often forming a new double bond or a new ring structure; the reverse reaction is also possible. A cofactor is a non-protein chemical compound or metallic ion that is required for an enzyme's activity as a catalyst, a substance that increases the rate of a chemical reaction.
Cofactors can be considered "helper molecules" that assist in biochemical transformations; the rates at which these happen are characterized in an area of study called enzyme kinetics. As of late 2007, 3 structures have been solved for this class of enzymes, with PDB accession codes 1AX4, 2C44, and 2OQX.

The Protein Data Bank (PDB) is a database for the three-dimensional structural data of large biological molecules, such as proteins and nucleic acids. The data, typically obtained by X-ray crystallography, NMR spectroscopy, or, increasingly, cryo-electron microscopy, and submitted by biologists and biochemists from around the world, are freely accessible on the Internet via the websites of its member organisations. The PDB is overseen by an organization called the Worldwide Protein Data Bank, wwPDB.
Analysis and Modifications of Turbulence Models for Wind Turbine Wake Simulations in Atmospheric Boundary Layers | J. Sol. Energy Eng. | ASME Digital Collection. Enrico G. A. Antonini (e-mail: enrico.antonini@mail.utoronto.ca), D. A. Romero (e-mail: d.romero@utoronto.ca), and C. H. Amon. Contributed by the Solar Energy Division of ASME for publication in the JOURNAL OF SOLAR ENERGY ENGINEERING: INCLUDING WIND ENERGY AND BUILDING ENERGY CONSERVATION. Manuscript received January 9, 2017; final manuscript received February 13, 2018; published online March 13, 2018. Assoc. Editor: Douglas Cairns. Antonini, E. G. A., Romero, D. A., and Amon, C. H. (March 13, 2018). "Analysis and Modifications of Turbulence Models for Wind Turbine Wake Simulations in Atmospheric Boundary Layers." ASME. J. Sol. Energy Eng. June 2018; 140(3): 031007. https://doi.org/10.1115/1.4039377 Computational fluid dynamics (CFD) simulations of wind turbine wakes are strongly influenced by the choice of the turbulence model used to close the Reynolds-averaged Navier-Stokes (RANS) equations. A wrong choice can lead to incorrect predictions of the velocity field characterizing the wind turbine wake and, consequently, to an incorrect power estimation for wind turbines operating downstream. This study investigates the influence of different turbulence models, namely the k–ε, k–ω, SST k–ω, and Reynolds stress models (RSM), on the results of CFD wind turbine simulations. Their influence was evaluated by comparing the CFD results with the publicly available experimental measurements of the velocity field and turbulence quantities from the Sexbierum and Nibe wind farms. Consistent turbulence model constants were proposed for atmospheric boundary layer (ABL) and wake flows according to previous literature and appropriate experimental observations, and modifications of the derived turbulence model constants were also investigated in order to improve agreement with experimental data.
The results showed that the simulations using the k–ε and k–ω turbulence models consistently overestimated the velocity and turbulence quantities in the wind turbine wakes, whereas the simulations using the shear-stress transport (SST) k–ω model and the RSM could accurately match the experimental data. Results also showed that the predictions from the k–ε and k–ω turbulence models could be improved by using the modified set of turbulence coefficients. Keywords: Clean energy, Energy, Fluid flow, Renewable, Simulation, Wind, Wind turbine, Boundary layers, Turbulence, Wakes, Wind turbines, Flow (Dynamics), Engineering simulation, Stress, Wind velocity, Wind farms
Green Book Exhibit 3-3 "Decision Sight Distance"
Green Book Exhibit 3-4 "Elements of Passing Sight Distance for Two-Lane Highways"
Green Book Exhibit 3-7 "Passing Sight Distance for Design of Two-Lane Highways"
Green Book Exhibit 3-74 "Design Controls for Sag Vertical Curves-Open Road Conditions"
Green Book Exhibit 3-75 "Design Controls for Sag Vertical Curves"
Sight distance is the length of the roadway ahead that is visible to the driver. Sight distance is an element of design that affects the safe and efficient operation of a roadway, and it is given careful consideration during the location study and preparation of the preliminary plan. Stopping sight distance, based on the anticipated posted speed, is the sum of the distance traveled during brake reaction and the braking distance required for a driver to stop the vehicle after sighting an object on the roadway. Passing sight distance, based on the anticipated posted speed, is the minimum distance required to safely make a normal passing maneuver on two-lane roadways at passing speeds representative of nearly all drivers. Operational sight distance is a portion of the passing sight distance and is the minimum distance necessary for safe passing at the prevailing speed of traffic (85th percentile speed). Operational sight distance is used by the Traffic Division in establishing no-passing zones by marking yellow lines on the roadways. Minimum design controls have been established for stopping and passing sight distances. Consideration for the design of a longer vertical curve to provide for operational sight distance is based on good engineering judgment and economy. The minimum stopping sight distances and “K” factors for various anticipated operating speeds are given in the Green Book. These controls are based on a 3.5 ft. height of eye and a 2.0 ft. height of object.
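The stopping sight distance controls referenced above can be reproduced from the standard AASHTO relationship. This is a sketch only: the 2.5 s brake reaction time and 11.2 ft/s² deceleration rate are AASHTO's customary design assumptions, not values stated in this guide.

```python
def stopping_sight_distance(speed_mph, t_reaction_s=2.5, decel_fps2=11.2):
    """AASHTO stopping sight distance (ft): brake reaction distance plus
    braking distance, SSD = 1.47*V*t + 1.075*V^2/a (V in mph, a in ft/s^2)."""
    reaction = 1.47 * speed_mph * t_reaction_s   # 1.47 converts mph to ft/s
    braking = 1.075 * speed_mph**2 / decel_fps2
    return reaction + braking

for v in (30, 45, 60):
    print(v, "mph ->", round(stopping_sight_distance(v)), "ft")
```

The computed values are then rounded up to the design values tabulated in the Green Book.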
The “K” factors are approximate only and are used as a guide in determining the length of the vertical curve. K values are a measure of passenger comfort. Use of small K values is not recommended. The stopping sight distance, as determined by formula, is used as the final control. Where practical, vertical curves at least 300 ft. in length are used. Exhibit 3-75 of the AASHTO Green Book is used to determine the length of a vertical curve required for any SSD based on change in grade. Decision sight distance is used where the stopping sight distance is inadequate to allow a reasonably competent driver the distance needed to react to potentially hazardous situations. This condition may be present where the roadway environment is visually cluttered, at an intersection congested with traffic, at a median crossover, or where the roadway has an unusual geometric configuration. In decision areas, the decision sight distance gives a greater margin for error and provides the distance to maneuver a vehicle safely. See Exhibit 3-3 of the AASHTO Green Book for decision sight distance values. Exhibit 3-7 of the AASHTO Green Book can be used to determine passing sight distances for various speeds. These values are based on a 3.5 ft. height of eye and a 3.5 ft. height of object. Horizontal alignment is also considered in determining the location, extent, and percentage of passing distances. If passing maneuvers are to be performed on upgrades under the same assumptions about the behavior of the passing and the passed vehicles, the passing sight distance should be greater than the derived design values. Specific adjustments for design use are not available; however, the designer should recognize the desirability of exceeding the values shown in Exhibit 3-7 where practicable. The "K" factors shown in Exhibits 3-74 and 3-75 of the Green Book are for the most part based upon headlight sight distance and are to be used in the design of sag vertical curves. Where practicable, vertical curves at least 300 ft.
in length are used. The comfort criterion for the minimum length of a sag vertical curve is

{\displaystyle L={\frac {AV^{2}}{46.5}}}

where L is the curve length in feet, A is the algebraic difference in grades in percent, and V is the design speed in mph. The profile grade in a flood plain is established to keep the roadway’s shoulder a minimum of 1 ft. above design high water. This applies to all types of roadways. The stream gradient must be considered in establishing profile grades in flood plains. The Bridge Division will determine the design high water elevation to be used in establishing the profile grade.
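The sag-curve formula above can be evaluated directly. The 4 percent grade change and 60 mph design speed below are illustrative values, not taken from this guide:

```python
def sag_curve_length_comfort(grade_diff_pct, speed_mph):
    """Minimum sag vertical curve length (ft) by the comfort criterion
    L = A * V^2 / 46.5, with A in percent and V in mph."""
    return grade_diff_pct * speed_mph**2 / 46.5

# Example: 4% algebraic grade difference at 60 mph design speed.
print(round(sag_curve_length_comfort(4.0, 60), 1))  # 4 * 3600 / 46.5 = 309.7 ft
```

The result would then be compared against the 300 ft minimum and the headlight-distance K-factor controls, with the larger governing.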
The number of internal saw cuts for a full depth pavement repair is computed as:

{\displaystyle No.\,of\,Internal\,Saw\,Cuts={\frac {Length\,of\,Pavement\,Repair(ft.)}{6}}-1}

{\displaystyle \,Round\,up\,result\,to\,the\,next\,whole\,number.}

For example, for a 14 ft. repair:

{\displaystyle No.\,of\,Internal\,Saw\,Cuts={\frac {14}{6}}\;-1=1.33\Rightarrow \;Rounds\,to\,2}
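The saw-cut computation can be sketched in a couple of lines; the 14 ft repair length reproduces the worked example above:

```python
import math

def internal_saw_cuts(repair_length_ft):
    """Number of internal saw cuts for a full depth pavement repair:
    (length / 6) - 1, rounded up to the next whole number."""
    return math.ceil(repair_length_ft / 6 - 1)

print(internal_saw_cuts(14))  # 14/6 - 1 = 1.33 -> rounds up to 2
```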
Introduction to Chemical Engineering Processes/Generalized Correlations - Wikibooks, open books for an open world Critical Constants At room temperature (about 298 K), it is possible to apply enough pressure to carbon dioxide to get it to liquefy (some fire extinguishers work by keeping liquid carbon dioxide in them under very high pressure, which rapidly vaporizes when the pressure is relieved) [1]. However, if the temperature is raised above 304.2 K, it is impossible to keep carbon dioxide in a liquid form, because it has too much kinetic energy to remain in the liquid phase. No amount of pressure can turn carbon dioxide into a liquid if the temperature is too high. This threshold temperature is called a critical temperature. Any pure stable substance (not just carbon dioxide) has a single characteristic critical temperature. Pure stable substances also have a single characteristic critical pressure, which is the pressure needed to achieve a phase transition at the critical temperature, and a critical specific volume, which is the specific volume (volume per mass) of the fluid at this temperature and pressure. Critical pressures are typically very large, ranging from 2.26 atm for helium to 218.3 atm for water [2], and about 40 atm on average. Critical temperatures typically range from 5.26 K (for helium) to the high 600s K for some aromatic compounds. A substance at a temperature higher than its critical temperature and a pressure higher than its critical pressure is called a supercritical fluid. Supercritical fluids have some properties in common with gases and some in common with liquids, as may be expected, since they are not observed to be liquid but would be expected to be liquefied at such extreme pressures.
Law of Corresponding States Recall from the last section that the compressibility of any substance (most useful for gases) is defined as: {\displaystyle Z={\frac {P*{\hat {V}}}{RT}}} The compressibility of a gas is a measure of how non-ideal it is; an ideal gas has a compressibility of 1. At the critical point, in particular, the compressibility is: {\displaystyle Z_{C}={\frac {P_{c}*{\hat {V}}_{c}}{R*T_{c}}}} Critical constants are important because it has been found experimentally that the following rule holds for many substances: many substances behave in similar manners to each other depending on how far the system conditions are from the critical temperature and pressure of the substance. In particular, the compressibility of a substance is strongly correlated to its distance from the critical conditions. It has been found experimentally that many substances have very similar compressibility at their critical point [1]. Most nonpolar substances in particular have a critical compressibility of about 0.27. The similarity of the critical compressibility between substances is what gives weight to the law of corresponding states. However, the fact that the critical compressibility is not exactly the same for all substances leads to potential estimation errors if this method is used. The critical constants make it possible to estimate the properties of a substance without gathering a large amount of data. However, it is necessary to define how the properties of the substance change as the system variables move closer to or farther from the critical point of the substance. These methods are discussed in the following sections. Compressibility Charts Recall that many substances have critical compressibility values near 0.27. Therefore, charts have been developed which relate compressibility at other conditions to the compressibility at the critical point.
In order to use these charts, the system parameters are normalized by dividing by the critical constants to yield the reduced temperature, pressure, and volume:

{\displaystyle T_{r}={\frac {T}{T_{c}}}}

{\displaystyle P_{r}={\frac {P}{P_{c}}}}

{\displaystyle {\hat {V}}_{r}={\frac {\hat {V}}{{\hat {V}}_{c}}}}

[1] How fire extinguishers work.
[2] See the Wikipedia article on critical properties.
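The normalization above, together with the critical compressibility from the previous section, can be sketched in a few lines. The CO2 critical temperature (304.2 K) appears earlier in the text; the critical pressure and critical molar volume below are standard literature values and are an assumption here:

```python
R = 0.082057   # gas constant, L*atm/(mol*K)

# CO2 critical constants (T_c from the text; P_c and V_c assumed literature values)
T_c = 304.2    # K
P_c = 72.9     # atm
V_c = 0.0940   # L/mol, critical molar volume

# Critical compressibility Z_c = P_c * V_c / (R * T_c)
Z_c = P_c * V_c / (R * T_c)
print(round(Z_c, 3))  # close to the "about 0.27" cited for nonpolar substances

# Reduced coordinates T_r = T/T_c and P_r = P/P_c at illustrative conditions
T, P = 350.0, 100.0   # K, atm
T_r, P_r = T / T_c, P / P_c
print(round(T_r, 3), round(P_r, 3))
```

With T_r and P_r in hand, Z is read off a generalized compressibility chart and the molar volume recovered as V = Z*R*T/P.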
-\int _ { 0 } ^ { 3 } x d x : Ignoring the negative for the moment, sketch and shade the area under the curve, which is a triangle with base 3 and height 3. Find the area of the triangle, then make the result negative. \int _ { 0 } ^ { 3 } (-x ) d x : The area of the triangle lies below the x-axis. Does this make the integral positive or negative? \int _ { 3 } ^ { 0 } x d x : Notice that the integrand is the same as in part (a), but the bounds have flipped, causing you to traverse the base of the triangle from right to left. This makes the area negative. -\int _ { 3 } ^ { 0 } (-x ) d x : Think through all the negatives and reversed bounds. Is there a way to rewrite this integral so it will be simpler to sketch and evaluate?
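All four integrals evaluate to the same signed area. A quick check using the antiderivative of c*x, namely c*x^2/2 (the helper name is mine, not from the exercise):

```python
def integral_cx(c, a, b):
    """Exact value of the definite integral of f(x) = c*x from a to b,
    via the antiderivative c*x**2/2."""
    return c * (b**2 - a**2) / 2

part_a = -integral_cx(1, 0, 3)    # negate the triangle's area 9/2
part_b = integral_cx(-1, 0, 3)    # area below the x-axis counts as negative
part_c = integral_cx(1, 3, 0)     # reversed bounds flip the sign
part_d = -integral_cx(-1, 3, 0)   # the two internal sign flips cancel, then negate

print(part_a, part_b, part_c, part_d)  # all four equal -4.5
```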
D meson (Knowpia) The D mesons are the lightest particles containing charm quarks. They are often studied to gain knowledge of the weak interaction.[1] The strange D mesons (Ds) were called "F mesons" prior to 1986.[2] The charged D meson has a mass of 1869.62±0.20 MeV/c2 and a mean lifetime of (1.040±0.007)×10−12 s; the strange D meson has a mass of 1968.47±0.33 MeV/c2 and a mean lifetime of (5.00±0.07)×10−13 s; both carry electric charge ±1 e. The D mesons were discovered in 1976 by the Mark I detector at the Stanford Linear Accelerator Center.[3] Since the D mesons are the lightest mesons containing a single charm quark (or antiquark), they must change the charm (anti)quark into an (anti)quark of another type to decay. Such transitions involve a change of the internal charm quantum number, and can take place only via the weak interaction. In D mesons, the charm quark preferentially changes into a strange quark via an exchange of a W particle; therefore, the D meson preferentially decays into kaons (K) and pions (π).

List of D mesons
Charged D meson[5]: mass 1869.62±0.20 MeV/c2, I = 1/2, J^P = 0−, S = 0, C = +1, B′ = 0, mean lifetime (1.040±0.007)×10−12 s [6]
Neutral D meson[7]
Strange D meson[9]: mass 1968.47±0.33 MeV/c2, I = 0, J^P = 0−, S = +1, C = +1, B′ = 0, mean lifetime (5.00±0.07)×10−13 s [10]
Excited charged D meson[11]: mass 2010.27±0.17 MeV/c2, I = 1/2, J^P = 1−, S = 0, C = +1, B′ = 0, mean lifetime (6.9±1.9)×10−21 s ‡
Excited neutral D meson[12]: mass 2006.97±0.19 MeV/c2, I = 1/2, J^P = 1−, S = 0, C = +1, B′ = 0, mean lifetime >3.1×10−22 s ‡

‡ PDG reports the resonance width {\displaystyle ~\left(\Gamma \right)~.} Here the conversion {\displaystyle \;\tau ={\frac {\hbar }{\Gamma }}\;} has been applied.

In 2021 it was confirmed, with a significance of more than seven standard deviations, that the neutral D meson spontaneously transforms into its own antiparticle and back. This phenomenon is called flavor oscillation and was previously known to exist in the neutral B mesons.[13]

^ a b Nave, G., ed. (2016). "D meson". Department of Physics & Astronomy. HyperPhysics. Atlanta, GA: Georgia State University.
^ Wohl, C.G. (1984). "Review of Particle Physics" (PDF). Reviews of Modern Physics. Particle Data Group. 56 (2, Part II). doi:10.1103/RevModPhys.56.S1.
^ Kudryavtsev, Vitaly A. "Charmed mesons" (course files).
Physics 466. University of Sheffield.
^ Amsler, C.; et al. (Particle Data Group) (2008). "Quark Model" (PDF). Lawrence Berkeley Laboratory.
^ Amsler, C.; et al. (Particle Data Group) (2008). Particle listings and decay modes for the D mesons (PDF). Lawrence Berkeley Laboratory.
^ Nakamura, N.; et al. (Particle Data Group) (2010). Particle listings and decay modes for the strange D mesons (PDF). Lawrence Berkeley Laboratory.
^ Aaij, R.; et al. (LHCb collaboration) (14 September 2021). "Observation of the mass difference between neutral charm-meson eigenstates". Physical Review Letters. 127 (11): 111801. arXiv:2106.03744. Bibcode:2021PhRvL.127k1801A. doi:10.1103/PhysRevLett.127.111801. PMID 34558945. S2CID 235358523. Report numbers: LHCb-PAPER-2021-009, CERN-EP-2021-099.
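The width-to-lifetime conversion τ = ħ/Γ used for the excited D mesons can be sketched numerically. The 0.096 MeV (96 keV) width below is a hypothetical input chosen so the result lands near the quoted (6.9±1.9)×10−21 s; it is not a value taken from this article:

```python
HBAR_MEV_S = 6.582119569e-22  # reduced Planck constant in MeV*s

def lifetime_from_width(gamma_mev):
    """Mean lifetime tau = hbar / Gamma for a resonance of width Gamma (MeV)."""
    return HBAR_MEV_S / gamma_mev

tau = lifetime_from_width(0.096)      # assumed 96 keV width
print(f"{tau:.2e} s")                 # within the quoted (6.9 +/- 1.9)e-21 s
```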
By winning courses in the Pokéathlon, the participating Pokémon will earn medals for their species. Their Trainers earn points for the competition based on how well the team did and whether they qualified for any of the several bonuses. The Trainer with the highest score wins, with ties settled by a random draw by the Pokéathlon host. A win will earn the Trainer an additional 100 points (300 points in the Supreme Cup or 500 points in Link Pokéathlon), which can be spent on prizes at the Athlete Shop or on Data Cards, with which the player may see records of various actions, course wins and losses, and multiple other statistics in the Pokéathlon Dome. Beating the records for all 10 of the events in the Pokéathlon increases the Trainer Card level in HeartGold and SoulSilver. Once the National Pokédex is obtained and the player has talked to Magnus in the Friendship Room, the Supreme Cup is unlocked. The opponents are slightly harder, but Trainers who finish in first place here will earn a bonus 300 points instead of the regular 100. Each course consists of three events:
Hurdle Dash, Pennant Capture, and Relay Run
Block Smash, Circle Push, and Goal Roll
Snow Throw, Goal Roll, and Pennant Capture
Ring Drop, Relay Run, and Block Smash
Lamp Jump, Disc Catch, and Hurdle Dash
Athlete Points are computed from the raw score as:
{\displaystyle AthletePoints={\Bigl \lfloor }30+{\dfrac {120\times points}{12.5+points}}{\Bigr \rfloor }}
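The Athlete Points formula above can be sketched as follows; the floor comes from the ⌊ ⌋ brackets:

```python
import math

def athlete_points(points):
    """Athlete Points earned from a raw course score:
    floor(30 + 120 * points / (12.5 + points))."""
    return math.floor(30 + 120 * points / (12.5 + points))

print(athlete_points(0))     # a zero score still yields the base 30 points
print(athlete_points(12.5))  # 30 + 120 * 12.5 / 25 = 90
```

Note that the formula saturates: as the raw score grows, the result approaches but never reaches 150.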