Biot–Savart law

In physics, specifically electromagnetism, the Biot–Savart law (/ˈbiːoʊ səˈvɑːr/ or /ˈbjoʊ səˈvɑːr/)[1] is an equation describing the magnetic field generated by a constant electric current. It relates the magnetic field to the magnitude, direction, length, and proximity of the electric current. The Biot–Savart law is fundamental to magnetostatics, playing a role similar to that of Coulomb's law in electrostatics. When magnetostatics does not apply, the Biot–Savart law should be replaced by Jefimenko's equations. The law is valid in the magnetostatic approximation and is consistent with both Ampère's circuital law and Gauss's law for magnetism.[2] It is named after Jean-Baptiste Biot and Félix Savart, who discovered this relationship in 1820.

Electric currents (along a closed curve/wire)

[Figure: the directions of $I\,d\boldsymbol{\ell}$ and $\hat{\mathbf{r}}'$, and the value of $|\mathbf{r}'|$.]

The Biot–Savart law is used for computing the resultant magnetic field B at position r in 3D space generated by a filamentary current I (for example, due to a wire). A steady (or stationary) current is a continual flow of charges which does not change with time, and in which charge neither accumulates nor depletes at any point. The law is a physical example of a line integral, evaluated over the path C along which the electric current flows (e.g. the wire).
The equation in SI units is[3]

$$\mathbf{B}(\mathbf{r}) = \frac{\mu_0}{4\pi} \int_C \frac{I\,d\boldsymbol{\ell} \times \mathbf{r}'}{|\mathbf{r}'|^3},$$

where $d\boldsymbol{\ell}$ is a vector along the path $C$ whose magnitude is the length of the differential element of the wire in the direction of conventional current, $\boldsymbol{\ell}$ is a point on path $C$, $\mathbf{r}' = \mathbf{r} - \boldsymbol{\ell}$ is the full displacement vector from the wire element $d\boldsymbol{\ell}$ at $\boldsymbol{\ell}$ to the point $\mathbf{r}$ at which the field is being computed, and $\mu_0$ is the magnetic constant. Alternatively:

$$\mathbf{B}(\mathbf{r}) = \frac{\mu_0}{4\pi} \int_C \frac{I\,d\boldsymbol{\ell} \times \hat{\mathbf{r}}'}{|\mathbf{r}'|^2},$$

where $\hat{\mathbf{r}}'$ is the unit vector of $\mathbf{r}'$. The symbols in boldface denote vector quantities.

The integral is usually around a closed curve, since stationary electric currents can only flow around closed paths when they are bounded. However, the law also applies to infinitely long wires (this concept was used in the definition of the SI unit of electric current, the ampere, until 20 May 2019).

To apply the equation, the point in space where the magnetic field is to be calculated is arbitrarily chosen ($\mathbf{r}$). Holding that point fixed, the line integral over the path of the electric current is calculated to find the total magnetic field at that point. The application of this law implicitly relies on the superposition principle for magnetic fields, i.e.
the fact that the magnetic field is a vector sum of the field created by each infinitesimal section of the wire individually.[4]

There is also a 2D version of the Biot–Savart equation, used when the sources are invariant in one direction. In general, the current need not flow only in a plane normal to the invariant direction, and it is given by $\mathbf{J}$ (current density). The resulting formula is:

$$\mathbf{B}(\mathbf{r}) = \frac{\mu_0}{2\pi} \int_C \frac{(\mathbf{J}\,d\ell) \times \mathbf{r}'}{|\mathbf{r}'|^2} = \frac{\mu_0}{2\pi} \int_C \frac{(\mathbf{J}\,d\ell) \times \hat{\mathbf{r}}'}{|\mathbf{r}'|}$$

Electric current density (throughout conductor volume)

The formulations given above work well when the current can be approximated as running through an infinitely narrow wire. If the conductor has some thickness, the proper formulation of the Biot–Savart law (again in SI units) is:

$$\mathbf{B}(\mathbf{r}) = \frac{\mu_0}{4\pi} \iiint_V \frac{(\mathbf{J}\,dV) \times \mathbf{r}'}{|\mathbf{r}'|^3},$$

where $\mathbf{r}'$ is the vector from $dV$ to the observation point $\mathbf{r}$, $dV$ is the volume element, and $\mathbf{J}$ is the current density vector in that volume (in SI units of A/m²). In terms of the unit vector $\hat{\mathbf{r}}'$:

$$\mathbf{B}(\mathbf{r}) = \frac{\mu_0}{4\pi} \iiint_V dV\,\frac{\mathbf{J} \times \hat{\mathbf{r}}'}{|\mathbf{r}'|^2}$$

Constant uniform current

In the special case of a uniform constant current I, the magnetic field $\mathbf{B}$ is

$$\mathbf{B}(\mathbf{r}) = \frac{\mu_0}{4\pi} I \int_C \frac{d\boldsymbol{\ell} \times \mathbf{r}'}{|\mathbf{r}'|^3},$$

i.e., the current can be taken out of the integral.
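As a concrete illustration, the line-integral form can be evaluated numerically. The sketch below (not from the article; the loop radius, current, and segment count are illustrative choices) approximates the field of a circular current loop by summing contributions of short straight segments and compares the axial component against the well-known on-axis closed form $B_z = \mu_0 I a^2 / \big(2(a^2+z^2)^{3/2}\big)$:

```python
# Hedged sketch: numerical Biot-Savart integration for a circular current loop.
import numpy as np

MU0 = 4e-7 * np.pi  # magnetic constant, T*m/A

def biot_savart(r, I, path):
    """Sum mu0*I/(4*pi) * dl x r' / |r'|^3 over straight segments of a closed path."""
    B = np.zeros(3)
    for p0, p1 in zip(path, np.roll(path, -1, axis=0)):  # consecutive points, wrapping
        dl = p1 - p0                       # segment vector along the current
        rp = r - (p0 + p1) / 2             # r' from the segment midpoint to the field point
        B += MU0 * I / (4 * np.pi) * np.cross(dl, rp) / np.linalg.norm(rp) ** 3
    return B

a, I, z = 0.05, 2.0, 0.03                  # loop radius (m), current (A), axial offset (m)
theta = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
loop = np.stack([a * np.cos(theta), a * np.sin(theta), np.zeros_like(theta)], axis=1)

B = biot_savart(np.array([0.0, 0.0, z]), I, loop)
B_exact = MU0 * I * a**2 / (2 * (a**2 + z**2) ** 1.5)
print(B[2], B_exact)  # the axial components agree closely
```

With 2000 segments, the midpoint-of-chord approximation agrees with the closed form to well under 0.1%, and the transverse components vanish by symmetry.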
Point charge at constant velocity

In the case of a point charged particle q moving at a constant velocity v, Maxwell's equations give the following expressions for the electric field and magnetic field:[5]

$$\begin{aligned}
\mathbf{E} &= \frac{q}{4\pi\epsilon_0} \frac{1 - \frac{v^2}{c^2}}{\left(1 - \frac{v^2}{c^2}\sin^2\theta\right)^{3/2}} \frac{\hat{\mathbf{r}}'}{|\mathbf{r}'|^2} \\
\mathbf{H} &= \mathbf{v} \times \mathbf{D} \\
\mathbf{B} &= \frac{1}{c^2}\,\mathbf{v} \times \mathbf{E}
\end{aligned}$$

where $\hat{\mathbf{r}}'$ is the unit vector pointing from the current (non-retarded) position of the particle to the point at which the field is being measured, and θ is the angle between $\mathbf{v}$ and $\mathbf{r}'$.

When v² ≪ c², the electric field and magnetic field can be approximated as[5]

$$\mathbf{E} = \frac{q}{4\pi\varepsilon_0}\,\frac{\hat{\mathbf{r}}'}{|\mathbf{r}'|^2}, \qquad \mathbf{B} = \frac{\mu_0 q}{4\pi}\,\mathbf{v} \times \frac{\hat{\mathbf{r}}'}{|\mathbf{r}'|^2}.$$

These equations were first derived by Oliver Heaviside in 1888. Some authors[6][7] call the above equation for $\mathbf{B}$ the "Biot–Savart law for a point charge", due to its close resemblance to the standard Biot–Savart law. However, this language is misleading, as the Biot–Savart law applies only to steady currents, and a point charge moving in space does not constitute a steady current.[8]

Magnetic responses applications

The Biot–Savart law can be used in the calculation of magnetic responses even at the atomic or molecular level, e.g. chemical shieldings or magnetic susceptibilities, provided that the current density can be obtained from a quantum mechanical calculation or theory.
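The low-velocity approximation can be checked numerically: the "Biot–Savart law for a point charge" and the relation $\mathbf{B} = (1/c^2)\,\mathbf{v}\times\mathbf{E}$ should give the same field when $\mathbf{E}$ is the Coulomb field. The sketch below uses illustrative values for the charge, speed, and field point (assumptions, not from the article):

```python
# Hedged sketch: the point-charge B-field, two equivalent ways (v << c).
import numpy as np

MU0 = 4e-7 * np.pi             # magnetic constant
EPS0 = 8.8541878128e-12        # electric constant
C2 = 1 / (MU0 * EPS0)          # c^2 from mu0*eps0*c^2 = 1

q = 1.6e-19                    # charge (C), illustrative
v = np.array([1e5, 0.0, 0.0])  # velocity (m/s); |v| << c, so the approximation applies
r = np.array([0.0, 0.02, 0.0]) # field point relative to the (non-retarded) charge position
rhat = r / np.linalg.norm(r)

E = q / (4 * np.pi * EPS0) * rhat / np.dot(r, r)           # Coulomb field
B_biot = MU0 * q / (4 * np.pi) * np.cross(v, rhat) / np.dot(r, r)
B_cross = np.cross(v, E) / C2                              # B = (1/c^2) v x E

print(np.allclose(B_biot, B_cross, rtol=1e-9, atol=0.0))   # True: the two expressions coincide
```

The agreement is exact algebraically, since $\mu_0\varepsilon_0 = 1/c^2$; numerically the two differ only at floating-point rounding level.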
Aerodynamics applications

[Figure: the velocity $dV$ induced at a point P by an element of vortex filament $dl$ of strength $\Gamma$.]

The Biot–Savart law is also used in aerodynamic theory to calculate the velocity induced by vortex lines. In the aerodynamic application, the roles of vorticity and current are reversed in comparison to the magnetic application.

In Maxwell's 1861 paper "On Physical Lines of Force",[9] magnetic field strength H was directly equated with pure vorticity (spin), whereas B was a weighted vorticity, weighted for the density of the vortex sea. Maxwell considered magnetic permeability μ to be a measure of the density of the vortex sea. Hence the relationships:

Magnetic induction current: $\mathbf{B} = \mu \mathbf{H}$
Electric convection current: $\mathbf{J} = \rho \mathbf{v}$

where ρ is electric charge density. B was seen as a kind of magnetic current of vortices aligned in their axial planes, with H being the circumferential velocity of the vortices.

In aerodynamics the induced air currents form solenoidal rings around a vortex axis. An analogy can be made in which the vortex axis plays the role that electric current plays in magnetism. This puts the air currents of aerodynamics (the fluid velocity field) into the role of the magnetic induction vector B in electromagnetism. In electromagnetism the B lines form solenoidal rings around the source electric current, whereas in aerodynamics the air currents (velocity) form solenoidal rings around the source vortex axis. Hence in electromagnetism the vortex plays the role of "effect", whereas in aerodynamics the vortex plays the role of "cause". Yet when we look at the B lines in isolation, we see exactly the aerodynamic scenario, insomuch as B is the vortex axis and H is the circumferential velocity, as in Maxwell's 1861 paper.
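This analogy lends itself to a quick numerical check. The sketch below uses illustrative values and assumes the standard vortex-filament formulas (the infinite-line and finite-segment forms discussed in the next paragraph), confirming that a very long finite segment reproduces the infinite-line result $v = \Gamma/(2\pi r)$:

```python
# Hedged sketch: induced speed from vortex filaments (illustrative values).
import math

def v_segment(Gamma, r, A, B):
    """Induced speed from a finite straight vortex segment:
    Gamma / (4*pi*r) * (cos A - cos B), with A, B the angles at the segment ends."""
    return Gamma / (4 * math.pi * r) * (math.cos(A) - math.cos(B))

Gamma, r = 2.0, 0.5                       # vortex strength, perpendicular distance
v_infinite = Gamma / (2 * math.pi * r)    # 2D (infinite-line) result

# As the segment ends recede (A -> 0, B -> pi), the finite formula tends to the 2D result.
v_long = v_segment(Gamma, r, A=0.001, B=math.pi - 0.001)
print(v_infinite, v_long)  # nearly equal
```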
In two dimensions, for a vortex line of infinite length, the induced velocity at a point is given by

$$v = \frac{\Gamma}{2\pi r},$$

where Γ is the strength of the vortex and r is the perpendicular distance between the point and the vortex line. This is similar to the magnetic field produced on a plane by an infinitely long straight thin wire normal to the plane. It is a limiting case of the formula for vortex segments of finite length (similar to a finite wire):

$$v = \frac{\Gamma}{4\pi r}\left[\cos A - \cos B\right],$$

where A and B are the (signed) angles between the line and the two ends of the segment.

The Biot–Savart law, Ampère's circuital law, and Gauss's law for magnetism

See also: Curl (mathematics) and vector calculus identities

In a magnetostatic situation, the magnetic field B as calculated from the Biot–Savart law will always satisfy Gauss's law for magnetism and Ampère's law:[10] Starting with the Biot–Savart law

$$\mathbf{B}(\mathbf{r}) = \frac{\mu_0}{4\pi} \iiint_V d^3l\, \mathbf{J}(\mathbf{l}) \times \frac{\mathbf{r} - \mathbf{l}}{|\mathbf{r} - \mathbf{l}|^3},$$

substituting the relation

$$\frac{\mathbf{r} - \mathbf{l}}{|\mathbf{r} - \mathbf{l}|^3} = -\nabla\left(\frac{1}{|\mathbf{r} - \mathbf{l}|}\right)$$

and using the product rule for curls, as well as the fact that J does not depend on $\mathbf{r}$, this equation can be rewritten as[10]

$$\mathbf{B}(\mathbf{r}) = \frac{\mu_0}{4\pi} \nabla \times \iiint_V d^3l\, \frac{\mathbf{J}(\mathbf{l})}{|\mathbf{r} - \mathbf{l}|}$$

Since the divergence of a curl is always zero, this establishes Gauss's law for magnetism.
Next, taking the curl of both sides, using the formula for the curl of a curl, and again using the fact that J does not depend on $\mathbf{r}$, we eventually get the result[10]

$$\nabla \times \mathbf{B} = \frac{\mu_0}{4\pi} \nabla \iiint_V d^3l\, \mathbf{J}(\mathbf{l}) \cdot \nabla\left(\frac{1}{|\mathbf{r} - \mathbf{l}|}\right) - \frac{\mu_0}{4\pi} \iiint_V d^3l\, \mathbf{J}(\mathbf{l})\, \nabla^2\left(\frac{1}{|\mathbf{r} - \mathbf{l}|}\right)$$

Finally, plugging in the relations[10]

$$\begin{aligned}
\nabla\left(\frac{1}{|\mathbf{r} - \mathbf{l}|}\right) &= -\nabla_l\left(\frac{1}{|\mathbf{r} - \mathbf{l}|}\right), \\
\nabla^2\left(\frac{1}{|\mathbf{r} - \mathbf{l}|}\right) &= -4\pi\,\delta(\mathbf{r} - \mathbf{l})
\end{aligned}$$

(where δ is the Dirac delta function), using the fact that the divergence of J is zero (due to the assumption of magnetostatics), and performing an integration by parts, the result turns out to be[10]

$$\nabla \times \mathbf{B} = \mu_0 \mathbf{J},$$

i.e. Ampère's law. (Due to the assumption of magnetostatics, $\partial\mathbf{E}/\partial t = \mathbf{0}$, so there is no extra displacement-current term in Ampère's law.) In a non-magnetostatic situation, the Biot–Savart law ceases to be true (it is superseded by Jefimenko's equations), while Gauss's law for magnetism and the Maxwell–Ampère law are still true.

Initially, the Biot–Savart law was discovered experimentally; the law was later derived theoretically in different ways.
In The Feynman Lectures on Physics, the similarity of the expressions for the electric potential outside a static distribution of charges and the magnetic vector potential outside a system of continuously distributed currents is first emphasized, and then the magnetic field is calculated as the curl of the vector potential.[11] Another approach involves a general solution of the inhomogeneous wave equation for the vector potential in the case of constant currents.[12] The magnetic field can also be calculated as a consequence of the Lorentz transformations for the electromagnetic force acting from one charged particle on another particle.[13] Two other ways of deriving the Biot–Savart law include: (1) Lorentz transformation of the electromagnetic tensor components from a moving frame of reference, where there is only an electric field of some distribution of charges, into a stationary frame of reference, in which these charges move; and (2) the use of the method of retarded potentials.[14]

See also: Jefimenko's equations (time-dependent generalization of the Biot–Savart law)

^ "Biot–Savart law". Random House Webster's Unabridged Dictionary.
^ Jackson, John David (1999). Classical Electrodynamics (3rd ed.). New York: Wiley. Chapter 5. ISBN 0-471-30932-X.
^ Grant, I. S.; Phillips, W. R. (2008). Electromagnetism (2nd ed.). Manchester Physics, John Wiley & Sons. ISBN 978-0-471-92712-9.
^ The superposition principle holds for the electric and magnetic fields because they are the solution to a set of linear differential equations, namely Maxwell's equations, where the current is one of the "source terms".
^ a b Griffiths, David J. (1998). Introduction to Electrodynamics (3rd ed.). Prentice Hall. pp. 222–224, 435–440. ISBN 0-13-805326-X.
^ Knight, Randall (2017). Physics for Scientists and Engineers (4th ed.). Pearson Higher Ed. p. 800.
^ See the cautionary footnote in Griffiths p. 219 or the discussion in Jackson pp. 175–176.
^ Maxwell, J. C. "On Physical Lines of Force" (PDF).
Wikimedia Commons. Retrieved 25 December 2011.
^ a b c d e See Jackson, pp. 178–179 or Griffiths pp. 222–224. The presentation in Griffiths is particularly thorough, with all the details spelled out.
^ The Feynman Lectures on Physics, Vol. II, Ch. 14: The Magnetic Field in Various Situations.
^ David Tong. Lectures on Electromagnetism. University of Cambridge, Part IB and Part II Mathematical Tripos (2015). http://www.damtp.cam.ac.uk/user/tong/em.html.
^ Daniel Zile and James Overdui. Derivation of the Biot-Savart Law from Coulomb's Law and Implications for Gravity. APS April Meeting 2014, abstract id. D1.033. https://doi.org/10.1103/BAPS.2014.APRIL.D1.33.
^ Fedosin, Sergey G. (2021). "The Theorem on the Magnetic Field of Rotating Charged Bodies". Progress In Electromagnetics Research M. 103: 115–127. arXiv:2107.07418. Bibcode:2021arXiv210707418F. doi:10.2528/PIERM21041203.

Feynman, Richard (2005). The Feynman Lectures on Physics (2nd ed.). Addison-Wesley. ISBN 978-0-8053-9045-2.
Bennet, G. A. G. (1974). Electricity and Modern Physics (2nd ed.). Edward Arnold (UK). ISBN 0-7131-2459-8.
Tipler, P. A.; Mosca, G. (2008). Physics for Scientists and Engineers - with Modern Physics (6th ed.). Freeman. ISBN 0-7167-8964-7.

Media related to Biot-Savart law at Wikimedia Commons
Section 63.5 (03T2): Why derived categories?—The Stacks project

63.5 Why derived categories?

With this definition of the trace, let us now discuss another issue with the formula as stated. Let $C$ be a smooth projective curve over $k$. Then there is a correspondence between finite locally constant sheaves $\mathcal{F}$ on $C_{\acute{e}tale}$ whose stalks are isomorphic to ${(\mathbf{Z}/\ell ^ n\mathbf{Z})}^{\oplus m}$ on the one hand, and continuous representations $\rho : \pi _1 (C, \bar c) \to \text{GL}_ m(\mathbf{Z}/\ell ^ n\mathbf{Z})$ (for some fixed choice of $\bar c$) on the other hand. We denote $\mathcal{F}_\rho $ the sheaf corresponding to $\rho $. Then $H^2 (C_{\bar k}, \mathcal{F}_\rho )$ is the group of coinvariants for the action of $\rho (\pi _1 (C, \bar c))$ on ${(\mathbf{Z}/\ell ^ n\mathbf{Z})}^{\oplus m}$, and there is a short exact sequence
\[ 0 \longrightarrow \pi _1 (C_{\bar k}, \bar c) \longrightarrow \pi _1 (C, \bar c) \longrightarrow G_ k \longrightarrow 0. \]
For instance, let $\mathbf{Z} = \mathbf{Z} \sigma $ act on $\mathbf{Z}/\ell ^2\mathbf{Z}$ via $\sigma (x) = (1+\ell ) x$. The coinvariants are $(\mathbf{Z}/\ell ^2\mathbf{Z})_{\sigma } = \mathbf{Z}/\ell \mathbf{Z}$, which is not a flat $\mathbf{Z}/\ell ^2\mathbf{Z}$-module. Hence we cannot take the trace of some action on $H^2(C_{\bar k}, \mathcal{F}_\rho )$, at least not in the sense of the previous section.

In fact, our goal is to consider a trace formula for $\ell $-adic coefficients. But $\mathbf{Q}_\ell = \mathbf{Z}_\ell [1/\ell ]$ and $\mathbf{Z}_\ell = \mathop{\mathrm{lim}}\nolimits \mathbf{Z}/\ell ^ n\mathbf{Z}$, and even for a flat $\mathbf{Z}/\ell ^ n\mathbf{Z}$ sheaf, the individual cohomology groups may not be flat, so we cannot compute traces.
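The coinvariants computation in the example above can be verified by brute force for a small prime. The sketch below (illustrative, with $\ell = 5$) enumerates the cosets of $(\sigma - 1)M$ in $M = \mathbf{Z}/\ell^2\mathbf{Z}$:

```python
# Hedged sketch: check that (Z/l^2 Z)_sigma = Z/l Z for sigma(x) = (1 + l)x, with l = 5.
l = 5
n = l * l  # we work in M = Z/l^2 Z

# The submodule (sigma - 1)M: since sigma(x) = (1 + l)x, we have (sigma - 1)(x) = l*x.
image = {(l * x) % n for x in range(n)}

# Coinvariants M_sigma = M / (sigma - 1)M: count cosets of `image` in M.
cosets = {frozenset((x + i) % n for i in image) for x in range(n)}

print(len(cosets))  # 5 == l, so the coinvariants form Z/l Z, not a flat Z/l^2 Z-module
```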
One possible remedy is to consider the total derived complex $R\Gamma (C_{\bar k}, \mathcal{F}_\rho )$ in the derived category $D(\mathbf{Z}/\ell ^ n\mathbf{Z})$ and show that it is a perfect object, which means that it is quasi-isomorphic to a finite complex of finite free modules. For such complexes we can define the trace, but this will require an account of derived categories.

Comment: typo: 'whose stalks' instead of 'which stalks'. — OK, thanks, fixed here.
Comment: typo? "which is not a flat $\mathbf{Z}/\ell\mathbf{Z}$-module" should say $\mathbf{Z}/\ell^2\mathbf{Z}$. — OK, good catch! Thanks. Fixed here.
Novel Properties of Fuzzy Labeling Graphs

A. Nagoor Gani, Muhammad Akram, D. Rajalaxmi (a) Subahashini, "Novel Properties of Fuzzy Labeling Graphs", Journal of Mathematics, vol. 2014, Article ID 375135, 6 pages, 2014. https://doi.org/10.1155/2014/375135

A. Nagoor Gani,1 Muhammad Akram,2 and D. Rajalaxmi (a) Subahashini3
1PG & Research Department of Mathematics, Jamal Mohamed College, Trichy, India
3Department of Mathematics, Saranathan College of Engineering, Tiruchirappalli, Tamil Nadu 620 012, India
Academic Editor: Pierpaolo D'Urso

The concepts of fuzzy labeling and fuzzy magic labeling graphs are introduced. Fuzzy magic labeling for some graphs, such as the path, cycle, and star graph, is defined. It is proved that every fuzzy magic graph is a fuzzy labeling graph, but the converse is not true. We show that the removal of a fuzzy bridge from a fuzzy magic cycle with an odd number of nodes reduces the strength of the fuzzy magic cycle. Some properties related to fuzzy bridges and fuzzy cut nodes are also discussed.

Fuzzy set theory is a newly emerging mathematical framework for describing the phenomenon of uncertainty in real-life problems. It was introduced by Zadeh in 1965, and the concepts were pioneered by various independent researchers, namely Rosenfeld [1] and Bhutani and Battou [2], during the 1970s. Bhattacharya established the connectivity concepts between fuzzy cut nodes and fuzzy bridges in "Some remarks on fuzzy graphs" [3]. Several fuzzy analogs of graph-theoretic concepts such as paths, cycles, and connectedness were explored by these authors. There are many problems which can be solved with the help of fuzzy graphs. Though the field is very young, it has been growing fast and has numerous applications in various fields. Further, research on fuzzy graphs has been witnessing exponential growth, both within mathematics and in its applications in science and technology. A fuzzy graph is a generalization of the crisp graph.
Therefore it is natural that many of its properties are similar to those of crisp graphs, while it also deviates in many places. In a crisp graph, a labeling is a bijection that assigns a unique natural number to each vertex and/or edge. The concept of magic labeling in crisp graphs was motivated by the notion of magic squares in number theory. The notion of a magic graph was first introduced by Sunitha and Vijaya Kumar [4] in 1964. They defined a graph to be magic if it has an edge labeling, with range the real numbers, such that the sum of the labels around any vertex equals some constant, independent of the choice of vertex. This labeling has been studied by Stewart [5, 6], who called the labeling super magic if the labels are consecutive integers starting from 1. Several others have studied such labelings. Kotzig and Rosa [7] defined a magic labeling to be a total labeling in which the labels are the integers from 1 to the total number of vertices and edges, such that the sum of the labels on an edge and its two endpoints is constant. Recently Enomoto et al. [8] introduced the name super edge-magic for magic labelings in the sense of Kotzig and Rosa, with the added property that the vertices receive the smaller labels. Many other researchers have investigated different forms of magic graphs; for example, see Avadayappan et al. [9], Ngurah et al. [10], and Trenkler [11]. In this paper, Section 1 contains basic definitions; in Section 2 the new concepts of fuzzy labeling and fuzzy magic labeling are introduced, the fuzzy star graph is defined, and fuzzy magic labelings for some graphs such as the path, cycle, and star are given; in Section 3, some properties and results concerning fuzzy bridges and fuzzy cut nodes are discussed. The graphs considered in this paper are finite and connected. We use standard definitions and terminology; the reader is referred to [12–19]. Let and be two sets.
Then is said to be a fuzzy relation from into if is a fuzzy set of . A fuzzy graph is a pair of functions and , where for all , we have . A path in a fuzzy graph is a sequence of distinct nodes such that ; here is called the length of the path . The consecutive pairs are called the edges of the path. A path is called a cycle if and . The strength of a path is defined as . Let be a fuzzy graph. The degree of a vertex is defined as . Let be a fuzzy graph. The strong degree of a node is defined as the sum of the membership values of all strong edges incident at ; it is denoted by . Also, if denotes the set of all strong neighbours of , then . An edge is called a fuzzy bridge of if its removal reduces the strength of connectedness between some pair of nodes in . A node is a fuzzy cut node of if its removal reduces the strength of connectedness between some other pair of nodes.

Definition 1 (see [20]). A graph is said to be a fuzzy labeling graph if and is bijective, such that the membership values of the edges and vertices are distinct and for all .

Example 2 (see [20]). In Figure 1, and are bijective, such that no vertices and edges receive the same membership value. (Figure 1: a fuzzy labeling graph.)

Definition 3 (see [20]). A fuzzy labeling graph is said to be a fuzzy magic graph if has the same magic value for all , which is denoted as .

Example 4 (see [20]). In Figure 2, for all . (Figure 2: a fuzzy magic path graph, .)

Definition 5. A star in a fuzzy graph consists of two node sets and with and , such that and , . It is denoted by .

Example 6. A fuzzy star graph is shown in Figure 3. (Figure 3: a fuzzy star graph.)

Definition 7 (see [20]). The fuzzy labeling graph is called a fuzzy labeling subgraph of if for all and for all .

2. Properties of Fuzzy Labeling Graphs

Proposition 8. For all , the path is a fuzzy magic graph.

Proof. Let be any path with length , and let and be the nodes and edges of . Let be such that one can choose if and if . Such a fuzzy labeling is defined as follows.
When the length is odd: Case (i): is even. Then for any positive integer and for each edge , Case (ii): is odd. Then for any positive integer and for each edge When the length is even: then for any positive integer and for each edge Therefore, in both cases the magic value is the same and unique. Thus is a fuzzy magic graph for all .

Proposition 9. If is odd, then the cycle is a fuzzy magic graph.

Proof. Let be any cycle with an odd number of nodes, and let and be the nodes and edges of . Let be such that one can choose if and if . The fuzzy labeling for the cycle is defined as follows: Then for any positive integer and for each edge Therefore, from the above cases, is a fuzzy magic graph if is odd.

Proposition 10. For any , the star is a fuzzy magic graph.

Proof. Let be a star graph with as nodes and as edges. Let be such that one can choose if and if . Such a fuzzy labeling is defined as follows: Then for any positive integer and for each edge From the above cases one can easily verify that all star graphs are fuzzy magic graphs.

Remark 11. One can observe that the same labeling holds if we choose the value of as 0.03, 0.05, and so forth, in Propositions 8, 9, and 10.

Remark 12. (1) If is a fuzzy magic graph, then for any pair of nodes and . (2) For any fuzzy magic graph, . (3) The sum of the degrees of all nodes in a fuzzy magic graph is equal to twice the sum of the membership values of all edges (i.e., ). (4) The sum of the strong degrees of all nodes in a fuzzy magic graph is equal to twice the sum of the membership values of all strong arcs in (i.e., ).

3. Properties of Fuzzy Magic Graphs

Proposition 13. Every fuzzy magic graph is a fuzzy labeling graph, but the converse is not true.

Proof. This is immediate from Definition 3.

Proposition 14. For every fuzzy magic graph , there exists at least one fuzzy bridge.

Proof. Let be a fuzzy magic graph. Since is bijective, there exists exactly one edge with maximum value. We claim that is a fuzzy bridge.
If we remove the edge from , then in its subgraph we have , which implies that is a fuzzy bridge.

Proposition 15. The removal of a fuzzy cut node from a fuzzy magic path yields a fuzzy magic graph.

Proof. Let be any fuzzy magic path with length . Then there must be a fuzzy cut node; if we remove that cut node from , the result is either a smaller path or a disconnected path. In either case it remains a path of odd or even length, so by Proposition 8 the removal of a fuzzy cut node from a fuzzy magic path again yields a fuzzy magic graph.

Proposition 16. When is odd, the removal of a fuzzy bridge from a fuzzy magic cycle yields a fuzzy magic graph.

Proof. Let be any fuzzy magic cycle with an odd number of nodes. If we choose any path , then there must be at least one fuzzy bridge, whose removal from will result in a path of odd or even length. By Proposition 8, the removal of a fuzzy bridge from a fuzzy magic cycle again yields a fuzzy magic graph.

Remark 17. (1) The removal of a fuzzy cut node from the cycle also yields a fuzzy magic graph. (2) For every fuzzy magic cycle with an odd number of nodes, there exists at least one pair of nodes and such that .

Proposition 18. The removal of a fuzzy bridge from a fuzzy magic cycle reduces the strength of the fuzzy magic cycle .

Proof. Let be a fuzzy magic cycle with an odd number of nodes. Choose any path from ; then it is obvious that there exists at least one fuzzy bridge . Removal of this fuzzy bridge will reduce the strength of connectedness between and . This implies that the removal of a fuzzy bridge from the fuzzy magic cycle reduces its strength.

Fuzzy graph theory is finding an increasing number of applications in modeling real-time systems where the level of information inherent in the system varies with different levels of precision. Fuzzy models are becoming useful because of their aim of reducing the differences between the traditional numerical models used in engineering and the sciences and the symbolic models used in expert systems.
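The fuzzy magic condition discussed above — a constant value of σ(u) + μ(uv) + σ(v) over all edges, with all membership values distinct — can be checked mechanically. The sketch below uses hypothetical membership values on a three-node path; the numbers are illustrative assumptions, not taken from the paper's figures:

```python
# Hedged sketch: check the fuzzy magic condition on a small path (illustrative values).
sigma = {"v1": 0.9, "v2": 0.7, "v3": 0.8}          # vertex membership values (all distinct)
mu = {("v1", "v2"): 0.2, ("v2", "v3"): 0.3}        # edge membership values (all distinct)

# Fuzzy graph condition: mu(uv) <= min(sigma(u), sigma(v)) for every edge.
assert all(m <= min(sigma[u], sigma[v]) for (u, v), m in mu.items())

# Fuzzy magic condition: sigma(u) + mu(uv) + sigma(v) is the same for every edge.
magic_values = {round(sigma[u] + m + sigma[v], 10) for (u, v), m in mu.items()}
print(magic_values)  # {1.8} -- a single magic value, so this labeling is fuzzy magic
```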
In this paper, the concepts of fuzzy labeling and fuzzy magic labeling graphs have been introduced. We plan to extend our research work to (1) bipolar fuzzy labeling and bipolar fuzzy magic labeling graphs and (2) fuzzy labeling and fuzzy magic labeling hypergraphs.

A. Rosenfeld, "Fuzzy graphs," in Fuzzy Sets and Their Applications, L. A. Zadeh, K. S. Fu, and M. Shimura, Eds., pp. 77–95, Academic Press, New York, NY, USA, 1975.
K. R. Bhutani and A. Battou, "On M-strong fuzzy graphs," Information Sciences, vol. 155, no. 1-2, pp. 103–109, 2003.
P. Bhattacharya, "Some remarks on fuzzy graphs," Pattern Recognition Letters, vol. 6, no. 5, pp. 297–302, 1987.
B. M. Stewart, "Magic graphs," Canadian Journal of Mathematics, vol. 18, pp. 1031–1059, 1966.
B. M. Stewart, "Supermagic complete graphs," Canadian Journal of Mathematics, vol. 9, pp. 427–438, 1966.
A. Kotzig and A. Rosa, "Magic valuations of finite graphs," Canadian Mathematical Bulletin, vol. 13, pp. 451–461, 1970.
H. Enomoto, A. S. Llado, T. Nakamigawa, and G. Ringel, "Super edge-magic graphs," SUT Journal of Mathematics, vol. 34, no. 2, pp. 105–109, 1998.
S. Avadayappan, P. Jeyanthi, and R. Vasuki, "Super magic strength of a graph," Indian Journal of Pure and Applied Mathematics, vol. 32, no. 11, pp. 1621–1630, 2001.
A. A. G. Ngurah, A. N. M. Salman, and L. Susilowati, "H-supermagic labelings of graphs," Discrete Mathematics, vol. 310, no. 8, pp. 1293–1300, 2010.
M. Trenkler, "Some results on magic graphs," in Graphs and Other Combinatorial Topics, M. Fieldler, Ed., vol. 59 of Texte zur Mathematik Band, pp. 328–332, Teubner, Leipzig, Germany, 1983.
M. Akram, "Bipolar fuzzy graphs," Information Sciences, vol. 181, no. 24, pp. 5548–5564, 2011.
M. Akram and W. A. Dudek, "Interval-valued fuzzy graphs," Computers & Mathematics with Applications, vol. 61, no. 2, pp. 289–299, 2011.
J. N. Mordeson and P. S. Nair, Fuzzy Graphs and Fuzzy Hypergraphs, Physica, Heidelberg, Germany, 2000.
A. Nagoor Gani and V. T. Chandrasekaran, A First Look at Fuzzy Graph Theory, Allied Publishers, Chennai, India, 2010.
S. Mathew and M. S. Sunitha, "Node connectivity and arc connectivity of a fuzzy graph," Information Sciences, vol. 180, no. 4, pp. 519–531, 2010.
J. A. MacDougall and W. D. Wallis, "Strong edge-magic labelling of a cycle with a chord," The Australasian Journal of Combinatorics, vol. 28, pp. 245–255, 2003.
A. Nagoor Gani and D. Rajalaxmi (a) Subahashini, "Properties of fuzzy labeling graph," Applied Mathematical Sciences, vol. 6, no. 69-72, pp. 3461–3466, 2012.

Copyright © 2014 A. Nagoor Gani et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Lemma 63.3.2 (03SP)—The Stacks project

Comment #5116 by Laurent Moret-Bailly on May 24, 2020 at 10:04
The meaning of the statement is not completely formal. To make sense of "the identity on cohomology" we need to show that we can identify $g_*(\mathcal{F})$ with $\mathcal{F}$ and $g^{-1}(\mathcal{F})$ with $\mathcal{F}$. This is of course the case in subsequent lemmas where $\mathcal{F}$ is a constant sheaf.

Hmm... yes. OK, well, I think Exercise 63.3.1 about topological spaces just above the lemma explains why this would be so. To be precise, $g_*\mathcal{F}(U) = \mathcal{F}(U \times_{\varphi, X, g} X)$, which is functorially equal to $\mathcal{F}(U)$ by the assumption of the lemma. So indeed $\mathcal{F} = g_*\mathcal{F}$ for all sheaves $\mathcal{F}$, and hence also $g^{-1}\mathcal{F} = \mathcal{F}$. Then of course this agrees with what you would do on constant sheaves. Sigh!
Janet lit a 12-inch candle. She noticed that it was getting an inch shorter every 30 minutes.
Is the association between the time the candle is lit and the height of the candle positive or negative?
Since the candle is melting, it is getting shorter over time. Think about this on a graph of height vs. time. As the time increases, the height decreases. What correlation does this represent?
In how many hours will the candle burn out? Support your answer with a reason.
If the candle melts 1 inch every 30 minutes, and the candle was originally 12 inches, how many 30-minute periods would it take for the candle to melt completely? Another way you could look at it is through an algebraic equation: 12 − 2x = 0. The candle is originally 12 inches, but loses 2 inches every hour. After how many hours does it melt completely and burn out?
6 hours
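The hint's equation can be solved in one line (x is the number of hours the candle has burned):

```latex
12 - 2x = 0 \;\Rightarrow\; 2x = 12 \;\Rightarrow\; x = \frac{12}{2} = 6 \text{ hours}
```

Equivalently, counting 30-minute periods: 12 inches at 1 inch per period takes 12 periods, which is 6 hours.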
The dartboard shown at right is in the shape of an equilateral triangle. It has a smaller equilateral triangle in the center, which was made by joining the midpoints of the three edges. If a dart hits the board at random, what is the probability that: The dart hits the center triangle? How many smaller triangles make up the bigger triangle? \frac{1}{4} The dart misses the center triangle but hits the board? How many smaller triangles can be hit if the center one is missed?
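Joining the midpoints of the sides cuts the equilateral triangle into four congruent smaller triangles, so, assuming the dart is equally likely to land anywhere on the board, the two probabilities work out as:

```latex
P(\text{center triangle}) = \frac{1}{4}, \qquad
P(\text{hits board, misses center}) = 1 - \frac{1}{4} = \frac{3}{4}
```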
Definition 63.18.8 (03UT)—The Stacks project Comment #75 by Keenan Kidwell on October 15, 2012 at 23:56 I couldn't find the definition of H^1_c(X,F) for a sheaf F on the étale site of X anywhere else in the chapter. Did I just miss it? Fixed by adding a definition in Remark 78.2. But of course this needs a lot more work. Thanks.
creditexposures — Compute credit exposures from contract values (MATLAB)

[exposures,exposurecpty] = creditexposures(values,counterparties)
[exposures,exposurecpty] = creditexposures(___,Name,Value)
[exposures,exposurecpty,collateral] = creditexposures(___,Name,Value)

[exposures,exposurecpty] = creditexposures(values,counterparties) computes the counterparty credit exposures from an array of mark-to-market OTC contract values. These exposures are used when calculating the CVA (credit value adjustment) for a portfolio.

[exposures,exposurecpty] = creditexposures(___,Name,Value) adds optional name-value arguments.

[exposures,exposurecpty,collateral] = creditexposures(___,Name,Value) computes the counterparty credit exposures using the optional name-value pair arguments CollateralTable and Dates; the collateral output contains the simulated collateral amounts available to counterparties at each simulation date and over each scenario.

View Contract Values and Exposures Over Time for a Particular Counterparty

After computing the mark-to-market contract values for a portfolio of swaps over many scenarios, compute the credit exposure for a particular counterparty. View the contract values and credit exposure over time.

First, load the data (ccr.mat) containing the mark-to-market contract values for a portfolio of swaps over many scenarios.

% Look at one counterparty.
cpID = 4;
cpValues = squeeze(sum(values(:,swaps.Counterparty == cpID,:),2));
plot(simulationDates,cpValues);
title(sprintf('Mark-to-Market Contract Values for Counterparty: %d',cpID));

% Compute the exposure by counterparty.
[exposures, expcpty] = creditexposures(values,swaps.Counterparty,...
% View the credit exposure over time for the counterparty.
cpIdx = find(expcpty == cpID);
plot(simulationDates,squeeze(exposures(:,cpIdx,:)));
title(sprintf('Exposure for counterparty: %d',cpID));

Compute the Credit Exposure and Determine the Incremental Exposure for a New Trade

Load the data (ccr.mat) containing the mark-to-market contract values for a portfolio of swaps over many scenarios. Look at one counterparty.

cpIdx = swaps.Counterparty == cpID;
cpValues = values(:,cpIdx,:);
plot(simulationDates,squeeze(sum(cpValues,2)));
title(sprintf('Potential Mark-to-Market Portfolio Values for Counterparty: %d',cpID));

Compute the exposures.

netting = swaps.NettingID(cpIdx);
exposures = creditexposures(cpValues,cpID,'NettingID',netting);

View the credit exposure over time for the counterparty.

plot(simulationDates,squeeze(exposures));
title(sprintf('Exposure for counterparty: %d',cpID));

Compute the credit exposure profiles.

profilesBefore = exposureprofiles(simulationDates,exposures)

profilesBefore = struct with fields:
       PFE: [37x1 double]
      MPFE: 2.1580e+05
     EffEE: [37x1 double]
       EPE: 2.8602e+04
    EffEPE: 4.9579e+04

Consider a new trade with a counterparty. For this example, take another trade from the original swap portfolio and "copy" it for a new counterparty. This example is only for illustrative purposes.

newTradeIdx = 3;
newTradeValues = values(:,newTradeIdx,:);

% Append a new trade to your existing portfolio.
cpValues = [cpValues newTradeValues];
netting = [netting; cpID];

Recompute the exposures and the new credit exposure profiles.

exposures = creditexposures(cpValues,cpID,'NettingID',netting);
profilesAfter = exposureprofiles(simulationDates,exposures)

profilesAfter = struct with fields:

Visualize the expected exposures and the new trade's incremental exposure. Use the incremental exposure to compute the incremental credit value adjustment (CVA) charge.

plot(simulationDates,profilesBefore.EE,...
simulationDates,profilesAfter.EE); legend({'EE before','EE with trade'}) title('Expected Exposure before and after new trade'); incrementalEE = profilesAfter.EE - profilesBefore.EE; plot(simulationDates,incrementalEE); legend('incremental EE') Only look at a single counterparty for this example. Compute the uncollateralized exposures. exposures = creditexposures(cpValues,swaps.Counterparty(cpIdx),... 'NettingID',swaps.NettingID(cpIdx)); expYLim = get(gca,'YLim'); title(sprintf('Exposures for Counterparty: %d',cpID)); Add a collateral agreement for the counterparty. The 'CollateralTable' parameter is a MATLAB® table. You can create tables from spreadsheets or other data sources, in addition to building them inline as seen here. For more information, see table. collateralVariables = {'Counterparty';'PeriodOfRisk';'Threshold';'MinimumTransfer'}; periodOfRisk = 14; threshold = 100000; minTransfer = 10000; collateralTable = table(cpID,periodOfRisk,threshold,minTransfer,... 'VariableNames',collateralVariables) collateralTable=1×4 table Counterparty PeriodOfRisk Threshold MinimumTransfer ____________ ____________ _________ _______________ 4 14 1e+05 10000 Compute the collateralized exposures. [collatExp, collatcpty, collateral] = creditexposures(cpValues,... swaps.Counterparty(cpIdx),'NettingID',swaps.NettingID(cpIdx),... 'CollateralTable',collateralTable,'Dates',simulationDates); Plot the collateral levels and collateralized exposures. 
plot(simulationDates,squeeze(collateral));
set(gca,'YLim',expYLim);
title(sprintf('Collateral for counterparty: %d',cpID));
ylabel('Collateral ($)')

plot(simulationDates,squeeze(collatExp));
title(sprintf('Collateralized Exposure for Counterparty: %d',cpID));
xlabel('Simulation Dates');

values — 3-D array of simulated mark-to-market values of a portfolio of contracts
3-D array of simulated mark-to-market values of a portfolio of contracts simulated over a series of simulation dates and across many scenarios, specified as a NumDates-by-NumContracts-by-NumScenarios "cube" of contract values. Each row represents a different simulation date, each column a different contract, and each "page" is a different scenario from a Monte-Carlo simulation.

counterparties — Counterparties corresponding to each contract
Counterparties corresponding to each contract in values, specified as a NumContracts-element vector of counterparties. counterparties can be a vector of numeric IDs or a cell array of counterparty names. By default, each counterparty is assumed to have one netting set that covers all of its contracts. If counterparties are covered by multiple netting sets, then use the NettingID parameter. A value of NaN (or '' in a cell array) indicates that a contract is not included in any netting set unless otherwise specified by NettingID. counterparties is case insensitive and leading or trailing white spaces are removed.
Example: [exposures,exposurecpty] = creditexposures(values,counterparties,'NettingID','10','ExposureType','Additive')

NettingID — Netting set IDs indicating to which netting set each contract belongs
Netting set IDs indicating to which netting set each contract in values belongs, specified as a NumContracts-element vector of netting set IDs. NettingID can be a vector of numeric IDs or a cell array of character vector identifiers.
The creditexposures function uses counterparties and NettingID to define each unique netting set (all contracts in a netting set must be with the same counterparty). By default, each counterparty has a single netting set which covers all of their contracts. A value of NaN (or '' in a cell array) indicates that a contract is not included in any netting set. NettingID is case insensitive and leading or trailing white spaces are removed. ExposureType — Calculation method for exposures 'Counterparty' (default) | character vector with value of 'Counterparty' or 'Additive' Calculation method for exposures, specified with values: 'Counterparty' — Compute exposures per counterparty. 'Additive' — Compute additive exposures at the contract level. Exposures are computed per contract and sum to the total counterparty exposure. CollateralTable — Table containing information on collateral agreements of counterparties Table containing information on collateral agreements of counterparties, specified as a MATLAB table. The table consists of one entry (row) per collateralized counterparty and must have the following variables (columns): 'Counterparty' — Counterparty name or ID. The Counterparty name or ID should match the parameter 'Counterparty' for the ExposureType argument. 'PeriodOfRisk' — Margin period of risk in days. The number of days from a margin call until the posted collateral is available from the counterparty. 'Threshold' — Collateral threshold. When counterparty exposures exceed this amount, the counterparty must post collateral. 'MinimumTransfer' — Minimum transfer amount. The minimum amount over/under the threshold required to trigger transfer of collateral. When computing collateralized exposures, both the CollateralTable parameter and the Dates parameter must be specified. 
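The threshold and minimum-transfer mechanics described for the CollateralTable can be sketched in a few lines. This is a simplified illustration, not the creditexposures implementation; the helper name collateral_call and its margin logic are assumptions for the sketch:

```python
def collateral_call(exposure, posted, threshold, min_transfer):
    """Collateral to transfer under a threshold/minimum-transfer agreement.

    Positive result: the counterparty posts collateral; negative result:
    collateral is returned. (Hypothetical helper, not MATLAB internals.)
    """
    # The counterparty must collateralize any exposure above the threshold.
    target = max(exposure - threshold, 0.0)
    transfer = target - posted
    # Transfers below the minimum transfer amount are not triggered.
    if abs(transfer) < min_transfer:
        return 0.0
    return transfer

# An exposure of 150,000 against a 100,000 threshold triggers a 50,000 call.
print(collateral_call(150_000, 0, 100_000, 10_000))      # 50000
# A further move smaller than the minimum transfer amount triggers nothing.
print(collateral_call(155_000, 50_000, 100_000, 10_000)) # 0.0
```

The same parameters appear per counterparty as one row of the CollateralTable in the example above.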
Dates — Simulation dates corresponding to each row of the values array
vector of date numbers | cell array of character vectors
Simulation dates corresponding to each row of the values array, specified as a NumDates-by-1 vector of simulation dates. Dates is either a vector of MATLAB date numbers or a cell array of character vectors in a known date format. See datenum for known date formats.

exposures — 3-D array of credit exposures
3-D array of credit exposures representing the potential losses from each counterparty or contract at each date and over all scenarios. The size of exposures depends on the ExposureType input argument:
When ExposureType is 'Counterparty', exposures returns a NumDates-by-NumCounterparties-by-NumScenarios "cube" of credit exposures representing potential losses that could be incurred over all dates, counterparties, and scenarios, if a counterparty defaulted (ignoring any post-default recovery).
When ExposureType is 'Additive', exposures returns a NumDates-by-NumContracts-by-NumScenarios "cube," where each element is the additive exposure of each contract (over all dates and scenarios). Additive exposures sum to the counterparty-level exposure.

exposurecpty — Counterparties that correspond to columns of the exposures array
Counterparties that correspond to columns of the exposures array, returned as a vector of NumCounterparties or NumContracts elements, depending on the ExposureType.

collateral — Simulated collateral amounts available to counterparties at each simulation date and over each scenario
Simulated collateral amounts available to counterparties at each simulation date and over each scenario, returned as a NumDates-by-NumCounterparties-by-NumScenarios 3-D array. Collateral amounts are calculated using a Brownian bridge to estimate contract values between simulation dates. For more information, see Brownian Bridge. If the CollateralTable is not specified, this output is empty.
A Brownian bridge is used to simulate portfolio values at intermediate dates to compute collateral available at the subsequent simulation dates. For example, to estimate collateral available at a particular simulation date, t_i, you need to know the state of the portfolio at time t_i − dt, where dt is the margin period of risk. Portfolio values are simulated at these intermediate dates by drawing from a distribution defined by the Brownian bridge between t_i and the previous simulation date, t_{i−1}. If the contract values at times t_{i−1} and t_i are known and you want to estimate the contract value at time t_c (where t_c = t_i − dt), then a sample is drawn from a normal distribution with variance

\frac{(t_i - t_c)(t_c - t_{i-1})}{t_i - t_{i-1}}

and with mean equal to the linear interpolation of the contract values between the two simulation dates, evaluated at time t_c. For more details, see References.

[1] Lomibao, D., and S. Zhu. "A Conditional Valuation Approach for Path-Dependent Instruments." August 2005.
[2] Pykhtin, M. "Modeling credit exposure for collateralized counterparties." December 2009.
[3] Pykhtin, M., and S. Zhu. "A Guide to Modeling Counterparty Credit Risk." GARP, July/August 2007, issue 37.
[4] Pykhtin, Michael, and Dan Rosen. "Pricing Counterparty Risk at the Trade Level and CVA Allocations." FEDS Working Paper No. 10, February 1, 2010.

See Also: exposureprofiles | datenum | table
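Drawing one intermediate value with the mean and variance described above can be sketched as follows. This is an illustration only, not MathWorks code; the function name bridge_sample is made up:

```python
import random

def bridge_sample(t_prev, t_i, v_prev, v_i, t_c, rng=random.Random(0)):
    """Draw one intermediate contract value at t_c (t_prev <= t_c <= t_i)
    from the Brownian bridge pinned at (t_prev, v_prev) and (t_i, v_i)."""
    # Mean: linear interpolation of the two simulated contract values.
    w = (t_c - t_prev) / (t_i - t_prev)
    mean = v_prev + w * (v_i - v_prev)
    # Variance: (t_i - t_c)(t_c - t_prev) / (t_i - t_prev), as above.
    var = (t_i - t_c) * (t_c - t_prev) / (t_i - t_prev)
    return rng.gauss(mean, var ** 0.5)

# At the pinned endpoints the variance is zero, so the known value is returned.
print(bridge_sample(0.0, 1.0, 10.0, 20.0, 0.0))  # 10.0
print(bridge_sample(0.0, 1.0, 10.0, 20.0, 1.0))  # 20.0
```

Midway between two simulation dates the variance reaches its maximum of (t_i − t_{i−1})/4, which is where the collateral estimate is least certain.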
Section 63.6 (03T3): Derived categories—The Stacks project Section 63.6: Derived categories (cite) To set up notation, let $\mathcal{A}$ be an abelian category. Let $\text{Comp}(\mathcal{A})$ be the abelian category of complexes in $\mathcal{A}$. Let $K(\mathcal{A})$ be the category of complexes up to homotopy, with objects equal to complexes in $\mathcal{A}$ and morphisms equal to homotopy classes of morphisms of complexes. This is not an abelian category. Loosely speaking, $D(\mathcal{A})$ is defined to be the category obtained by inverting all quasi-isomorphisms in $\text{Comp}(\mathcal{A})$ or, equivalently, in $K(\mathcal{A})$. Moreover, we can define $\text{Comp}^+(\mathcal{A}), K^+(\mathcal{A}), D^+(\mathcal{A})$ analogously using only bounded below complexes. Similarly, we can define $\text{Comp}^-(\mathcal{A}), K^-(\mathcal{A}), D^-(\mathcal{A})$ using bounded above complexes, and we can define $\text{Comp}^b(\mathcal{A}), K^b(\mathcal{A}), D^b(\mathcal{A})$ using bounded complexes. Remark 63.6.1. Notes on derived categories. There are some set-theoretical problems when $\mathcal{A}$ is somewhat arbitrary, which we will happily disregard. The categories $K(\mathcal{A})$ and $D(\mathcal{A})$ are endowed with the structure of a triangulated category. The categories $\text{Comp}(\mathcal{A})$ and $K(\mathcal{A})$ can also be defined when $\mathcal{A}$ is an additive category. The homology functor $H^i : \text{Comp}(\mathcal{A}) \to \mathcal{A}$ taking a complex $K^\bullet \mapsto H^i(K^\bullet)$ extends to functors $H^i : K(\mathcal{A}) \to \mathcal{A}$ and $H^i : D(\mathcal{A}) \to \mathcal{A}$. Lemma 63.6.2. An object $E$ of $D(\mathcal{A})$ is contained in $D^+(\mathcal{A})$ if and only if $H^i(E) = 0$ for all $i \ll 0$. Similar statements hold for $D^-$ and $D^b$. Proof. Hint: use truncation functors. See Derived Categories, Lemma 13.11.5. $\square$ Lemma 63.6.3. Morphisms between objects in the derived category.
Let $I^\bullet \in \text{Comp}^+(\mathcal{A})$ with $I^n$ injective for all $n \in \mathbf{Z}$. Then \[ \mathop{\mathrm{Hom}}\nolimits _{D(\mathcal{A})}(K^\bullet , I^\bullet ) = \mathop{\mathrm{Hom}}\nolimits _{K(\mathcal{A})}(K^\bullet , I^\bullet ). \] Let $P^\bullet \in \text{Comp}^-(\mathcal{A})$ with $P^n$ projective for all $n \in \mathbf{Z}$. Then \[ \mathop{\mathrm{Hom}}\nolimits _{D(\mathcal{A})}(P^\bullet , K^\bullet ) = \mathop{\mathrm{Hom}}\nolimits _{K(\mathcal{A})}(P^\bullet , K^\bullet ). \] If $\mathcal{A}$ has enough injectives and $\mathcal{I} \subset \mathcal{A}$ is the additive subcategory of injectives, then $D^+(\mathcal{A}) \cong K^+(\mathcal{I})$ (as triangulated categories). If $\mathcal{A}$ has enough projectives and $\mathcal{P} \subset \mathcal{A}$ is the additive subcategory of projectives, then $D^-(\mathcal{A}) \cong K^-(\mathcal{P})$. Definition 63.6.4. Let $F: \mathcal{A} \to \mathcal{B}$ be a left exact functor and assume that $\mathcal{A}$ has enough injectives. We define the total right derived functor of $F$ as the functor $RF: D^+(\mathcal{A}) \to D^+(\mathcal{B})$ fitting into the diagram \[ \xymatrix{ D^+(\mathcal{A}) \ar[r]^{RF} & D^+(\mathcal{B}) \\ K^+(\mathcal{I}) \ar[u] \ar[r]^F & K^+(\mathcal{B}). \ar[u] } \] This is possible since the left vertical arrow is invertible by the previous lemma. Similarly, let $G: \mathcal{A} \to \mathcal{B}$ be a right exact functor and assume that $\mathcal{A}$ has enough projectives. We define the total left derived functor of $G$ as the functor $LG: D^-(\mathcal{A}) \to D^-(\mathcal{B})$ fitting into the diagram \[ \xymatrix{ D^-(\mathcal{A}) \ar[r]^{LG} & D^-(\mathcal{B}) \\ K^-(\mathcal{P}) \ar[u] \ar[r]^G & K^-(\mathcal{B}). \ar[u] } \] This is possible since the left vertical arrow is invertible by the previous lemma. Remark 63.6.5.
In these cases, it is true that $R^iF(K^\bullet) = H^i(RF(K^\bullet))$, where the left hand side is defined to be the $i$th homology of the complex $F(K^\bullet)$. Comment #14 by Emmanuel Kowalski on July 22, 2012 at 12:44 The short "Notes on derived categories" (remarks-derived-categories) is duplicated in the next Tag 03T4. That is because we have tags for both sections and the lemmas, remarks, etc. that they contain, so there is some duplication in the material. Comment #2167 by Alex on August 14, 2016 at 13:28 typo: In the definition of $K(\mathcal{A})$, "objects equal to homotopy classes..." should say "morphisms equal to...".
Non-logical symbol - formulasearchengine
A non-logical symbol only has meaning or semantic content when one is assigned to it by means of an interpretation. Consequently, a sentence containing a non-logical symbol lacks meaning except under an interpretation, so a sentence is said to be true or false under an interpretation.
Main article: First-order logic, especially the syntax of first-order logic
The logical constants, by contrast, have the same meaning in all interpretations. They include the symbols for truth-functional connectives (such as and, or, not, implies, and logical equivalence) and the symbols for the quantifiers "for all" and "there exists".
A signature is a set of non-logical constants together with additional information identifying each symbol as either a constant symbol, or a function symbol of a specific arity n (a natural number), or a relation symbol of a specific arity. The additional information controls how the non-logical symbols can be used to form terms and formulas. For instance, if f is a binary function symbol and c is a constant symbol, then f(x, c) is a term, but c(x, f) is not a term. Relation symbols cannot be used in terms, but they can be used to combine one or more (depending on the arity) terms into an atomic formula.
Structures over a signature, also known as models, provide formal semantics to a signature and the first-order language over it.
↑ Carnap, Rudolf, Introduction to Symbolic Logic and its Applications.
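The arity rules from the f(x, c) example can be illustrated with a small term checker. This sketch is not from the article; the signature {c: 0, f: 2}, the variable set, and the tuple encoding of terms are assumptions made for the illustration:

```python
# Hypothetical signature: c is a constant (arity 0), f a binary function symbol.
FUNCTIONS = {"c": 0, "f": 2}
VARIABLES = {"x", "y"}

def is_term(expr):
    """Well-formedness check for terms over the signature above.

    Variables and constants are strings; a function application is a tuple
    (symbol, arg1, ..., argn), a term only if n matches the symbol's arity.
    """
    if isinstance(expr, str):
        return expr in VARIABLES or FUNCTIONS.get(expr) == 0
    head, *args = expr
    return (isinstance(head, str)
            and FUNCTIONS.get(head) == len(args)
            and all(is_term(a) for a in args))

print(is_term(("f", "x", "c")))     # True:  f(x, c) is a term
print(is_term(("c", "x", ("f",))))  # False: c(x, f) is not
```

A relation symbol would be checked the same way, but only at the top level of an atomic formula, never nested inside a term.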
Necklace - Maple Help
Necklace(n, m, opts)
n - posint; length of necklace
opts - (optional) equation(s) of the form option = value; specify options for the Necklace command
The Necklace command returns an iterator that generates all m-ary necklaces of length n, in lexicographic order. The alphabet consists of the integers from 0 to m-1.
A necklace is an equivalence class of strings under rotation. The representative of a class is the smallest string, lexicographically, in the class.
with(Iterator):
Create an iterator that generates all necklaces of length 4 in a 2-character alphabet.
P := Necklace(4, 2):
Print(P, 'showrank'):
Number(P)
    6
Rank(P, [0, 1, 0, 1])
    4
Compute the necklace corresponding to a given rank.
Unrank(P, 3)
    [0 0 1 1]
The Iterator[Necklace] command was introduced in Maple 2020.
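The "smallest rotation as class representative" definition leads directly to a brute-force enumeration. This is an illustrative sketch, not Maple's implementation — it canonicalizes every one of the m^n strings, which is fine for small n:

```python
def necklaces(n, m):
    """All m-ary necklaces of length n, as tuples over 0..m-1, in
    lexicographic order. Brute force: keep the lexicographically
    smallest rotation of each string as the class representative."""
    reps = set()
    for k in range(m ** n):
        # Digits of k in base m, most significant first: one string.
        s = []
        for _ in range(n):
            s.append(k % m)
            k //= m
        s = tuple(reversed(s))
        reps.add(min(s[i:] + s[:i] for i in range(n)))
    return sorted(reps)

print(len(necklaces(4, 2)))  # 6, matching Number(P) above
print(necklaces(4, 2)[3])    # (0, 1, 0, 1), the necklace of rank 4
```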
Chi /ˈkaɪ, ˈxiː/[1][2] (uppercase Χ, lowercase χ; Greek: χῖ) is the 22nd letter of the Greek alphabet.
The Greek alphabet on a black figure vessel, with a cross-shaped chi
Its value in Ancient Greek was an aspirated velar stop /kʰ/ (in the Western Greek alphabet: /ks/). In Koine Greek and later dialects it became a fricative ([x]/[ç]) along with Θ and Φ. In Modern Greek, it has two distinct pronunciations: In front of high or front vowels (/e/ or /i/) it is pronounced as a voiceless palatal fricative [ç], as in German ich or like the h in some pronunciations of the English words hew and human. In front of low or back vowels (/a/, /o/ or /u/) and consonants, it is pronounced as a voiceless velar fricative ([x]), as in German ach.
Chi is romanized as ⟨ch⟩ in most systematic transliteration conventions, but sometimes ⟨kh⟩ is used.[3] In addition, in Modern Greek it is often also romanized as ⟨h⟩ or ⟨x⟩ in informal practice.
In the system of Greek numerals, it has a value of 600.
In ancient times, some local forms of the Greek alphabet used chi instead of xi to represent the /ks/ sound. This was borrowed into the early Latin language, which led to the use of the letter X for the same sound in Latin and in many modern languages that use the Latin alphabet.
Chi was also included in the Cyrillic script as the letter Х, with the phonetic value /x/ or /h/.
In the International Phonetic Alphabet, the minuscule chi is the symbol for the voiceless uvular fricative.
Chi is the basis for the name of the literary chiastic structure and the name of chiasmus. In Plato's Timaeus, it is explained that the two bands that form the soul of the world cross each other like the letter Χ. Plato's analogy, along with several other examples of chi as a symbol, occurs in Thomas Browne's discourse The Garden of Cyrus (1658).
Chi or X is often used to abbreviate the name Christ, as in the holiday Christmas (Xmas). When fused within a single typespace with the Greek letter rho, it is called the labarum and used to represent the person of Jesus Christ.
Greek chi: GREEK CAPITAL LETTER CHI (Χ), GREEK SMALL LETTER CHI (χ), MODIFIER LETTER SMALL CHI (ᵡ), GREEK SUBSCRIPT SMALL LETTER CHI (ᵪ), CHI RHO (☧).
Coptic khi: COPTIC CAPITAL LETTER KHI (Ⲭ), COPTIC SMALL LETTER KHI (ⲭ), COPTIC SYMBOL KHI RHO (⳩).
Latin chi: LATIN CAPITAL LETTER CHI (Ꭓ), LATIN SMALL LETTER CHI (ꭓ), LATIN SMALL LETTER CHI WITH LOW RIGHT RING (ꭔ), LATIN SMALL LETTER CHI WITH LOW LEFT SERIF (ꭕ).
Mathematical chi (styled variants used in mathematical notation): 𝚾, 𝛘, 𝛸, 𝜒, 𝜲, 𝝌, 𝝬, 𝞆, 𝞦, 𝟀.
In statistics, the term chi-squared or χ² has various uses, including the chi-squared distribution, the chi-squared test, and chi-squared target models.
In algebraic topology, chi is used to represent the Euler characteristic of a surface.
In neuroanatomy, crossings of peripheral nerves (such as the optic chiasm) are named for the letter chi because of its Χ-shape.[5]
In chemistry, the mole fraction[6][7] and electronegativity[8] may be denoted by the lowercase χ. In physics, χ denotes electric or magnetic susceptibility.
In rhetoric, both chiastic structure (a literary device) and the figure of speech chiasmus derive their names from the shape of the letter chi.
In mechanical engineering, chi is used as a symbol for the reduction factor of relevant buckling loads in EN 1993, a European Standard for the design of steel structures.
In graph theory, a lowercase chi is used to represent a graph's chromatic number.
See also: Х, х – Kha (Cyrillic)
^ "chi". The Chambers Dictionary (9th ed.). Chambers. 2003. ISBN 0-550-10105-5.
^ "chi". Oxford English Dictionary (Online ed.). Oxford University Press.
^ Asimov, Isaac (1963). The Human Brain. Boston: Houghton Mifflin.
^ Zumdahl, Steven S. (2008). Chemistry (8th ed.). Cengage Learning. p. 201. ISBN 978-0547125329.
^ Spencer, James N.; Bodner, George M.; Rickard, Lyman H. (2010). Chemistry: Structure and Dynamics (5th ed.). Hoboken, N.J.: Wiley. p. 357. ISBN 9780470587119.
Non-abelian cohomology and the homotopy classification of maps
Brown, Ronald. Non-abelian cohomology and the homotopy classification of maps, in Homotopie algébrique et algèbre locale, Astérisque, no. 113-114 (1984), 6 p. http://www.numdam.org/item/AST_1984__113-114__167_0/
1. N. K. Ashley, Crossed complexes and T-complexes, Ph.D. Thesis, University of Wales, (1978). | Zbl 0558.55015
2. R. Brown, 'The homotopy classification of maps from a surface to the projective plane', Bangor Preprint 82.5, (1982).
3. R. Brown (ed.), Simplicial T-complexes, Esquisses Math. (to appear). | MR 766237 | Zbl 0566.55009
4. R. Brown and P. J. Higgins, 'The algebra of cubes', J. Pure Appl. Alg. 21 (1981), 233-260. | MR 617135 | Zbl 0468.55007
5. R. Brown and P. J. Higgins, 'Colimit theorems for relative homotopy groups', J. Pure Appl. Alg. 22 (1981), 11-41. | MR 621285 | Zbl 0475.55009
6. R. Brown and P. J. Higgins, 'Crossed complexes and non-abelian extensions', Proc. Int. Conf. on Category Theory, Gummersbach, 1981 (ed. H. Kamps, D. Pumplün, M. Tholen), Springer L.N.M. (to appear). | MR 682942 | Zbl 0504.55018
7. R. Brown and P. J. Higgins, 'Crossed complexes and chain complexes with operators', (in preparation). | Zbl 0691.18003
8. M. K. Dakin, Kan complexes and multiple groupoids, Ph.D. Thesis, University of Wales, (1977). | Zbl 0566.55010
9. G. Segal, 'Classifying spaces and spectral sequences', Publ. Math. I.H.E.S. 34 (1968), 105-112. | EuDML 103878 | MR 232393 | Zbl 0199.26404
10. J. H. C. Whitehead, 'Combinatorial homotopy II', Bull. Amer. Math. Soc. 55 (1949), 453-496. | MR 30760 | Zbl 0040.38801
Explicitly Create State-Space Model Containing Known Parameter Values - MATLAB & Simulink
This example shows how to create a time-invariant, state-space model containing known parameter values using ssm.
Define a state-space model containing two independent, AR(1) states with Gaussian disturbances that have standard deviations 0.1 and 0.3, respectively. Specify that the observation is the deterministic sum of the two states. Symbolically, the equations are

\begin{bmatrix} x_{t,1} \\ x_{t,2} \end{bmatrix}
= \begin{bmatrix} 0.5 & 0 \\ 0 & -0.2 \end{bmatrix}
  \begin{bmatrix} x_{t-1,1} \\ x_{t-1,2} \end{bmatrix}
+ \begin{bmatrix} 0.1 & 0 \\ 0 & 0.3 \end{bmatrix}
  \begin{bmatrix} u_{t,1} \\ u_{t,2} \end{bmatrix}

y_t = \begin{bmatrix} 1 & 1 \end{bmatrix}
      \begin{bmatrix} x_{t,1} \\ x_{t,2} \end{bmatrix}.

A = [0.5 0; 0 -0.2];
B = [0.1 0; 0 0.3];
C = [1 1];
Mdl = ssm(A,B,C)

x1(t) = (0.50)x1(t-1) + (0.10)u1(t)
x2(t) = -(0.20)x2(t-1) + (0.30)u2(t)

       x1    x2
x1   0.01     0
x2      0  0.09

Mdl is an ssm model containing known parameter values. A detailed summary of Mdl prints to the Command Window. By default, the software sets the initial state means and covariance matrix using the stationary distributions.
It is good practice to verify that the state and observation equations are correct. If the equations are not correct, then it might help to expand the state-space equation by hand.
Simulate states or observations from Mdl using simulate, or forecast states or observations using forecast.
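The two AR(1) states are simple enough to simulate directly. The following sketch (an illustration only, not the ssm toolbox; the function name simulate is made up here) mirrors the model and computes the stationary variance p = b²/(1 − a²) implied by each AR(1) state, which is what a stationary initial distribution uses:

```python
import random

# Model matrices from the example (diagonal, so states evolve independently).
A = [[0.5, 0.0], [0.0, -0.2]]   # state transition
B = [[0.1, 0.0], [0.0, 0.3]]    # disturbance loadings
C = [1.0, 1.0]                  # observation row: y = x1 + x2

def simulate(T, rng=random.Random(1)):
    """Simulate T observations y_t = x_{t,1} + x_{t,2} from the model."""
    x1 = x2 = 0.0
    ys = []
    for _ in range(T):
        x1 = A[0][0] * x1 + B[0][0] * rng.gauss(0, 1)
        x2 = A[1][1] * x2 + B[1][1] * rng.gauss(0, 1)
        ys.append(C[0] * x1 + C[1] * x2)
    return ys

# Stationary variance of an AR(1) state solves p = a^2 p + b^2:
p1 = 0.1 ** 2 / (1 - 0.5 ** 2)      # x1: 0.01 / 0.75
p2 = 0.3 ** 2 / (1 - (-0.2) ** 2)   # x2: 0.09 / 0.96
print(round(p1, 5), round(p2, 5))   # 0.01333 0.09375
```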
Replicate - APL Wiki
Replicate (/, ⌿), or Copy (#) in J, is a dyadic function or monadic operator that copies each element of the right argument a given number of times, ordering the copies along a specified axis. Typically / is called Replicate while ⌿ is called "Replicate First" or an equivalent. Replicate is a widely accepted extension of the function Compress, which requires the number of copies to be Boolean: each element is either retained (1 copy) or discarded (0 copies). Replicate with a Boolean left argument or operand may still be called "Compress".
Replicate is usually associated with Expand (\), and the two functions are related to Mask and Mesh. It is also closely related to the Indices function. It shares a glyph with Reduce even though Replicate is naturally a function and Reduce must be an operator. This incongruity is sometimes resolved by making Replicate an operator itself, and sometimes by function-operator overloading, allowing both syntactic elements to coexist. Outside of APL, filter typically provides the functionality of Compress, while Replicate has no common equivalent.
When used with a Boolean array (often called a "mask") on the left, Replicate is called Compress. It filters the right argument, returning only those elements which correspond to 1s in the provided mask.
      1 1 0 1 0 1 0 0 / 'compress'
cope
If the right argument is an array of indices generated by Iota, Replicate resembles the function Indices.
      1 1 0 0 1 / ⍳5
1 2 5
With an array of non-negative integers, Replicate copies each element of the right argument the corresponding number of times. As with Compress, these copies retain their original ordering, and the length of the result is the sum of the control array.
      0 3 0 0 2 0 1 0 2 / 'replicate'
eeeiiaee
      +/ 0 3 0 0 2 0 1 0 2
8
      ⍴ 0 3 0 0 2 0 1 0 2 / 'replicate'
8
Replicate usually allows scalar extension of the left argument, which results in every element being copied a fixed number of times.
      3 / 'replicate'
rrreeepppllliiicccaaattteee
An extension introduced by NARS allows either positive or negative integers, where a negative number indicates that a fill element should be used instead of an element from the right argument. In this case the argument lengths must be equal (unless one side is a singleton). APL2 defined a different extension: negative numbers do not correspond to any element of the right argument, but still indicate that many fills should be inserted. In the APL2 extension the length of the right argument is the number of non-negative elements in the left argument. In both extensions the length of the result is the sum of the absolute values of the control array.
      0 2 ¯3 1 / ⍳4
Works in: NARS2000, Dyalog APL, APLX, ngn/apl
Works in: APL2, APLX, GNU APL
The extensions are the same when the right argument is subject to singleton extension. This extension was usually supported before any extension to negative numbers, but would not typically be useful because v/s is equivalent to (+/v)/s, where v is a non-negative integer vector and s is a singleton.
      1 ¯2 3 / 'a'
Works in: NARS2000, APL2, Dyalog APL, APLX, ngn/apl, GNU APL
High-rank arrays
Replicate works along a particular axis, which can be specified in languages with function axis and otherwise is the first axis for ⌿, and the last axis for / (except in A+, which uses / for the first-axis form and has no last-axis form).
      ⎕←A ← 4 6⍴⎕A
ABCDEF
GHIJKL
MNOPQR
STUVWX
      1 0 0 4 0 2 / A
ADDDDFF
GJJJJLL
MPPPPRR
SVVVVXX
      0 2 1 1 ⌿ A
GHIJKL
GHIJKL
MNOPQR
STUVWX
APL2 further extends the singleton extension of the right argument, allowing it to have length 1 along the replication axis even if other axes have lengths not equal to 1.
      1 ¯2 3 / ⍪'abc'
a  aaa
b  bbb
c  ccc
Works in: APL2, Dyalog APL, APLX, ngn/apl, GNU APL

dzaima/APL expects arguments of ⌿ to have matching shape, and replicates the ravel of both.

Operator or function?

Main article: Function-operator overloading

The syntax a / b is ambiguous: it may be an invocation of a dyadic function / with left argument a and right argument b, or of a monadic operator / with operand a and right argument b. In early APLs there was no way to resolve this ambiguity, but with the extension of operators to allow arbitrary function operands instead of a specified set of primitive functions, the distinction becomes apparent: a function Replicate can be used as an operand while an operator Replicate cannot.

One test of Replicate's nature is to try Replicate Each[1] with an expression such as 1 3 /¨ 'ab' 'cd'. If Replicate is implemented as an operator, it will be applied to the operand 1 3, and Each will be applied to the resulting derived function 1 3/.

      1 3 /¨ 'ab' 'cd'
abbb cddd
      (1 3/)¨ 'ab' 'cd'
abbb cddd
Works in: SHARP APL (with ¨> in place of ¨), APL2, APLX

If Replicate is a function, then Each will apply to Replicate only, and the resulting derived function will be invoked dyadically.

      1 3 /¨ 'ab' 'cd'
ab cccddd
      1 3 (/¨) 'ab' 'cd'
ab cccddd
Works in: NARS2000, Dyalog APL, GNU APL

In early APLs such as APL\360, applying an operator to Compress will always result in a SYNTAX ERROR, because Compress is not an allowed operand of any operator. This is also the case in ngn/apl: although operators can apply to any function, Replicate cannot be used unless both arguments are immediately available. In both cases there is no way to determine whether Replicate "acts like a function" or "acts like an operator".

Compress was described in A Programming Language, where it was written with the symbols / and // (a doubled slash). In Iverson notation compression was particularly important because Take and Drop could be performed only by compression with a prefix or suffix vector.
It was included in APL\360, which changed the doubled slash to a barred slash ⌿, and allowed a specified axis and singleton extension on both sides (very briefly, singleton extension was allowed only for the right argument[2]). The APL\360 definition continued to be included in APLs unchanged until 1980. In 1980, Bob Bernecky introduced the extension Replicate to SHARP APL: he allowed an operand (since SHARP's Replicate is an operator) consisting of non-negative integers rather than just Booleans to indicate the number of times to copy.[3] This extension was rapidly and widely adopted, starting with NARS in 1981, and is now a feature of the ISO/IEC 13751:2001 standard. Two extensions to allow negative numbers in the left argument have been introduced, in each case specifying that the negative of a number indicates that many fill elements should appear in the result. In 1981 NARS specified that these fill elements replace the corresponding right argument element, so that the lengths of the left and right arguments are always equal, and extended Expand similarly. APL2, in 1984, made the opposite choice, so that the length of the right argument along the specified axis is equal to the number of non-negative elements on the left. APL2 also loosened the conformability requirements further than simply allowing singleton extension: it allowed a right argument with length 1 along the replication axis to be extended. Dyalog APL, created before APL2, adopted the NARS definition for negative elements but added APL2 conformability extension in version 13.1. Later APLX took advantage of the fact that the two negative number extensions can be distinguished by the length of the left argument, and implemented every NARS and APL2 extension. A+ and J modified Replicate to fit leading axis theory. Rather than allow Replicate to operate on any axis they have only one Replicate function (in A+, /; in J, #) which works on the first axis—it copies major cells rather than elements. 
Both languages rejected the NARS extension to negative left arguments, but J introduced its own system to add fill elements by allowing complex numbers in the left argument, and removed the Expand function entirely. Arthur Whitney went on to make a more radical change in K, removing Replicate entirely in favor of Where.

Extension support

In the table below, ">1" refers to the SHARP APL extension to non-negative integers, while "<0" refers to the extension to negative integers in either NARS or APL2 style. Conformability refers to extension of the right argument only, as all languages allow scalar extension of the left argument.

Language           | Class     | >1  | <0 (NARS) | <0 (APL2) | Conformability | Axis | Notes
APL\360            | Ambiguous | No  | No        | No        | Single         | Yes  |
SHARP APL          | Operator  | Yes | No        | No        | Scalar         | Yes  |
NARS, NARS2000     | Function  | Yes | Yes       | No        | Single         | Yes  |
Dyalog APL         | Function  | Yes | Yes       | No        | APL2 (13.1)    | Yes  |
APL2               | Operator  | Yes | No        | Yes       | APL2           | Yes  |
A+ (/)             | Function  | Yes | No        | No        | Single         | No   |
J (#)              | Function  | Yes | No        | No        | Scalar         | No   | Complex left argument allowed
ISO/IEC 13751:2001 | Function  | Yes | No        | No        | Scalar         | Yes  |
APLX               | Operator  | Yes | Yes       | Yes       | APL2           | Yes  |
ngn/apl            | Ambiguous | Yes | Yes       | No        | APL2           | Yes  | Implemented as an operator
GNU APL            | Function  | Yes | No        | Yes       | APL2           | Yes  |
dzaima/APL (⌿)     | Function  | Yes | Yes       | No        | No             | No   |
BQN (/)            | Function  | Yes | No        | No        | No             | No   | Multiple leading axes supported

In each language without axis specification, there is only one form of Replicate, which applies to the first axis or major cells—the last-axis form is discarded. BQN extends this form to allow any number of leading axes to be manipulated if the left argument has depth 2.

Outside of APL

While Replicate is rarely used in non-array programming languages, Compress is sometimes seen. Usually the same functionality is provided by the higher-order function filter, which an APLer might define as the monadic operator filter←{(⍺⍺¨ ⍵) / ⍵} on a vector argument. While filter is similar to Compress, some extensions to the x86 instruction set are exactly equivalent to Compress on particular data types.
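As a sketch of what "Compress on bits" means for these instruction-set extensions, here is a Python model of gathering the bits of a value selected by a mask. The helper name `pext_model` is made up for illustration; it is a software analogue, not a wrapper for the real hardware instruction:

```python
def pext_model(value, mask):
    """Software model of bit-level Compress: gather the bits of `value`
    selected by `mask` into the low-order bits of the result.
    Illustrative only; real code would use the hardware instruction."""
    out = pos = 0
    while mask:
        low = mask & -mask        # isolate the lowest set bit of the mask
        if value & low:
            out |= 1 << pos       # keep the selected bit, packed downward
        pos += 1
        mask &= mask - 1          # clear that mask bit and continue
    return out

# mask 0b1010 selects bits 1 and 3; both are set in 0b1110
assert pext_model(0b1110, 0b1010) == 0b11
# bit 2 of 0b0100 is set but not selected by the mask
assert pext_model(0b0100, 0b1010) == 0b00
```

The Expand direction (scattering low bits back under a mask) is the mirror image of this loop.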
In BMI2, the PEXT and PDEP instructions (parallel bit extract and deposit) are identical to Compress and Expand on the bits of a register argument. Indeed, Dyalog APL uses these instructions to implement those primitives (see Dyalog APL#Instruction set usage). The AVX-512 instructions VPCOMPRESSQ and VPEXPANDQ (and variations) are not only equivalent to Compress and Expand, using a mask register for the Boolean argument and a vector register for the other argument, but are named after the APL functions. These instructions allow compression of 4-byte and 8-byte elements; with AVX-512_VBMI2, support was added for 1-byte and 2-byte elements as well.

Marshall Lochbaum. "Expanding Bits in Shrinking Time": on implementing Replicate of a Boolean array by a scalar.
↑ Benkard, J. Philip. "Replicate each, anyone?". APL87.
↑ Falkoff, A.D., and K.E. Iverson. "APL\360 User's Manual". IBM, August 1968.
↑ Bernecky, Bob. SATN-34: Replication. IPSA. 1980-08-15.
In physics, a fluid is a liquid, gas, or other material that continuously deforms (flows) under an applied shear stress, or external force.[1] Fluids have zero shear modulus, or, in simpler terms, are substances which cannot resist any shear force applied to them. Although the term fluid generally includes both the liquid and gas phases, its definition varies among branches of science. Definitions of solid vary as well, and depending on field, some substances can be both fluid and solid.[2] Viscoelastic fluids like Silly Putty appear to behave similarly to a solid when a sudden force is applied.[3] Substances with a very high viscosity, such as pitch, may also appear to behave like a solid (see pitch drop experiment). In particle physics, the concept is extended to include fluidic matters other than liquids or gases.[4] A fluid in medicine or biology refers to any liquid constituent of the body (body fluid),[5][6] whereas "liquid" is not used in this sense. Sometimes liquids given for fluid replacement, either by drinking or by injection, are also called fluids[7] (e.g. "drink plenty of fluids"). In hydraulics, fluid is a term which refers to liquids with certain properties, and is broader than (hydraulic) oils.[8]

Fluids display properties such as lack of resistance to permanent deformation, resisting only relative rates of deformation in a dissipative, frictional manner, and the ability to take on the shape of their containers. These properties are typically a function of their inability to support a shear stress in static equilibrium. In contrast, solids respond to shear either with a spring-like restoring force, which means that deformations are reversible, or they require a certain initial stress before they deform (see plasticity). Solids respond with restoring forces to both shear stresses and to normal stresses—both compressive and tensile.
In contrast, ideal fluids respond with restoring forces only to normal stresses, called pressure: fluids can be subjected both to compressive stress, corresponding to positive pressure, and to tensile stress, corresponding to negative pressure. Both solids and liquids also have tensile strengths, which when exceeded in solids lead to irreversible deformation and fracture, and in liquids cause the onset of cavitation. Both solids and liquids have free surfaces, which cost some amount of free energy to form. In the case of solids, the amount of free energy to form a given unit of surface area is called surface energy, whereas for liquids the same quantity is called surface tension. The ability of liquids to flow results in different behaviour in response to surface tension than in solids, although in equilibrium both will try to minimise their surface energy: liquids tend to form rounded droplets, whereas pure solids tend to form crystals. Gases do not have free surfaces, and freely diffuse.

Main article: Fluid mechanics

In a solid, shear stress is a function of strain, but in a fluid, shear stress is a function of strain rate. A consequence of this behavior is Pascal's law, which describes the role of pressure in characterizing a fluid's state. The behavior of fluids can be described by the Navier–Stokes equations—a set of partial differential equations based on conservation of mass and momentum.

Classification of fluids

Depending on the relationship between shear stress and the rate of strain and its derivatives, fluids can be characterized as one of the following:

Newtonian fluids: where stress is directly proportional to the rate of strain
Non-Newtonian fluids: where stress is not proportional to the rate of strain, its higher powers, or its derivatives

Newtonian fluids follow Newton's law of viscosity and may be called viscous fluids.
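Newton's law of viscosity states that shear stress is proportional to the rate of strain, τ = μ · (du/dy). A minimal numeric sketch (the values below are illustrative, not measured data):

```python
# Newton's law of viscosity: shear stress tau = mu * (du/dy),
# i.e. stress proportional to the rate of strain (the velocity gradient).
# Illustrative values only.
mu = 1.0e-3        # dynamic viscosity of water near 20 C, Pa*s
du_dy = 100.0      # velocity gradient (shear rate), 1/s
tau = mu * du_dy   # resulting shear stress, Pa

assert abs(tau - 0.1) < 1e-12
```

Doubling the shear rate doubles the stress, which is exactly the proportionality that defines a Newtonian fluid.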
Fluids may also be classified by their compressibility:

Compressible fluid: a fluid whose volume is reduced or density changed when pressure is applied or when the flow becomes supersonic.
Incompressible fluid: a fluid that does not vary in volume with changes in pressure or flow velocity (i.e., ρ = constant), such as water or oil.

Truly Newtonian and incompressible fluids do not actually exist; they are idealizations assumed for theoretical treatment. Virtual fluids that completely ignore the effects of viscosity and compressibility are called perfect fluids.

^ "Fluid | Definition, Models, Newtonian Fluids, Non-Newtonian Fluids, & Facts". Encyclopedia Britannica. Retrieved 2 June 2021.
^ Thayer, Ann (2000). "What's That Stuff? Silly Putty". C&EN (Chemical & Engineering News). American Chemical Society (published 2000-11-27). 78 (48): 27. doi:10.1021/cen-v078n048.p027. Archived from the original on 2021-05-07.
^ Kroen, Gretchen Cuda (2012-04-11). "Silly Putty for Potholes". Science. Retrieved 2021-06-23.
^ Example (in the title): Berdyugin, A. I.; Xu, S. G.; Pellegrino, F. M. D.; Krishna Kumar, R.; Principi, A.; Torre, I.; Ben Shalom, M.; Taniguchi, T.; Watanabe, K.; Grigorieva, I. V.; Polini, M.; Geim, A. K.; Bandurin, D. A. (2019-04-12). "Measuring Hall viscosity of graphene's electron fluid". Science. 364 (6436): 162–165. arXiv:1806.01606. Bibcode:2019Sci...364..162B. doi:10.1126/science.aau0685. PMID 30819929. S2CID 73477792.
^ "Fluid (B.1.b.)". Oxford English Dictionary. Vol. IV F–G (1978 reprint ed.). Oxford: Oxford University Press. 1933 [1901]. p. 358. Retrieved 2021-06-22.
^ "body fluid". Taber's online – Taber's medical dictionary. Archived from the original on 2021-06-21. Retrieved 2021-06-22.
^ Usage example: Guppy, Michelle P B; Mickan, Sharon M; Del Mar, Chris B (2004-02-28). ""Drink plenty of fluids": a systematic review of evidence for this recommendation in acute respiratory infections". BMJ. 328 (7438): 499–500. doi:10.1136/bmj.38028.627593.BE. PMC 351843.
PMID 14988184.
^ "What is Fluid Power?". National Fluid Power Association. Archived from the original on 2021-06-23. Retrieved 2021-06-23. With hydraulics, the fluid is a liquid (usually oil).
Pooled, within-group, and between-group covariance matrices

A previous article discusses the pooled variance for two or more groups of univariate data. The pooled variance is often used during a t test of two independent samples. For multivariate data, the analogous concept is the pooled covariance matrix, which is an average of the sample covariance matrices of the groups. If you assume that the covariances within the groups are equal, the pooled covariance matrix is an estimate of the common covariance. This article shows how to compute and visualize a pooled covariance matrix in SAS. It explains how the pooled covariance relates to the within-group covariance matrices. It discusses a related topic, called the between-group covariance matrix. The within-group matrix is sometimes called the within-class covariance matrix because a classification variable is used to identify the groups. Similarly, the between-group matrix is sometimes called the between-class covariance matrix.

Visualize within-group covariances

Suppose you want to analyze the covariance in the groups in Fisher's iris data (the Sashelp.Iris data set in SAS). The data set contains four numeric variables, which measure the length and width of two flower parts, the sepal and the petal. Each observation is for a flower from an iris species: Setosa, Versicolor, or Virginica. The Species variable in the data identifies observations that belong to each group, and each group has 50 observations.
The following call to PROC SGPLOT creates two scatter plots and overlays prediction ellipses for two pairs of variables:

title "68% Prediction Ellipses for Iris Data";
proc sgplot data=Sashelp.Iris;
   scatter x=SepalLength y=SepalWidth / group=Species transparency=0.5;
   ellipse x=SepalLength y=SepalWidth / group=Species alpha=0.32 lineattrs=(thickness=2);
run;

proc sgplot data=Sashelp.Iris;
   scatter x=PetalLength y=PetalWidth / group=Species transparency=0.5;
   ellipse x=PetalLength y=PetalWidth / group=Species alpha=0.32 lineattrs=(thickness=2);
run;

The ellipses enable you to visually investigate whether the variance of the data within the three groups appears to be the same. For these data, the answer is no because the ellipses have different shapes and sizes. Some of the prediction ellipses have major axes that are oriented more steeply than others. Some of the ellipses are small, others are relatively large. You might wonder why the graph shows a 68% prediction ellipse for each group. Recall that prediction ellipses are a multivariate generalization of "units of standard deviation." If you assume that measurements in each group are normally distributed, 68% of random observations are within one standard deviation from the mean. So for multivariate normal data, a 68% prediction ellipse is analogous to +/-1 standard deviation from the mean.

The pooled covariance is an average of within-group covariances

The pooled covariance is used in linear discriminant analysis and other multivariate analyses. It combines (or "pools") the covariance estimates within subgroups of data. The pooled covariance is one of the methods used by Friendly and Sigal (TAS, 2020) to visualize homogeneity tests for covariance matrices. Suppose you collect multivariate data for \(k\) groups and \(S_i\) is the sample covariance matrix for the \(n_i\) observations within the \(i\)th group.
If you believe that the groups have a common variance, you can estimate it by using the pooled covariance matrix, which is a weighted average of the within-group covariances:
\(S_p = \Sigma_{i=1}^k (n_i-1)S_i / \Sigma_{i=1}^k (n_i - 1)\)
If all groups have the same number of observations, then the formula simplifies to \(\Sigma_{i=1}^k S_i / k\), which is the simple average of the matrices. If the group sizes are different, then the pooled variance is a weighted average, where larger groups receive more weight than smaller groups.

Compute the pooled covariance in SAS

In SAS, you can often compute something in two ways. The fast-and-easy way is to find a procedure that does the computation. A second way is to use the SAS/IML language to compute the answer yourself. When I compute something myself (and get the same answer as the procedure!), I increase my understanding. Suppose you want to compute the pooled covariance matrix for the iris data. The fast-and-easy way to compute a pooled covariance matrix is to use PROC DISCRIM. The procedure supports the OUTSTAT= option, which writes many multivariate statistics to a data set, including the within-group covariance matrices, the pooled covariance matrix, and something called the between-group covariance. (It also writes analogous quantities for centered sum-of-squares and crossproduct (CSSCP) matrices and for correlation matrices.)

proc discrim data=sashelp.iris method=normal pool=yes outstat=Cov noprint;
   class Species;
run;

proc print data=Cov noobs;
   where _TYPE_ = "PCOV";
   format _numeric_ 6.2;
   var _TYPE_ _NAME_ Sepal: Petal:;
run;

The table shows the "average" covariance matrix, where the average is across the three species of flowers.

Within-group covariance matrices

The same output data set contains the within-group and the between-group covariance matrices. The within-group matrices are easy to understand.
They are the covariance matrices for the observations in each group. Accordingly, there are three such matrices for these data: one for the observations where Species="Setosa", one for Species="Versicolor", and one for Species="Virginica". The following call to PROC PRINT displays the three matrices:

proc print data=Cov noobs;
   where _TYPE_ = "COV" and Species^=" ";
run;

The output is not particularly interesting, so it is not shown. The matrices are the within-group covariances that were visualized earlier by using prediction ellipses.

Visual comparison of the pooled covariance and the within-group covariance

Friendly and Sigal (2020, Figure 1) overlay the prediction ellipses for the pooled covariance on the prediction ellipses for the within-group covariances. A recreation of Figure 1 in SAS is shown below. You can use the SAS/IML language to draw prediction ellipses from covariance matrices. The shaded region is the prediction ellipse for these two variables in the pooled covariance matrix. It is centered at the weighted average of the group means. You can see that the pooled ellipse looks like an average of the other ellipses. This graph shows only one pair of variables, but see Figure 2 of Friendly and Sigal (2020) for a complete scatter plot matrix that compares the pooled covariance to the within-group covariance for each pair of variables.

Between-group covariance matrices

Another matrix in the PROC DISCRIM output is the so-called between-group covariance matrix. Intuitively, the between-group covariance matrix is related to the difference between the full covariance matrix of the data (where the subgroups are ignored) and the pooled covariance matrix (where the subgroups are averaged). The precise definition is given in the next section. For now, here is how to print the between-group covariance matrix from the output of PROC DISCRIM:

proc print data=Cov noobs;
   where _TYPE_ = "BCOV";
run;

How to compute the pooled and between-group covariance

If I can compute a quantity "by hand," then I know that I truly understand it.
Thus, I wrote a SAS/IML program that reproduces the computations made by PROC DISCRIM. The following steps are required to compute each of these matrices from first principles. For each group, compute the covariance matrix (S_i) of the observations in that group. Note that the quantity (n_i - 1)*S_i is the centered sum-of-squares and crossproducts (CSSCP) matrix for the group. Let M be the sum of the CSSCP matrices. The sum is the numerator for the pooled covariance. Form the pooled covariance matrix as S_p = M / (N-k). Let C be the CSSCP matrix for the full data (which is (N-1)*(Full Covariance)). The between-group covariance matrix is BCOV = (C - M) * k / (N*(k-1)). You can use the UNIQUE-LOC trick to iterate over the data for each group. The following SAS/IML program implements these computations:

proc iml;
/* Compute a pooled covariance matrix when observations belong to k
   groups with sizes n1, n2, ..., nk, where n1+n2+...+nk = N */
varNames = {'SepalLength' 'SepalWidth' 'PetalLength' 'PetalWidth'};
use Sashelp.Iris;
read all var varNames into Z;
read all var "Species" into Group;
close;
/* assume complete cases, otherwise remove rows with missing values */
N = nrow(Z);

/* compute the within-group covariance, which is the covariance
   for the observations in each group */
u = unique(Group);           /* distinct group values */
k = ncol(u);                 /* number of groups */
p = ncol(varNames);          /* number of variables */
M = j(p, p, 0);              /* sum of within-group CSSCP matrices */
do i = 1 to k;
   idx = loc(Group = u[i]);  /* find rows for this group */
   X = Z[idx,];              /* extract obs for i_th group */
   n_i = nrow(X);            /* n_i = size of i_th group */
   S = cov(X);               /* within-group cov */
   /* accumulate the weighted sum of within-group covariances */
   M = M + (n_i-1) * S;      /* (n_i-1)*S is centered X`*X */
end;

/* The pooled covariance is an average of the within-class covariance matrices. */
Sp = M / (N-k);
print Sp[L="Pooled Cov" c=varNames r=VarNames format=6.2];

/* The between-class CSSCP is the difference between the total CSSCP
   and the sum of the within-group CSSCPs. */
/* The SAS doc for PROC DISCRIM defines the between-class covariance
   matrix as the between-class SSCP matrix divided by N*(k-1)/k, where
   N is the number of observations and k is the number of classes. */
C = (N-1)*cov(Z);      /* the total covariance matrix ignores the groups */
BCSSCP = C - M;        /* between = Full - Sum(Within) */
BCov = BCSSCP * k/( N*(k-1) );
print BCov[L="Between Cov" c=varNames r=VarNames format=6.2];

Success! The SAS/IML program shows the computations that are needed to reproduce the pooled and between-group covariance matrices. The results are the same as are produced by PROC DISCRIM. In multivariate ANOVA, you might assume that the within-group covariance is constant across different groups in the data. The pooled covariance is an estimate of the common covariance. It is a weighted average of the sample covariances for each group, where the larger groups are weighted more heavily than smaller groups. I show how to visualize the pooled covariance by using prediction ellipses. You can use PROC DISCRIM to compute the pooled covariance matrix and other matrices that represent within-group and between-group covariance. I also show how to compute the matrices from first principles by using the SAS/IML language. You can download the SAS program that performs the computations and creates the graphs in this article.

The post Pooled, within-group, and between-group covariance matrices appeared first on The DO Loop.
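The first-principles steps above can be cross-checked outside SAS. This NumPy sketch uses synthetic data (not Sashelp.Iris) and mirrors the IML computation of M, S_p, C, and BCOV; variable names follow the article's notation:

```python
import numpy as np

# Synthetic stand-in for the iris setup: k groups of 4-variable data.
rng = np.random.default_rng(0)
sizes = [50, 30, 20]
groups = [rng.normal(loc=m, size=(n, 4))
          for m, n in zip([0.0, 1.0, 2.0], sizes)]
Z = np.vstack(groups)
N, k = len(Z), len(groups)

# M = sum of within-group CSSCP matrices (n_i - 1) * S_i
M = sum((len(X) - 1) * np.cov(X, rowvar=False) for X in groups)
Sp = M / (N - k)                        # pooled covariance
C = (N - 1) * np.cov(Z, rowvar=False)   # total CSSCP, ignoring groups
BCov = (C - M) * k / (N * (k - 1))      # between-group covariance

assert Sp.shape == BCov.shape == (4, 4)
```

With equal group sizes, S_p reduces to the simple average of the group covariance matrices, matching the formula in the article.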
Definition 97.11.1 (07XL) — The Stacks project

Definition 97.11.1. Let $S$ be a scheme. Let $\mathcal{X}$ be a category fibred in groupoids over $(\mathit{Sch}/S)_{fppf}$. We say $\mathcal{X}$ is limit preserving if for every affine scheme $T$ over $S$ which is a limit $T = \mathop{\mathrm{lim}}\nolimits T_ i$ of a directed inverse system of affine schemes $T_ i$ over $S$, we have an equivalence \[ \mathop{\mathrm{colim}}\nolimits \mathcal{X}_{T_ i} \longrightarrow \mathcal{X}_ T \] of fibre categories.

Comment #6246 by DatPham on May 18, 2021 at 13:21

Maybe this is trivial to ask, but is there a definition for the notion $\mathop{\mathrm{colim}}\nolimits \mathcal{X}_{T_i}$ (i.e. a colimit of groupoids)? My understanding is that this is just a suggestive notation (and the meaning of the equivalence $\mathop{\mathrm{colim}}\nolimits \mathcal{X}_{T_i} \to \mathcal{X}_T$ is explained in the paragraph following the above definition). Is this correct or am I missing something?

First of all: yes, please see the explanation following the definition for what the definition means. On the other hand, you can define colimits of directed systems of groupoids exactly as suggested by this explanation, and for the purposes of the Stacks project this is the correct definition. You can also define $\mathop{\mathrm{colim}}\nolimits \mathcal{X}_i$ if you just assume: (1) $I$ is a directed set (Definition 4.21.1), (2) for each $i \in I$ we have a groupoid $\mathcal{X}_i$, (3) for each $i \leq j$ we have a functor $F_{ij} : \mathcal{X}_i \to \mathcal{X}_j$ with $F_{ii} = \text{id}_{\mathcal{X}_i}$, and (4) for every $i \leq j \leq k$ a transformation $F_{ik} \to F_{jk} \circ F_{ij}$, such that this data forms a pseudo functor from $I$ into the $2$-category of groupoids, see Definition 4.29.5. The colimit of such a beast, constructed more or less in the same way, will have a weak universal property which I leave up to you to formulate.
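For the special case where each groupoid is discrete (a set, with only identity arrows), the directed colimit described in the comment becomes concrete: take the disjoint union of the sets and identify each element with its images under the transition maps. A small Python sketch with a made-up three-stage system illustrates this simplification (it is not a model of the general groupoid colimit, which also tracks arrows):

```python
# Directed colimit of SETS: colim X_i = (disjoint union of X_i) modulo
# the relation x ~ F_ij(x) for i <= j. The system X_1 -> X_2 -> X_3
# below is a made-up example; union-find computes the quotient.

X = {1: {'a', 'b'}, 2: {'p', 'q'}, 3: {'u'}}
F = {(1, 2): {'a': 'p', 'b': 'q'},   # transition map F_12
     (2, 3): {'p': 'u', 'q': 'u'}}   # transition map F_23

parent = {}

def find(v):
    while parent.get(v, v) != v:
        v = parent[v]
    return v

def union(v, w):
    parent[find(v)] = find(w)

# Identify each element (i, x) with its image (j, F_ij(x)).
for (i, j), f in F.items():
    for x, y in f.items():
        union((i, x), (j, y))

classes = {find((i, x)) for i in X for x in X[i]}
# 'a' and 'b' are both eventually sent to 'u', so the colimit is a point.
assert len(classes) == 1
```

The weak universal property mentioned in the comment corresponds here to the usual universal property of the quotient: any compatible family of maps out of the X_i factors through the classes.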
Am I right about the differences between Floyd-Warshall, Dijkstra and Bellman-Ford algorithms?

I’ve been studying the three and I’m stating my inferences from them below. Could someone tell me if I have understood them accurately enough or not? Thank you.

Dijkstra’s algorithm is used only when you have a single source and you want to know the smallest path from one node to another, but fails in graphs with negative edge weights.

Floyd-Warshall’s algorithm is used when any of all the nodes can be a source, so you want the shortest distance to reach any destination node from any source node. This only fails when there are negative cycles.

Bellman-Ford is used like Dijkstra’s, when there is only one source. This can handle negative weights and its working is the same as Floyd-Warshall’s except for one source, right? (This is the one I am least sure about.)

Dijkstra’s algorithm is used only when you have a single source and you want to know the smallest path from one node to another, but fails [in graphs with negative edges]

Dijkstra’s algorithm is one example of a single-source shortest path or SSSP algorithm. Every SSSP algorithm computes the shortest-path distances from a chosen source node s to every other node in the graph. Moreover, it computes a compact representation of all the shortest paths from s to every other node, in the form of a rooted tree. In the Wikipedia code, previous[v] is the parent of v in this tree. The behavior of Dijkstra’s algorithm in graphs with negative edges depends on the precise variant under discussion. Some variants of the algorithm, like the one in Wikipedia, always run quickly but do not correctly compute shortest paths when there are negative edges. Other variants, like the one in Jeff Erickson’s lecture notes, always compute shortest paths correctly (unless there is a negative cycle reachable from the source) but may require exponential time in the worst case if there are negative edges.
Floyd-Warshall’s algorithm is used when any of all the nodes can be a source, so you want the shortest distance to reach any destination node from any source node. This only fails when there are negative cycles.

That’s correct. Floyd-Warshall is one example of an all-pairs shortest path algorithm, meaning it computes the shortest paths between every pair of nodes. Another example is “for each node v, run Dijkstra with v as the source node”. There are several others.

Bellman-Ford is used like Dijkstra’s, when there is only one source. This can handle negative weights and its working is the same as Floyd-Warshall’s except for one source, right?

Bellman-Ford is another example of a single-source shortest-path algorithm, like Dijkstra. Bellman-Ford and Floyd-Warshall are similar—for example, they’re both dynamic programming algorithms—but Floyd-Warshall is not the same algorithm as “for each node v, run Bellman-Ford with v as the source node”. In particular, Floyd-Warshall runs in O(V^3) time, while repeated Bellman-Ford runs in O(V^2 E) time (O(VE) time for each source vertex). For further details, consult your favorite algorithms textbook. (You do have a favorite algorithms textbook, don’t you?)
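Floyd-Warshall itself is only a triple loop. A minimal Python sketch (illustrative; assumes an adjacency-matrix input with `float('inf')` for missing edges):

```python
def floyd_warshall(dist):
    """All-pairs shortest paths in O(V^3) time.

    `dist` is a V x V matrix with dist[i][i] == 0 and float('inf')
    where there is no edge. Negative edges are fine; negative cycles
    are not (they would drive some d[i][i] below zero).
    """
    V = len(dist)
    d = [row[:] for row in dist]            # don't mutate the input
    for m in range(V):                      # allow node m as an intermediate
        for i in range(V):
            for j in range(V):
                if d[i][m] + d[m][j] < d[i][j]:
                    d[i][j] = d[i][m] + d[m][j]
    return d

INF = float('inf')
# Small made-up graph with one negative edge and no negative cycle.
g = [[0,   3, INF],
     [INF, 0, -2],
     [1, INF,  0]]
d = floyd_warshall(g)
assert d[0][2] == 1   # path 0 -> 1 -> 2 costs 3 + (-2)
```

Note the contrast with repeated Bellman-Ford: this single triple loop handles all sources at once, which is where the O(V^3) versus O(V^2 E) comparison in the answer comes from.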
(a) Find the area of the surface obtained by rotating the arc of the curve y^3 = x between (0,0) and (1,1) about the y-axis.

(b) Find the length of the arc y = 1 + 9x^(3/2) between (1,10) and (4,73).

Foundations:

1. The surface area S obtained by rotating the curve y = f(x) about the y-axis is
   S = ∫ 2πx ds,  where ds = √(1 + (dx/dy)^2) dy.

2. The formula for the length L of a curve y = f(x), a ≤ x ≤ b, is
   L = ∫_a^b √(1 + (dy/dx)^2) dx.

Solution:

(a) We start by calculating dx/dy. Since x = y^3, we have dx/dy = 3y^2. Now, we are going to integrate with respect to y. Using the formula given in the Foundations section,

   S = ∫_0^1 2πx √(1 + (3y^2)^2) dy = 2π ∫_0^1 y^3 √(1 + 9y^4) dy,

where S is the surface area. Now we use the substitution u = 1 + 9y^4. Then du = 36y^3 dy, so du/36 = y^3 dy. Also, since this is a definite integral, we need to change the bounds of integration.
The new bounds are u_1 = 1 + 9(0)^4 = 1 and u_2 = 1 + 9(1)^4 = 10. Hence

   S = (2π/36) ∫_1^10 √u du = (π/27) u^(3/2) |_1^10 = (π/27)(10)^(3/2) − π/27.

(b) First, we calculate dy/dx. Since y = 1 + 9x^(3/2), we have dy/dx = (27/2)√x. Then the arc length L of the curve is given by

   L = ∫_1^4 √(1 + ((27/2)√x)^2) dx = ∫_1^4 √(1 + 27^2 x / 2^2) dx.

Now we use the substitution u = 1 + 27^2 x / 2^2. Then du = (27^2/2^2) dx, so dx = (2^2/27^2) du. The new bounds are u_1 = 1 + 27^2(1)/2^2 = 1 + 27^2/2^2 and u_2 = 1 + 27^2(4)/2^2 = 1 + 27^2. Hence

   L = ∫_{1 + 27^2/2^2}^{1 + 27^2} (2^2/27^2) u^(1/2) du
     = (2^2/27^2)(2/3) u^(3/2) |_{1 + 27^2/2^2}^{1 + 27^2}
     = (2^3/3^7)(1 + 27^2)^(3/2) − (2^3/3^7)(1 + 27^2/2^2)^(3/2),

since (2^2/27^2)(2/3) = 2^3/(3·3^6) = 2^3/3^7.

Final Answers:
(a) (π/27)(10)^(3/2) − π/27
(b) (2^3/3^7)(1 + 27^2)^(3/2) − (2^3/3^7)(1 + 27^2/2^2)^(3/2)
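Closed forms like these are easy to sanity-check numerically. The sketch below compares part (a)'s answer against a midpoint-rule approximation of the integral (illustrative only, standard library math):

```python
from math import pi, sqrt

# Part (a): S = 2*pi * Integral_0^1 y^3 * sqrt(1 + 9 y^4) dy,
# which the substitution u = 1 + 9y^4 evaluates to (pi/27)*(10^(3/2) - 1).
n = 100_000
h = 1.0 / n
total = 0.0
for i in range(n):
    y = (i + 0.5) * h                    # midpoint of the i-th subinterval
    total += y**3 * sqrt(1 + 9 * y**4)
S_numeric = 2 * pi * total * h
S_exact = (pi / 27) * (10 ** 1.5 - 1)

assert abs(S_numeric - S_exact) < 1e-6
```

The same kind of check on part (b) (a chord from (1,10) to (4,73) has length √(9 + 63²) ≈ 63.07) confirms the 2^3/3^7 coefficient.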
2) Evaluation of tempo extraction algorithms

Algorithms will process musical excerpts and return the following data: primary (most salient) tempo (T1, BPM), secondary tempo (T2, BPM), relative salience of T1 ($T1_s$), and the phases of T1 (P1 and P2, seconds from the beginning of the audio file to the first beat). Algorithms will be rated on the following tasks:

Ability to identify T1 to within 5% (Task P)
Ability to identify T2 to within 5% (Task S)
Ability to identify an integer multiple of T1 (to within 5%) (Task PI, given if Task P is performed correctly)
Ability to identify an integer multiple of T2 (to within 5%) (Task SI, given if Task S is performed correctly)
Ability to identify the relative strength of the primary tempo (Task T1S)
Ability to correctly identify the phase of the tempo
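A minimal sketch of the 5%-tolerance scoring in Python (the function names and the range of integer multiples checked are illustrative assumptions, not part of the evaluation spec):

```python
def within_5pct(estimate, truth, tol=0.05):
    # Task P / Task S: the estimated tempo must fall within +/-5% of the
    # ground-truth tempo.
    return abs(estimate - truth) <= tol * truth

def integer_multiple_within_5pct(estimate, truth, max_mult=4, tol=0.05):
    # Tasks PI / SI: credit if the estimate matches some integer multiple
    # of the ground-truth tempo to within the same tolerance.
    return any(within_5pct(estimate, m * truth, tol)
               for m in range(1, max_mult + 1))
```

For example, an estimate of 118 BPM against a true T1 of 120 BPM passes the 5% test, and 242 BPM passes the integer-multiple test as roughly twice the true tempo.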
Material conditional

The material conditional (also known as material implication) is an operation commonly used in logic. When the conditional symbol $\rightarrow$ is interpreted as material implication, a formula $P \rightarrow Q$ is true unless $P$ is true and $Q$ is false. Material implication can also be characterized inferentially by modus ponens, modus tollens, conditional proof, and classical reductio ad absurdum.[citation needed]

As a Boolean function, $x \rightarrow y$ has the truth table $(1011)$ (rows ordered $(T,T), (T,F), (F,T), (F,F)$) and may be written as $\overline{x} + y$ or, as a Zhegalkin polynomial, $1 \oplus x \oplus xy$.

The truth table of $p \rightarrow q$:

$p$	$q$	$p \rightarrow q$
T	T	T
T	F	F
F	T	T
F	F	T

Deductive definition

Material implication can also be characterized deductively, by rules of inference such as classical contraposition and classical reductio ad absurdum. Unlike the semantic definition, this approach to logical connectives permits the examination of structurally identical propositional forms in various logical systems, where somewhat different properties may be demonstrated.
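The three representations above (the truth table, the disjunctive form $\overline{x}+y$, and the Zhegalkin polynomial $1 \oplus x \oplus xy$) can be checked against each other exhaustively; a small Python sketch:

```python
from itertools import product

# The truth table of the material conditional: false only when the
# antecedent is true and the consequent is false.
TABLE = {(True, True): True, (True, False): False,
         (False, True): True, (False, False): True}

for p, q in product([False, True], repeat=2):
    # Disjunctive form: not-p or q.
    assert TABLE[(p, q)] == ((not p) or q)
    # Zhegalkin polynomial over GF(2): 1 xor p xor p*q.
    assert int(TABLE[(p, q)]) == 1 ^ int(p) ^ (int(p) & int(q))
```

All four valuations agree across the three representations.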
For example, in intuitionistic logic, which rejects proofs by contraposition as valid rules of inference, $(p \to q) \Rightarrow \neg p \lor q$ is not a propositional theorem, but the material conditional is used to define negation.[clarification needed]

Formal properties

Contraposition: $P\to Q\equiv \neg Q\to \neg P$
Import-export: $P\to (Q\to R)\equiv (P\land Q)\to R$
Negated conditionals: $\neg (P\to Q)\equiv P\land \neg Q$
Or-and-if: $P\to Q\equiv \neg P\lor Q$
Commutativity of antecedents: ${\big (}P\to (Q\to R){\big )}\equiv {\big (}Q\to (P\to R){\big )}$
Left distributivity: ${\big (}R\to (P\to Q){\big )}\equiv {\big (}(R\to P)\to (R\to Q){\big )}$
Antecedent strengthening: $P\to Q\models (P\land R)\to Q$
Vacuous conditional: $\neg P\models P\to Q$
Transitivity: $(P\to Q)\land (Q\to R)\models P\to R$
Simplification of disjunctive antecedents: $(P\lor Q)\to R\models (P\to R)\land (Q\to R)$

Tautologies involving material implication include:

Reflexivity: $\models P\to P$
Totality: $\models (P\to Q)\lor (Q\to P)$
Conditional excluded middle: $\models (P\to Q)\lor (P\to \neg Q)$

Discrepancies with natural language

^ a b c d Edgington, Dorothy (2008). "Conditionals". In Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Winter 2008 ed.). ^ Starr, Will (2019). "Counterfactuals". In Zalta, Edward N. (ed.), The Stanford Encyclopedia of Philosophy. ^ a b c Gillies, Thony (2017). "Conditionals". In Hale, B.; Wright, C.; Miller, A. (eds.), A Companion to the Philosophy of Language. Wiley Blackwell. doi:10.1002/9781118972090.ch17. ^ von Fintel, Kai (2011). "Conditionals". In von Heusinger, Klaus; Maienborn, Claudia; Portner, Paul (eds.), Semantics: An International Handbook of Meaning. de Gruyter Mouton. doi:10.1515/9783110255072.1515. hdl:1721.1/95781. ^ Oaksford, M.; Chater, N. (1994).
"A rational analysis of the selection task as optimal data selection". Psychological Review. 101 (4): 608–631. CiteSeerX 10.1.1.174.4085. doi:10.1037/0033-295X.101.4.608. ^ Stenning, K.; van Lambalgen, M. (2004). "A little logic goes a long way: basing experiment on semantic theory in the cognitive science of conditional reasoning". Cognitive Science. 28 (4): 481–530. CiteSeerX 10.1.1.13.1854. doi:10.1016/j.cogsci.2004.02.002. ^ von Sydow, M. (2006). Towards a Flexible Bayesian and Deontic Logic of Testing Descriptive and Prescriptive Rules. Göttingen: Göttingen University Press. Edgington, Dorothy (2001), "Conditionals", in Lou Goble (ed.), The Blackwell Guide to Philosophical Logic, Blackwell. Stalnaker, Robert, "Indicative Conditionals", Philosophia, 5 (1975): 269–286. Media related to Material conditional at Wikimedia Commons Edgington, Dorothy. "Conditionals". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy.
Noncommutative quantum field theory

In mathematical physics, noncommutative quantum field theory (or quantum field theory on noncommutative spacetime) is an application of noncommutative mathematics to the spacetime of quantum field theory; it is an outgrowth of noncommutative geometry and index theory in which the coordinate functions[1] are noncommutative. One commonly studied version of such theories has the "canonical" commutation relation
\[ [x^{\mu}, x^{\nu}] = i\theta^{\mu\nu}, \]
which means that (with any given set of axes) it is impossible to accurately measure the position of a particle with respect to more than one axis. In fact, this leads to an uncertainty relation for the coordinates analogous to the Heisenberg uncertainty principle. Various lower limits have been claimed for the noncommutative scale (i.e. how accurately positions can be measured), but there is currently no experimental evidence in favour of such theories, nor grounds for ruling them out.

One of the novel features of noncommutative field theories is the UV/IR mixing[2] phenomenon, in which the physics at high energies affects the physics at low energies; this does not occur in quantum field theories in which the coordinates commute. Other features include violation of Lorentz invariance due to the preferred direction of noncommutativity. Relativistic invariance can, however, be retained in the sense of twisted Poincaré invariance of the theory.[3] The causality condition is modified from that of the commutative theories.

Heisenberg was the first to suggest extending noncommutativity to the coordinates as a possible way of removing the infinite quantities appearing in field theories, before the renormalization procedure was developed and had gained acceptance. The first paper on the subject was published in 1947 by Hartland Snyder. The success of the renormalization method resulted in little attention being paid to the subject for some time.
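The canonical commutation relation can be realized concretely with the Moyal star product; truncated at first order in $\theta$ (which is exact for the coordinate functions themselves), $f \star g = fg + \tfrac{i\theta}{2}(\partial_x f\,\partial_y g - \partial_y f\,\partial_x g)$. The sketch below (Python, illustrative; the dictionary representation of polynomials and the numeric value of $\theta$ are my own choices) verifies $x \star y - y \star x = i\theta$ in two dimensions.

```python
theta = 0.7  # noncommutativity parameter; arbitrary numeric value for the demo

def mul(f, g):
    # Pointwise (commutative) product of polynomials {(i, j): coeff} in x, y.
    h = {}
    for (a, b), c in f.items():
        for (d, e), k in g.items():
            h[(a + d, b + e)] = h.get((a + d, b + e), 0) + c * k
    return h

def dx(f):
    return {(a - 1, b): a * c for (a, b), c in f.items() if a}

def dy(f):
    return {(a, b - 1): b * c for (a, b), c in f.items() if b}

def sub(f, g):
    h = dict(f)
    for m, c in g.items():
        h[m] = h.get(m, 0) - c
    return h

def star(f, g):
    # Moyal star product truncated at first order in theta:
    #   f*g + (i*theta/2) * (df/dx * dg/dy - df/dy * dg/dx)
    # Exact for the linear coordinate functions used below.
    out = mul(f, g)
    corr = sub(mul(dx(f), dy(g)), mul(dy(f), dx(g)))
    for m, c in corr.items():
        out[m] = out.get(m, 0) + 1j * theta / 2 * c
    return out

X = {(1, 0): 1.0}  # the coordinate function x
Y = {(0, 1): 1.0}  # the coordinate function y

comm = sub(star(X, Y), star(Y, X))  # x*y - y*x under the star product
```

The commutator comes out as the constant polynomial $i\theta$, with the $xy$ terms cancelling exactly.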
In the 1980s, mathematicians, most notably Alain Connes, developed noncommutative geometry. Among other things, this work generalized the notion of differential structure to a noncommutative setting. This led to an operator algebraic description of noncommutative space-times, with the problem that it classically corresponds to a manifold with positive-definite metric tensor, so that there is no description of (noncommutative) causality in this approach. However, it also led to the development of a Yang–Mills theory on a noncommutative torus. The particle physics community became interested in the noncommutative approach because of a paper by Nathan Seiberg and Edward Witten.[4] They argued in the context of string theory that the coordinate functions of the endpoints of open strings constrained to a D-brane in the presence of a constant Neveu–Schwarz B-field (equivalent to a constant magnetic field on the brane) would satisfy the noncommutative algebra set out above. The implication is that a quantum field theory on noncommutative spacetime can be interpreted as a low energy limit of the theory of open strings. Two papers, one by Sergio Doplicher, Klaus Fredenhagen and John Roberts[5] and the other by D. V. Ahluwalia,[6] set out another motivation for the possible noncommutativity of space-time. The arguments go as follows: According to general relativity, when the energy density grows sufficiently large, a black hole is formed. On the other hand, according to the Heisenberg uncertainty principle, a measurement of a space-time separation causes an uncertainty in momentum inversely proportional to the extent of the separation. Thus energy whose scale corresponds to the uncertainty in momentum is localized in the system within a region corresponding to the uncertainty in position. When the separation is small enough, the Schwarzschild radius of the system is reached and a black hole is formed, which prevents any information from escaping the system.
Thus there is a lower bound for the measurement of length. A sufficient condition for preventing gravitational collapse can be expressed as an uncertainty relation for the coordinates. This relation can in turn be derived from a commutation relation for the coordinates. It is worth stressing that, unlike other approaches, in particular those relying upon Connes' ideas, here the noncommutative spacetime is a proper spacetime, i.e. it extends the idea of a four-dimensional pseudo-Riemannian manifold. On the other hand, unlike Connes' noncommutative geometry, the proposed model turns out to be coordinate-dependent from the outset. In Doplicher, Fredenhagen and Roberts' paper, noncommutativity of coordinates concerns all four spacetime coordinates and not only spatial ones.

See also: Wigner–Weyl transform

^ It is possible to have a noncommuting time coordinate as in the paper by Doplicher, Fredenhagen and Roberts mentioned below, but this causes many problems such as the violation of unitarity of the S-matrix. Hence most research is restricted to so-called "space-space" noncommutativity. There have been attempts to avoid these problems by redefining the perturbation theory. However, string theory derivations of noncommutative coordinates exclude time-space noncommutativity. ^ See, for example, Shiraz Minwalla, Mark Van Raamsdonk, Nathan Seiberg (2000) "Noncommutative Perturbative Dynamics," Journal of High Energy Physics, and Alec Matusis, Leonard Susskind, Nicolaos Toumbas (2000) "The IR/UV Connection in the Non-Commutative Gauge Theories," Journal of High Energy Physics. ^ M. Chaichian, P. Prešnajder, A. Tureanu (2005) "New concept of relativistic invariance in NC space-time: twisted Poincaré symmetry and its implications," Physical Review Letters 94. ^ Seiberg, N. and E. Witten (1999) "String Theory and Noncommutative Geometry," Journal of High Energy Physics. ^ Sergio Doplicher, Klaus Fredenhagen, John E.
Roberts (1995) "The quantum structure of spacetime at the Planck scale and quantum fields," Commun. Math. Phys. 172: 187-220. ^ D. V. Ahluwalia (1993) "Quantum Measurement, Gravitation, and Locality," ``Phys. Lett. B339:301-303,1994. A look at preprint dates shows that this work takes priority over Doplicher et al. publication by eight months M.R. Douglas and N. A. Nekrasov (2001) "Noncommutative field theory," Rev. Mod. Phys. 73: 977–1029. Szabo, R. (2003) "Quantum Field Theory on Noncommutative Spaces," Physics Reports 378: 207-99. An expository article on noncommutative quantum field theories. Noncommutative quantum field theory, see statistics on arxiv.org V. Moretti (2003), "Aspects of noncommutative Lorentzian geometry for globally hyperbolic spacetimes," Rev. Math. Phys. 15: 1171-1218. An expository paper (also) on the difficulties to extend non-commutative geometry to the Lorentzian case describing causality
Section 97.11 (07XK): Limit preserving

The morphism $p : \mathcal{X} \to (\mathit{Sch}/S)_{fppf}$ is limit preserving on objects, as defined in Criteria for Representability, Section 96.5, if the functor of the definition below is essentially surjective. However, the example in Examples, Section 109.52 shows that this isn't equivalent to being limit preserving. We spell out what this means. Let $T = \mathop{\mathrm{lim}}\nolimits _{i \in I} T_ i$ be a directed limit of affine schemes over $S$. First, given objects $x, y$ of $\mathcal{X}$ over $T_ i$ we should have \[ \mathop{\mathrm{Mor}}\nolimits _{\mathcal{X}_ T}(x|_ T, y|_ T) = \mathop{\mathrm{colim}}\nolimits _{i' \geq i} \mathop{\mathrm{Mor}}\nolimits _{\mathcal{X}_{T_{i'}}}(x|_{T_{i'}}, y|_{T_{i'}}) \] and, second, every object of $\mathcal{X}_ T$ should be isomorphic to the restriction of an object over $T_ i$ for some $i$. Note that the first condition means that the presheaves $\mathit{Isom}_\mathcal {X}(x, y)$ (see Stacks, Definition 8.2.2) are limit preserving.

Lemma 97.11.2. Let $S$ be a scheme. Let $p : \mathcal{X} \to \mathcal{Y}$ and $q : \mathcal{Z} \to \mathcal{Y}$ be $1$-morphisms of categories fibred in groupoids over $(\mathit{Sch}/S)_{fppf}$. (1) If $\mathcal{X} \to (\mathit{Sch}/S)_{fppf}$ and $\mathcal{Z} \to (\mathit{Sch}/S)_{fppf}$ are limit preserving on objects and $\mathcal{Y}$ is limit preserving, then $\mathcal{X} \times _\mathcal {Y} \mathcal{Z} \to (\mathit{Sch}/S)_{fppf}$ is limit preserving on objects. (2) If $\mathcal{X}$, $\mathcal{Y}$, and $\mathcal{Z}$ are limit preserving, then so is $\mathcal{X} \times _\mathcal {Y} \mathcal{Z}$.

Proof. This is formal. Proof of (1). Let $T = \mathop{\mathrm{lim}}\nolimits _{i \in I} T_ i$ be the directed limit of affine schemes $T_ i$ over $S$. We will prove that the functor $\mathop{\mathrm{colim}}\nolimits \mathcal{X}_{T_ i} \to \mathcal{X}_ T$ is essentially surjective.
Recall that an object of the fibre product over $T$ is a quadruple $(T, x, z, \alpha )$ where $x$ is an object of $\mathcal{X}$ lying over $T$, $z$ is an object of $\mathcal{Z}$ lying over $T$, and $\alpha : p(x) \to q(z)$ is a morphism in the fibre category of $\mathcal{Y}$ over $T$. By assumption on $\mathcal{X}$ and $\mathcal{Z}$ we can find an $i$ and objects $x_ i$ and $z_ i$ over $T_ i$ such that $x_ i|_ T \cong x$ and $z_ i|_ T \cong z$. Then $\alpha $ corresponds to an isomorphism $p(x_ i)|_ T \to q(z_ i)|_ T$ which comes from an isomorphism $\alpha _{i'} : p(x_ i)|_{T_{i'}} \to q(z_ i)|_{T_{i'}}$ by our assumption on $\mathcal{Y}$. After replacing $i$ by $i'$, $x_ i$ by $x_ i|_{T_{i'}}$, and $z_ i$ by $z_ i|_{T_{i'}}$ we see that $(T_ i, x_ i, z_ i, \alpha _ i)$ is an object of the fibre product over $T_ i$ which restricts to an object isomorphic to $(T, x, z, \alpha )$ over $T$ as desired. We omit the arguments showing that $\mathop{\mathrm{colim}}\nolimits \mathcal{X}_{T_ i} \to \mathcal{X}_ T$ is fully faithful in (2). $\square$

Lemma 97.11.3. Let $S$ be a scheme. Let $\mathcal{X}$ be an algebraic stack over $S$. Then the following are equivalent:
(1) $\mathcal{X}$ is a stack in setoids and $\mathcal{X} \to (\mathit{Sch}/S)_{fppf}$ is limit preserving on objects,
(2) $\mathcal{X}$ is a stack in setoids and limit preserving,
(3) $\mathcal{X}$ is representable by an algebraic space locally of finite presentation.

Proof. Under each of the three assumptions $\mathcal{X}$ is representable by an algebraic space $X$ over $S$, see Algebraic Stacks, Proposition 93.13.3. It is clear that (1) and (2) are equivalent as a functor between setoids is an equivalence if and only if it is surjective on isomorphism classes. Finally, (1) and (3) are equivalent by Limits of Spaces, Proposition 69.3.10. $\square$
Section 63.3 (03SL): Frobenii

In this section we will prove a "baffling" theorem. A topological analogue of the baffling theorem is the following. We now turn to the statement for the étale site.

Lemma 63.3.2. Let $X$ be a scheme and $g : X \to X$ a morphism. Assume that for all $\varphi : U \to X$ étale, there is an isomorphism \[ \xymatrix{ U \ar[rd]_\varphi \ar[rr]^-\sim & & {U \times _{\varphi , X, g} X} \ar[ld]^{\text{pr}_2} \\ & X } \] functorial in $U$. Then $g$ induces the identity on cohomology (for any sheaf).

Proof. The proof is formal and without difficulty. $\square$

Please see Varieties, Section 33.36 for a discussion of different variants of the Frobenius morphism.

Theorem 63.3.3 (The Baffling Theorem). Let $X$ be a scheme in characteristic $p > 0$. Then the absolute frobenius induces (by pullback) the trivial map on cohomology, i.e., for all integers $j\geq 0$, the map \[ F_ X^* : H^ j (X, \underline{\mathbf{Z}/n\mathbf{Z}}) \longrightarrow H^ j (X, \underline{\mathbf{Z}/n\mathbf{Z}}) \] is the identity.

This theorem is purely formal. It is a good idea, however, to review how to compute the pullback of a cohomology class. Let us simply say that in the case where cohomology agrees with Čech cohomology, it suffices to pull back (using the fiber products on a site) the Čech cocycles. The general case is quite technical, see Hypercoverings, Theorem 25.10.1. To prove the theorem, we merely verify that the assumption of Lemma 63.3.2 holds for the frobenius.

Proof of Theorem 63.3.3. We need to verify the existence of a functorial isomorphism as above. For an étale morphism $\varphi : U \to X$, consider the diagram \[ \xymatrix{ U \ar@{-->}[rd] \ar@/^1pc/[rrd]^{F_ U} \ar@/_1pc/[rdd]_\varphi \\ & {U \times _{\varphi , X, F_ X} X} \ar[r]_-{\text{pr}_1} \ar[d]^{\text{pr}_2} & U \ar[d]^\varphi \\ & X \ar[r]^{F_ X} & X. } \] The dotted arrow is an étale morphism and a universal homeomorphism, so it is an isomorphism.
See Étale Morphisms, Lemma 41.14.3. $\square$ Since $\pi _ X$ is a morphism over $k$, we can base change it to any scheme over $k$. In particular we can base change it to the algebraic closure $\bar k$ and get a morphism $\pi _ X : X_{\bar k} \to X_{\bar k}$. Using $\pi _ X$ also for this base change should not be confusing as $X_{\bar k}$ does not have a geometric frobenius of its own.

Lemma 63.3.5. Let $\mathcal{F}$ be a sheaf on $X_{\acute{e}tale}$. Then there are canonical isomorphisms $\pi _ X^{-1} \mathcal{F} \cong \mathcal{F}$ and $\mathcal{F} \cong {\pi _ X}_*\mathcal{F}$. This is false for the fppf site.

Proof. Let $\varphi : U \to X$ be étale. Recall that ${\pi _ X}_* \mathcal{F} (U) = \mathcal{F} (U \times _{\varphi , X, \pi _ X} X)$. Since $\pi _ X = F_ X^ f$, it follows from the proof of Theorem 63.3.3 that there is a functorial isomorphism \[ \xymatrix{ U \ar[rd]_{\varphi } \ar[rr]_-{\gamma _ U} & & U \times _{\varphi , X, \pi _ X} X \ar[ld]^{\text{pr}_2} \\ & X } \] where $\gamma _ U = (\varphi , F_ U^ f)$. Now we define an isomorphism \[ \mathcal{F} (U) \longrightarrow {\pi _ X}_* \mathcal{F} (U) = \mathcal{F} (U \times _{\varphi , X, \pi _ X} X) \] by taking the restriction map of $\mathcal{F}$ along $\gamma _ U^{-1}$. The other isomorphism is analogous. $\square$

We continue the discussion of cohomology of sheaves on our scheme $X$ over the finite field $k$ with $q = p^ f$ elements. Fix an algebraic closure $\bar k$ of $k$ and write $G_ k = \text{Gal}(\bar k/k)$ for the absolute Galois group of $k$. Let $\mathcal{F}$ be an abelian sheaf on $X_{\acute{e}tale}$. We will define a left $G_ k$-module structure on the cohomology group $H^ j (X_{\bar k}, \mathcal{F}|_{X_{\bar k}})$ as follows: if $\sigma \in G_ k$, the diagram \[ \xymatrix{ X_{\bar k} \ar[rd] \ar[rr]^{\mathop{\mathrm{Spec}}(\sigma ) \times \text{id}_ X} & & X_{\bar k} \ar[ld] \\ & X } \] commutes.
Thus we can set, for $\xi \in H^ j (X_{\bar k}, \mathcal{F}|_{X_{\bar k}})$ \[ \sigma \cdot \xi := (\mathop{\mathrm{Spec}}(\sigma ) \times \text{id}_ X)^*\xi \in H^ j(X_{\bar k}, (\mathop{\mathrm{Spec}}(\sigma ) \times \text{id}_ X)^{-1} \mathcal{F}|_{X_{\bar k}}) = H^ j (X_{\bar k}, \mathcal{F}|_{X_{\bar k}}), \] where the last equality follows from the commutativity of the previous diagram. This endows the latter group with the structure of a $G_ k$-module.

Lemma 63.3.7. In the situation above denote $\alpha : X \to \mathop{\mathrm{Spec}}(k)$ the structure morphism. Consider the stalk $(R^ j\alpha _*\mathcal{F})_{\mathop{\mathrm{Spec}}(\bar k)}$ endowed with its natural Galois action as in Étale Cohomology, Section 59.56. Then the identification \[ (R^ j\alpha _*\mathcal{F})_{\mathop{\mathrm{Spec}}(\bar k)} \cong H^ j (X_{\bar k}, \mathcal{F}|_{X_{\bar k}}) \] from Étale Cohomology, Theorem 59.53.1 is an isomorphism of $G_ k$-modules. A similar result holds comparing $(R^ j\alpha _!\mathcal{F})_{\mathop{\mathrm{Spec}}(\bar k)}$ with $H^ j_ c (X_{\bar k}, \mathcal{F}|_{X_{\bar k}})$.

The map $\pi _ X^*$ is defined by the composition \[ H^ j(X_{\bar k}, \mathcal{F}|_{X_{\bar k}}) \xrightarrow {{\pi _ X}_{\bar k}^*} H^ j(X_{\bar k}, (\pi _ X^{-1} \mathcal{F})|_{X_{\bar k}}) \cong H^ j(X_{\bar k}, \mathcal{F}|_{X_{\bar k}}), \] where the last isomorphism comes from the canonical isomorphism $\pi _ X^{-1} \mathcal{F} \cong \mathcal{F}$ of Lemma 63.3.5.

Definition 63.3.10. If $x \in X(k)$ is a rational point and $\bar x : \mathop{\mathrm{Spec}}(\bar k) \to X$ the geometric point lying over $x$, we let $\pi _ x : \mathcal{F}_{\bar x} \to \mathcal{F}_{\bar x}$ denote the action by $\text{frob}_ k^{-1}$ and call it the geometric frobenius.[1]

We can now make a more precise statement (albeit a false one) of the trace formula (63.2.0.1).
Let $X$ be a finite type scheme of dimension 1 over a finite field $k$, $\ell $ a prime number and $\mathcal{F}$ a constructible, flat $\mathbf{Z}/\ell ^ n\mathbf{Z}$ sheaf. Then \[ \sum \nolimits _{x \in X(k)} \text{Tr}(\pi _ x | \mathcal{F}_{\bar x}) = \sum \nolimits _ i (-1)^ i \text{Tr}(\pi _ X^* | H^ i_ c(X_{\bar k}, \mathcal{F}|_{X_{\bar k}})) \] as elements of $\mathbf{Z}/\ell ^ n\mathbf{Z}$. The reason this equation is wrong is that the trace on the right-hand side does not make sense for the kind of sheaves considered. Before addressing this issue, we try to motivate the appearance of the geometric frobenius (apart from the fact that it is a natural morphism!). Let us consider the case where $X = \mathbf{P}^1_ k$ and $\mathcal{F} = \underline{\mathbf{Z}/\ell \mathbf{Z}}$. For any point, the Galois module $\mathcal{F}_{\bar x}$ is trivial, hence for any morphism $\varphi $ acting on $\mathcal{F}_{\bar x}$, the left-hand side is \[ \sum \nolimits _{x \in X(k)} \text{Tr}(\varphi | \mathcal{F}_{\bar x}) = \# \mathbf{P}^1_ k(k) = q+1. \] Now $\mathbf{P}^1_ k$ is proper, so compactly supported cohomology equals standard cohomology, and so for a morphism $\pi : \mathbf{P}^1_ k \to \mathbf{P}^1_ k$, the right-hand side equals \[ \text{Tr}(\pi ^* | H^0 (\mathbf{P}^1_{\bar k}, \underline{\mathbf{Z}/\ell \mathbf{Z}})) + \text{Tr}(\pi ^* | H^2 (\mathbf{P}^1_{\bar k}, \underline{\mathbf{Z}/\ell \mathbf{Z}})). \] The Galois module $H^0 (\mathbf{P}^1_{\bar k}, \underline{\mathbf{Z}/\ell \mathbf{Z}}) = \mathbf{Z}/\ell \mathbf{Z}$ is trivial, since the pullback of the identity is the identity. Hence the first trace is 1, regardless of $\pi $. For the second trace, we need to compute the pullback $\pi ^* : H^2(\mathbf{P}^1_{\bar k}, \underline{\mathbf{Z}/\ell \mathbf{Z}}) \to H^2(\mathbf{P}^1_{\bar k}, \underline{\mathbf{Z}/\ell \mathbf{Z}})$ for a map $\pi : \mathbf{P}^1_{\bar k} \to \mathbf{P}^1_{\bar k}$. This is a good exercise and the answer is multiplication by the degree of $\pi $ (for a proof see Étale Cohomology, Lemma 59.69.2). In other words, this works as in the familiar situation of complex cohomology.
In particular, if $\pi $ is the geometric frobenius we get \[ \text{Tr}(\pi _ X^* | H^2 (\mathbf{P}^1_{\bar k}, \underline{\mathbf{Z}/\ell \mathbf{Z}})) = q \] and if $\pi $ is the arithmetic frobenius then we get \[ \text{Tr}(\text{frob}_ k^* | H^2 (\mathbf{P}^1_{\bar k}, \underline{\mathbf{Z}/\ell \mathbf{Z}})) = q^{-1}. \] The latter option is clearly wrong.

Remark 63.3.11. The computation of the degrees can be done by lifting (in some obvious sense) to characteristic 0 and considering the situation with complex coefficients. This method almost never works, since lifting is in general impossible for schemes which are not projective space.

The question remains as to why we have to consider compactly supported cohomology. In fact, in view of Poincaré duality, it is not strictly necessary for smooth varieties, but it involves adding in certain powers of $q$. For example, let us consider the case where $X = \mathbf{A}^1_ k$ and $\mathcal{F} = \underline{\mathbf{Z}/\ell \mathbf{Z}}$. The action on stalks is again trivial, so we only need look at the action on cohomology. But then $\pi _ X^*$ acts as the identity on $H^0(\mathbf{A}^1_{\bar k}, \underline{\mathbf{Z}/\ell \mathbf{Z}})$ and as multiplication by $q$ on $H^2_ c(\mathbf{A}^1_{\bar k}, \underline{\mathbf{Z}/\ell \mathbf{Z}})$.

[1] This notation is not standard. This operator is denoted $F_ x$ in [SGA4.5]. We will likely change this notation in the future.
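The point count on the left-hand side of the $\mathbf{P}^1$ example can be checked by brute force for small prime fields: enumerating the $\mathbf{F}_p$-points of the projective line does give $q + 1$, matching the trace $1$ on $H^0$ plus the trace $q$ on $H^2$. A sketch in Python (prime fields only; the enumeration scheme is my own, not from the text):

```python
def count_p1_points(p):
    # F_p-points of the projective line: nonzero pairs (a, b) up to the
    # scaling action of F_p^*, normalized so the last nonzero coordinate is 1.
    reps = set()
    for a in range(p):
        for b in range(p):
            if (a, b) == (0, 0):
                continue
            if b != 0:
                inv = pow(b, p - 2, p)  # b^(-1) mod p, valid since p is prime
                reps.add((a * inv % p, 1))
            else:
                reps.add((1, 0))
    return len(reps)
```

The normalized representatives are the $p$ affine points $(x, 1)$ together with the single point at infinity $(1, 0)$, so the count is $p + 1$ for every prime $p$.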
Symmetrization

In mathematics, symmetrization is a process that converts any function in $n$ variables to a symmetric function in $n$ variables. Similarly, antisymmetrization converts any function in $n$ variables into an antisymmetric function.

Two variables

Let $S$ be a set and $A$ an additive abelian group. A map $\alpha : S \times S \to A$ is called a symmetric map if
\[ \alpha(s,t) = \alpha(t,s) \quad \text{for all } s, t \in S. \]
It is called an antisymmetric map if instead
\[ \alpha(s,t) = -\alpha(t,s) \quad \text{for all } s, t \in S. \]
The symmetrization of a map $\alpha : S \times S \to A$ is the map $(x,y) \mapsto \alpha(x,y) + \alpha(y,x)$. Similarly, the antisymmetrization or skew-symmetrization of $\alpha$ is the map $(x,y) \mapsto \alpha(x,y) - \alpha(y,x)$.

The sum of the symmetrization and the antisymmetrization of a map $\alpha$ is $2\alpha$. Thus, away from 2, meaning if 2 is invertible, such as for the real numbers, one can divide by 2 and express every function as a sum of a symmetric function and an antisymmetric function.

The symmetrization of a symmetric map is its double, while the symmetrization of an alternating map is zero; similarly, the antisymmetrization of a symmetric map is zero, while the antisymmetrization of an antisymmetric map is its double.

The symmetrization and antisymmetrization of a bilinear map are bilinear; thus away from 2, every bilinear form is a sum of a symmetric form and a skew-symmetric form, and there is no difference between a symmetric form and a quadratic form. At 2, not every form can be decomposed into a symmetric form and a skew-symmetric form.
For instance, over the integers, the associated symmetric form (over the rationals) may take half-integer values, while over $\mathbb{Z}/2\mathbb{Z}$ a function is skew-symmetric if and only if it is symmetric (as $1 = -1$). This leads to the notion of ε-quadratic forms and ε-symmetric forms.

In terms of representation theory: exchanging variables gives a representation of the symmetric group on the space of functions in two variables; the symmetric and antisymmetric functions are the subrepresentations corresponding to the trivial representation and the sign representation, and symmetrization and antisymmetrization map a function into these subrepresentations; if one divides by 2, these yield projection maps. As the symmetric group of order two equals the cyclic group of order two ($\mathrm{S}_2 = \mathrm{C}_2$), this corresponds to the discrete Fourier transform of order two.

n variables

More generally, given a function in $n$ variables, one can symmetrize by taking the sum over all $n!$ permutations of the variables,[1] or antisymmetrize by taking the sum over all $n!/2$ even permutations and subtracting the sum over all $n!/2$ odd permutations (except that when $n \leq 1$ the only permutation is even).

Here symmetrizing a symmetric function multiplies it by $n!$; thus if $n!$ is invertible, such as when working over a field of characteristic $0$ or characteristic $p > n$, these yield projections when divided by $n!$. In terms of representation theory, these only yield the subrepresentations corresponding to the trivial and sign representations, but for $n > 2$ there are others; see representation theory of the symmetric group and symmetric polynomials.
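The $n$-variable construction is easy to state in code. The sketch below (Python, illustrative) symmetrizes and antisymmetrizes a function of three variables over all $3! = 6$ permutations:

```python
from itertools import permutations

def sign(perm):
    # Parity of a permutation, computed from its inversion count.
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def symmetrize(f, n):
    # Sum of f over all n! permutations of its arguments.
    return lambda *xs: sum(f(*(xs[i] for i in p))
                           for p in permutations(range(n)))

def antisymmetrize(f, n):
    # Signed sum: even permutations added, odd permutations subtracted.
    return lambda *xs: sum(sign(p) * f(*(xs[i] for i in p))
                           for p in permutations(range(n)))

f = lambda x, y, z: x * y**2 * z**3   # a deliberately asymmetric function
sym = symmetrize(f, 3)
alt = antisymmetrize(f, 3)
```

As the text notes, symmetrizing an already symmetric function multiplies it by $n!$: `symmetrize(lambda x, y, z: x + y + z, 3)(1, 2, 3)` returns $6 \cdot 6 = 36$. The antisymmetrized function changes sign under a swap of two arguments and vanishes on repeated arguments.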
Bootstrapping

Given a function in $k$ variables, one can obtain a symmetric function in $n$ variables by taking the sum over all $k$-element subsets of the variables. In statistics, this is referred to as bootstrapping, and the associated statistics are called U-statistics.

See also:
Alternating multilinear map – multilinear map that is 0 whenever arguments are linearly dependent
Antisymmetric tensor – tensor equal to the negative of any of its transpositions

^ Hazewinkel (1990), p. 344

Hazewinkel, Michiel (1990). Encyclopaedia of Mathematics: An Updated and Annotated Translation of the Soviet "Mathematical Encyclopaedia". Encyclopaedia of Mathematics. Vol. 6. Springer. ISBN 978-1-55608-005-0.
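The subset-averaging behind U-statistics can be sketched in a few lines of Python. The variance kernel $h(x,y) = (x-y)^2/2$ used here is a standard illustration (not from the text); its U-statistic equals the unbiased sample variance.

```python
from itertools import combinations

def u_statistic(kernel, k, sample):
    # Average a symmetric k-variable kernel over all k-element subsets
    # of the sample: the U-statistic associated with the kernel.
    subsets = list(combinations(sample, k))
    return sum(kernel(*s) for s in subsets) / len(subsets)

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
var_u = u_statistic(lambda x, y: (x - y) ** 2 / 2, 2, data)
# var_u equals the unbiased (n-1) sample variance, 32/7 for this data
```

The identity follows from $\sum_{i<j}(x_i - x_j)^2 = n \sum_i (x_i - \bar{x})^2$, so dividing by $\binom{n}{2}$ and by 2 recovers the $n-1$ denominator.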
Simulation Acceleration Using MATLAB Coder and Parallel Computing Toolbox

Create Function that Runs Simulation Algorithms
Identify Speed Bottlenecks by Using MATLAB Profiler App
Accelerate Simulation with MATLAB to C Code Generation
Achieve Even Faster Simulation Using Parallel Processing Runs

This example shows two ways to accelerate the simulation of communications algorithms in MATLAB®. It showcases the runtime performance effects of using MATLAB to C code generation and parallel processing runs (using the MATLAB parfor (Parallel Computing Toolbox) function). For a comprehensive look at all possible acceleration techniques, see the Accelerating MATLAB Algorithms and Applications article. The combined effect of using these methods may speed up a typical simulation time by an order of magnitude; the difference is comparable to a simulation that runs overnight versus one that finishes within a few hours.

To run the MATLAB to C code generation section of this example, you must have the MATLAB Coder™ product. To run the parallel processing section of this example, you must have the Parallel Computing Toolbox™ product.

This example examines various implementations of this transceiver system in MATLAB. This system is composed of a transmitter, a channel model, and a receiver. The transmitter processes the input bit stream with a convolutional encoder, an interleaver, a modulator, and a MIMO space-time block encoder (see [1], [2]). The transmitted signal is then processed by a 2x2 MIMO block fading channel and an additive white Gaussian noise (AWGN) channel. The receiver processes its input signal with a 2x2 MIMO space-time block decoder, a demodulator, a deinterleaver, and a Viterbi decoder to recover the best estimate of the input bit stream at the receiver.
The example follows this workflow:
Create a function that runs the simulation algorithms
Use the MATLAB Profiler GUI to identify speed bottlenecks
Accelerate the simulation with MATLAB to C code generation
Start with a function that represents the first version or baseline implementation of this algorithm. The inputs to the helperAccelBaseline function are the Eb/No value of the current frame (EbNo), the minimum number of errors (minNumErr), and the maximum number of bits processed (maxNumBits). Eb/No is the ratio of energy per bit to noise power spectral density. The function output is the bit error rate (BER) for each Eb/No value.
type helperAccelBaseline
function ber = helperAccelBaseline(EbNo, minNumErr, maxNumBits)
%helperAccelBaseline Simulate a communications link
%   BER = helperAccelBaseline(EBNO,MINERR,MAXBIT) returns the bit error
%   rate (BER) of a communications link that includes convolutional coding,
%   interleaving, QAM modulation, an Alamouti space-time block code, and a
%   MIMO block fading channel with AWGN. EBNO is the energy per bit to
%   noise power spectral density ratio (Eb/No) of the AWGN channel in dB,
%   MINERR is the minimum number of errors to collect, and MAXBIT is the
%   maximum number of simulated bits so that the simulations do not run
%   indefinitely if the Eb/No value is too high.
M = 16;          % Modulation order
k = log2(M);     % Bits per symbol
codeRate = 1/2;  % Coding rate
adjSNR = convertSNR(EbNo,"ebno","BitsPerSymbol",k,"CodingRate",codeRate);
dataFrameLen = 1998;  % Add 6 zeros to terminate the convolutional code
chanFrameLen = (dataFrameLen+6)/codeRate;
permvec = [1:3:chanFrameLen 2:3:chanFrameLen 3:3:chanFrameLen]';
ostbcEnc = comm.OSTBCEncoder(NumTransmitAntennas=2);
ostbcComb = comm.OSTBCCombiner(NumTransmitAntennas=2,NumReceiveAntennas=2);
mimoChan = comm.MIMOChannel(MaximumDopplerShift=0,PathGainsOutputPort=true);
while (ber(3) <= maxNumBits) && (ber(2) < minNumErr)
    data = [randi([0 1],dataFrameLen,1);false(6,1)];
    encOut = convenc(data,trellis);                    % Convolutional encoder
    intOut = intrlv(double(encOut),permvec');          % Interleaver
    modOut = qammod(intOut,M,'InputType','bit');       % QAM modulator
    stbcOut = ostbcEnc(modOut);                        % Alamouti space-time block encoder
    [chanOut, pathGains] = mimoChan(stbcOut);          % 2x2 MIMO channel
    chEst = squeeze(sum(pathGains,2));
    rcvd = awgn(chanOut,adjSNR,'measured');            % AWGN channel
    stbcDec = ostbcComb(rcvd,chEst);                   % Alamouti space-time block decoder
    demodOut = qamdemod(stbcDec,M,'OutputType','bit'); % QAM demodulator
    deintOut = deintrlv(demodOut,permvec');            % Deinterleaver
    decOut = vitdec(deintOut(:),trellis, ...           % Viterbi decoder
        tblen,'term','hard');
    ber = berCalc(decOut(1:dataFrameLen),data(1:dataFrameLen));
end
As a starting point, measure the time it takes to run this baseline algorithm in MATLAB. Use the MATLAB timing functions (tic and toc) to record the elapsed runtime of a for-loop that iterates over Eb/No values from 0 to 7 dB.
minEbNodB = 0;
maxEbNodB = 7;
EbNoVec = minEbNodB:maxEbNodB;
minNumErr = 100;
str = 'Baseline';
% Run the function once to load it into memory and remove overhead from
% runtime measurements
helperAccelBaseline(3,10,1e4);
berBaseline = zeros(size(minEbNodB:maxEbNodB));
disp('Processing the baseline algorithm.');
Processing the baseline algorithm.
tic
for EbNoIdx = 1:length(EbNoVec)
    EbNo = EbNoVec(EbNoIdx);
    y = helperAccelBaseline(EbNo,minNumErr,maxNumBits);
    berBaseline(EbNoIdx) = y(1);
end
rtBaseline = toc;
The result shows the simulation time (in seconds) of the baseline algorithm. Use this timing measurement to compare against subsequent accelerated simulation runtimes.
helperAccelReportResults(N,rtBaseline,rtBaseline,str,str);
1. Baseline | 5.5712 | 1.0000
Identify the processing bottlenecks and problem areas of the baseline algorithm by using the MATLAB Profiler. Obtain the profiler information by executing the following script:
y = helperAccelBaseline(6,100,1e6);
The Profiler report presents the execution time for each function call of the algorithm. You can sort the functions by self-time in descending order. The first few functions that the Profiler window depicts represent the speed bottleneck of the algorithm. In this case, the vitdec function is identified as the major speed bottleneck. MATLAB Coder generates portable and readable C code from algorithms that are part of the MATLAB code generation subset. You can create a MATLAB executable (MEX) of the helperAccelBaseline function because it uses functions and System objects that support code generation. Use the codegen (MATLAB Coder) function to compile the helperAccelBaseline function into a MEX function. After successful code generation, a MEX file with '_mex' appended to the function name, helperAccelBaseline_mex, appears in the workspace.
codegen('helperAccelBaseline.m','-args',{EbNo,minNumErr,maxNumBits})
Measure the simulation time for the MEX version of the algorithm. Record the elapsed time for running this function in the same for-loop as before.
str = 'MATLAB to C code generation';
tag = 'Codegen';
helperAccelBaseline_mex(3,10,1e4);
berCodegen = zeros(size(berBaseline));
disp('Processing the MEX function of the algorithm.');
Processing the MEX function of the algorithm.
tic
for EbNoIdx = 1:length(EbNoVec)
    EbNo = EbNoVec(EbNoIdx);
    y = helperAccelBaseline_mex(EbNo,minNumErr,maxNumBits);
    berCodegen(EbNoIdx) = y(1);
end
rt = toc;
The results show that the MEX version of the algorithm runs faster than the baseline version. The amount of acceleration achieved depends on the nature of the algorithm. The best way to determine the acceleration is to generate a MEX function using MATLAB Coder and test the speedup firsthand. If your algorithm contains single-precision data types, fixed-point data types, loops with states, or code that cannot be vectorized, you are likely to see speedups. On the other hand, if your algorithm contains implicitly multithreaded MATLAB computations such as fft and svd, functions that call IPP or BLAS libraries, functions optimized for execution in MATLAB on a PC such as FFTs, or algorithms that you can vectorize, speedups are less likely.
helperAccelReportResults(N,rtBaseline,rt,str,tag);
2. MATLAB to C code generation | 1.6952 | 3.2864
Utilize multiple cores to increase simulation acceleration by running tasks in parallel. Use parallel processing runs (parfor-loops) in MATLAB to distribute the work over the available workers. Parallel Computing Toolbox enables you to run different iterations of the simulation in parallel. Use the gcp (Parallel Computing Toolbox) function to get the current parallel pool. If a pool is available but not open, gcp opens the pool and reserves several MATLAB workers to execute iterations of a subsequent parfor-loop. In this example, six workers run locally on a MATLAB client machine.
pool = gcp
Run Parallel Over Eb/No Values
Run all Eb/No points in parallel across six workers by using a parfor-loop rather than the for-loop used in the previous cases. Measure the simulation time.
str = 'Parallel runs with parfor over Eb/No';
tag = 'Parfor Eb/No';
berParfor1 = zeros(size(berBaseline));
disp('Processing the MEX function of the algorithm within a parfor-loop.');
Processing the MEX function of the algorithm within a parfor-loop.
tic
parfor EbNoIdx = 1:length(EbNoVec)
    EbNo = EbNoVec(EbNoIdx);
    y = helperAccelBaseline_mex(EbNo,minNumErr,maxNumBits);
    berParfor1(EbNoIdx) = y(1);
end
rt = toc;
The result adds the simulation time of the MEX version of the algorithm executing within a parfor-loop to the previous results. Note that running the algorithm within a parfor-loop shortens the elapsed time to complete the simulation. The basic concept of a parfor-loop is the same as the standard MATLAB for-loop. The difference is that parfor divides the loop iterations into groups so that each worker executes some portion of the total number of iterations. Because several MATLAB workers can compute concurrently on the same loop, a parfor-loop provides significantly better performance than a normal serial for-loop.
3. Parallel runs with parfor over Eb/No | 1.4367 | 3.8779
Run Parallel Over Number of Bits
In the previous section, the total simulation time is mainly determined by the highest Eb/No point. You can further accelerate the simulations by dividing the number of bits simulated for each Eb/No point over the workers. Run each Eb/No point in parallel across six workers with a parfor-loop. Measure the simulation time.
str = 'Parallel runs with parfor over number of bits';
tag = 'Parfor # Bits';
disp('Processing the MEX function of the second version of the algorithm within a parfor-loop.');
Processing the MEX function of the second version of the algorithm within a parfor-loop.
% Calculate the number of bits to be simulated on each worker
minNumErrPerWorker = minNumErr / pool.NumWorkers;
maxNumBitsPerWorker = maxNumBits / pool.NumWorkers;
numErr = zeros(pool.NumWorkers,1);
numBits = zeros(pool.NumWorkers,1);
parfor w = 1:pool.NumWorkers
    y = helperAccelBaseline_mex(EbNo,minNumErrPerWorker,maxNumBitsPerWorker);
    numErr(w) = y(2);
    numBits(w) = y(3);
end
berParfor2(EbNoIdx) = sum(numErr)/sum(numBits);
The result adds the simulation time of the MEX version of the algorithm executing within a parfor-loop, where this time each worker simulates the same Eb/No point. Note that running this version within a parfor-loop gives the fastest simulation performance. The difference is that parfor divides the number of bits that need to be simulated over the workers. This approach reduces the simulation time of even the highest Eb/No value by evenly distributing the load (specifically, the number of bits to simulate) over the workers.
4. Parallel runs with parfor over number of bits | 0.9522 | 5.8507
You can significantly speed up simulations of your communications algorithms with the combined effects of MATLAB to C code generation and parallel processing runs. MATLAB to C code generation accelerates the simulation by locking down the data types and sizes of every variable and by reducing the overhead of the interpreted language, which checks the size and data type of every variable on every line of code. Parallel processing runs can substantially accelerate simulation by computing different iterations of your algorithm concurrently across a number of MATLAB workers. Parallelizing each Eb/No point individually accelerates further by speeding up even the longest-running Eb/No point.
The following shows the run time of all four approaches as a bar graph. The results may vary based on the specific algorithm, available workers, and selection of the minimum number of errors and maximum number of bits.
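The final line of the loop above pools raw error and bit counts across workers before forming the ratio. That detail matters: averaging per-worker BER values is only valid when every worker processes the same number of bits, whereas pooling counts is always correct. A language-agnostic sketch in Python, with made-up per-worker counts:

```python
# Per-worker results: (error count, bit count), as each worker would return
# after running with minNumErr/numWorkers and maxNumBits/numWorkers.
worker_results = [(26, 50_000), (24, 50_000), (25, 50_000),
                  (25, 50_000), (27, 50_000), (23, 50_000)]

# Aggregate BER: pool the raw counts across workers, then divide once.
total_errors = sum(err for err, _ in worker_results)
total_bits = sum(bits for _, bits in worker_results)
ber = total_errors / total_bits
print(ber)  # 150 / 300000 = 0.0005
```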
results = helperAccelReportResults;
This plot shows that the BER curves for the different simulation processing approaches closely match each other. For each plotted Eb/No point, each of the four versions of the algorithm ran with the maximum number of input bits set to ten million (maxNumBits=1e7) and the minimum number of bit errors set to five thousand (minNumErr=5000). This example uses the gcp function to reserve several MATLAB workers that run locally on your MATLAB client machine. By modifying the parallel configurations, you can accelerate the simulation even further by running the algorithm on a larger cluster of workers that are not on your MATLAB client machine. For a description of how to manage and use parallel configurations, see the Discover Clusters and Use Cluster Profiles (Parallel Computing Toolbox) topic. The following functions are used in this example:
helperAccelBaseline.m
helperAccelReportResults.m
[1] S. M. Alamouti, "A simple transmit diversity technique for wireless communications," IEEE® Journal on Selected Areas in Communications, vol. 16, no. 8, pp. 1451-1458, Oct. 1998.
[2] V. Tarokh, H. Jafarkhani, and A. R. Calderbank, "Space-time block codes from orthogonal designs," IEEE Transactions on Information Theory, vol. 45, no. 5, pp. 1456-1467, Jul. 1999.
Graph the following points on a coordinate grid: (1, 1), (4, 1), (3, 4). Connect the points. Then translate the points three units right and three units up. What are the coordinates of the vertices of the new triangle?
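The translation itself is just coordinate-wise addition; a minimal Python check (the helper name `translate` is made up for illustration):

```python
def translate(points, dx, dy):
    """Shift every (x, y) point by (dx, dy)."""
    return [(x + dx, y + dy) for x, y in points]

triangle = [(1, 1), (4, 1), (3, 4)]
print(translate(triangle, 3, 3))  # [(4, 4), (7, 4), (6, 7)]
```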
applyrule examples - Maple Help
With applyrule, a rule or a list of rules can be applied to a given expression. applyrule computes the fixed point: it applies the rule until it can no longer be applied. It is more powerful than the subs command, but does not perform mathematical transformations the way algsubs does. \mathrm{restart} Syntax: applyrule(rule, expression), where rule is a rule or a set of rules. The syntax for a rule is the same as that used in the pattern matcher. (For more details, see the patmatch help page.) A simple manipulation: \mathrm{applyrule}⁡\left(a+b=x,f⁡\left(a+b+c\right)\right) \textcolor[rgb]{0,0,1}{f}\textcolor[rgb]{0,0,1}{⁡}\left(\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{c}\right) \mathrm{applyrule}⁡\left(x=y,{x}^{2}\right) {\textcolor[rgb]{0,0,1}{y}}^{\textcolor[rgb]{0,0,1}{2}} \mathrm{applyrule}⁡\left({x}^{2}=y,f⁡\left({x}^{2},{ⅇ}^{\mathrm{sin}⁡\left(x\right)+2⁢{x}^{2}}\right)\right) \textcolor[rgb]{0,0,1}{f}\textcolor[rgb]{0,0,1}{⁡}\left(\textcolor[rgb]{0,0,1}{y}\textcolor[rgb]{0,0,1}{,}{\textcolor[rgb]{0,0,1}{ⅇ}}^{\textcolor[rgb]{0,0,1}{\mathrm{sin}}\textcolor[rgb]{0,0,1}{⁡}\left(\textcolor[rgb]{0,0,1}{x}\right)\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{⁢}\textcolor[rgb]{0,0,1}{y}}\right) One can use parameters in applyrule: \mathrm{applyrule}⁡\left(f⁡\left(a::\mathrm{integer}⁢x\right)=a⁢f⁡\left(x\right),f⁡\left(2⁢x\right)+g⁡\left(x\right)-p⁢f⁡\left(x\right)\right) \textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{⁢}\textcolor[rgb]{0,0,1}{f}\textcolor[rgb]{0,0,1}{⁡}\left(\textcolor[rgb]{0,0,1}{x}\right)\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{g}\textcolor[rgb]{0,0,1}{⁡}\left(\textcolor[rgb]{0,0,1}{x}\right)\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{p}\textcolor[rgb]{0,0,1}{⁢}\textcolor[rgb]{0,0,1}{f}\textcolor[rgb]{0,0,1}{⁡}\left(\textcolor[rgb]{0,0,1}{x}\right) applyrule can be
used over data structures: \mathrm{applyrule}⁡\left(\left[a::\mathrm{even}=\mathrm{even},a::\mathrm{prime}=\mathrm{prime}\right],\left[1,2,4,3,5,6,4,8,15,21\right]\right) \left[\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{even}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{even}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{prime}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{prime}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{even}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{even}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{even}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{15}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{21}\right] \mathrm{applyrule}⁡\left(\mathrm{sin}⁡\left(2⁢x\right)=2⁢\mathrm{sin}⁡\left(x\right)⁢\mathrm{cos}⁡\left(x\right),\mathrm{sin}⁡\left(x\right)+\mathrm{sin}⁡\left(2⁢x\right)-\mathrm{cos}⁡\left(x\right)\right) \textcolor[rgb]{0,0,1}{\mathrm{sin}}\textcolor[rgb]{0,0,1}{⁡}\left(\textcolor[rgb]{0,0,1}{x}\right)\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{⁢}\textcolor[rgb]{0,0,1}{\mathrm{sin}}\textcolor[rgb]{0,0,1}{⁡}\left(\textcolor[rgb]{0,0,1}{x}\right)\textcolor[rgb]{0,0,1}{⁢}\textcolor[rgb]{0,0,1}{\mathrm{cos}}\textcolor[rgb]{0,0,1}{⁡}\left(\textcolor[rgb]{0,0,1}{x}\right)\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{\mathrm{cos}}\textcolor[rgb]{0,0,1}{⁡}\left(\textcolor[rgb]{0,0,1}{x}\right)
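The "apply until it no longer applies" behaviour can be sketched outside Maple as a fixed-point loop (pure Python; the helper name `rewrite_fixpoint` is made up for illustration, and unlike the real applyrule, which matches algebraic patterns, this sketch only rewrites strings):

```python
def rewrite_fixpoint(rule, expr, max_steps=1000):
    """Apply `rule` (a function expr -> expr) repeatedly until the
    expression stops changing: the fixed-point semantics of applyrule."""
    for _ in range(max_steps):
        new = rule(expr)
        if new == expr:
            return expr
        expr = new
    raise RuntimeError("no fixed point reached")

# Rule: replace the substring "a+b" with "x", cf. applyrule(a+b=x, f(a+b+c)).
rule = lambda s: s.replace("a+b", "x")
print(rewrite_fixpoint(rule, "f(a+b+c) + a+b"))  # f(x+c) + x
```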
The RandomMatrix(r, c) command returns an r x c Matrix in which all entries have integer values in the range -99..99. The RandomMatrix(r) command returns an r x r Matrix in which all entries have integer values in the range -99..99. The RandomVector(d) command returns a d-dimensional Vector in which all entries have integer values in the range -99..99. \mathrm{with}⁡\left(\mathrm{Student}[\mathrm{LinearAlgebra}]\right): \mathrm{RandomVector}⁡\left(2\right) [\begin{array}{c}\textcolor[rgb]{0,0,1}{-31}\\ \textcolor[rgb]{0,0,1}{67}\end{array}] \mathrm{RandomVector}[\mathrm{row}]⁡\left(6\right) [\begin{array}{cccccc}\textcolor[rgb]{0,0,1}{8}& \textcolor[rgb]{0,0,1}{69}& \textcolor[rgb]{0,0,1}{99}& \textcolor[rgb]{0,0,1}{29}& \textcolor[rgb]{0,0,1}{44}& \textcolor[rgb]{0,0,1}{92}\end{array}] \mathrm{RandomMatrix}⁡\left(3,4\right) [\begin{array}{cccc}\textcolor[rgb]{0,0,1}{-98}& \textcolor[rgb]{0,0,1}{27}& \textcolor[rgb]{0,0,1}{-72}& \textcolor[rgb]{0,0,1}{-74}\\ \textcolor[rgb]{0,0,1}{-77}& \textcolor[rgb]{0,0,1}{-93}& \textcolor[rgb]{0,0,1}{-2}& \textcolor[rgb]{0,0,1}{-4}\\ \textcolor[rgb]{0,0,1}{57}& \textcolor[rgb]{0,0,1}{-76}& \textcolor[rgb]{0,0,1}{-32}& \textcolor[rgb]{0,0,1}{27}\end{array}]
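Outside Maple, the same kind of object is easy to generate; a minimal pure-Python sketch (the helper names are made up, mirroring the Maple commands):

```python
import random

def random_matrix(r, c, lo=-99, hi=99):
    """r x c matrix (list of rows) with integer entries in lo..hi,
    mirroring RandomMatrix(r, c)."""
    return [[random.randint(lo, hi) for _ in range(c)] for _ in range(r)]

def random_vector(d, lo=-99, hi=99):
    """d-dimensional vector with integer entries in lo..hi,
    mirroring RandomVector(d)."""
    return [random.randint(lo, hi) for _ in range(d)]

m = random_matrix(3, 4)
print(len(m), len(m[0]))  # 3 4
```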
FrobeniusForm - Maple Help
reduce a square Matrix to Frobenius form (rational canonical form)
FrobeniusForm(A, out, options, outopts)
RationalCanonicalForm(A, out, options, outopts)
out - (optional) equation of the form output = obj where obj is one of 'F' or 'Q', or a list containing one or more of these names; selects result objects to compute
outopts - (optional) equation(s) of the form outputoptions[o] = list where o is one of 'F' or 'Q'; constructor options for the specified result object
The FrobeniusForm(A) command returns the Frobenius form F of square Matrix A. This function can also be invoked using the RationalCanonicalForm command. The Frobenius form Matrix F has the following structure: F = DiagonalMatrix([C[1], C[2], ..., C[k]]), where each {C}_{i} is the companion matrix of a monic polynomial {p}_{i}, and {p}_{1},{p}_{2},..,{p}_{k} are a factorization of the characteristic polynomial of A with the property that {p}_{i} divides {p}_{i-1} for each i. The Frobenius form defined in this way is unique (if you require that {p}_{i} divides {p}_{i-1}). The columns of Q form a rational canonical basis for A. Depending on what is included in the output option, an expression sequence containing one or more of the factors F (the Frobenius form) or Q (the transformation Matrix) can be returned. If output is a list, the objects are returned in the same order as specified in the list. \mathrm{MatrixInverse}⁡\left(Q\right)·A·Q=F The constructor options provide additional information (readonly, shape, storage, order, datatype, and attributes) to the Matrix constructor that builds the result(s). These options may also be provided in the form outputoptions[o]=[...], where [...] represents a Maple list. If a constructor option is provided in both the calling sequence directly and in an outputoptions[o] option, the latter takes precedence (regardless of the order).
Frobenius form \mathrm{with}⁡\left(\mathrm{LinearAlgebra}\right): A≔〈〈0,1,1,1,1〉|〈2,-2,0,-2,-4〉|〈0,0,1,1,3〉|〈-6,0,-3,-1,-3〉|〈2,2,2,2,4〉〉 \textcolor[rgb]{0,0,1}{A}\textcolor[rgb]{0,0,1}{≔}[\begin{array}{ccccc}\textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{2}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-6}& \textcolor[rgb]{0,0,1}{2}\\ \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{-2}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{2}\\ \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{-3}& \textcolor[rgb]{0,0,1}{2}\\ \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{-2}& \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{-1}& \textcolor[rgb]{0,0,1}{2}\\ \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{-4}& \textcolor[rgb]{0,0,1}{3}& \textcolor[rgb]{0,0,1}{-3}& \textcolor[rgb]{0,0,1}{4}\end{array}] \mathrm{FrobeniusForm}⁡\left(A\right) [\begin{array}{ccccc}\textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{4}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-2}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{2}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-2}\\ \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{0}\end{array}] \mathrm{factor}⁡\left(\mathrm{CharacteristicPolynomial}⁡\left(A,x\right)\right) \left(\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{2}\right)\textcolor[rgb]{0,0,1}{⁢}{\left({\textcolor[rgb]{0,0,1}{x}}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{2}\right)}^{\textcolor[rgb]{0,0,1}{2}} 
M≔\mathrm{BandMatrix}⁡\left([[2,2,2,2],[1,1]],0\right) \textcolor[rgb]{0,0,1}{M}\textcolor[rgb]{0,0,1}{≔}[\begin{array}{cccc}\textcolor[rgb]{0,0,1}{2}& \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{2}& \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{2}& \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{2}\end{array}] F,Q≔\mathrm{FrobeniusForm}⁡\left(M,\mathrm{output}=['F','Q']\right) \textcolor[rgb]{0,0,1}{F}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{Q}\textcolor[rgb]{0,0,1}{≔}[\begin{array}{cccc}\textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{8}& \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-12}& \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{6}& \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{2}\end{array}]\textcolor[rgb]{0,0,1}{,}[\begin{array}{cccc}\textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{4}& \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{2}& \textcolor[rgb]{0,0,1}{4}& \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{2}& \textcolor[rgb]{0,0,1}{4}& \textcolor[rgb]{0,0,1}{1}\end{array}] {Q}^{-1}·M·Q [\begin{array}{cccc}\textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{8}& \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-12}& \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{0}& 
\textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{6}& \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{2}\end{array}]
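Each diagonal block C[i] of the Frobenius form is the companion matrix of the corresponding polynomial p[i]; for example, the leading 3x3 block in the first example above is the companion matrix of (x-2)(x^2+2) = x^3 - 2x^2 + 2x - 4. A pure-Python sketch of that construction (the helper name and coefficient ordering are assumptions of this sketch):

```python
def companion(coeffs):
    """Companion matrix of the monic polynomial
    x^n + coeffs[0]*x^(n-1) + ... + coeffs[n-1],
    in the convention seen in FrobeniusForm's output: a subdiagonal of
    ones, and the negated coefficients (constant term on top) in the
    last column."""
    n = len(coeffs)
    m = [[0] * n for _ in range(n)]
    for i in range(1, n):
        m[i][i - 1] = 1                   # subdiagonal of ones
    for i in range(n):
        m[i][n - 1] = -coeffs[n - 1 - i]  # last column: -c0, -c1, ...
    return m

# p(x) = x^3 - 2x^2 + 2x - 4 gives the first block of F above:
print(companion([-2, 2, -4]))  # [[0, 0, 4], [1, 0, -2], [0, 1, 2]]
```

The second block, the companion matrix of x^2 + 2, comes out as [[0, -2], [1, 0]], matching the lower-right corner of F.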
Continuous-time or discrete-time two-degree-of-freedom PID controller - Simulink
In discrete time, the filtered derivative term has the form
D\left[\frac{N}{1+N\alpha \left(z\right)}\right],
where the discrete integrator formula \alpha \left(z\right) depends on the integration method:
Forward Euler: \alpha \left(z\right)=\frac{{T}_{s}}{z-1}
Backward Euler: \alpha \left(z\right)=\frac{{T}_{s}z}{z-1}
Trapezoidal: \alpha \left(z\right)=\frac{{T}_{s}}{2}\frac{z+1}{z-1}
In parallel form, the controller output is
u=P\left(br-y\right)+I\frac{1}{s}\left(r-y\right)+D\frac{N}{1+N\frac{1}{s}}\left(cr-y\right)
in continuous time, and
u=P\left(br-y\right)+I\alpha \left(z\right)\left(r-y\right)+D\frac{N}{1+N\beta \left(z\right)}\left(cr-y\right)
in discrete time, where \alpha \left(z\right) and \beta \left(z\right) are the discrete integrator formulas for the integrator and the derivative filter, respectively. In ideal form, the corresponding laws are
u=P\left[\left(br-y\right)+I\frac{1}{s}\left(r-y\right)+D\frac{N}{1+N\frac{1}{s}}\left(cr-y\right)\right]
and
u=P\left[\left(br-y\right)+I\alpha \left(z\right)\left(r-y\right)+D\frac{N}{1+N\beta \left(z\right)}\left(cr-y\right)\right].
The integrator term accumulates the weighted error:
{u}_{i}=\int \left(r-y\right)I\text{\hspace{0.17em}}dt.
Without the filter, the discrete derivative term takes the form
D\frac{z-1}{z{T}_{s}}\left(cr-y\right).
The pole of the derivative filter depends on the integration method:
Forward Euler: {z}_{pole}=1-N{T}_{s}
Backward Euler: {z}_{pole}=\frac{1}{1+N{T}_{s}}
Trapezoidal: {z}_{pole}=\frac{1-N{T}_{s}/2}{1+N{T}_{s}/2}
The equivalent prefilter F(s) and single-degree-of-freedom controller C(s) are, for the parallel form,
\begin{array}{l}{F}_{par}\left(s\right)=\frac{\left(bP+cDN\right){s}^{2}+\left(bPN+I\right)s+IN}{\left(P+DN\right){s}^{2}+\left(PN+I\right)s+IN},\\ {C}_{par}\left(s\right)=\frac{\left(P+DN\right){s}^{2}+\left(PN+I\right)s+IN}{s\left(s+N\right)},\end{array}
and, for the ideal form,
\begin{array}{l}{F}_{id}\left(s\right)=\frac{\left(b+cDN\right){s}^{2}+\left(bN+I\right)s+IN}{\left(1+DN\right){s}^{2}+\left(N+I\right)s+IN},\\ {C}_{id}\left(s\right)=P\frac{\left(1+DN\right){s}^{2}+\left(N+I\right)s+IN}{s\left(s+N\right)}.\end{array}
The corresponding feedforward terms are
{Q}_{par}\left(s\right)=\frac{\left(\left(b-1\right)P+\left(c-1\right)DN\right)s+\left(b-1\right)PN}{s+N}
and
{Q}_{id}\left(s\right)=P\frac{\left(\left(b-1\right)+\left(c-1\right)DN\right)s+\left(b-1\right)N}{s+N}.
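A minimal discrete-time sketch of the parallel two-degree-of-freedom law above (Python; Forward Euler for both the integrator and the derivative-filter state; the function name, signal values, and gains are made-up examples, not Simulink's implementation):

```python
def pid_2dof_step(state, r, y, P, I, D, N, b, c, Ts):
    """One Forward-Euler step of the parallel 2-DOF PID law:
    u = P*(b*r - y) + I*xi + D*N*((c*r - y) - xd),
    where xi integrates (r - y) and xd is the derivative-filter state."""
    xi, xd = state
    u = P * (b * r - y) + I * xi + D * N * ((c * r - y) - xd)
    # Forward Euler state updates (alpha(z) = Ts/(z-1)):
    xi += Ts * (r - y)
    xd += Ts * N * ((c * r - y) - xd)
    return u, (xi, xd)

# With I = D = 0 the controller reduces to u = P*(b*r - y):
u, _ = pid_2dof_step((0.0, 0.0), r=1.0, y=0.2, P=2.0, I=0.0, D=0.0,
                     N=100.0, b=0.5, c=0.0, Ts=0.01)
print(u)  # 2.0 * (0.5 - 0.2), approximately 0.6
```

The setpoint weights b and c only scale r, not y, which is what lets a 2-DOF controller soften setpoint response without changing disturbance rejection.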
Fundamentals of Transportation/Earthwork/Solution - Wikibooks, open books for an open world
Given the end areas below, calculate the volumes of cut (in cubic meters) and fill between stations 0+00 and 1+50. Determine the true amount of excess cut or fill to be removed.
0+00: Fill = 60
0+50: Fill = 50
0+75: Cut = 0, Fill = 25
1+00: Cut = 10, Fill = 5
1+15: Cut = 15, Fill = 0
1+50: Cut = 30
Two different methods are needed to compute the earthwork volumes along the five strips. The average end area method can be used for sections whose end areas are both non-zero. The pyramid method must be used for sections with a zero end area.
For 0+00 to 0+50, use the average end area method: {\displaystyle Fill={\frac {60+50}{2}}(50)=2750\,\!}
For 0+50 to 0+75, use the average end area method: {\displaystyle Fill={\frac {50+25}{2}}(25)=937.5\,\!}
For 0+75 to 1+00, use the average end area method for the fill section and the pyramid method for the cut section: {\displaystyle Fill={\frac {25+5}{2}}(25)=375\,\!} {\displaystyle Cut={\frac {10(25)}{3}}=83.3\,\!}
For 1+00 to 1+15, use the pyramid method for the fill section and the average end area method for the cut section: {\displaystyle Fill={\frac {5(15)}{3}}=25\,\!} {\displaystyle Cut={\frac {10+15}{2}}(15)=187.5\,\!}
For 1+15 to 1+50, use the average end area method: {\displaystyle Cut={\frac {15+30}{2}}(35)=787.5\,\!}
The sums of both cut and fill can be found:
Fill = 4087.5 cubic meters
Cut = 1058.3 cubic meters
Since the fill volume exceeds the cut volume, 3029.2 cubic meters of borrow material are needed to meet the earthwork requirement for this project.
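The arithmetic above is easy to check mechanically (pure Python; strip lengths in meters, end areas in square meters):

```python
def avg_end_area(a1, a2, length):
    """Average end area method: V = (A1 + A2) / 2 * L."""
    return (a1 + a2) / 2 * length

def pyramid(area, length):
    """Pyramid method, for a section that tapers to a zero end area:
    V = A * L / 3."""
    return area * length / 3

fill = (avg_end_area(60, 50, 50)    # 0+00 to 0+50
        + avg_end_area(50, 25, 25)  # 0+50 to 0+75
        + avg_end_area(25, 5, 25)   # 0+75 to 1+00
        + pyramid(5, 15))           # 1+00 to 1+15
cut = (pyramid(10, 25)              # 0+75 to 1+00
       + avg_end_area(10, 15, 15)   # 1+00 to 1+15
       + avg_end_area(15, 30, 35))  # 1+15 to 1+50
print(fill, cut, fill - cut)  # fill = 4087.5, cut ~ 1058.3, excess ~ 3029.2
```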
A Theoretical Model of Uniform Flow Distribution for the Admission of High-Energy Fluids to a Surface Steam Condenser | J. Eng. Gas Turbines Power | ASME Digital Collection
Electronic mail: wang@sheffield.ac.uk
G. H. Priestman, Department of Chemical and Process Engineering, The University of Sheffield, Sheffield, S1 3JD, England
School of Mechanical Engineering, East China University of Science and Technology, Shanghai 200237, P.R. China
Contributed by the Power Division of THE AMERICAN SOCIETY OF MECHANICAL ENGINEERS for publication in the ASME JOURNAL OF ENGINEERING FOR GAS TURBINES AND POWER. Manuscript received by the Power Division, Sept. 2000; final revision received by ASME Headquarters Jan. 23, 2001. Editor: D. Lou.
Wang, J., Priestman, G. H., and Wu, D. (January 23, 2001). "A Theoretical Model of Uniform Flow Distribution for the Admission of High-Energy Fluids to a Surface Steam Condenser." ASME. J. Eng. Gas Turbines Power. April 2001; 123(2): 472–475. https://doi.org/10.1115/1.1359237
An analytical study is made of the perforated pipe distributor for the admission of high-energy fluids to a surface steam condenser. The results show that for all perforated pipes there is a general characteristic parameter, M = kD/Lf, which depends on the pipe geometry and flow properties. Four cases are considered based on the value of the characteristic parameter M. (1) When M ⩾ 1/4, momentum controls and the main-channel static pressure increases in the direction of the streamline. (2) When 1/6 ⩽ M < 1/4, the momentum effect balances friction losses; the pressure decreases to a minimum and then increases in the direction of flow to a positive value. (3) When 0 < M < 1/6, friction controls; the pressure decreases to a minimum and then increases slowly, but the total pipe static pressure difference always remains negative. (4) When M = 0, a limiting case occurs in which the ratio of the length to the diameter is infinite.
This analysis is useful not only for the design of perforated pipe distributors for turbine condensers over a wide range of dimensions, fluid properties, and side hole pressures, but also for many other technical systems requiring branching flow distribution.
Keywords: gas turbine power stations, steam turbines, pipe flow, condensers (steam plant), computational fluid dynamics
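The four cases enumerated in the abstract partition the range of M; a small Python sketch of that classification (the function name is made up; the thresholds are taken directly from the abstract):

```python
def flow_regime(M):
    """Classify a perforated-pipe distributor by its characteristic
    parameter M, following the four cases in the abstract."""
    if M >= 1 / 4:
        return "momentum controls: static pressure rises along the flow"
    if M >= 1 / 6:
        return "momentum balances friction: pressure dips, then recovers to a positive value"
    if M > 0:
        return "friction controls: total static pressure difference stays negative"
    return "limiting case: length-to-diameter ratio is infinite"

print(flow_regime(0.3))  # momentum controls: static pressure rises along the flow
```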
Volume 90 Issue 2A | Seismological Research Letters | GeoScienceWorld An Ominous (?) Quiet in the Pacific Northwest Seismological Research Letters February 13, 2019, Vol.90, 463-466. doi:https://doi.org/10.1785/0220190005 How Physics‐Based Earthquake Simulators Might Help Improve Earthquake Forecasts In Memoriam: Jack Boatwright (1951–2018) Andrew J. Michael; Tia Lombardi; Thomas C. Hanks Preface to the Focus Section on Machine Learning in Seismology Karianne J. Bergen; Ting Chen; Zefeng Li Seismic Event and Phase Detection Using Time–Frequency Representation and Convolutional Neural Networks Ramin M. H. Dokht; Honn Kao; Ryan Visser; Brindley Smith Jack Woollam; Andreas Rietbrock; Angel Bueno; Silvio De Angelis Pairwise Association of Seismic Arrivals with Convolutional Neural Networks Ian W. McBrearty; Andrew A. Delorey; Paul A. Johnson A Deep Convolutional Neural Network for Localization of Clustered Earthquakes Based on Multistation Full Waveforms Marius Kriegerowski; Gesa M. Petersen; Hannes Vasyura‐Bathke; Matthias Ohrnberger Anthony Lomax; Alberto Michelini; Dario Jozinović Discrimination of Seismic Signals from Earthquakes and Tectonic Tremor by Applying a Convolutional Neural Network to Running Spectral Images Masaru Nakano; D. Sugiyama; T. Hori; T. Kuwatani; S. Tsuboi Aftershock Identification Using Diffusion Maps Yuri Bregman; Neta Rabin Machine Learning Aspects of the MyShake Global Smartphone Seismic Network Qingkai Kong; Asaf Inbal; Richard M. Allen; Qin Lv; Arno Puder Seismology with Dark Data: Image‐Based Processing of Analog Records Using Machine Learning for the Rangely Earthquake Control Experiment Kaiwen Wang; William L. Ellsworth; Gregory C. 
Beroza; Gordon Williams; Miao Zhang; Dustin Schroeder; Justin Rubinstein Earthquake Detection in 1D Time‐Series Data with Feature Selection and Dictionary Learning Zheng Zhou; Youzuo Lin; Zhongping Zhang; Yue Wu; Paul Johnson Chao Zhang; Mirko van der Baan; Ting Chen Standardization of Noisy Volcanoseismic Waveforms as a Key Step toward Station‐Independent, Robust Automatic Recognition Guillermo Cortés; Roberto Carniel; M. Ángeles Mendoza; Philippe Lesage Using Machine Learning to Discern Eruption in Noisy Environments: A Case Study Using CO2 ‐Driven Cold‐Water Geyser in Chimayó, New Mexico Baichuan Yuan; Yen Joe Tan; Maruti K. Mudunuru; Omar E. Marcillo; Andrew A. Delorey; Peter M. Roberts; Jeremy D. Webster; Christine N. L. Gammans; Satish Karra; George D. Guthrie; Paul A. Johnson Artificial Neural Network‐Based Framework for Developing Ground‐Motion Models for Natural and Induced Earthquakes in Oklahoma, Kansas, and Texas Farid Khosravikia; Patricia Clayton; Zoltan Nagy Application of Pool‐Based Active Learning in Physics‐Based Earthquake Ground‐Motion Simulation Naeem Khoshnevis; Ricardo Taborda Automatic Selection of Dispersion Curves Based on a Weighted Probability Scheme Roberto Ortega; Dana Carciumaru; Eduardo Huesca; Edahí Gutierrez Composite Earthquake Source Mechanism for 2018 Mw 5.2–5.4 Swarm at Kīlauea Caldera: Antipodal Source Constraint Coseismic Slip Model of the 2018 Mw 7.9 Gulf of Alaska Earthquake and Its Seismic Hazard Implications Bin Zhao; Yujie Qi; Dongzhen Wang; Jiansheng Yu; Qi Li; Caihong Zhang Mw 6.6 Poso Earthquake: Implications for Extrusion Tectonics in Central Sulawesi Shuai Wang; Caijun Xu; Wenbin Xu; Zhi Yin; Yangmao Wen; Guoyan Jiang Fling Effects from Near‐Source Strong‐Motion Records: Insights from the 2016 Mw 6.5 Norcia, Central Italy, Earthquake Maria D’Amico; Chiara Felicetta; Erika Schiappapietra; Francesca Pacor; František Gallovič; Roberto Paolucci; Rodolfo Puglia; Giovanni Lanzano; Sara Sgobba; Lucia Luzi 2016 Central 
Italy Earthquakes Recorded by Low‐Cost MEMS‐Distributed Arrays Jacopo Boaga; Filippo Casarin; Giancarlo De Marchi; Maria Rosa Valluzzi; Giorgio Cassiani Lower Bounds on Ground Motion at Point Reyes during the 1906 San Francisco Earthquake from Train Toppling Analysis Swetha Veeraraghavan; Thomas H. Heaton; Swaminathan Krishnan Sunyoung Park; Miaki Ishii Zefeng Li; Egill Hauksson; Tom Heaton; Luis Rivera; Jennifer Andrews Imaging 3D Upper‐Mantle Structure with Autocorrelation of Seismic Noise Recorded on a Transportable Single Station Jun Xie; Sidao Ni Wavefield Reconstruction of Teleseismic Receiver Function with the Stretching‐and‐Squeezing Interpolation Method Shaoqian Hu; Xiaohuan Jiang; Lupei Zhu; Huajian Yao Optimizing Earthquake Early Warning Performance: ElarmS‐3 Angela I. Chung; Ivan Henson; Richard M. Allen A Comprehensive Quality Analysis of Empirical Green’s Functions at Ocean‐Bottom Seismometers in Cascadia Xiaotao Yang; Haiying Gao; Sampath Rathnayaka; Cong Li Christian Poppeliers; Leiph Preston A Collection of Historic Seismic Instrumentation Photographs at the Albuquerque Seismological Laboratory S. V. Moore; C. R. Hutt; R. E. Anthony; A. T. Ringler; A. C. B. Alejandro; D. C. Wilson Historical Accounts of Sea Disturbances from South India and Their Bearing on the Penultimate Predecessor of the 2004 Tsunami Graphical Location of Seismic Sources Based on Amplitude Ratios Seismological Research Letters February 13, 2019, Vol.90, 790. doi:https://doi.org/10.1785/0220190023 Front: Machine learning (ML) is a collection of algorithms and statistical models that enable computers to extract relevant patterns and information from large datasets. 
Seismologists have used ML algorithms for decades to analyze seismic signals, but in just the past few years research activity about ML applications in seismology has surged, driven by the increasing size of seismic datasets, improvements in computational power, new algorithms and architectures (e.g., deep neural networks), and the availability of easy-to-use open-source ML frameworks. In this issue of SRL, the Focus Section on Machine Learning in Seismology presents 16 original articles covering a range of ML applications. This illustration, which is based on figures in Nakano et al. (this issue), shows an example of seismic data flow and architecture for an ML neural network application for the study of tectonic tremor. Back: When the Mw 7.8 San Francisco earthquake struck on 18 April 1906, a narrow-gauge locomotive and train that had pulled into a siding to refuel at Point Reyes Station toppled due to ground motion caused by the earthquake. This photo shows two people and a canine at the site of the upset locomotive, with Point Reyes Station and its damaged buildings in the distance (U.S. Geological Survey Photographic Library). Veeraraghavan et al. (this issue) mathematically modeled the tipping of the train to calculate a lower limit on the earthquake's ground motion at the site. Such analyses provide important additional data points for scientists who are working to anticipate the ground motions that will result from future large earthquakes.
Synchronization stability of delayed discrete-time complex dynamical networks with randomly changing coupling strength | Advances in Continuous and Discrete Models | Full Text
Figure: The curves of the operation modes: (a) \rho_0 = 0.7; (b) \rho_0 = 0.2
Multiple Regression: Block Analysis | The SAGE Encyclopedia of Communication Research Methods
Multiple regression represents an equation wherein a set of predictor variables is used to create a predicted value for a dependent variable. The mathematical elements, often described as ordinary least squares, are such that the goal of the equation is the generation of a model in which the sum of the squared deviations between the observed and predicted values is minimized (the sum of the actual deviations should be zero). The process of creating a value that minimizes the sum of the squared deviations reflects the assumptions of the normal curve for any process that involves estimation of a mean or a correlation. This process takes the same set of expectations and, for a standardized equation, operates using the standard form

\hat{Z}_Y = \beta_1 Z_{X_1} + \beta_2 Z_{X_2} + \cdots + \beta_k Z_{X_k}
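The least-squares idea above can be sketched numerically. The data, variable names, and the use of `numpy.linalg.lstsq` below are our own illustration, not part of the encyclopedia entry:

```python
import numpy as np

# A minimal numeric sketch of ordinary least squares: choose the coefficients
# that minimize the sum of squared deviations between observed and predicted
# values. The data are invented for illustration.
X = np.array([[1.0, 2.0],
              [1.0, 4.0],
              [1.0, 6.0],
              [1.0, 8.0]])               # a column of ones provides the intercept
y = np.array([5.1, 9.0, 12.9, 17.0])

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta                 # actual deviations sum to (nearly) zero
```

With an intercept in the model, the residuals sum to zero and no perturbation of the coefficients can reduce the sum of squared deviations, which is exactly the property the entry describes.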
QUAMETEC™: Forum Users Guide
Forum Use Tips
For full functionality in forum use, you will need a browser that works with HTML editors, such as Internet Explorer or Firefox. Chrome, Opera, and Safari do not work with the HTML editor. The HTML editor provides numerous tools useful when posting to a forum, such as text editing, formula creation, spell checking, a Word-formatting removal tool, 'do' and 'undo' buttons, etc. The HTML editor toolbar will appear directly above the provided text entry box when you post to a forum. If you do not see this toolbar, you are likely not using a browser that supports HTML editing. The formula creation tool takes a little playing with to get the most from it. The formula below was created using the formula tool:

u_{x} = \sqrt{u_{1}^{2} + u_{2}^{2} + \cdots + u_{n}^{2} + 2\rho_{1,2} u_{1} u_{2}}

Although while in the text editing box it appears as: u_{x}=sqrt(u^2_{1}+u^{2}_{2}+...u^2_{n}+2 \rho_{1,2}u_{1}u_{2}) with two dollar signs in front and back. I had to omit the $ signs to get it to show you the text. Most of our forums allow the person posting to edit their post, provided they do it within 30 minutes. So don't be afraid to post; evaluate your posting and edit as desired to get it to appear as you want. Use the HTML editor tools to improve the presentation of your posting. When copying and pasting from Microsoft Word you will need to remove all the MS Word-specific formatting code. Use the <> button to see if any code remains after using the "DeWord" button in the HTML editor tools. The "DeWord" button looks like the MS Word logo. Post your questions in the applicable forum. You can subscribe to any of the forums and have new postings and responses emailed directly to you.
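As an aside, the combined-uncertainty formula in the example above can be checked numerically. The function name and sample values below are ours, for illustration only:

```python
import math

# Numeric check of the combined-uncertainty formula shown above: root sum of
# squares of the component uncertainties u_1..u_n, plus a correlation term
# 2*rho_12*u_1*u_2 for the correlated u_1, u_2 pair.
def combined_uncertainty(us, rho_12=0.0):
    u1, u2 = us[0], us[1]
    return math.sqrt(sum(u * u for u in us) + 2.0 * rho_12 * u1 * u2)
```

For example, with components 3 and 4 and no correlation the result is 5; with full correlation (rho = 1) it rises to 7.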
CharacteristicCone - Maple Help
characteristic cone of a polyhedral set
CharacteristicCone(polyset)
This command computes the characteristic cone of the polyhedral set polyset, returning the result as a new PolyhedralSet.
> with(PolyhedralSets):
The characteristic cone for a bounded set is the origin since it has no rays.
> c := ExampleSets:-Cube([x, y, z]):
> c_cone := CharacteristicCone(c)
        c_cone := { Coordinates : [x, y, z], Relations : [z = 0, y = 0, x = 0] }
The V-Representation of a set and its characteristic cone always have the same rays.
> ps := PolyhedralSet([10 <= x + y]):
> ps_verts, ps_rays := VerticesAndRays(ps):
> ps_rays
        [[1, -1], [-1, 1], [1, 1]]
> ps_cone := CharacteristicCone(ps):
> ps_cone_verts, ps_cone_rays := VerticesAndRays(ps_cone):
> ps_cone_rays
        [[1, -1], [-1, 1], [1, 1]]
> evalb(ps_rays = ps_cone_rays)
        true
The PolyhedralSets[CharacteristicCone] command was introduced in Maple 2015.
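The ray test behind the characteristic (recession) cone can also be sketched outside Maple. Below, the matrix `A` encodes the example half-plane 10 <= x + y, and the helper name is our own:

```python
import numpy as np

# Sketch of the ray test behind the characteristic (recession) cone: for a
# polyhedron {x : A @ x >= b}, a direction d is a ray iff A @ d >= 0,
# independently of the right-hand side b.
A = np.array([[1.0, 1.0]])   # encodes 10 <= x + y (the constant does not matter)

def is_ray(d):
    return bool(np.all(A @ np.asarray(d, dtype=float) >= 0.0))
```

The three directions Maple lists as rays all pass this test, while a direction that leaves the half-plane fails it.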
Context-free language - Wikipedia In formal language theory, a context-free language (CFL) is a language generated by a context-free grammar (CFG). Context-free languages have many applications in programming languages; in particular, most arithmetic expressions are generated by context-free grammars. Context-free grammar[edit] Different context-free grammars can generate the same context-free language. Intrinsic properties of the language can be distinguished from extrinsic properties of a particular grammar by comparing multiple grammars that describe the language. The set of all context-free languages is identical to the set of languages accepted by pushdown automata, which makes these languages amenable to parsing. Further, for a given CFG, there is a direct way to produce a pushdown automaton for the grammar (and thereby the corresponding language), though going the other way (producing a grammar given an automaton) is not as direct. An example context-free language is {\displaystyle L=\{a^{n}b^{n}:n\geq 1\}} , the language of all non-empty even-length strings, the entire first halves of which are a's, and the entire second halves of which are b's. L is generated by the grammar {\displaystyle S\to aSb~|~ab} . This language is not regular. It is accepted by the pushdown automaton {\displaystyle M=(\{q_{0},q_{1},q_{f}\},\{a,b\},\{a,z\},\delta ,q_{0},z,\{q_{f}\})} , where {\displaystyle \delta } is defined as follows:[note 1] {\displaystyle {\begin{aligned}\delta (q_{0},a,z)&=(q_{0},az)\\\delta (q_{0},a,a)&=(q_{0},aa)\\\delta (q_{0},b,a)&=(q_{1},\varepsilon )\\\delta (q_{1},b,a)&=(q_{1},\varepsilon )\\\delta (q_{1},\varepsilon ,z)&=(q_{f},\varepsilon )\end{aligned}}} Unambiguous CFLs are a proper subset of all CFLs: there are inherently ambiguous CFLs.
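The transition table above can be exercised with a small simulation. The function below is our sketch of this particular PDA; state and symbol names follow the definition, and the epsilon move to q_f is folded into the final acceptance check:

```python
# A sketch that exercises the PDA above for L = {a^n b^n : n >= 1}.
# States q0 (reading a's) and q1 (reading b's); 'z' is the stack bottom marker.
def pda_accepts(word):
    state, stack = "q0", ["z"]
    for ch in word:
        if state == "q0" and ch == "a":
            stack.append("a")                 # delta(q0, a, z) and delta(q0, a, a)
        elif ch == "b" and stack[-1] == "a":
            stack.pop()                       # delta(q0, b, a) and delta(q1, b, a)
            state = "q1"
        else:
            return False                      # no transition defined
    # delta(q1, epsilon, z) = (qf, epsilon): accept iff only the marker remains
    return state == "q1" and stack == ["z"]
```

The simulation accepts exactly the strings with a non-empty run of a's followed by an equal run of b's.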
An example of an inherently ambiguous CFL is the union of {\displaystyle \{a^{n}b^{m}c^{m}d^{n}|n,m>0\}} and {\displaystyle \{a^{n}b^{n}c^{m}d^{m}|n,m>0\}} . This set is context-free, since the union of two context-free languages is always context-free. But there is no way to unambiguously parse strings in the (non-context-free) subset {\displaystyle \{a^{n}b^{n}c^{n}d^{n}|n>0\}} , which is the intersection of these two languages.[1] Dyck language[edit] The language of all properly matched parentheses is generated by the grammar {\displaystyle S\to SS~|~(S)~|~\varepsilon } . Context-free parsing[edit] Main article: Parsing The context-free nature of the language makes it simple to parse with a pushdown automaton. Determining an instance of the membership problem, i.e. given a string {\displaystyle w} , deciding whether {\displaystyle w\in L(G)} , where {\displaystyle L} is the language generated by a given grammar {\displaystyle G} , is also known as recognition. Context-free recognition for Chomsky normal form grammars was shown by Leslie G. Valiant to be reducible to boolean matrix multiplication, thus inheriting its complexity upper bound of O(n^2.3728639).[2][note 2] Conversely, Lillian Lee has shown O(n^(3−ε)) boolean matrix multiplication to be reducible to O(n^(3−3ε)) CFG parsing, thus establishing some kind of lower bound for the latter.[3] A special subclass of context-free languages are the deterministic context-free languages, which are defined as the set of languages accepted by a deterministic pushdown automaton and can be parsed by an LR(k) parser.[4] The class of context-free languages is closed under the following operations.
That is, if L and P are context-free languages, the following languages are context-free as well:
the union {\displaystyle L\cup P} of L and P[5]
the reversal of L[6]
the concatenation {\displaystyle L\cdot P} of L and P[5]
the Kleene star {\displaystyle L^{*}} of L[5]
the image {\displaystyle \varphi (L)} of L under a homomorphism {\displaystyle \varphi }
the inverse image {\displaystyle \varphi ^{-1}(L)} of L under a homomorphism {\displaystyle \varphi }
the circular shift of L (the language {\displaystyle \{vu:uv\in L\}} )
the prefix closure of L (the set of all prefixes of strings from L)[10]
the quotient L/R of L by a regular language R[11]
Nonclosure under intersection, complement, and difference[edit] The context-free languages are not closed under intersection. This can be seen by taking the languages {\displaystyle A=\{a^{n}b^{n}c^{m}\mid m,n\geq 0\}} and {\displaystyle B=\{a^{m}b^{n}c^{n}\mid m,n\geq 0\}} , which are both context-free.[note 3] Their intersection is {\displaystyle A\cap B=\{a^{n}b^{n}c^{n}\mid n\geq 0\}} , which can be shown to be non-context-free by the pumping lemma for context-free languages. As a consequence, the context-free languages cannot be closed under complementation, as for any languages A and B, their intersection can be expressed by union and complement: {\displaystyle A\cap B={\overline {{\overline {A}}\cup {\overline {B}}}}} . In particular, the context-free languages cannot be closed under difference, since the complement can be expressed by difference: {\displaystyle {\overline {L}}=\Sigma ^{*}\setminus L} . However, if L is a context-free language and D is a regular language then both their intersection {\displaystyle L\cap D} and their difference {\displaystyle L\setminus D} are context-free languages.[13] In formal language theory, questions about regular languages are usually decidable, but ones about context-free languages are often not.
It is decidable whether such a language is finite, but not whether it contains every possible string, is regular, is unambiguous, or is equivalent to a language with a different grammar. The following problems are undecidable for arbitrary context-free grammars A and B:
Equivalence: is {\displaystyle L(A)=L(B)} ?
Disjointness: is {\displaystyle L(A)\cap L(B)=\emptyset } ?[15] However, the intersection of a context-free language and a regular language is context-free,[16][17] hence the variant of the problem where B is a regular grammar is decidable (see "Emptiness" below).
Containment: is {\displaystyle L(A)\subseteq L(B)} ?[18] Again, the variant of the problem where B is a regular grammar is decidable,[citation needed] while that where A is regular is generally not.[19]
Universality: is {\displaystyle L(A)=\Sigma ^{*}} ?
Regularity: is {\displaystyle L(A)} a regular language?[21]
Ambiguity: is every grammar for {\displaystyle L(A)} ambiguous?[22]
The following problems are decidable:
Emptiness: Given a context-free grammar A, is {\displaystyle L(A)=\emptyset } ?
Finiteness: Given a context-free grammar A, is {\displaystyle L(A)} finite?[24]
Membership: Given a context-free grammar G and a word {\displaystyle w} , does {\displaystyle w\in L(G)} ? Efficient polynomial-time algorithms for the membership problem are the CYK algorithm and Earley's algorithm.
According to Hopcroft, Motwani, and Ullman (2003),[25] many of the fundamental closure and (un)decidability properties of context-free languages were shown in the 1961 paper of Bar-Hillel, Perles, and Shamir.[26]
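The CYK algorithm mentioned above can be sketched for a grammar in Chomsky normal form. The rule encoding and the example grammar below, a CNF grammar for {a^n b^n : n >= 1} (S -> A B | A X, X -> S B, A -> 'a', B -> 'b'), are our own illustration:

```python
# Sketch of CYK membership testing for a grammar in Chomsky normal form.
# rules: (lhs, rhs) pairs where rhs is a 1-tuple terminal or 2-tuple of nonterminals.
def cyk(word, rules, start="S"):
    n = len(word)
    if n == 0:
        return False
    # table[i][j] holds the nonterminals that derive word[i:j]
    table = [[set() for _ in range(n + 1)] for _ in range(n)]
    for i, ch in enumerate(word):
        for lhs, rhs in rules:
            if rhs == (ch,):
                table[i][i + 1].add(lhs)
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length
            for k in range(i + 1, j):      # try every split point
                for lhs, rhs in rules:
                    if (len(rhs) == 2 and rhs[0] in table[i][k]
                            and rhs[1] in table[k][j]):
                        table[i][j].add(lhs)
    return start in table[0][n]

rules = [("S", ("A", "B")), ("S", ("A", "X")), ("X", ("S", "B")),
         ("A", ("a",)), ("B", ("b",))]
```

The triple loop over substring length, start position, and split point is what gives CYK its cubic running time.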
To prove that a given language is not context-free, one may employ the pumping lemma for context-free languages[26] or a number of other methods, such as Ogden's lemma or Parikh's theorem.[28] ^ meaning of {\displaystyle \delta } 's arguments and results: {\displaystyle \delta (\mathrm {state} _{1},\mathrm {read} ,\mathrm {pop} )=(\mathrm {state} _{2},\mathrm {push} )} ^ In Valiant's paper, O(n2.81) was the then-best known upper bound. See Matrix multiplication#Computational complexity for bound improvements since then. ^ A context-free grammar for the language A is given by the following production rules, taking S as the start symbol: S → Sc | aTb | ε; T → aTb | ε. The grammar for B is analogous. ^ Hopcroft & Ullman 1979, p. 100, Theorem 4.7. ^ Valiant, Leslie G. (April 1975). "General context-free recognition in less than cubic time". Journal of Computer and System Sciences. 10 (2): 308–315. doi:10.1016/s0022-0000(75)80046-8. ^ Lee, Lillian (January 2002). "Fast Context-Free Grammar Parsing Requires Fast Boolean Matrix Multiplication" (PDF). J ACM. 49 (1): 1–15. arXiv:cs/0112018. doi:10.1145/505241.505242. S2CID 1243491. ^ a b c Hopcroft & Ullman 1979, p. 131, Corollary of Theorem 6.1. ^ Hopcroft & Ullman 1979, p. 142, Exercise 6.4d. ^ Hopcroft & Ullman 1979, p. 131-132, Corollary of Theorem 6.2. ^ Hopcroft & Ullman 1979, p. 142-144, Exercise 6.4c. ^ Hopcroft & Ullman 1979, p. 142, Exercise 6.4b. ^ Hopcroft & Ullman 1979, p. 142, Exercise 6.4a. ^ Stephen Scheinberg (1960). "Note on the Boolean Properties of Context Free Languages" (PDF). Information and Control. 3 (4): 372–375. doi:10.1016/s0019-9958(60)90965-7. ^ Beigel, Richard; Gasarch, William. "A Proof that if L = L1 ∩ L2 where L1 is CFL and L2 is Regular then L is Context Free Which Does Not use PDA's" (PDF). University of Maryland Department of Computer Science. Retrieved June 6, 2020. ^ Hopcroft & Ullman 1979, p. 203, Theorem 8.12(1). ^ Hopcroft & Ullman 1979, p. 202, Theorem 8.10. ^ Salomaa (1973), p. 
59, Theorem 6.7 ^ Hopcroft & Ullman 1979, p. 137, Theorem 6.6(a). ^ Hopcroft & Ullman 1979, p. 137, Theorem 6.6(b). ^ John E. Hopcroft; Rajeev Motwani; Jeffrey D. Ullman (2003). Introduction to Automata Theory, Languages, and Computation. Addison Wesley. Here: Sect.7.6, p.304, and Sect.9.7, p.411 ^ a b Yehoshua Bar-Hillel; Micha Asher Perles; Eli Shamir (1961). "On Formal Properties of Simple Phrase-Structure Grammars". Zeitschrift für Phonetik, Sprachwissenschaft und Kommunikationsforschung. 14 (2): 143–172. ^ Hopcroft & Ullman 1979. ^ "How to prove that a language is not context-free?". Hopcroft, John E.; Ullman, Jeffrey D. (1979). Introduction to Automata Theory, Languages, and Computation (1st ed.). Addison-Wesley. ISBN 9780201029888. Salomaa, Arto (1973). Formal Languages. ACM Monograph Series. Autebert, Jean-Michel; Berstel, Jean; Boasson, Luc (1997). "Context-Free Languages and Push-Down Automata". In G. Rozenberg; A. Salomaa (eds.). Handbook of Formal Languages (PDF). Vol. 1. Springer-Verlag. pp. 111–174. Ginsburg, Seymour (1966). The Mathematical Theory of Context-Free Languages. New York, NY, USA: McGraw-Hill. Sipser, Michael (1997). "2: Context-Free Languages". Introduction to the Theory of Computation. PWS Publishing. pp. 91–122. ISBN 0-534-94728-X.
Pad - Maple Help
pad the elements in a list
Pad[N](pd[1], ..., pd[N-1], L, pd[N], ...)
N : positive integer; location of list L in the argument sequence
pd[1], pd[2], ... : objects to pad the list elements with
The Pad[N](..., L, ...) function returns a new list that contains the elements of list L surrounded by the specified objects pd[1], pd[2], .... In the list that is returned by Pad, each element of L is located at position N in the series of specified objects pd[1], pd[2], ....
> with(ListTools):
> L := [a, b, c]
        L := [a, b, c]
> Pad[2](LL1, L, RR1, RR2)
        [LL1, a, RR1, RR2, LL1, b, RR1, RR2, LL1, c, RR1, RR2]
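Pad's behavior can be mimicked in a few lines. The function below is our sketch, not Maple's implementation:

```python
# Sketch mimicking ListTools:-Pad: args holds the pad objects, with the list L at
# 1-based position n; each element of L is wrapped by the surrounding pads in turn.
def pad(n, *args):
    pads_before, lst, pads_after = args[:n - 1], args[n - 1], args[n:]
    out = []
    for el in lst:
        out.extend(pads_before)
        out.append(el)
        out.extend(pads_after)
    return out
```

With the list at position 2 and pads LL1, RR1, RR2 this reproduces the Maple output shown above.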
Three numbers are in the ratio of 5 : 3 : 7. The sum of their cubes is 61875. Find the three numbers. - Maths - Cubes and Cube Roots - Meritnation.com
Three numbers are in the ratio of 5 : 3 : 7. The sum of their cubes is 61875. Find the three numbers. Please explain.
Since the numbers are in the ratio of 5 : 3 : 7, suppose the numbers are 5x, 3x and 7x respectively. Now, according to the question:

(5x)^3 + (3x)^3 + (7x)^3 = 61875
⇒ 125x^3 + 27x^3 + 343x^3 = 61875
⇒ 495x^3 = 61875
⇒ x^3 = 61875 / 495 = 125
⇒ x = ∛125 = 5

Therefore the numbers are 5x = 5 × 5 = 25; 3x = 3 × 5 = 15; 7x = 7 × 5 = 35.
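The arithmetic above can be verified directly; taking the cube root numerically via `** (1/3)` and rounding is a shortcut we use here for illustration:

```python
# Quick numeric check of the solution: numbers in ratio 5 : 3 : 7 whose cubes
# sum to 61875.
ratio = (5, 3, 7)
total = 61875
x = round((total / sum(r ** 3 for r in ratio)) ** (1 / 3))   # 61875 / 495 = 125
numbers = [r * x for r in ratio]
```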
A radio station is giving away free t-shirts to students in local schools. It plans to give away 40 shirts at Big Sky Middle School and 75 shirts at High Peaks High School. Big Sky Middle School has 350 students, and 800 students attend High Peaks High School. What is the probability of getting a t-shirt if you are a student at the middle school? Remember, probability can be expressed by this fraction:

\frac{\text{number of shirts given at the school}}{\text{total number of students at the school}}

For the middle school, this is \frac{40}{350}. What is the probability of getting a t-shirt if you are a student at the high school? Are you more likely to get a t-shirt if you are a student at the high school, or at the middle school? Convert each fraction to a percent. The school with the higher percentage means you are more likely to get a t-shirt if you are a student at that school.
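The comparison the problem asks for works out as follows (a small worked check):

```python
# probability = shirts given at the school / students at the school
middle = 40 / 350    # Big Sky Middle School
high = 75 / 800      # High Peaks High School
middle_pct = round(100 * middle, 1)
high_pct = round(100 * high, 1)
more_likely = "middle school" if middle > high else "high school"
```

The middle school works out to about 11.4% and the high school to about 9.4%, so a middle school student is more likely to get a shirt.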
Wright omega function - MATLAB wrightOmega - MathWorks Australia
wrightOmega(x) computes the Wright omega function of x. If x is a matrix, wrightOmega acts elementwise on it.
Compute Wright Omega Function of Numeric Inputs
Compute the Wright omega function for these numbers. Because these numbers are not symbolic objects, you get floating-point results:
wrightOmega(1/2)
wrightOmega(pi)        % returns 2.3061
wrightOmega(-1+i*pi)   % returns -1.0000 + 0.0000i
Compute Wright Omega Function of Symbolic Numbers
Compute the Wright omega function for the numbers converted to symbolic objects. For most symbolic (exact) numbers, wrightOmega returns unresolved symbolic calls:
wrightOmega(sym(1/2))
wrightOmega(sym(pi))
For some exact numbers, wrightOmega has special values:
wrightOmega(-1+i*sym(pi))
Compute Wright Omega Function of Symbolic Expression
Compute the Wright omega function for x and sin(x) + x*exp(x). For symbolic variables and expressions, wrightOmega returns unresolved symbolic calls:
wrightOmega(x)
wrightOmega(sin(x) + x*exp(x))
Compute Derivative of Wright Omega Function
Now compute the derivatives of these expressions:
diff(wrightOmega(x), x, 2)
diff(wrightOmega(sin(x) + x*exp(x)), x)
The results are:
wrightOmega(x)/(wrightOmega(x) + 1)^2 - wrightOmega(x)^2/(wrightOmega(x) + 1)^3
(wrightOmega(sin(x) + x*exp(x))*(cos(x) + exp(x) + x*exp(x)))/(wrightOmega(sin(x) + x*exp(x)) + 1)
Compute Wright Omega Function for Matrix Input
Compute the Wright omega function for elements of matrix M and vector V:
M = [0 pi; 1/3 -pi];
V = sym([0; -1+i*pi]);
wrightOmega(M)
wrightOmega(V)   % the symbolic result for V involves lambertw(0, 1)
The Wright omega function is defined in terms of the Lambert W function:
\omega(x) = W_{\lceil (\operatorname{Im}(x) - \pi)/(2\pi) \rceil}(e^{x})
The Wright omega function ω(x) is a solution of the equation Y + log(Y) = X.
[1] Corless, R. M. and D. J. Jeffrey. "The Wright omega Function." Artificial Intelligence, Automated Reasoning, and Symbolic Computation (J. Calmet, B. Benhamou, O. Caprotti, L. Henocque, and V.
Sorge, eds.). Berlin: Springer-Verlag, 2002, pp. 76-89.
See Also: lambertW | log
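For real x, the branch index in the definition above is 0, and omega(x) is the positive solution of w + log(w) = x; it can be approximated with a few Newton steps. This is our own numeric sketch, not MathWorks code:

```python
import math

# Numeric sketch of the Wright omega function on the real line, where omega(x)
# solves w + log(w) = x. Newton's method on f(w) = w + log(w) - x, with
# f'(w) = (w + 1) / w, keeps iterates positive for the inputs used here.
def wright_omega_real(x, tol=1e-12, max_iter=100):
    w = x if x > 1.0 else 1.0           # rough positive starting guess
    for _ in range(max_iter):
        f = w + math.log(w) - x
        w_next = w - f * w / (w + 1.0)  # Newton step
        if abs(w_next - w) < tol:
            return w_next
        w = w_next
    return w
```

This reproduces the documented floating-point value wrightOmega(pi) ≈ 2.3061, and wright_omega_real(0.0) gives the omega constant W(1) ≈ 0.5671 (the real value behind the symbolic lambertw(0, 1) above).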
Calculating How Many Tokens Your LPs are Worth - PrivacySwap 2.0
Many people ask about how to find out the value of their LP tokens. Here we will try to provide that info for you. A little technical, but pretty easy once you understand how it works. LP tokens are representations of your share of the total liquidity pool that you get after providing liquidity of two tokens. For example, in the PRV2-BNB liquidity pool, you need to add liquidity using PRV and BNB. In other words, in order to find out how much your tokens are worth, you just need a few variables:
The number of LP tokens you have for the specific pool you are calculating for.
The total number of LP tokens in circulation.
The current amount and price of tokens in the pool.
With these three variables, you can calculate the value of your LP tokens. Let us show you how.
Step 1: Find out the amount of LP tokens you have. If you're staked in farms, head to the farm you're staked in and take a look.
Step 2: Find out the total amount of LP tokens in circulation. You do this by selecting "Details" and then "View on BscScan". Look for "Total Supply" in the page that opens on BscScan. Although LP tokens ALL bear the same name "Cake-LP", different LPs have different LP contract addresses. This means that the PRV2-BUSD LP token will be called Cake-LP, and so will the PRV2-BNB LP token. But they are NOT the same LP, and the "Total Supply" for PRV2-BUSD LP and PRV2-BNB LP will differ. So make sure you are looking at the "Total Supply" of the right LP. (How to check comes in the later steps!)
Step 3: Take the amount of LP tokens you have staked found in Step 1, and divide it by the "Total Supply" in Step 2, and you get your share of the LP. In this case, Your LP Ratio = 771.788 / 52,582.617314 = 0.0146776261704743, which also means that you own 1.4677% of the LP's assets.
Step 4: Select "Contract" to be taken to the smart contract's page.
Step 5: On the contract's page, you will see a dropdown that displays all the current assets within this contract address. This dropdown clearly shows that the assets in this pool are BUSD and PRV, which means that this is the PRV-BUSD LP. You can repeat the above steps for other LPs and find out exactly how many assets are in each of them. This pool has 247,612.09520648 BUSD and 11,177.07254118 PRV.
Step 6: Calculating what your LPs are worth is simply a matter of taking your LP Ratio obtained in Step 3 and multiplying it by each of the assets in Step 5.
Value of Current Holdings = Your LP Ratio × Amount of Assets
Value of BUSD Holdings = 0.0146776261704743 × 247,612.09520648 = 3,634.357768728605 BUSD
Value of PRV Holdings = 0.0146776261704743 × 11,177.07254118 = 164.0528924397133 PRV
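The six steps reduce to one division and two multiplications. The figures below are the ones from the walkthrough above; the variable names are ours:

```python
# Figures from the PRV-BUSD walkthrough above
my_lp_tokens = 771.788
total_supply = 52582.617314
pool_busd = 247612.09520648
pool_prv = 11177.07254118

lp_ratio = my_lp_tokens / total_supply   # Step 3: your share of the pool
busd_value = lp_ratio * pool_busd        # Step 6: your share of each asset
prv_value = lp_ratio * pool_prv
```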
Scientists consider the average growth rate of kelp (which grows in the sea) and the average mass of crabs that live in kelp beds to be indicators of the health of marine life. But they want to know if there is an association between the growth rates of kelp and the masses of crabs. Marine biologists collected data from different parts of the world and created the following conditional relative frequency table. They considered the average growth rate of kelp as the independent variable.

                              Average Mass of Crabs (kg)
Average Growth Rate of Kelp | < 0.25 | 0.25-0.49 | 0.50-0.75 | > 0.75
< 5                         |  66%   |   34%     |    0%     |   0%
≥ 5                         |  12%   |   49%     |   35%     |   4%

Is there an association between growth rates of kelp and the masses of crabs? When the growth rate of the kelp is high, how are the crab masses distributed? What about when the growth rate of the kelp is low? What does this tell you about the association?
The Echo Modelling and Simulation of the Semi-Active Radar Seeker against a Sea Skimming Target
Peng Peng, Lixin Guo, Hualong Sun
School of Physics and Optoelectronic Engineering, Xidian University, Xi'an, China.
This paper proposes a new modelling and simulation technique for the echo of a semi-active radar seeker against a sea skimming target. The echo modelling is based on electromagnetic scattering mechanisms. A modified four-path model based on the radar detection scene is used to describe the multipath scattering between the target and the rough sea surface. A facet-based Small Slope Approximation (FBSSA) method is employed to calculate the scattering from the sea surface. The Physical Optics (PO) and Equivalent Edge Current (EEC) methods are used to calculate the target scattering. The results present the original echo and the echo processed by the signal processing procedures, in which the clutter and multipath effects can be observed.
Keywords: Semi-Active Radar Seeker, Sea Skimming Target, Electromagnetic Scattering, Radar Echo
Peng, P., Guo, L. and Sun, H. (2018) The Echo Modelling and Simulation of the Semi-Active Radar Seeker against a Sea Skimming Target. Journal of Computer and Communications, 6, 74-79. doi: 10.4236/jcc.2018.612007.
Topics about ocean security have been widely discussed in recent years. The detection of and defence against sea skimming targets is one such topic, and it is particularly noticed in the military field [1] [2]. The semi-active RF seeker is often set up on an airborne or missile-borne platform. It does not contain an active radar, but receives the scattered energy from the target when the target is illuminated by the launch platform on the ground.
The radar echo components received by the seeker are very complex: the combined effect of multipath scattering and sea clutter may have detrimental effects on target detection. To better analyze the echo from the sea skimming target and evaluate the detection performance of the semi-active seeker, an accurate and efficient simulation model is necessary. An echo simulation model for the semi-active radar seeker against a sea skimming target is proposed in this paper. The echo simulations are based on the hybrid electromagnetic scattering mechanisms of the target-sea model, which considers the scattering in the simulation scene at each time instant.
2. Approach and Models
2.1. The Scattering Mechanisms and Models
The sea skimming target has composite scattering mechanisms, which basically comprise the target scattering, the sea clutter scattering, and the multipath scattering between the target and the sea surface. The four-path model is an efficient way to describe these scattering mechanisms. In this paper, the four-path model is employed based on the semi-active seeker detection geometry, shown in the following figure. As shown in Figure 1, the total response observed at the receiver can be seen as the summation of the returns from the following paths: the direct return from the target and sea surface; the single-bounce returns, in which the scattered field experiences a single bounce with the sea surface either after scattering from the target or prior to illuminating the target; and the fourth path, a double ground-bounce return, in which the ray interacts with the sea surface twice, once on the path to the target and again after scattering from the target, prior to arrival at the receiver. In fact, the classical four-path model is only suitable in the ideal condition when the sea surface is regarded as a flat surface. The actual sea surface is a kind of highly random rough surface.
In that case, the scattering components are considered at every reflecting facet on the interface boundary, rather than at a single specular point, as illustrated in Figure 2; this is referred to as the modified four-path model or the extended image method [3].
Figure 1. Radar scattering geometry.
Figure 2. The multipath scattering mechanisms for the very rough surface.
In this model each facet acts as a small mirror, against which the target yields image reflections at the corresponding image locations; the details of the method can be found in [3]. The scattered field from an illuminated target facet is calculated by the physical optics (PO) method, using the Stratton-Chu equation in the PO approximation:
\[
E_s^{PO} = \frac{jk\,\mathrm{e}^{-ikR}}{4\pi R}\iint \eta\,\hat{k}_s\times\bigl(\hat{k}_s\times\bigl(2\hat{n}\times H_{inc}(r')\bigr)\bigr)\,\mathrm{e}^{ik\hat{k}_s\cdot r'}\,\mathrm{d}s'
\]
A Gaussian pattern beam is used as the incident wave. The scattering from the edges of the target is calculated by the equivalent edge current (EEC) method. The scattered field from a sea facet is calculated by the facet-based small slope approximation (FBSSA) method, where the field from each sea facet is
\[
E_s^{SSA} = S^{SSA}\cdot\frac{\mathrm{e}^{ikR}}{R}
\]
Here \(S^{SSA}\) is the scattering amplitude (SA) of an individual sea facet,
\[
S^{SSA} = \frac{2(qq_0)^{1/2}}{(q+q_0)\,P_{inc}^{1/2}}\,B(k,k_0)\int \frac{T\bigl(r,\xi(r)\bigr)}{(2\pi)^2}\,\mathrm{e}^{-i(k-k_0)\cdot r - i(q+q_0)\xi(r)}\,\mathrm{d}r
\]
where \(k_0\) and \(q_0\) are the horizontal and vertical projections of the incident vector \(k_i\), and \(k\) and \(q\) are the horizontal and vertical projections of the scattering vector \(k_s\).
\(B(k,k_0)\) is a polarization matrix, given in [4]. The SSA integral can be solved analytically over the Bragg wave structure on each facet; this process is described in detail in [5] and is not repeated here.
2.2. Echo Simulation Model
The echo simulation model is set up for the radar detection scene of the semi-active seeker described in Figure 1. The target flies above the time-varying sea, and the seeker flies toward the target. The seeker receives the returns from the target scattering, the multipath scattering and the sea surface scattering when they are illuminated by the radar. The radar is assumed to be pulsed, and within one pulse duration the target and the seeker are assumed static. Because the sea surface scattering and the multipath scattering are time-varying, the reflectivity of each scattering point at each position is recalculated at every pulse. The echo at each pulse instant is obtained by summing the scattered energy of all scatterers in the simulation scene:
\[
s_r(k,t) = \sum_{i=1}^{N}\gamma_i(k)\,\mathrm{rect}\!\left[\frac{t-2R_i(k)/c}{T_r}\right]\phi\!\left(t-\frac{2R_i(k)}{c}\right)\exp\!\left\{-j\frac{4\pi R_i(k)}{\lambda}\right\} \tag{4}
\]
In (4), rect(·) is the envelope of the rectangular pulse, \(T_r\) is the pulse repetition time (PRT), \(R_i(k)\) is the distance of scatterer \(i\), which sets the time delay of the wave propagation, and \(\gamma_i\) is the reflectivity of each scattering point at each time instant, calculated by the model of Section 2.1. The returns collected from the facets in the same range bin are processed together.
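As a sketch of how the per-pulse summation in Eq. (4) can be evaluated numerically, the following Python snippet sums the returns of a few point scatterers for one pulse. The reflectivities and ranges are illustrative values, not the paper's data, and the intra-pulse waveform φ is omitted (a unit envelope is used):

```python
import numpy as np

def echo_pulse(t, gammas, ranges, Tp, lam, c=3e8):
    """Sum of scatterer returns for one pulse, following Eq. (4):
    each scatterer contributes a delayed rectangular envelope with
    a range-dependent phase exp(-j*4*pi*R/lambda)."""
    s = np.zeros_like(t, dtype=complex)
    for g, R in zip(gammas, ranges):
        tau = 2.0 * R / c                            # two-way delay
        rect = ((t - tau) >= 0) & ((t - tau) < Tp)   # pulse envelope
        s += g * rect * np.exp(-1j * 4 * np.pi * R / lam)
    return s

# two point scatterers at 600 m and 900 m, X band (lambda = 3 cm),
# 1 us pulse width, sampled over one 10 us PRT
t = np.linspace(0.0, 10e-6, 1000)
s = echo_pulse(t, gammas=[1.0, 0.5], ranges=[600.0, 900.0], Tp=1e-6, lam=0.03)
```

The first return appears at the two-way delay 2R/c = 4 μs. Repeating this per pulse index k and stacking the rows yields the slow-time/fast-time matrix on which range-Doppler processing of the kind shown in Figure 3 operates.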
In the echo simulations, the antenna patterns of the seeker and of the ground radar must be considered; a Gaussian beam function is used:
\[
G_{MB}(\theta) = \exp\!\left(-2\ln 2\,\theta^2/\theta_B^2\right),\qquad |\theta|\le\mu
\]
where \(\theta_B\) is the half-power beam width of the main lobe and \(\mu\) is the azimuth width, given by
\[
\mu = \theta_B\sqrt{\ln g_3/(-2\ln 2)}
\]
The radiation from the first and second side-lobes is also considered, with beam functions
\[
G_{B1}(\theta) = g_1\exp\!\left(-2\ln 2\,(\theta\pm 1.5\mu)^2/\theta_{B1}^2\right),\qquad \mu\le|\theta|\le 2\mu
\]
\[
G_{B2}(\theta) = g_2\exp\!\left(-2\ln 2\,(\theta\pm 2.5\mu)^2/\theta_{B2}^2\right),\qquad 2\mu<|\theta|\le 3\mu
\]
Here \(g_3\) is the gain of the main beam, and \(g_1\) and \(g_2\) are the peak gains of the first and second side-lobes.
\(\theta_{B1}\) and \(\theta_{B2}\) are the beam widths of the first and second side-lobes, given by
\[
\theta_{B1} = 0.5\mu\sqrt{(-2\ln 2)/\ln(g_3/g_1)},\qquad
\theta_{B2} = 0.5\mu\sqrt{(-2\ln 2)/\ln(g_3/g_2)}
\]
In the real scene, the target flies above an effectively infinite sea surface; in the radar echo simulations, the target is moving and the sea surface is time-varying. Once the echo is generated, it is handled by the signal processing procedures of Figure 3 [6].
3. Results
In the numerical simulations, the working frequency of the ground radar is 10 GHz (X band). A low sea state is chosen, with a wind speed of 3 m/s at 10 m above the sea surface in the Elfouhaily sea spectrum. Figure 4 shows the bistatic scattering characteristics of a cruise missile target above the sea. The target is 5 m above the sea, and the incidence angles are \(\theta_i = 45^\circ,\ \phi_i = 0^\circ\). The results are compared with the target-only scattering and the sea-only scattering: the composite scattering combines the target scattering, the sea scattering and the multipath scattering, so it is much stronger than either individual component.
In the echo simulations, the bandwidth is 5 MHz. The antenna parameters are set as \(g_1 = 10^{-2}\), \(g_2 = 10^{-2.5}\), \(g_3 = 10^{-3}\), and \(\theta_B = 5^\circ\). The PRT is 10 μs, the pulse duration is 1 μs, and the sampling frequency is 100 MHz. Figure 5 shows the echo simulation results, where 512 pulses are used in a CPI: Figure 5(a) shows the raw echo signal, and Figure 5(b) shows the range-Doppler map obtained by applying the signal processing to the echo. The sea clutter has the strongest power in the map. Because the target has a velocity, its Doppler value differs from that of the clutter.
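The composite beam pattern above can be sketched in Python as follows (vectorized over θ; the parameter defaults mirror the paper's antenna settings, and the ± in the side-lobe terms is handled through |θ|):

```python
import numpy as np

def gaussian_beam(theta, theta_B, g1=1e-2, g2=10**-2.5, g3=1e-3):
    """Gaussian beam pattern: main lobe for |theta| <= mu, first
    side-lobe on mu < |theta| <= 2*mu, second on 2*mu < |theta| <= 3*mu."""
    theta = np.asarray(theta, dtype=float)
    mu = theta_B * np.sqrt(np.log(g3) / (-2.0 * np.log(2)))
    tB1 = 0.5 * mu * np.sqrt(-2.0 * np.log(2) / np.log(g3 / g1))
    tB2 = 0.5 * mu * np.sqrt(-2.0 * np.log(2) / np.log(g3 / g2))
    a = np.abs(theta)
    G = np.where(a <= mu,
                 np.exp(-2.0 * np.log(2) * theta**2 / theta_B**2), 0.0)
    G = np.where((a > mu) & (a <= 2 * mu),
                 g1 * np.exp(-2.0 * np.log(2) * (a - 1.5 * mu)**2 / tB1**2), G)
    G = np.where((a > 2 * mu) & (a <= 3 * mu),
                 g2 * np.exp(-2.0 * np.log(2) * (a - 2.5 * mu)**2 / tB2**2), G)
    return G
```

At boresight the gain is unity, and the first side-lobe peaks at |θ| = 1.5μ with level g1, as the equations require.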
The multipath effect can also be identified in the map: it broadens the target Doppler spectrum, but it is weak compared with the target and the sea clutter.
Figure 3. Signal processing procedure.
Figure 4. Bistatic scattering characteristics of a cruise missile above the sea surface.
Figure 5. Echo simulation results of the semi-active seeker. (a) Raw echo signal; (b) Range-Doppler map.
4. Conclusion
In this paper, the radar echo of a semi-active radar seeker against a sea skimming target is modelled and simulated. The scattering mechanisms are treated by a hybrid scheme based on the radar detection scene of the semi-active seeker. The echo after signal processing shows the spatial and Doppler characteristics of the target, the sea clutter, and the multipath scattering.
References
[1] Li, H., Zhang, Y., Li, S., Li, S. and Sun, C. (2010) Low Altitude Sea-Skimming Target Detection System Design of Microwave. Electronic Test, 4, 281-284.
[2] Zhou, H., Hu, G., Kuang, X. and Shi, J. (2017) A Study on the Target Detection Performance of Radar in Low-Altitude Multipath Environment. Modern Radar, 6, 121-124.
[3] Peng, P., Guo, L.X. and Tong, C. (2018) An EM Model for Radar Multipath Simulation and HRRP Analysis of Low Altitude Target above Electrically Large Composite Scale Rough Surface. Electromagnetics, 10, 1-12.
[4] Voronovich, A.G. (2002) The Effect of the Modulation of Bragg Scattering in Small-Slope Approximation. Waves in Random Media, 12, 341-349. https://doi.org/10.1088/0959-7174/12/3/306
[5] Peng, P. and Guo, L. A Facet-Based Simulation of the Multipath Effect on the EM Scattering and Doppler Spectrum of a Low-Flying Target at Maritime Scene. IEEE Geoscience and Remote Sensing Letters.
[6] Melvin, W.L. and Scheer, J.A. (2013) Principles of Modern Radar. SciTech Publishing, an imprint of the IET, Edison, NJ.
Transforms Supported by hgtransform - MATLAB & Simulink - MathWorks Australia
Creating a Transform Matrix
The Default Transform
Disallowed Transforms: Perspective
Disallowed Transforms: Shear
Absolute vs. Relative Transforms
Combining Transforms into One Matrix
Multiplying the Transform by the Identity Matrix
Undoing Transform Operations
The transform object's Matrix property applies a transform to all the object's children in unison. Transforms include rotation, translation, and scaling. Define a transform with a four-by-four transformation matrix. The makehgtform function simplifies the construction of matrices to perform rotation, translation, and scaling. For information on creating transform matrices using makehgtform, see Nest Transforms for Complex Movements.
Rotation transforms follow the right-hand rule: they rotate objects about the x-, y-, or z-axis, with positive angles rotating counterclockwise while sighting along the respective axis toward the origin. If the angle of rotation is theta, the following matrix defines a rotation of theta about the x-axis:
\[\begin{bmatrix}1&0&0&0\\0&\cos\theta&-\sin\theta&0\\0&\sin\theta&\cos\theta&0\\0&0&0&1\end{bmatrix}\]
To create a transform matrix for rotation about an arbitrary axis, use the makehgtform function.
Translation transforms move objects with respect to their current locations. Specify the translation as distances tx, ty, and tz in data space units. The following matrix shows the location of these elements in the transform matrix:
\[\begin{bmatrix}1&0&0&t_x\\0&1&0&t_y\\0&0&1&t_z\\0&0&0&1\end{bmatrix}\]
Scaling transforms change the sizes of objects. Specify scale factors sx, sy, and sz and construct the following matrix:
\[\begin{bmatrix}s_x&0&0&0\\0&s_y&0&0\\0&0&s_z&0\\0&0&0&1\end{bmatrix}\]
You cannot use scale factors less than or equal to zero. The default transform is the identity matrix, which you can create with the eye function. See Undoing Transform Operations.
Perspective transforms change the distance at which you view an object. The following matrix is an example of a perspective transform matrix, which MATLAB® graphics does not allow.
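For readers outside MATLAB, the three matrix families above are easy to reproduce. Here is a small NumPy sketch (Python, illustrative helper names) mirroring what makehgtform('xrotate',theta), makehgtform('translate',tx,ty,tz), and makehgtform('scale',[sx sy sz]) build:

```python
import numpy as np

def xrotate(theta):
    """4x4 rotation about the x-axis (right-hand rule)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1.0, 0.0, 0.0, 0.0],
                     [0.0,   c,  -s, 0.0],
                     [0.0,   s,   c, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

def translate(tx, ty, tz):
    """4x4 translation by (tx, ty, tz) in data space units."""
    T = np.eye(4)
    T[:3, 3] = [tx, ty, tz]
    return T

def scale(sx, sy, sz):
    """4x4 scaling; factors must be positive."""
    return np.diag([sx, sy, sz, 1.0])

# rotating the point (0, 1, 0) by 90 degrees about x maps it to (0, 0, 1)
p = xrotate(np.pi / 2) @ np.array([0.0, 1.0, 0.0, 1.0])
```

Points are column vectors in homogeneous coordinates, so a transform applies as a matrix-vector product, exactly as hgtransform applies its Matrix property to its children.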
\[\begin{bmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&p_x&0\end{bmatrix}\]
In this case, px is the perspective factor.
Shear transforms keep all points along a given line (or plane, in 3-D coordinates) fixed while shifting all other points parallel to the line (plane) by an amount proportional to their perpendicular distance from the fixed line (plane). The following matrix is an example of a shear transform matrix, which hgtransform does not allow:
\[\begin{bmatrix}1&s_x&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{bmatrix}\]
In this case, sx is the shear factor and can replace any zero element in an identity matrix.
Transforms are specified in absolute terms, not relative to the current transform. For example, if you apply a transform that translates the transform object 5 units in the x direction, and then you apply another transform that translates it 4 units in the y direction, the resulting position of the object is 4 units in the y direction from its original position. If you want transforms to accumulate, you must concatenate the individual transforms into a single matrix. See Combining Transforms into One Matrix.
It is usually more efficient to combine various transform operations into one matrix by concatenating (multiplying) the individual matrices and setting the Matrix property to the result. Matrix multiplication is not commutative, so the order in which you multiply the matrices affects the result. For example, suppose you want to perform an operation that scales, translates, and then rotates. With S the scaling matrix, T the translation matrix, and R the rotation matrix, form the composite C = R*T*S (the rightmost matrix applies first). Then set the transform object's Matrix property to C. The following sets of statements are not equivalent: the first set results in the removal of the transform C, while the second set applies the transform C.
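A quick way to see both the composition order and the non-commutativity is to multiply concrete matrices. This Python sketch (with illustrative values) forms the composite for a scale, then translate, then rotate sequence, and shows that reversing the order produces a different transform:

```python
import numpy as np

S = np.diag([2.0, 1.0, 1.0, 1.0])      # scale x by 2
T = np.eye(4); T[0, 3] = 5.0           # translate +5 in x
R = np.array([[0.0, -1.0, 0.0, 0.0],   # rotate 90 degrees about z
              [1.0,  0.0, 0.0, 0.0],
              [0.0,  0.0, 1.0, 0.0],
              [0.0,  0.0, 0.0, 1.0]])

C = R @ T @ S          # scale first, then translate, then rotate
C_swapped = S @ T @ R  # same factors, different order, different transform

p = np.array([1.0, 0.0, 0.0, 1.0])
q = C @ p              # scaled to x=2, shifted to x=7, rotated onto +y
```

Setting the Matrix property to a matrix like C applies all three operations at once; setting it back to the identity undoes them.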
Concatenating the identity matrix with other matrices has no effect on the composite matrix. Because transform operations are specified in absolute terms (not relative to the current transform), you can undo a series of transforms by setting the current transform to the identity matrix. For example, setting the Matrix property of a transform object hg to eye(4) returns the objects contained by hg to their orientation before applying the transform C. For more information on the identity matrix, see the eye function.
a2-16p - 80 - Maths - Algebraic Expressions and Identities - 9704387 | Meritnation.com
a2-16p - 80
Hi, it should be p² instead of a². Then:
\[
\begin{aligned}
p^2 - 16p - 80 &= 0\\
p^2 - 20p + 4p - 80 &= 0\\
p(p-20) + 4(p-20) &= 0\\
(p-20)(p+4) &= 0\\
p &= 20,\ -4
\end{aligned}
\]
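The factorisation can be checked mechanically; a short Python verification (not part of the original answer):

```python
# verify that p^2 - 16p - 80 factors as (p - 20)(p + 4)
def f(p):
    return p**2 - 16 * p - 80

roots = [20, -4]
values = [f(p) for p in roots]   # both should be zero
```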
The school counselors are worried about the study habits of students who are involved in a lot of after-school activities. They randomly selected students at the school and gathered the following data. Consider the number of activities the independent variable.

Hours of After-School Activities Per Week | Studies Less Than 8 Hours Per Week | Studies 8 or More Hours Per Week
Less Than 5 Hours | 29 | 19
5 or More Hours | 14 | 36

Make a conditional relative frequency table. What is the total number of students in each row? What is the percentage of this total for each column? Use this information to make your table. Is there an association between the amount of time spent studying and the number of after-school activities? There is an association, but is it the association you would expect?
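The requested conditional relative frequency table (conditioning on the rows, i.e. on activity hours) can be computed directly; a short Python sketch with the given counts:

```python
# counts from the problem: rows condition on after-school activity hours
counts = {
    "less than 5 activity hrs": {"studies < 8 hrs": 29, "studies >= 8 hrs": 19},
    "5 or more activity hrs":   {"studies < 8 hrs": 14, "studies >= 8 hrs": 36},
}

cond = {}
for row, cells in counts.items():
    total = sum(cells.values())           # row totals: 48 and 50 students
    cond[row] = {col: round(100.0 * n / total, 1) for col, n in cells.items()}
```

Students with fewer activities mostly study under 8 hours (about 60.4% of that row), while students with 5 or more activity hours mostly study 8 or more hours (72%), which is the opposite of the association the counselors feared.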
This paper develops a multi-product supply chain model in which supply-production-sale integration is considered and the worst-case conditional value at risk (WCVaR) model is applied as the risk measure; it also provides a coordination strategy to minimize the supply chain risk. First, by analyzing the source of market demand in the supply chain, three WCVaR models for the three tiers of the supply chain (the supplier, the manufacturer and the retailer) are proposed to measure the market risk. Then, a risk coordination model is proposed to cover the whole supply chain, including production, ordering, inventory and sales. Finally, numerical results show the efficiency of the model in mitigating risks, and supply chain risk management strategies are summarized.
Keywords: WCVaR Model, Supply Chain, Supply-Production-Sale Integration, Multi Product
Jiang, M., Shen, R. and Meng, Z. (2017) WCVaR-Based Risk Coordination Model for Multi-Product Supply. Open Journal of Business and Management, 5, 641-652. doi: 10.4236/ojbm.2017.54054.
For the first tier, let \(\xi=(\xi_1,\cdots,\xi_n,\cdots,\xi_N)^{\mathrm T}\) be the random demand vector and \(x=(x_1,\cdots,x_n,\cdots,x_N)^{\mathrm T}\in X\) the decision vector, with loss function \(L_1(x,\xi)\). The demand density is only known to lie in the mixture set generated by candidate densities \(p_1^i(t)\), \(i=1,2,\cdots,I\):
\[
\mathrm P_1=\Bigl\{\sum_{i=1}^I\lambda_i^1 p_1^i(t)\ \Big|\ \sum_{i=1}^I\lambda_i^1=1,\ \lambda_i^1\ge 0\Bigr\},\qquad
\Lambda_1=\Bigl\{\lambda_1=(\lambda_1^1,\cdots,\lambda_I^1)\ \Big|\ \sum_{i=1}^I\lambda_i^1=1,\ \lambda_i^1\ge 0\Bigr\}.
\]
The worst-case CVaR at confidence level \(\beta_1\) is
\[
\mathrm{WCVaR}_{\beta_1}(x)=\max_{\lambda\in\Lambda_1}\sum_{i=1}^I\lambda_i F_{1i}(x,\alpha_1),\qquad
F_{1i}(x,\alpha_1)=\alpha_1+\frac{1}{1-\beta_1}\int_{t\in R^m}\bigl[L_1(x,t)-\alpha_1\bigr]^+p_1^i(t)\,\mathrm dt,
\]
and the risk-minimization problem \(\min\,\mathrm{WCVaR}_{\beta_1}(x)\) s.t. \(x\in X\) is equivalent to
\[
\min\ \chi_1\quad \text{s.t.}\ \chi_1\ge F_{1i}(x,\alpha_1),\ i=1,2,\cdots,I,\quad x\in X,\ \alpha_1,\chi_1\in R^1.
\]
With unit parameters \(c_n\), \(r_n\) (\(r_n<c_n\)), \(a_n\) and \(s_n\), the loss takes the newsvendor form
\[
L_1(x,\xi)=\sum_{n=1}^N\Bigl\{(a_n+c_n-r_n)\bigl[x_n-\xi_n\bigr]^+ +(s_n-c_n)\bigl[\xi_n-x_n\bigr]^+\Bigr\}.
\]
Sampling scenarios \(\xi_k\), \(k=1,\cdots,K\), with probabilities \(p_{1k}^i\) gives the discrete approximation
\[
F_{1i}(x,\alpha_1)\approx\tilde F_{1i}(x,\lambda_1,\alpha_1)=\alpha_1+\frac{1}{1-\beta_1}\sum_{k=1}^K\bigl[L_1(x,\xi_k)-\alpha_1\bigr]^+p_{1k}^i.
\]
Introducing auxiliary variables \(\mu_{1,k}=L_1(x,\xi_k)-\alpha_1\), \(\nu_{1,k}=x_n-\xi_{n,k}\) and \(\omega_{1,k}=\xi_{n,k}-x_n\), the problem becomes the linear program
\[
\begin{aligned}
\min_{(x,\alpha_1,\mu_1,\nu_1,\omega_1,\chi_1)}\ &\chi_1\\
\text{s.t.}\ &\mu_{1,k}\ge\sum_{n=1}^N\Bigl\{(a_n+c_n-r_n)\nu_{1,k}+(s_n-c_n)\omega_{1,k}\Bigr\}-\alpha_1,\quad \mu_{1,k}\ge 0,\\
&\nu_{1,k}\ge x_n-\xi_{n,k},\quad \nu_{1,k}\ge 0,\qquad \omega_{1,k}\ge\xi_{n,k}-x_n,\quad \omega_{1,k}\ge 0,\\
&\chi_1\ge\alpha_1+\frac{1}{1-\beta_1}\sum_{k=1}^K\mu_{1,k}p_{1k}^i,\quad i=1,2,\cdots,I,\\
&\sum_{n=1}^N c_nx_n\le\Phi_1,\qquad A_n^1\le x_n\le A_n^2,\qquad \alpha_1,\chi_1\in R^1,\\
&n=1,\cdots,N,\quad k=1,\cdots,K,
\end{aligned}
\]
where \(\Phi_1\) is the budget limit, \(A_n^1\) and \(A_n^2\) are lower and upper bounds on \(x_n\), and \(x^\ast\) denotes the optimal solution.

The second tier is formulated analogously with decision vector \(y=(y_1,\cdots,y_n,\cdots,y_N)^{\mathrm T}\in Y\), loss \(L_2(y,\zeta)\), and candidate densities \(p_2^i(t)\) generating \(\mathrm P_2\) and \(\Lambda_2\), so that
\[
\mathrm{WCVaR}_{\beta_2}(y)=\max_{\lambda_2\in\Lambda_2}\sum_{i=1}^I\lambda_i^2F_{2i}(y,\alpha_2),\qquad
F_{2i}(y,\alpha_2)=\alpha_2+\frac{1}{1-\beta_2}\int_{t\in R^m}\bigl[L_2(y,t)-\alpha_2\bigr]^+p_2^i(t)\,\mathrm dt.
\]
The demand at this tier is \(\zeta=\varphi_1(\xi)=\xi+\theta\), where the disturbances \(\theta_n\) \((n=1,\cdots,N)\) follow \(N(0,\delta^2)\). With parameters \(b_n\), \(g_n\) \((g_n<l_n)\) and \(c_n\), and unit cost
\[
l_n=v_n+\sum_{m=1}^M w_mA_{m*n},
\]
the loss is
\[
L_2(y,\zeta)=\sum_{n=1}^N\Bigl\{(b_n+l_n-g_n)\bigl[y_n-\zeta_n\bigr]^+ +(c_n-l_n)\bigl[\zeta_n-y_n\bigr]^+\Bigr\}.
\]
Using scenarios \(\zeta_{n,k}=\xi_{n,k}+\theta_{n,k}\) and auxiliary variables \(\mu_{2,k}\), \(\nu_{2,k}\), \(\omega_{2,k}\) defined as in the first tier, the analogous linear program is obtained with budget constraint \(\sum_n l_ny_n\le\Phi_2\), bounds \(A_n^3\le y_n\le A_n^4\), and optimal solution \(y^\ast\).

The third tier concerns the quantities \(z=(z_1,\cdots,z_m,\cdots,z_M)^{\mathrm T}\in Z\) with loss \(L_3(z,\gamma)\) and candidate densities \(p_3^i(t)\) generating \(\mathrm P_3\) and \(\Lambda_3\). The demand is \(\gamma=\varphi_2(\xi)=A_{m*n}(\xi+\epsilon)\), where the disturbances \(\epsilon_n\) follow \(N(0,\sigma^2)\). With unit parameters \(d_m\) and \(w_m\), the loss is
\[
L_3(z,\gamma)=\sum_{m=1}^M\Bigl\{d_m\bigl[z_m-\gamma_m\bigr]^+ +w_m\bigl[\gamma_m-z_m\bigr]^+\Bigr\},
\]
with scenarios \(\gamma_{m,k}=\sum_{n=1}^N A_{m*n}(\xi_{n,k}+\epsilon_{n,k})\). The resulting linear program uses budget constraint \(\sum_m d_mz_m\le\Phi_3\), bounds \(A_m^5\le z_m\le A_m^6\), and yields the optimal solution \(z^\ast\) with \(z_m^\ast=\sum_{n=1}^N A_{m*n}u_n^\ast\).

For the coordination of the whole chain, weights \(\pi_1,\pi_2,\pi_3\) with \(\pi_1+\pi_2+\pi_3=1\) combine the three tiers into the objective \(\Theta=\pi_1\chi_1+\pi_2\chi_2+\pi_3\chi_3\), minimized subject to the union of all three tiers' constraints together with the coordination constraints
\[
|x_n-y_n|\le\delta,\qquad |y_n-u_n|\le\sigma,\qquad \delta>0,\ \sigma>0.
\]
In the numerical example, \(\delta=6\), \(\sigma=9\), \(\pi_1=\pi_2=\pi_3=\tfrac13\) and \(\beta_1=\beta_2=\beta_3=95\%\), and the optimal solutions satisfy \(x^\ast=y^\ast=u^\ast\).
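The building block of all three tiers is the scenario-based CVaR linear program obtained from the μ, ν, ω linearization. As a minimal, self-contained illustration, the following Python sketch solves a single-product, single-density version with scipy.optimize.linprog; the cost values and demand scenarios are invented for the example and are not the paper's data:

```python
import numpy as np
from scipy.optimize import linprog

beta = 0.95                               # confidence level
K = 200                                   # number of demand scenarios
rng = np.random.default_rng(0)
demand = rng.normal(100.0, 20.0, K)       # equally likely scenarios
over, under = 2.0, 5.0                    # overage / underage unit costs

# decision variables: (x, alpha, mu_1 .. mu_K)
# objective: alpha + 1/((1-beta)K) * sum_k mu_k
c = np.concatenate([[0.0, 1.0], np.full(K, 1.0 / ((1.0 - beta) * K))])

# constraints: mu_k >= over*(x - xi_k) - alpha
#              mu_k >= under*(xi_k - x) - alpha,  mu_k >= 0
A_ub, b_ub = [], []
for k, xi in enumerate(demand):
    row = np.zeros(K + 2); row[0] = over;   row[1] = -1.0; row[k + 2] = -1.0
    A_ub.append(row); b_ub.append(over * xi)
    row = np.zeros(K + 2); row[0] = -under; row[1] = -1.0; row[k + 2] = -1.0
    A_ub.append(row); b_ub.append(-under * xi)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0, None), (None, None)] + [(0, None)] * K)
x_opt, cvar = res.x[0], res.fun
```

The paper's coordination model stacks three such blocks, adds the budget and bound constraints, and minimizes the weighted sum of the three tiers' risk levels.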
Parks-McClellan optimal FIR filter design - MATLAB firpm - MathWorks España
Parks-McClellan Bandpass Filter
Parks-McClellan Lowpass Filter
b = firpm(n,f,a)
b = firpm(n,f,a,w)
b = firpm(n,f,a,ftype)
b = firpm(n,f,a,lgrid)
[b,err] = firpm(___)
[b,err,res] = firpm(___)
b = firpm(n,f,fresp,w)
b = firpm(n,f,fresp,w,ftype)
b = firpm(n,f,a) returns row vector b containing the n+1 coefficients of an order-n FIR filter. The frequency and amplitude characteristics of the resulting filter match those given by vectors f and a.
b = firpm(n,f,a,w) uses the weights in w to weight the fit in each frequency band.
b = firpm(n,f,a,ftype) uses the filter type specified by ftype.
b = firpm(n,f,a,lgrid) uses the integer lgrid to control the density of the frequency grid.
[b,err] = firpm(___) returns the maximum ripple height in err. You can use this output with any of the previous input syntaxes.
[b,err,res] = firpm(___) returns the frequency response characteristics as a structure res.
b = firpm(n,f,fresp,w) returns an FIR filter whose frequency-amplitude characteristics best approximate the response returned by function handle fresp.
b = firpm(n,f,fresp,w,ftype) designs antisymmetric (odd) filters, where ftype specifies the filter as a differentiator or Hilbert transformer. If you do not specify an ftype, a call is made to fresp to determine the default symmetry property.
Use the Parks-McClellan algorithm to design an FIR bandpass filter of order 17. Specify normalized stopband frequencies of 0.3π and 0.7π rad/sample and normalized passband frequencies of 0.4π and 0.6π rad/sample. Plot the ideal and actual magnitude responses.
f = [0 0.3 0.4 0.6 0.7 1];
a = [0 0 1 1 0 0];
b = firpm(17,f,a);
[h,w] = freqz(b,1,512);
plot(f,a,w/pi,abs(h))
legend('Ideal','firpm Design')
xlabel 'Radian Frequency (\omega/\pi)', ylabel 'Magnitude'
Design a lowpass filter with a 1500 Hz passband cutoff frequency and 2000 Hz stopband cutoff frequency. Specify a sampling frequency of 8000 Hz. Require a maximum stopband amplitude of 0.01 and a maximum passband error (ripple) of 0.001.
Obtain the required filter order, normalized frequency band edges, frequency band amplitudes, and weights using firpmord, then design the filter with firpm.
[n,fo,ao,w] = firpmord([1500 2000],[1 0],[0.001 0.01],8000);
b = firpm(n,fo,ao,w);
Use the Parks-McClellan algorithm to create a 50th-order equiripple FIR bandpass filter to be used with signals sampled at 1 kHz. Specify that the passband spans the frequencies between 200 Hz and 300 Hz and that the transition region on either side of the passband has a width of 50 Hz. Design the filter so that the optimization fit weights the low-frequency stopband with a weight of 3, the passband with a weight of 1, and the high-frequency stopband with a weight of 100. Display the magnitude response of the filter.
Fs = 1000;                       % sampling frequency (Hz)
N = 50;                          % filter order
Fstop1 = 150;  Fpass1 = 200;     % lower transition band: 150-200 Hz
Fpass2 = 300;  Fstop2 = 350;     % upper transition band: 300-350 Hz
Wstop1 = 3;  Wpass = 1;  Wstop2 = 100;
b = firpm(N,[0 Fstop1 Fpass1 Fpass2 Fstop2 Fs/2]/(Fs/2), ...
    [0 0 1 1 0 0],[Wstop1 Wpass Wstop2]);
Desired amplitudes at the points specified in f, specified as a vector. f and a must be the same length. The length must be an even number. The desired amplitude at frequencies between pairs of points (f(k), f(k+1)) for k odd is the line segment connecting the points (f(k), a(k)) and (f(k+1), a(k+1)). The desired amplitude at frequencies between pairs of points (f(k), f(k+1)) for k even is unspecified. The areas between such points are transition regions or regions that are not important for a particular application. Filter type for linear-phase filters with odd symmetry (type III and type IV), specified as either 'hilbert' or 'differentiator':
'hilbert' — The output coefficients in b obey the relation b(k) = –b(n + 2 – k), k = 1, ..., n + 1. This class of filters includes the Hilbert transformer, which has a desired amplitude of 1 across the entire band. For example, h = firpm(30,[0.1 0.9],[1 1],'hilbert'); designs an approximate FIR Hilbert transformer of length 31.
'differentiator' — For nonzero amplitude bands, the filter weighs the error by a factor of 1/f so that the error at low frequencies is much smaller than at high frequencies.
For FIR differentiators, which have an amplitude characteristic proportional to frequency, these filters minimize the maximum relative error (the maximum of the ratio of the error to the desired amplitude). 16 (default) | 1-by-1 cell array with integer value. Controls the density of the frequency grid, which has roughly (lgrid*n)/(2*bw) frequency points, where bw is the fraction of the total frequency band interval [0,1] covered by f. Increasing lgrid often results in filters that more exactly match an equiripple filter, but that take longer to compute. The default value of 16 is the minimum value that should be specified for lgrid. Frequency response, specified as a function handle. The function is called from within firpm with this syntax:
[dh,dw] = fresp(n,f,gf,w)
The arguments are similar to those for firpm: f is the vector of normalized frequency band edges that appear monotonically between 0 and 1, where 1 is the Nyquist frequency. gf is a vector of grid points that have been linearly interpolated over each specified frequency band by firpm. gf determines the frequency grid at which the response function must be evaluated, and contains the same data returned by cfirpm in the fgrid field of the opt structure. w is a vector of real, positive weights, one per band, used during optimization. w is optional in the call to firpm; if not specified, it is set to unity weighting before being passed to fresp. dh and dw are the desired complex frequency response and band weight vectors, respectively, evaluated at each frequency in grid gf. Filter coefficients, returned as a row vector of length n + 1. The coefficients are in increasing order.
err — Maximum ripple height
res — Frequency response characteristics, returned as a structure.
The structure res has the following fields:
res.des — Desired frequency response for each point in res.fgrid
res.H — Actual frequency response for each point in res.fgrid
res.error — Error at each point in res.fgrid (res.des - res.H)
res.iextr — Vector of indices into res.fgrid for extremal frequencies
It is possible for the filter design to fail to converge; if it does, the resulting filter might not be correct. Verify the design by checking the frequency response. If your filter design fails to converge and the resulting filter is not correct, attempt one or more of the following:
Increase the filter order.
Relax the filter design by reducing the attenuation in the stopbands and/or broadening the transition regions.
firpm designs a linear-phase FIR filter using the Parks-McClellan algorithm [2]. The Parks-McClellan algorithm uses the Remez exchange algorithm and Chebyshev approximation theory to design filters with an optimal fit between the desired and actual frequency responses. The filters are optimal in the sense that the maximum error between the desired frequency response and the actual frequency response is minimized. Filters designed this way exhibit an equiripple behavior in their frequency responses and are sometimes called equiripple filters. firpm exhibits discontinuities at the head and tail of its impulse response due to this equiripple nature. These are type I (n even) and type II (n odd) linear-phase filters. Vectors f and a specify the frequency-amplitude characteristics of the filter: f is a vector of pairs of frequency points, specified in the range between 0 and 1, where 1 corresponds to the Nyquist frequency. The frequencies must be in increasing order. Duplicate frequency points are allowed and, in fact, can be used to design a filter exactly the same as those returned by the fir1 and fir2 functions with a rectangular (rectwin) window. a is a vector containing the desired amplitude at the points specified in f.
The desired amplitude function at frequencies between pairs of points (f(k), f(k+1)) for k odd is the line segment connecting the points (f(k), a(k)) and (f(k+1), a(k+1)). The desired amplitude function at frequencies between pairs of points (f(k), f(k+1)) for k even is unspecified. These are transition or "don't care" regions. f and a are the same length. This length must be an even number. The figure below illustrates the relationship between the f and a vectors in defining a desired amplitude response. firpm always uses an even filter order for configurations with even symmetry and a nonzero passband at the Nyquist frequency. The reason for the even filter order is that for impulse responses exhibiting even symmetry and odd orders, the frequency response at the Nyquist frequency is necessarily 0. If you specify an odd-valued n, firpm increments it by 1. firpm designs type I, II, III, and IV linear-phase filters. Type I and type II are the defaults for n even and n odd, respectively, while type III (n even) and type IV (n odd) are specified with 'hilbert' or 'differentiator', respectively, using the ftype argument. The different types of filters have different symmetries and certain constraints on their frequency responses. (See [3] for more details.)
Type I (n even, even symmetry): b(k) = b(n + 2 - k), k = 1, ..., n + 1
Type II (n odd, even symmetry): b(k) = b(n + 2 - k), k = 1, ..., n + 1. firpm increments the filter order by 1 if you attempt to construct a type II filter with a nonzero passband at the Nyquist frequency.
Type III (n even, odd symmetry): b(k) = -b(n + 2 - k), k = 1, ..., n + 1
Type IV (n odd, odd symmetry): b(k) = -b(n + 2 - k), k = 1, ..., n + 1
You can also write a function that defines the desired frequency response. The predefined frequency response function handle for firpm is @firpmfrf, which designs a linear-phase FIR filter. b = firpm(n,f,a,w) is equivalent to b = firpm(n,f,{@firpmfrf,a},w), where @firpmfrf is the predefined frequency response function handle for firpm.
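The even-order rule above can be checked directly: any impulse response with even symmetry and odd order n has a frequency response that vanishes at the Nyquist frequency. A minimal pure-Python sketch (the coefficient values are arbitrary, chosen only to satisfy the type II symmetry b(k) = b(n + 2 - k)):

```python
import cmath

def freq_response(b, w):
    """Evaluate H(e^{jw}) = sum_k b[k] * e^{-jwk} for FIR coefficients b."""
    return sum(bk * cmath.exp(-1j * w * k) for k, bk in enumerate(b))

# Type II: odd order n = 5 (6 coefficients) with even symmetry
# b(k) = b(n + 2 - k) in 1-based indexing, i.e. b[k] == b[n - k] here.
b = [1.0, 2.0, 3.0, 3.0, 2.0, 1.0]

h_nyquist = freq_response(b, cmath.pi)   # response at the Nyquist frequency
print(abs(h_nyquist))                    # ~0 (within floating-point error)
```

At ω = π the symmetric terms pair up with opposite signs and cancel exactly, which is why firpm bumps an odd n to n + 1 whenever the desired response is nonzero at Nyquist.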
If desired, you can write your own response function. Use help private/firpmfrf and see Create Function Handle for more information. [3] Oppenheim, Alan V., Ronald W. Schafer, and John R. Buck. Discrete-Time Signal Processing. Upper Saddle River, NJ: Prentice Hall, 1999, p. 486. [4] Parks, Thomas W., and C. Sidney Burrus. Digital Filter Design. New York: John Wiley & Sons, 1987, p. 83. [5] Rabiner, Lawrence R., James H. McClellan, and Thomas W. Parks. "FIR Digital Filter Design Techniques Using Weighted Chebyshev Approximation." Proceedings of the IEEE®. Vol. 63, Number 4, 1975, pp. 595–610. butter | cheby1 | cheby2 | cfirpm | ellip | fir1 | fir2 | fircls | fircls1 | firls | firpmord | rcosdesign | yulewalk
ratpolytocoeff - Maple Help
ratpolytocoeff — compute the nth coefficient of a rational function
ratpolytocoeff(f, x, n)
x — name; indeterminate in f
n — name; index of the Taylor coefficients
The ratpolytocoeff(f, x, n) command computes the expression for the nth coefficient of the Taylor expansion of f about the origin, as a function of n.
\mathrm{with}\left(\mathrm{gfun}\right):
\mathrm{ratpolytocoeff}\left(\frac{1}{1-x-x^{2}},x,n\right)
\sum_{\_\alpha \,=\, \mathrm{RootOf}\left(\_Z^{2}+\_Z-1\right)} \left(-\frac{\left(-\frac{2\,\_\alpha}{5}-\frac{1}{5}\right)\_\alpha^{-n}}{\_\alpha}\right)
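ratpolytocoeff returns a closed form in n; the same coefficients can also be generated numerically from the linear recurrence implied by the denominator. A pure-Python sketch (the helper name taylor_coeffs is hypothetical, not part of gfun):

```python
from fractions import Fraction

def taylor_coeffs(p, q, count):
    """Taylor coefficients about 0 of p(x)/q(x), where p and q are
    coefficient lists [c0, c1, ...] and q[0] != 0. Matching powers of x
    in q(x) * (sum_k c_k x^k) = p(x) gives each c_n in turn."""
    c = []
    for n in range(count):
        s = Fraction(p[n]) if n < len(p) else Fraction(0)
        for j in range(1, min(n, len(q) - 1) + 1):
            s -= q[j] * c[n - j]
        c.append(s / q[0])
    return c

# 1/(1 - x - x^2): the coefficients are the Fibonacci numbers
print([int(ck) for ck in taylor_coeffs([1], [1, -1, -1], 8)])
# [1, 1, 2, 3, 5, 8, 13, 21]
```

The summation over RootOf(_Z² + _Z − 1) in the Maple output is the partial-fraction closed form of exactly this sequence (Binet-style, in terms of the roots of the reversed denominator).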
Solve system of linear equations — minimum residual method - MATLAB minres - MathWorks India
With a symmetric positive definite preconditioner M = M_1 M_2, minres effectively solves the preconditioned system
H^{-1} A H^{-T} y = H^{-1} b, \qquad y = H^{T} x, \qquad H = M^{1/2} = \left(M_1 M_2\right)^{1/2}
for y and then recovers x. For an iterate x of Ax = b, the relative residual is
\frac{\lVert b - Ax \rVert}{\lVert b \rVert}
minres minimizes the residual norm \lVert b - Ax \rVert at each iteration (rather than the normal-equations residual \lVert A^{T}A\,x - A^{T}b \rVert), and the preconditioner can be supplied in factored form M = L\,L^{T}.
The example system Ax = b uses a 21-by-21 symmetric tridiagonal matrix with diagonal entries 10, 9, 8, ..., decreasing toward the center and rising back to 10, and ones on the sub- and superdiagonals:
Ax = \begin{bmatrix} 10 & 1 & & & \\ 1 & 9 & 1 & & \\ & 1 & 8 & \ddots & \\ & & \ddots & \ddots & 1 \\ & & & 1 & 10 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_{21} \end{bmatrix} = \begin{bmatrix} 10x_1 + x_2 \\ x_1 + 9x_2 + x_3 \\ x_2 + 8x_3 + x_4 \\ \vdots \\ x_{19} + 9x_{20} + x_{21} \\ x_{20} + 10x_{21} \end{bmatrix}
Each entry of Ax is a sum of three shifted terms, so the product can be written as
Ax = \begin{bmatrix} 0 \\ x_1 \\ \vdots \\ x_{20} \end{bmatrix} + \begin{bmatrix} 10x_1 \\ 9x_2 \\ \vdots \\ 10x_{21} \end{bmatrix} + \begin{bmatrix} x_2 \\ \vdots \\ x_{21} \\ 0 \end{bmatrix}
for the system Ax = b.
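The shift-and-add decomposition of Ax above can be verified in a few lines. A pure-Python sketch, assuming the diagonal is the Wilkinson-style pattern 10, 9, ..., 1, 0, 1, ..., 9, 10 suggested by the example:

```python
# Assumed diagonal from the example: 10, 9, ..., 1, 0, 1, ..., 9, 10,
# with ones on the sub- and superdiagonals (21-by-21, symmetric).
d = [abs(k - 10) for k in range(21)]

def matvec(x):
    """A*x computed as the sum of three shifted vectors, as in the text:
    A*x = [0; x(1:end-1)] + d.*x + [x(2:end); 0]."""
    n = len(x)
    return [(x[i - 1] if i > 0 else 0)
            + d[i] * x[i]
            + (x[i + 1] if i < n - 1 else 0) for i in range(n)]

x = [1.0] * 21
b = matvec(x)
print(b[0], b[1], b[-1])   # 11.0 11.0 11.0  (10+1, 1+9+1, 1+10)
```

Because the matrix is symmetric, this is exactly the kind of system minres is intended for; an iterative solver only ever needs such a matrix-vector product, not the explicit matrix.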
Diesel cycle — Wikipedia Republished // WIKI 2
This article is about the thermodynamic cycle. For diesel motorcycles, see diesel motorcycle.
The Diesel cycle is a combustion process of a reciprocating internal combustion engine. In it, fuel is ignited by heat generated during the compression of air in the combustion chamber, into which fuel is then injected. This is in contrast to igniting the fuel-air mixture with a spark plug, as in the Otto cycle (four-stroke/petrol) engine. Diesel engines are used in aircraft, automobiles, power generation, diesel–electric locomotives, and both surface ships and submarines. The Diesel cycle is assumed to have constant pressure during the initial part of the combustion phase ( {\displaystyle V_{2}} to {\displaystyle V_{3}} in the diagram below). This is an idealized mathematical model: real physical diesels do have an increase in pressure during this period, but it is less pronounced than in the Otto cycle. In contrast, the idealized Otto cycle of a gasoline engine approximates a constant volume process during that phase.
1 Idealized Diesel cycle
1.1 Maximum thermal efficiency
1.2 Comparing efficiency to Otto cycle
2.1 Diesel engines
2.2 Other internal combustion engines without spark plugs
Idealized Diesel cycle
p-V Diagram for the ideal Diesel cycle. The cycle follows the numbers 1-4 in clockwise direction.
The image shows a p-V diagram for the ideal Diesel cycle, where {\displaystyle p} is pressure and V the volume, or {\displaystyle v} the specific volume if the process is placed on a unit mass basis. The idealized Diesel cycle assumes an ideal gas, ignores combustion chemistry and the exhaust and recharge procedures, and simply follows four distinct processes:
1→2 : isentropic compression of the fluid (blue)
2→3 : reversible constant pressure heating (red)
3→4 : isentropic expansion (yellow)
4→1 : reversible constant volume cooling (green)[1]
The Diesel engine is a heat engine: it converts heat into work. During the bottom isentropic process (blue), energy is transferred into the system in the form of work {\displaystyle W_{in}}, but by definition (isentropic) no energy is transferred into or out of the system in the form of heat. During the constant pressure (red, isobaric) process, energy enters the system as heat {\displaystyle Q_{in}}. During the top isentropic process (yellow), energy is transferred out of the system in the form of work {\displaystyle W_{out}}, but by definition (isentropic) no energy is transferred into or out of the system in the form of heat. During the constant volume (green, isochoric) process, some of the energy flows out of the system as heat through the right-hand depressurizing process {\displaystyle Q_{out}}. The work that leaves the system is equal to the work that enters the system plus the difference between the heat added to the system and the heat that leaves the system; in other words, the net gain of work is equal to the difference between the heat added to the system and the heat that leaves the system.
Work in ( {\displaystyle W_{in}} ) is done by the piston compressing the air (the system)
Heat in ( {\displaystyle Q_{in}} ) is provided by the combustion of the fuel
Work out ( {\displaystyle W_{out}} ) is done by the working fluid expanding and pushing a piston (this produces usable work)
Heat out ( {\displaystyle Q_{out}} ) is done by venting the air
Net work produced = {\displaystyle Q_{in}} − {\displaystyle Q_{out}}
The net work produced is also represented by the area enclosed by the cycle on the p-V diagram. The net work is produced per cycle and is also called the useful work, as it can be turned to other useful types of energy and propel a vehicle (kinetic energy) or produce electrical energy. The summation of many such cycles per unit of time is called the developed power. The {\displaystyle W_{out}} is also called the gross work, some of which is used in the next cycle of the engine to compress the next charge of air. The maximum thermal efficiency of a Diesel cycle is dependent on the compression ratio and the cut-off ratio.
It has the following formula under cold air standard analysis:
{\displaystyle \eta _{th}=1-{\frac {1}{r^{\gamma -1}}}\left({\frac {\alpha ^{\gamma }-1}{\gamma (\alpha -1)}}\right)}
where
{\displaystyle \eta _{th}} is the thermal efficiency
{\displaystyle \alpha } is the cut-off ratio {\displaystyle {\frac {V_{3}}{V_{2}}}} (ratio between the end and start volume for the combustion phase)
r is the compression ratio {\displaystyle {\frac {V_{1}}{V_{2}}}}
{\displaystyle \gamma } is the ratio of specific heats (Cp/Cv)[2]
The cut-off ratio can be expressed in terms of temperature as shown below:
{\displaystyle {\frac {T_{2}}{T_{1}}}={\left({\frac {V_{1}}{V_{2}}}\right)^{\gamma -1}}=r^{\gamma -1}}
{\displaystyle \displaystyle {T_{2}}={T_{1}}r^{\gamma -1}}
{\displaystyle {\frac {V_{3}}{V_{2}}}={\frac {T_{3}}{T_{2}}}}
{\displaystyle \alpha =\left({\frac {T_{3}}{T_{1}}}\right)\left({\frac {1}{r^{\gamma -1}}}\right)}
{\displaystyle T_{3}} can be approximated by the flame temperature of the fuel used, which in turn can be approximated by the adiabatic flame temperature of the fuel at the corresponding air-to-fuel ratio and compression pressure {\displaystyle p_{3}}. {\displaystyle T_{1}} can be approximated by the inlet air temperature. This formula gives only the ideal thermal efficiency; the actual thermal efficiency will be significantly lower due to heat and friction losses. The formula is more complex than the Otto cycle (petrol/gasoline engine) relation, which is:
{\displaystyle \eta _{otto,th}=1-{\frac {1}{r^{\gamma -1}}}}
The additional complexity of the Diesel formula arises because the heat addition occurs at constant pressure while the heat rejection occurs at constant volume. The Otto cycle, by comparison, has both the heat addition and rejection at constant volume. Comparing efficiency to Otto cycle Comparing the two formulae, it can be seen that for a given compression ratio (r), the ideal Otto cycle will be more efficient.
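A quick numerical check of the two efficiency formulas (pure Python; the sample values r = 18, α = 2, γ = 1.4 are assumed for illustration only):

```python
def otto_eff(r, g=1.4):
    """Ideal Otto-cycle thermal efficiency for compression ratio r."""
    return 1.0 - 1.0 / r ** (g - 1.0)

def diesel_eff(r, a, g=1.4):
    """Ideal Diesel-cycle thermal efficiency for compression ratio r
    and cut-off ratio a (alpha in the text)."""
    return 1.0 - (1.0 / r ** (g - 1.0)) * ((a ** g - 1.0) / (g * (a - 1.0)))

r, a = 18.0, 2.0   # assumed sample values
print(round(otto_eff(r), 3), round(diesel_eff(r, a), 3))   # 0.685 0.632
```

Since the bracketed factor (α^γ − 1)/(γ(α − 1)) exceeds 1 for α > 1 and tends to 1 as α → 1, the Diesel value is always at or below the Otto value at the same compression ratio, matching the comparison above.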
However, a real diesel engine will be more efficient overall since it will have the ability to operate at higher compression ratios. If a petrol engine were to have the same compression ratio, then knocking (self-ignition) would occur and this would severely reduce the efficiency, whereas in a diesel engine, self-ignition is the desired behavior. Additionally, both of these cycles are only idealizations, and the actual behavior does not divide as clearly or sharply. Furthermore, the ideal Otto cycle formula stated above does not include throttling losses, which do not apply to diesel engines.
Main article: Diesel engine
Diesel engines have the lowest specific fuel consumption of any large internal combustion engine employing a single cycle, 0.26 lb/hp·h (0.16 kg/kWh) for very large marine engines (combined cycle power plants are more efficient, but employ two engines rather than one). Two-stroke diesels with high pressure forced induction, particularly turbocharging, make up a large percentage of the very largest diesel engines. In North America, diesel engines are primarily used in large trucks, where the low-stress, high-efficiency cycle leads to much longer engine life and lower operational costs. These advantages also make the diesel engine ideal for use in the heavy-haul railroad and earthmoving environments.
Other internal combustion engines without spark plugs
Many model airplanes use very simple "glow" and "diesel" engines. Glow engines use glow plugs. "Diesel" model airplane engines have variable compression ratios. Both types depend on special fuels. Some 19th-century or earlier experimental engines used external flames, exposed by valves, for ignition, but this becomes less attractive with increasing compression. (It was the research of Nicolas Léonard Sadi Carnot that established the thermodynamic value of compression.) A historical implication of this is that the diesel engine could have been invented without the aid of electricity.
See the development of the hot bulb engine and indirect injection for historical significance.
^ Eastop & McConkey 1993, Applied Thermodynamics for Engineering Technologists, Pearson Education Limited, Fifth Edition, p. 137.
^ "The Diesel Engine".
Making Quality Measurements - MATLAB & Simulink - MathWorks 日本
How Are Range, Gain, and Measurement Precision Related?
Removing Internal Noise
Removing External Noise
Matching the Sensor Range and A/D Converter Range
How Fast Should a Signal Be Sampled?
How Can Aliasing Be Eliminated?
For most data acquisition applications, you need to measure the signal produced by a sensor at a specific rate. In many cases, the sensor signal is a voltage level that is proportional to the physical phenomena of interest (for example, temperature, pressure, or acceleration). If you are measuring slowly changing (quasi-static) phenomena like temperature, a slow sampling rate usually suffices. If you are measuring rapidly changing (dynamic) phenomena like vibration or acoustic measurements, a fast sampling rate is required. To make high-quality measurements, you should follow these rules:
Maximize the precision and accuracy
Match the sensor range to the A/D range
Whenever you acquire measured data, you should make every effort to maximize its accuracy and precision. The quality of your measurement depends on the accuracy and precision of the entire data acquisition system, and can be limited by such factors as board resolution or environmental noise. In general terms, the accuracy of a measurement determines how close the measurement comes to the true value. Therefore, it indicates the correctness of the result. The precision of a measurement reflects how exactly the result is determined without reference to what the result means. The relative precision indicates the uncertainty in a measurement as a fraction of the result. For example, suppose you measure a table top with a meter stick and find its length to be 1.502 meters. This number indicates that the meter stick (and your eyes) can resolve distances down to at least a millimeter. Under most circumstances, this is considered to be a fairly precise measurement with a relative precision of around 1/1500.
However, suppose you perform the measurement again and obtain a result of 1.510 meters. After careful consideration, you discover that your initial technique for reading the meter stick was faulty because you did not read it from directly above. Therefore, the first measurement was not accurate. Precision and accuracy are illustrated below. For analog input subsystems, accuracy is usually limited by calibration errors while precision is usually limited by the A/D converter. Accuracy and precision are discussed in more detail below. Accuracy is defined as the agreement between a measured quantity and the true value of that quantity. Every component that appears in the analog signal path affects system accuracy and performance. The overall system accuracy is given by the component with the worst accuracy. For data acquisition hardware, accuracy is often expressed as a percent or a fraction of the least significant bit (LSB). Under ideal circumstances, board accuracy is typically ±0.5 LSB. Therefore, a 12 bit converter has only 11 usable bits. Many boards include a programmable gain amplifier, which is located just before the converter input. To prevent system accuracy from being degraded, the accuracy and linearity of the gain must be better than that of the A/D converter. The specified accuracy of a board is also affected by the sampling rate and the settling time of the amplifier. The settling time is defined as the time required for the instrumentation amplifier to settle to a specified accuracy. To maintain full accuracy, the amplifier output must settle to a level given by the magnitude of 0.5 LSB before the next conversion, and is on the order of several tenths of a millisecond for most boards. Settling time is a function of sampling rate and gain value. High rate, high gain configurations require longer settling times while low rate, low gain configurations require shorter settling times. 
The number of bits used to represent an analog signal determines the precision (resolution) of the device. The more bits provided by your board, the more precise your measurement will be. A high precision, high resolution device divides the input range into more divisions, thereby allowing a smaller detectable voltage value. A low precision, low resolution device divides the input range into fewer divisions, thereby increasing the detectable voltage value. The overall precision of your data acquisition system is usually determined by the A/D converter, and is specified by the number of bits used to represent the analog signal. Most boards use 12 or 16 bits. The precision of your measurement is given by:
precision = \text{one part in } 2^{\text{number of bits}}
The precision in volts is given by:
precision = \frac{\text{voltage range}}{2^{\text{number of bits}}}
For example, if you are using a 12 bit A/D converter configured for a 10 volt range, then
precision = \frac{10 \text{ volts}}{2^{12}}
This means that the converter can detect voltage differences at the level of 0.00244 volts (2.44 mV). When you configure the input range and gain of your analog input subsystem, the end result should maximize the measurement resolution and minimize the chance of an overrange condition. The actual input range is given by the formula:
\text{actual input range} = \frac{\text{input range}}{\text{gain}}
The relationship between gain, actual input range, and precision for a unipolar and bipolar signal having an input range of 10 V is shown in the table below. (Table: Relationship Between Input Range, Gain, and Precision — columns: Gain, Actual Input Range, Precision (12-Bit A/D).) As shown in the table, the gain affects the precision of your measurement. If you select a gain that decreases the actual input range, then the precision increases. Conversely, if you select a gain that increases the actual input range, then the precision decreases.
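The precision formulas can be sketched in a few lines of Python (the helper name precision_volts is illustrative, not part of any toolbox):

```python
def precision_volts(input_range, bits, gain=1.0):
    """Smallest detectable voltage step for an A/D converter:
    (input range / gain) divided by 2^bits."""
    actual_range = input_range / gain
    return actual_range / 2 ** bits

# 12-bit converter on a 10 V range, as in the text
print(round(precision_volts(10.0, 12), 5))           # 0.00244 V (2.44 mV)
# a gain of 10 shrinks the actual input range tenfold and sharpens precision
print(round(precision_volts(10.0, 12, gain=10), 6))  # 0.000244 V
```
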
This is because the actual input range varies but the number of bits used by the A/D converter remains fixed. With Data Acquisition Toolbox™ software, you do not have to specify the range and gain. Instead, you simply specify the actual input range desired. Noise is considered to be any measurement that is not part of the phenomena of interest. Noise can be generated within the electrical components of the input amplifier (internal noise), or it can be added to the signal as it travels down the input wires to the amplifier (external noise). Techniques that you can use to reduce the effects of noise are described below. Internal noise arises from thermal effects in the amplifier. Amplifiers typically generate a few microvolts of internal noise, which limits the resolution of the signal to this level. The amount of noise added to the signal depends on the bandwidth of the input amplifier. To reduce internal noise, you should select an amplifier with a bandwidth that closely matches the bandwidth of the input signal. External noise arises from many sources. For example, many data acquisition experiments are subject to 60 Hz noise generated by AC power circuits. This type of noise is referred to as pick-up or hum, and appears as a sinusoidal interference signal in the measurement circuit. Another common interference source is fluorescent lighting. These lights generate an arc at twice the power line frequency (120 Hz). Noise is added to the acquisition circuit from these external sources because the signal leads act as aerials picking up environmental electrical activity. Much of this noise is common to both signal wires. To remove most of this common-mode voltage, you should:
Configure the input channels in differential mode. Refer to Channel Configuration for more information about channel configuration.
Use signal wires that are twisted together rather than separate.
Keep the signal wires as short as possible.
Keep the signal wires as far away as possible from environmental electrical activity. Filtering also reduces signal noise. For many data acquisition applications, a low-pass filter is beneficial. As the name suggests, a low-pass filter passes the lower frequency components but attenuates the higher frequency components. The cut-off frequency of the filter must be compatible with the frequencies present in the signal of interest and the sampling rate used for the A/D conversion. A low-pass filter that's used to prevent higher frequencies from introducing distortion into the digitized signal is known as an antialiasing filter if the cut-off occurs at the Nyquist frequency. That is, the filter removes frequencies greater than one-half the sampling frequency. These filters generally have a sharper cut-off than the normal low-pass filter used to condition a signal. Antialiasing filters are specified according to the sampling rate of the system and there must be one filter per input signal. When sensor data is digitized by an A/D converter, you must be aware of these two issues: The expected range of the data produced by your sensor. This range depends on the physical phenomena you are measuring and the output range of the sensor. The range of your A/D converter. For many devices, the hardware range is specified by the gain and polarity. You should select the sensor and hardware ranges such that the maximum precision is obtained, and the full dynamic range of the input signal is covered. For example, suppose you are using a microphone with a dynamic range of 20 dB to 140 dB and an output sensitivity of 50 mV/Pa. If you are measuring street noise in your application, then you might expect that the sound level never exceeds 80 dB, which corresponds to a sound pressure magnitude of 200 mPa and a voltage output from the microphone of 10 mV. 
Under these conditions, you should set the input range of your data acquisition card for a maximum signal amplitude of 10 mV, or a little more. Whenever a continuous signal is sampled, some information is lost. The key objective is to sample at a rate such that the signal of interest is well characterized and the amount of information lost is minimized. If you sample at a rate that is too slow, then signal aliasing can occur. Aliasing can occur for both rapidly varying signals and slowly varying signals. For example, suppose you are measuring temperature once a minute. If your acquisition system is picking up a 60-Hz hum from an AC power supply, then that hum will appear as a constant noise level if you are sampling at 30 Hz. Aliasing occurs when the sampled signal contains frequency components greater than one-half the sampling rate. The frequency components could originate from the signal of interest, in which case you are undersampling and should increase the sampling rate. The frequency components could also originate from noise, in which case you might need to condition the signal using a filter. The rule used to prevent aliasing is given by the Nyquist theorem, which states that an analog signal can be uniquely reconstructed, without error, from samples taken at equal time intervals, provided the sampling rate is equal to or greater than twice the highest frequency component in the analog signal. A frequency of one-half the sampling rate is called the Nyquist frequency. However, if your input signal is corrupted by noise, then aliasing can still occur. For example, suppose you configure your A/D converter to sample at a rate of 4 samples per second (4 S/s or 4 Hz), and the signal of interest is a 1 Hz sine wave. Because the signal frequency is one-fourth the sampling rate, according to the Nyquist theorem it should be completely characterized. However, if a 5 Hz sine wave is also present, then these two signals cannot be distinguished.
In other words, the 1 Hz sine wave produces exactly the same samples as the 5 Hz sine wave when the sampling rate is 4 S/s. The following diagram illustrates this condition. In a real-world data acquisition environment, you might need to condition the signal by filtering out the high frequency components. Even though the samples appear to represent a sine wave with a frequency of one-fourth the sampling rate, the actual signal could be any sine wave with a frequency of

(n ± 0.25) × (sampling rate),

where n is zero or any positive integer. For this example, the actual signal could be at a frequency of 3 Hz, 5 Hz, 7 Hz, 9 Hz, and so on. Each of these frequencies is said to alias to 0.25 × (sampling rate): aliasing occurs when one frequency assumes the identity of another frequency. If you sample the input signal at only twice the highest frequency component, the signal is uniquely characterized in principle, but the samples do not trace the waveform very closely. As shown below, to get an accurate picture of the waveform, you need a sampling rate of roughly 10 to 20 times the highest frequency. As shown in the top figure, the low sampling rate produces a sampled signal that appears to be a triangular waveform. As shown in the bottom figure, a higher fidelity sampled signal is produced when the sampling rate is higher. In the latter case, the sampled signal actually looks like a sine wave. The primary considerations involved in antialiasing are the sampling rate of the A/D converter and the frequencies present in the sampled data. To eliminate aliasing, you must:

- Establish the useful bandwidth of the measurement.

- Select a sensor with sufficient bandwidth.

- Select a low-pass antialiasing analog filter that can eliminate all frequencies exceeding this bandwidth.

- Sample the data at a rate at least twice that of the filter's upper cutoff frequency.
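The 1 Hz / 5 Hz ambiguity at 4 S/s is easy to demonstrate numerically; a small sketch (my own, not part of the original text):

```python
import numpy as np

fs = 4.0                     # sampling rate, S/s
t = np.arange(8) / fs        # eight sample instants (2 s of data)
s1 = np.sin(2 * np.pi * 1.0 * t)   # 1 Hz signal of interest
s5 = np.sin(2 * np.pi * 5.0 * t)   # 5 Hz interferer: (n + 0.25)*fs with n = 1

# Both sinusoids produce identical sample values, so after sampling
# they are indistinguishable -- the 5 Hz tone aliases onto 1 Hz.
print(np.allclose(s1, s5))   # → True
```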
How to define quantum Turing machines?

In quantum computation, what is the equivalent model of a Turing machine? It is quite clear to me how quantum circuits can be constructed out of quantum gates, but how can we define a quantum Turing machine (QTM) that can actually benefit from quantum effects, namely, perform on high-dimensional systems?

Q=\{q_0,q_1,...\} – a finite set of states, with q_0 the initial state. \Sigma=\{\sigma_0,\sigma_1,...\} and \Gamma=\{\gamma_0,...\} – the input and working (tape) alphabets. There are an infinite tape and a single "head". However, when defining the transition function, one should recall that any quantum computation must be reversible. Recall that a configuration of a TM is a tuple C=(q,T,i) denoting that the TM is in state q\in Q, the tape contains T\in \Gamma^*, and the head points to the i-th cell of the tape. Since, at any given time, the tape contains only a finite number of non-blank cells, we define the (quantum) state of the QTM as a unit vector in the Hilbert space \mathcal{H} spanned by the configuration space Q\times\Gamma^*\times\mathbb{Z}. The specific configuration C=(q,T,i) is represented as the state |C\rangle=|q\rangle|T\rangle|i\rangle. (Remark: therefore, every cell of the tape carries a |\Gamma|-dimensional Hilbert space.) The QTM is initialized to the state |\psi(0)\rangle = |q_0\rangle |T_0\rangle |1\rangle, where T_0\in \Gamma^* is the concatenation of the input x\in\Sigma^* with as many "blanks" as needed (there is a subtlety here in determining the maximal length, but I ignore it). At each time step, the state of the QTM evolves according to some unitary U: |\psi(i+1)\rangle=U|\psi(i)\rangle. Note that the state at any time n is given by |\psi(n)\rangle = U^n|\psi(0)\rangle. U can be any unitary that "changes" the tape only where the head is located and moves the head one step to the right or left.
That is, \langle q',T',i'|U|q,T,i\rangle is zero unless i'=i\pm 1 and T' differs from T only at position i. At the end of the computation (when the QTM reaches a final state q_f), the tape is measured (in, say, the computational basis). The interesting thing to notice is that at each step the QTM's state is a superposition of possible configurations, which gives the QTM its "quantum" advantage. The answer is based on Masanao Ozawa, On the Halting Problem for Quantum Turing Machines. See also David Deutsch, Quantum theory, the Church-Turing principle and the universal quantum computer.
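The reversibility constraint can be illustrated with a toy example (my own sketch, not from the answer above): any transition rule that is a bijection on configurations yields a permutation matrix on the configuration basis, and a permutation matrix is unitary. Here is a classical-reversible special case with a two-cell circular tape (the wrap-around is an assumption made to keep the configuration space finite):

```python
import numpy as np

# Toy configuration space: state q in {0,1}, two tape cells over {0,1},
# head position i in {0,1}.
configs = [(q, t0, t1, i) for q in (0, 1) for t0 in (0, 1)
           for t1 in (0, 1) for i in (0, 1)]
index = {c: k for k, c in enumerate(configs)}

def step(q, t0, t1, i):
    """Flip the scanned cell and move the head right (wrapping): a bijection."""
    tape = [t0, t1]
    tape[i] ^= 1
    return (q, tape[0], tape[1], (i + 1) % 2)

# Build the transition operator in the configuration basis.
U = np.zeros((len(configs), len(configs)))
for c in configs:
    U[index[step(*c)], index[c]] = 1.0

# A reversible rule gives a permutation matrix, which satisfies U U^T = I.
print(np.allclose(U @ U.T, np.eye(len(configs))))   # → True
```

A genuinely quantum U would instead send each basis configuration to a superposition of configurations, subject to the same locality constraint, while remaining unitary.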
Add - compute a linear combination of Matrices, Vectors and scalars

Add(A, B, c1, c2, ip, options)

The Add(A, B) function, where A and B are either both Matrices or both Vectors, computes the elementwise sum of A and B. The special case of the sum of a scalar and a Matrix is described below. Any other combination for the types of A and B results in an error. The default values of c1 and c2 are 1. If A and B are both Matrices or both Vectors, Add(A, B, c1, c2) computes the sum c1 A + c2 B. If A is a scalar and B is a Matrix, then Add(A, B, c1, c2) computes the sum c1 ScalarMatrix(A, op(1, B)) + c2 B. If A is a Matrix and B is a scalar, Add(A, B, c1, c2) returns the sum c1 A + c2 ScalarMatrix(B, op(1, A)). If A is a scalar and B is a scalar, Add(A, B, c1, c2) returns the sum c1 A + c2 B. This function is part of the LinearAlgebra package, and so it can be used in the form Add(..) only after executing the command with(LinearAlgebra). However, it can always be accessed through the long form of the command by using LinearAlgebra[Add](..). Note: This routine uses the types of the first two parameters to select between the MatrixAdd and VectorAdd LinearAlgebra routines to do the actual computation.
\mathrm{with}⁡\left(\mathrm{LinearAlgebra}\right): \mathrm{Ax}≔〈1.00004,1.99987,-0.00012〉: b≔〈1.,2.,0.〉: \mathrm{Add}⁡\left(\mathrm{Ax},b,1,-1\right) [\begin{array}{c}\textcolor[rgb]{0,0,1}{0.0000400000000000400}\\ \textcolor[rgb]{0,0,1}{-0.000129999999999963}\\ \textcolor[rgb]{0,0,1}{-0.000120000000000000}\end{array}] M≔〈〈m,o〉|〈n,p〉〉: \mathrm{Add}⁡\left(M,-\mathrm{\lambda },\mathrm{inplace}\right) [\begin{array}{cc}\textcolor[rgb]{0,0,1}{m}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{\mathrm{\lambda }}& \textcolor[rgb]{0,0,1}{n}\\ \textcolor[rgb]{0,0,1}{o}& \textcolor[rgb]{0,0,1}{p}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{\mathrm{\lambda }}\end{array}] M [\begin{array}{cc}\textcolor[rgb]{0,0,1}{m}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{\mathrm{\lambda }}& \textcolor[rgb]{0,0,1}{n}\\ \textcolor[rgb]{0,0,1}{o}& \textcolor[rgb]{0,0,1}{p}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{\mathrm{\lambda }}\end{array}]
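A NumPy analogue of these calling sequences behaves the same way (a sketch; `add` here is a hypothetical helper, not part of Maple or NumPy). Note in particular the scalar-plus-Matrix case, which promotes the scalar to scalar × identity, mirroring Maple's ScalarMatrix rule:

```python
import numpy as np

def add(A, B, c1=1, c2=1):
    """Compute c1*A + c2*B elementwise; a scalar paired with a square
    matrix is promoted to scalar * identity (Maple's ScalarMatrix rule)."""
    A, B = np.asarray(A), np.asarray(B)
    if A.ndim == 0 and B.ndim == 2:
        A = A * np.eye(B.shape[0])
    elif B.ndim == 0 and A.ndim == 2:
        B = B * np.eye(A.shape[0])
    return c1 * A + c2 * B

# The residual example above: Add(Ax, b, 1, -1) computes Ax - b.
Ax = np.array([1.00004, 1.99987, -0.00012])
b = np.array([1.0, 2.0, 0.0])
residual = add(Ax, b, 1, -1)

# The scalar example: adding a scalar affects only the diagonal.
shifted = add(np.array([[1.0, 2.0], [3.0, 4.0]]), 5.0)
print(residual)
print(shifted)
```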
The expected value of the tail of a distribution

The expected value of a random variable is essentially a weighted mean over all possible values. You can compute it by summing (or integrating) a probability-weighted quantity over all possible values of the random variable. The expected value is a measure of the "center" of a probability distribution. You can generalize this idea. Instead of using the entire distribution, suppose you want to find the "center" of only a portion of the distribution. For the central portion of a distribution, this process is similar to the trimmed mean. (The trimmed mean discards data in the tails of a distribution and averages the remaining values.) For the tails of a distribution, a natural way to compute the expected value is to sum (or integrate) the weighted quantity x*pdf(x) over the tail of the distribution. The graph to the right illustrates this idea for the exponential distribution. The left and right tails (defined by the 20th and 80th percentiles, respectively) are shaded, and the expected value in each tail is shown by using a vertical reference line. This article shows how to compute the expected value of the tail of a probability distribution. It also shows how to estimate this quantity for a data sample. Why is this important? Well, this idea appears in some papers about robust and unbiased estimates of the skewness and kurtosis of a distribution (Hogg, 1974; Bono, et al., 2020). In estimating skewness and kurtosis, it is important to estimate the length of the tails of a distribution. Because the tails of many distributions are infinite in length, you need some alternative definition of "tail length" that leads to finite quantities. One approach is to truncate the tails, such as at the 5th percentile on the left and the 95th percentile on the right. An alternative approach is to use the expected value of the tails, as shown in this article.
The expected value in a tail

Suppose you have a distribution with density function f. You can define the tail distribution as a truncated distribution on the interval (a,b), where possibly a = -∞ or b = ∞. To get a proper density, you need to divide by the area of the tail, as follows:

g(x) = f(x) / \int_a^b f(x) \,dx

If F(x) is the cumulative distribution function, the denominator is simply the expression F(b) – F(a). Therefore, the expected value for the truncated distribution on (a,b) is

EV = \int_a^b x g(x) \,dx = (\int_a^b x f(x) \,dx) / (F(b) - F(a))

There is no standard definition for the "tail" of a distribution, but one definition is to use symmetric quantiles of the distribution to define the tails. For a probability p, let qL be the pth quantile and qU be the (1-p)th quantile. You can define the left tail to be the portion of the distribution for which X ≤ qL and the right tail to be the portion for which X ≥ qU. Then the expected value of the left tail is

E_L = (\int_{-\infty}^{q_L} x f(x) \,dx) / (F(q_L) - 0)

and the expected value of the right tail is

E_R = (\int_{q_U}^{\infty} x f(x) \,dx) / (1 - F(q_U))

The expected value in the tail of the exponential distribution

For an example, let's look at the exponential distribution. The exponential distribution is defined only for x ≥ 0, so the left tail starts at 0. The choice of the probability, p, is arbitrary, but I will use p=0.2 because that value is used in Bono, et al. (2020). The 20th percentile of the exponential distribution is q20 = 0.22. The 80th percentile is q80 = 1.61.
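For the unit exponential distribution these integrals have closed forms, since the antiderivative of x·e^{-x} is -(x+1)e^{-x}, so the tail expected values can be checked independently; a Python sketch (my own check):

```python
from math import exp, log

# Unit exponential: f(x) = e^{-x}, F(x) = 1 - e^{-x}.
cdf = lambda x: 1.0 - exp(-x)

def partial_moment(a, b):
    """Integral of x*exp(-x) on [a, b] via the antiderivative -(x+1)e^{-x}."""
    F = lambda x: -(x + 1.0) * exp(-x)
    return F(b) - F(a)

p = 0.2
qL = -log(1 - p)      # 20th percentile ≈ 0.223
qU = -log(p)          # 80th percentile ≈ 1.609

E_L = partial_moment(0.0, qL) / (cdf(qL) - 0.0)     # left-tail expected value
E_R = (qU + 1.0) * exp(-qU) / (1.0 - cdf(qU))       # b → ∞ limit of the moment
print(round(E_L, 3), round(E_R, 3))   # → 0.107 2.609
```

The left-tail value 0.107 lies inside [0, qL] and the right-tail value 2.609 lies above qU, as a tail "center of mass" must.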
You can use the QUAD function in the SAS/IML language to compute the integrals, as follows:

proc iml;
/* integrand for the first moment of the exponential distribution */
start Integrand(x);
   return x*pdf("Expon",x);
finish;
/* if f(x) is the PDF and F(x) is the CDF of a distribution, the expected
   value of the truncated distribution on [a,b] is
   (\int_a^b x*f(x) dx) / (CDF(b) - CDF(a)) */
start Expo1Moment(a,b);
   call quad(numer, "Integrand", a||b );
   /* define CDF(.M)=0 and CDF(.P)=1 */
   cdf_a = choose(a=., 0, cdf("Expon", a));   /* CDF(a) */
   cdf_b = choose(b=., 1, cdf("Expon", b));   /* CDF(b) */
   return numer / (cdf_b - cdf_a);            /* expected value on [a,b] */
finish;

p = 0.2;
/* expected value of lower 20th percentile of Expon distribution */
qLow = quantile("Expon", p);
ExpValLow20 = Expo1Moment(0, qLow);
print qLow ExpValLow20;

/* expected value of upper 20th percentile */
qHi = quantile("Expon", 1-p);
ExpValUp20 = Expo1Moment(qHi, .I);   /* .I = infinity */
print qHi ExpValUp20;

In this program, the left tail is the portion of the distribution to the left of the 20th percentile. The right tail is to the right of the 80th percentile. The first table says that the expected value in the left tail of the exponential distribution is 0.107. Intuitively, that is the weighted average of the left tail, or the location of its center of mass. The second table says that the expected value in the right tail is 2.61. These results are visualized in the graph at the top of this article.

Estimate the expected value in the tail of a distribution

You can perform a similar computation for a data sample. Instead of an integral, you merely take the average of the lower and upper p*100% of the data. For example, the following SAS/IML statements simulate 100,000 random variates from the exponential distribution. You can use the QNTL function to estimate the quantiles of the data.
You can then use the LOC function to find the elements that are in the tail and use the MEAN function to compute the arithmetic average of those elements:

/* generate data from the exponential distribution */
N = 100000;
x = randfun(N, "Expon");

/* assuming no duplicates and a large sample, you can use quantiles
   and means to estimate the expected values */
/* estimate the expected value for the lower 20% tail */
call qntl(qEst, x, p);        /* p_th quantile */
idx = loc(x<=qEst);           /* rows for which x[i] <= quantile */
meanLow20 = mean(x[idx]);     /* mean of lower tail */
print qEst meanLow20;

/* estimate the expected value for the upper 20% tail */
call qntl(qEst, x, 1-p);      /* (1-p)_th quantile */
idx = loc(x>=qEst);           /* rows for which x[i] >= quantile */
meanUp20 = mean(x[idx]);      /* mean of upper tail */
print qEst meanUp20;

The estimates are shown for the lower and upper 20th-percentile tails of the data. Because the sample is large, the sample estimates are close to the quantiles of the exponential distribution. Also, the means of the lower and upper tails of the data are close to the expected values for the tails of the distribution. It is worth mentioning that there are many different formulas for estimating quantiles. Each estimator will give a slightly different estimate for the quantile, and therefore you can get different estimates for the mean. This becomes important for small samples, for long-tailed distributions, and for samples that have duplicate values. The expected value is a measure of the "center" of a probability distribution on some domain. This article shows how to solve an integral to find the expected value for the left tail or right tail of a distribution. For a data distribution, the expected value of a tail is the mean of the observations in the tail. In a future article, I'll show how to use these ideas to create robust and unbiased estimates of skewness and kurtosis. The post The expected value of the tail of a distribution appeared first on The DO Loop.
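The same sample-based estimate is easy to reproduce outside SAS; a Python sketch under the same settings (N = 100,000 exponential variates, p = 0.2):

```python
import random
from statistics import mean

random.seed(1)
N, p = 100_000, 0.2
x = sorted(random.expovariate(1.0) for _ in range(N))

qL = x[int(p * N)]             # empirical 20th percentile
qU = x[int((1 - p) * N)]       # empirical 80th percentile
mean_low = mean(v for v in x if v <= qL)   # ≈ 0.107 for the exponential
mean_up = mean(v for v in x if v >= qU)    # ≈ 2.61
print(round(mean_low, 2), round(mean_up, 2))
```

As in the SAS version, the tail means converge to the theoretical tail expected values as the sample grows; the simple order-statistic quantile used here is one of the many quantile estimators mentioned above.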
Voigt notation In mathematics, Voigt notation or Voigt form in multilinear algebra is a way to represent a symmetric tensor by reducing its order.[1] There are a few variants and associated names for this idea: Mandel notation, Mandel–Voigt notation and Nye notation are others found. Kelvin notation is a revival by Helbig[2] of old ideas of Lord Kelvin. The differences here lie in certain weights attached to the selected entries of the tensor. Nomenclature may vary according to what is traditional in the field of application. For example, a 2×2 symmetric tensor X has only three distinct elements: the two on the diagonal and one off-diagonal. Thus it can be expressed as the vector {\displaystyle \langle x_{11},x_{22},x_{12}\rangle } The stress tensor (in matrix notation) is given as {\displaystyle {\boldsymbol {\sigma }}=\left[{\begin{matrix}\sigma _{xx}&\sigma _{xy}&\sigma _{xz}\\\sigma _{yx}&\sigma _{yy}&\sigma _{yz}\\\sigma _{zx}&\sigma _{zy}&\sigma _{zz}\end{matrix}}\right].} In Voigt notation it is simplified to a 6-dimensional vector: {\displaystyle {\tilde {\sigma }}=(\sigma _{xx},\sigma _{yy},\sigma _{zz},\sigma _{yz},\sigma _{xz},\sigma _{xy})\equiv (\sigma _{1},\sigma _{2},\sigma _{3},\sigma _{4},\sigma _{5},\sigma _{6}).} The strain tensor, similar in nature to the stress tensor (both are symmetric second-order tensors), is given in matrix form as {\displaystyle {\boldsymbol {\epsilon }}=\left[{\begin{matrix}\epsilon _{xx}&\epsilon _{xy}&\epsilon _{xz}\\\epsilon _{yx}&\epsilon _{yy}&\epsilon _{yz}\\\epsilon _{zx}&\epsilon _{zy}&\epsilon _{zz}\end{matrix}}\right].} Its representation in Voigt notation is {\displaystyle {\tilde {\epsilon }}=(\epsilon _{xx},\epsilon _{yy},\epsilon _{zz},\gamma _{yz},\gamma _{xz},\gamma _{xy})\equiv (\epsilon _{1},\epsilon _{2},\epsilon _{3},\epsilon _{4},\epsilon _{5},\epsilon _{6}),} where {\displaystyle \gamma _{xy}=2\epsilon _{xy}} {\displaystyle \gamma _{yz}=2\epsilon _{yz}} {\displaystyle \gamma 
_{zx}=2\epsilon _{zx}} are engineering shear strains. The benefit of using different representations for stress and strain is that the scalar invariance {\displaystyle {\boldsymbol {\sigma }}\cdot {\boldsymbol {\epsilon }}=\sigma _{ij}\epsilon _{ij}={\tilde {\sigma }}\cdot {\tilde {\epsilon }}} is preserved. Likewise, a three-dimensional symmetric fourth-order tensor can be reduced to a 6×6 matrix. Mnemonic rule A simple mnemonic rule for memorizing Voigt notation is as follows: Write down the second order tensor in matrix form (in the example, the stress tensor) Strike out the diagonal Continue on the third column Go back to the first element along the first row. Voigt indexes are numbered consecutively from the starting point to the end (in the example, the numbers in blue). Mandel notation For a symmetric tensor of second rank {\displaystyle {\boldsymbol {\sigma }}=\left[{\begin{matrix}\sigma _{11}&\sigma _{12}&\sigma _{13}\\\sigma _{21}&\sigma _{22}&\sigma _{23}\\\sigma _{31}&\sigma _{32}&\sigma _{33}\end{matrix}}\right]} only six components are distinct: the three on the diagonal and three off-diagonal. Thus it can be expressed, in Mandel notation,[3] as the vector {\displaystyle {\tilde {\sigma }}^{M}=\langle \sigma _{11},\sigma _{22},\sigma _{33},{\sqrt {2}}\sigma _{23},{\sqrt {2}}\sigma _{13},{\sqrt {2}}\sigma _{12}\rangle .} The main advantage of Mandel notation is to allow the use of the same conventional operations used with vectors, for example: {\displaystyle {\tilde {\sigma }}:{\tilde {\sigma }}={\tilde {\sigma }}^{M}\cdot {\tilde {\sigma }}^{M}=\sigma _{11}^{2}+\sigma _{22}^{2}+\sigma _{33}^{2}+2\sigma _{23}^{2}+2\sigma _{13}^{2}+2\sigma _{12}^{2}.} A symmetric tensor of rank four satisfying {\displaystyle D_{ijkl}=D_{jikl}} and {\displaystyle D_{ijkl}=D_{ijlk}} has 81 components in three-dimensional space, but only 36 components are distinct. 
Thus, in Mandel notation, it can be expressed as {\displaystyle {\tilde {D}}^{M}={\begin{pmatrix}D_{1111}&D_{1122}&D_{1133}&{\sqrt {2}}D_{1123}&{\sqrt {2}}D_{1113}&{\sqrt {2}}D_{1112}\\D_{2211}&D_{2222}&D_{2233}&{\sqrt {2}}D_{2223}&{\sqrt {2}}D_{2213}&{\sqrt {2}}D_{2212}\\D_{3311}&D_{3322}&D_{3333}&{\sqrt {2}}D_{3323}&{\sqrt {2}}D_{3313}&{\sqrt {2}}D_{3312}\\{\sqrt {2}}D_{2311}&{\sqrt {2}}D_{2322}&{\sqrt {2}}D_{2333}&2D_{2323}&2D_{2313}&2D_{2312}\\{\sqrt {2}}D_{1311}&{\sqrt {2}}D_{1322}&{\sqrt {2}}D_{1333}&2D_{1323}&2D_{1313}&2D_{1312}\\{\sqrt {2}}D_{1211}&{\sqrt {2}}D_{1222}&{\sqrt {2}}D_{1233}&2D_{1223}&2D_{1213}&2D_{1212}\\\end{pmatrix}}.} The notation is named after physicists Woldemar Voigt and John Nye. It is useful, for example, in calculations involving constitutive models to simulate materials, such as the generalized Hooke's law, as well as finite element analysis[4] and diffusion MRI.[5] Hooke's law has a symmetric fourth-order stiffness tensor with 81 components (3×3×3×3), but because the application of such a rank-4 tensor to a symmetric rank-2 tensor must yield another symmetric rank-2 tensor, not all of the 81 elements are independent. Voigt notation enables such a rank-4 tensor to be represented by a 6×6 matrix. However, Voigt's form does not preserve the sum of the squares, which in the case of Hooke's law has geometric significance. This explains why weights are introduced (to make the mapping an isometry). A discussion of the invariance of Voigt's notation and Mandel's notation can be found in Helnwein (2001).[6] ^ Woldemar Voigt (1910). Lehrbuch der kristallphysik. Teubner, Leipzig. Retrieved November 29, 2016. ^ Klaus Helbig (1994). Foundations of anisotropy for exploration seismics. Pergamon. ISBN 0-08-037224-4. ^ Jean Mandel (1965). "Généralisation de la théorie de plasticité de WT Koiter". International Journal of Solids and Structures. 1 (3): 273–295. doi:10.1016/0020-7683(65)90034-x. ^ O.C. Zienkiewicz; R.L. Taylor; J.Z. Zhu (2005). 
The Finite Element Method: Its Basis and Fundamentals (6 ed.). Elsevier Butterworth—Heinemann. ISBN 978-0-7506-6431-8. ^ Maher Moakher (2009). "The Algebra of Fourth-Order Tensors with Application to Diffusion MRI". Visualization and Processing of Tensor Fields. Mathematics and Visualization. Springer Berlin Heidelberg. pp. 57–80. doi:10.1007/978-3-540-88378-4_4. ISBN 978-3-540-88377-7. ^ Peter Helnwein (February 16, 2001). "Some Remarks on the Compressed Matrix Representation of Symmetric Second-Order and Fourth-Order Tensors". Computer Methods in Applied Mechanics and Engineering. 190 (22–23): 2753–2770. Bibcode:2001CMAME.190.2753H. doi:10.1016/s0045-7825(00)00263-2. Vectorization (mathematics)
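The isometry property that motivates the √2 weights in Mandel notation can be checked numerically; a short sketch (my own, using an arbitrary symmetric tensor):

```python
import numpy as np

SQ2 = np.sqrt(2.0)

def mandel(S):
    """Mandel (weighted Voigt) 6-vector of a symmetric 3x3 tensor."""
    return np.array([S[0, 0], S[1, 1], S[2, 2],
                     SQ2 * S[1, 2], SQ2 * S[0, 2], SQ2 * S[0, 1]])

S = np.array([[1.0, 4.0, 5.0],
              [4.0, 2.0, 6.0],
              [5.0, 6.0, 3.0]])

# The sqrt(2) weights make the ordinary 6-vector dot product reproduce
# the full double contraction S : S = S_ij S_ij (here both equal 168).
print(np.isclose(mandel(S) @ mandel(S), np.tensordot(S, S)))   # → True
```

An unweighted Voigt vector of the same tensor would miss the factor of 2 on the off-diagonal terms, which is exactly the failure of isometry discussed above.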
Enrollment in math courses at Kennedy High School in Bloomington, Minnesota is shown in the circle graph at right. (If you are unfamiliar with circle graphs, refer to the glossary located in the eBook for assistance.) If there are 1000 students enrolled in math courses, approximately how many students are enrolled in Algebra? In Geometry? In Calculus? What approximate percentage of the circle graph is taken up by Algebra, Geometry, and Calculus? Answers: Algebra ≈ 450 students (45%), Geometry ≈ 250 students (25%), Calculus ≈ 50 students (5%).
Revision as of 17:34, 11 November 2017 by MathAdmin.

Find the following limits:

(a) Find \lim_{x\rightarrow 2} g(x), provided that \lim_{x\rightarrow 2}\bigg[\frac{4-g(x)}{x}\bigg]=5.

(b) Find \lim_{x\rightarrow 0}\frac{\sin(4x)}{5x}.

(c) Find \lim_{x\rightarrow -3^{+}}\frac{x}{x^{2}-9}.

Solution (a): If \lim_{x\rightarrow a}g(x)\neq 0, the limit of a quotient is the quotient of the limits: \lim_{x\rightarrow a}\frac{f(x)}{g(x)}=\frac{\lim_{x\rightarrow a}f(x)}{\lim_{x\rightarrow a}g(x)}. Since \lim_{x\rightarrow 2}x=2\neq 0,

5=\lim_{x\rightarrow 2}\bigg[\frac{4-g(x)}{x}\bigg]=\frac{\lim_{x\rightarrow 2}(4-g(x))}{\lim_{x\rightarrow 2}x}=\frac{\lim_{x\rightarrow 2}(4-g(x))}{2}.

Multiplying both sides by 2 gives 10=\lim_{x\rightarrow 2}(4-g(x))=\lim_{x\rightarrow 2}4-\lim_{x\rightarrow 2}g(x)=4-\lim_{x\rightarrow 2}g(x). Solving for \lim_{x\rightarrow 2}g(x) gives \lim_{x\rightarrow 2}g(x)=-6.

Solution (b): Using the special limit \lim_{x\rightarrow 0}\frac{\sin x}{x}=1, write \lim_{x\rightarrow 0}\frac{\sin(4x)}{5x}=\frac{4}{5}\lim_{x\rightarrow 0}\frac{\sin(4x)}{4x}=\frac{4}{5}(1)=\frac{4}{5}.

Solution (c): Factoring the denominator gives \lim_{x\rightarrow -3^{+}}\frac{x}{x^{2}-9}=\lim_{x\rightarrow -3^{+}}\frac{x}{(x-3)(x+3)}. As x approaches -3 from the right (for example x=-2.9), x is negative, x-3 is negative, and x+3 is a small positive number, so the quotient grows without bound. Therefore \lim_{x\rightarrow -3^{+}}\frac{x}{x^{2}-9}=\infty.

Final answers: (a) -6, (b) \frac{4}{5}, (c) \infty.
lastmaplet - Maple Help Each time a Maplet application is displayed by using the Display procedure, the Maplet application definition is assigned to the global variable lastmaplet. The global variable lastmaplet can therefore be used to debug or redisplay the last displayed Maplet application. with(Maplets[Elements]): maplet := Maplet([["Hello world"]]): assigned(lastmaplet) false Maplets[Display](maplet) assigned(lastmaplet) true Maplets[Display](lastmaplet)
Numerical modelling of scour around circular cylinder caused by jet flow and bed shear stress | JVE Journals Hyoseob Kim1 , Seungho Lee2 , Jungik Lee3 , Hak Soo Lim4 , Hee-Suk Ryoo5 1, 2, 3Kookmin University, Seoul, Republic of Korea 4Korea Institute of Ocean Science and Technology, Ansan, Republic of Korea 5Korea Electrotechnology Research Institute, Changwon, Republic of Korea Received 7 November 2016; accepted 8 November 2016; published 30 May 2017 A new scour numerical model composed of two modules is proposed here. The two modules are a detailed Reynolds-averaged Navier-Stokes flow module and a sediment transport and resultant scour module. The flow module uses a horizontally regular grid at the bed but a partial grid concept in the vertical direction. The sediment module simulates the entrainment and deposition of suspended sediment. The entrainment of sediment is computed by a new empirical equation, the major independent variables of which are the jet flow velocity and the bed shear stress. The model is applied to a laboratory scour experiment around a circular cylinder at Kookmin University, and shows satisfactory agreement with measurements. Keywords: scour, jet, bed shear stress, numerical model, curvy vorticity. Modelling scour requires a special grid system to allow gradual evolution of a local scour hole, which involves steep slopes and sudden slope changes, in contrast to modelling wide-area morphologic change on coasts. There have been a few trials to overcome this problem. First, a very fine traditional regular grid can be used. However, a very fine grid involves high computation cost to resolve bed roughness. Second, a nesting or coupling technique to combine coarse and fine grids can be used to reduce computational load. However, this also requires complicated joint treatment, and does not dramatically reduce computational load. Third, a moving boundary-fitted, unstructured, or fractional grid around the solid boundary can be used. 
However, this measure leads to heavy grid-related mid-processing, and is not free from numerical error due to non-straight grid lines, or triangular, trapezoidal, or octagonal grids. A simple grid system, regular in plan and moving-fractional in the vertical direction, is adopted here to simulate scour evolution around vertical structures. Scour evolution can be explained by bed load, suspended sediment load, or both. There are arguments that it is quite delicate to define bed load and suspended load, especially if scour evolves with a steep slope, or the bed is covered with bed forms. The bed load gradient produces imminent morphological change, while the resuspension rate gradient and deposition rate gradient produce wider morphologic change due to the phase lag between suspension, transport, and settling. If the bed is covered with bed forms like ripples, it is still acceptable to treat the bed load as suspended load once the sediment particles leave the ripple crest, see Kim [1]. In this paper only suspended load is considered, so that the phase shift between erosion and deposition is well represented. The existing sediment entrainment or pickup rate from the bed surface has been described as a function of the bed shear stress [2]. Our approach here resolves the short wave phase, and therefore we do not need the wave-phase-averaged entrainment rate, but the instantaneous entrainment rate. Existing instantaneous sediment entrainment rate formulas are all empirical [3-5], and can be represented by an equation of the form: {E}_{\tau }={C}_{1}{\left(\tau -{\tau }_{cri}\right)}^{{n}_{1}}, (1) where {E}_{\tau } is the entrainment rate; {C}_{1} and {n}_{1} are specific coefficients for a given sediment material; \tau is the instantaneous bed shear stress; and {\tau }_{cri} is the critical shear stress for the initiation of sediment movement. If the bed sediment is non-cohesive and its median diameter is known, the above coefficients can be found from an existing formula. 
A fundamental defect of the above equation is that it cannot take into account the effect of local pressure enhanced by jet flow. Eq. (1) is suitable for fluid flows which are more or less flat, like river flows or coastal flows. It is quite obvious that sediment is eroded at the center of a circle if a jet flow hits the seabed in the normal direction, while simulation with the above equation may result in donut-type erosion, which is wrong. Flow around a vertical structure shows a sheet-type jet flow along the surface of the structure, which soon turns its direction and generates a horseshoe vortex. Sumer and Fredsoe [6], Dixen et al. [7] and Sumer et al. [8] proposed methodologies to treat the extra entrainment due to jet flow, or the horseshoe vortex in front of structures. Sumer et al. regarded the extra entrainment as a result of enhanced turbulence, and modified their bed load formula by adding a term including the turbulent energy. However, the horseshoe vortex is a turbulence-averaged flow behavior, and thus quantifying the extra entrainment by the turbulent energy cannot explain the phenomena properly. Explanation of the extra entrainment by the jet flow seems more appropriate. Local erosion due to jet flow has been studied by experiments or numerical simulations [9-11]. The numerical model WCFLUME [12, 13] is composed of two modules, i.e. flow and sediment transport modules. The flow module solves governing equations and difference equations extended to a three-dimensional domain from a two-dimensional vertical domain. The model grid is basically regular rectangular, see Fig. 1. The horizontal grid is regular, see Fig. 2(a). The bed morphology is expressed as steps, the levels of which do not need to agree with grid border levels, see Fig. 2(b). The bed shear stress at the bed surface is computed by using the logarithmic law and the nearest available velocity above the bed, see Fig. 3. Fig. 1. Model grid Fig. 2. 
Modification of grid in the vertical direction The sediment transport module solves the transport and dispersion within the water column [12, 13]. The sediment entrainment into the water column is contributed by both the instantaneous bed shear stress and the local jet. To take into account the extra entrainment due to the jet effect, the pressure could be used, but it is not easy to extract the jet-induced pressure increment from the total pressure field, which includes both static and dynamic pressure. Alternatively, the vertical velocity just above the seabed may be used as the representative variable of the jet towards the seabed. A weak point of using the vertical velocity above the seabed, however, is that it is grid-size dependent. The vertical gradient of the vertical velocity is therefore the most appropriate representative variable to express the extra entrainment, and it is adopted in this paper. Then the extra entrainment rate is expressed as: {E}_{jet}={C}_{2}{\left\{\frac{\partial \left(w-{w}_{cri}\right)}{\partial z}\right\}}^{{n}_{2}}, (2) where {E}_{jet} is the entrainment rate due to jet flow, w is the vertical fluid velocity towards the seabed, and {w}_{cri} is the critical vertical fluid velocity for the initiation of sediment movement. A difference equation replaces the above differential equation as: {E}_{jet}={C}_{2}{\left(\frac{{w}^{\mathrm{*}}-{w}_{cri}}{\Delta z}\right)}^{{n}_{2}}, (3) where {w}^{\mathrm{*}} is the downward vertical fluid velocity at the nearest grid border from the bed. The coefficients in the above equation should be found from available measurements. The coefficients used here are {C}_{2}=1.5 and {n}_{2}=1.0. The total entrainment rate is the sum of the two contributions: E={E}_{\tau }+{E}_{jet}. (4) Fig. 3. Assumption of horizontal velocity distribution in the vertical direction The present model system WCFLUME was applied to a laboratory experiment at Kookmin University [14]. 
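The entrainment model (a bed-shear-stress term plus a jet term) can be sketched as a simple pointwise rule. In this sketch the parameter values are placeholders chosen only for illustration (the paper reports C2 = 1.5; the shear-stress coefficients are not given in this excerpt):

```python
def entrainment_rate(tau, dw_dz_excess, C1, n1, C2, n2, tau_cri):
    """Total entrainment E = E_tau + E_jet: a term driven by excess bed
    shear stress plus a term driven by the vertical gradient of the
    vertical velocity. Each excess is clipped at zero below threshold."""
    e_tau = C1 * max(tau - tau_cri, 0.0) ** n1       # shear-stress term
    e_jet = C2 * max(dw_dz_excess, 0.0) ** n2        # jet term, (w*-w_cri)/dz precomputed
    return e_tau + e_jet

# Placeholder coefficients for illustration (only C2 = 1.5 is from the paper):
E = entrainment_rate(tau=2.0, dw_dz_excess=0.5,
                     C1=1.0, n1=1.5, C2=1.5, n2=1.0, tau_cri=1.0)
print(E)   # → 1.75
```

Clipping each excess at zero reflects that neither mechanism entrains sediment below its respective threshold.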
A vertical cylinder stands in a current flume, see Fig. 4. Model results show a three-dimensional flow pattern around the cylinder and minor surface undulation around the cylinder. A horseshoe vortex ring developed around the cylinder foot. The computed flow field at the bed in Fig. 5 and the flow field in an \left(x-z\right) section in Fig. 6 show the horseshoe vortex at an intermediate stage of scour. The vortex could be expressed by the vorticity {\Omega}_{xz}:

{\Omega}_{xz}=\frac{\partial w}{\partial x}-\frac{\partial u}{\partial z}.

An interesting three-dimensional circulation behind the cylinder has been reported by Sumer et al. [8]. Computed flow fields in the \left(x-z\right) sections show this circulation pattern, see Fig. 9. Because the simulation includes the free surface, the computed water surface level shows the backwater phenomenon. However, vorticity describes the angular rotation speed of a fluid element, and thus gives a positive value even for straight shear flow, see Fig. 7. If we want to extract rotationality with curvature, say curvy vorticity, we could introduce the following properties {\Omega}_{c,xz} and {\Omega}_{c,yz}:

{\Omega}_{c,xz}=\left\{\begin{array}{ll}-\frac{\partial w}{\partial x}\left|\frac{\partial u}{\partial z}\right|,& \frac{\partial w}{\partial x}\cdot\frac{\partial u}{\partial z}<0,\\ 0,& \frac{\partial w}{\partial x}\cdot\frac{\partial u}{\partial z}\ge 0,\end{array}\right.

{\Omega}_{c,yz}=\left\{\begin{array}{ll}-\frac{\partial w}{\partial y}\left|\frac{\partial v}{\partial z}\right|,& \frac{\partial w}{\partial y}\cdot\frac{\partial v}{\partial z}<0,\\ 0,& \frac{\partial w}{\partial y}\cdot\frac{\partial v}{\partial z}\ge 0.\end{array}\right.

Fig. 4. Model setup Fig. 5. Computed bed flow field showing horseshoe vortex in front of cylinder Fig. 6. Horseshoe vortex in front of cylinder Fig. 7.
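A minimal pointwise sketch of the plane vorticity and of the curvy-vorticity rule as reconstructed above (function names are mine); curvy vorticity is nonzero only when the two shear terms have opposite signs, i.e. when the local flow actually curves:

```python
def vorticity_xz(dw_dx, du_dz):
    """Plane vorticity: Omega_xz = dw/dx - du/dz."""
    return dw_dx - du_dz

def curvy_vorticity_xz(dw_dx, du_dz):
    """'Curvy vorticity': -(dw/dx)*|du/dz| where the two shear terms
    have opposite signs, zero otherwise."""
    if dw_dx * du_dz < 0:
        return -dw_dx * abs(du_dz)
    return 0.0

# Straight shear flow (dw/dx = 0): ordinary vorticity is nonzero,
# while curvy vorticity vanishes, as Fig. 7 illustrates.
omega = vorticity_xz(0.0, 2.0)
omega_c = curvy_vorticity_xz(0.0, 2.0)
```

On a model grid these gradients would come from finite differences of the computed velocity field; here they are passed in directly.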
a) Vorticity for shear flow b) curvy vorticity Computed vorticity and curvy vorticity are shown in Fig. 8. The curvy vorticity describes the strength of rotationality and the position of the horseshoe vortex well, compared to the traditional vorticity. Computed contributions of the vertical gradient of the vertical velocity and of the bed shear stress are shown in Figs. 10 and 11, and their sum is shown in Fig. 12. It is obvious that jet flow contributes heavily to scour evolution in front of the cylinder. The computed time evolution of the scour depth at the cylinder foot is shown in Fig. 13. The scour depth develops in a step shape, which may result from the partial vertical grid treatment. Fig. 8. Traditional vorticity vs curvy vorticity a) Vorticity for shear flow (1/s) b) Curvy vorticity (1/s) Fig. 9. Three-dimensional flow behind cylinder a) 0.229 m behind cylinder b) 0.259 m behind cylinder c) 0.289 m behind cylinder Fig. 10. Scour depth due to jet flow Fig. 11. Scour depth due to bed shear stress Fig. 12. Total scour depth Fig. 13. Time evolution of scour depth A numerical model system, WCFLUME [12, 13], was developed for simulation of local scour around coastal structures. The system uses a regular three-dimensional parallelepiped grid. The seabed level may stay between grid border lines. Sediment entrainment is expressed by a new equation which includes both an existing empirical term based on the bed shear stress and a new term based on the vertical gradient of the vertical velocity, which represents the jet effect. The model system was applied to scour around a vertical circular cylinder on a sandy bed. The model simulated the horseshoe-vortex-induced scour hole on the upstream side of the cylinder and the bed-shear-stress-induced scour holes at both sides of the cylinder reasonably well. The empirical coefficients involved in the simulation may need further assessment with more data.
This research is a part of the Project titled “Development of Coastal Erosion Control Technology”, funded by the Ministry of Oceans and Fisheries, and “Installation and Corroborative Study for Internal Power Network of Ocean Wind Power Plant”, funded by the Institute of Energy Technology Evaluation and Planning, Korea.
[1] Kim H., O’Connor B., Shim Y. Numerical modelling of flow over ripples using SOLA method. International Conference on Coastal Engineering, 1994, p. 2140-2154.
[2] Van Rijn L. C. Unified view of sediment transport by currents and waves. IV: Application of morphodynamic model. Journal of Hydraulic Engineering, American Society of Civil Engineers, Vol. 133, Issue 7, 2007, p. 776-793.
[3] Partheniades E. Results of Recent Investigations on Erosion and Deposition of Cohesive Sediments. Sedimentation, Fort Collins, Colorado, 1972, p. 20-39.
[4] Nielsen P. Sheet flow sediment transport under waves with acceleration skewness and boundary layer streaming. Coastal Engineering, Vol. 53, 2006, p. 749-758.
[5] Van Rijn L. C. Sediment pick-up functions. Journal of Hydraulic Engineering, American Society of Civil Engineers, Vol. 110, Issue 10, 1984, p. 1494-1502.
[6] Sumer B. M., Fredsoe J. The Mechanics of Scour in the Marine Environment. World Scientific Publishing Co. Pte Ltd., Vol. 17, 2002.
[7] Dixen M., Sumer B. M., Fredsoe J. Numerical and experimental investigation of flow and scour around a half-buried sphere. Coastal Engineering, Vol. 73, 2013, p. 84-105.
[8] Sumer B. M. A review of recent advances in numerical modelling of local scour problems. 7th International Conference on Scour and Erosion, Australia, 2014, p. 61-70.
[9] Qian Z. D., Hu X. Q., Huai W. X., Xue W. Y. Numerical simulation of sediment erosion by submerged jets using a Eulerian model. Science China Technological Sciences, Vol. 53, Issue 12, 2010, p. 3324-3330.
[10] Siteur W. J. Sedimentation-Velocity in Jet Induced Flow. M.Sc. Thesis, Delft University of Technology, 2012.
[11] Hunter T. N., Peakall J., Unsworth T. J., Acun M. H., Keevil G., Rice H., Biggs S. The influence of system scale on impinging jet sediment erosion: observed using novel and standard measurement techniques. Chemical Engineering Research and Design, Vol. 91, Issue 4, 2013, p. 742-734.
[12] Kim H., Baek S.-W., Hwang D.-H., Lee G.-P., Jin J.-Y., Jang C.-H. Intra-wave-phase cross-shore bed profile modelling by using boundary-fitted moving grid. Journal of Mathematical Models in Engineering, Vol. 24, 2016. (in Press).
[13] Kim H. WCFLUME: A CFD for Wave and Current. http://blog.naver.com/cfd3d, Kookmin University, 2016.
[14] Kim H., Kim I., Lee S., Hong T. Laboratory experiments on scour around flat circular buckets for high waves and strong current at Wido Windfarm, Westsouth Sea, Korea. International Conference on Scour and Erosion, Vol. 8, 2016, p. 81.
RadialBasisFunctionInterpolation - Maple Help
interpolate N-D scattered data using the radial basis function interpolation method
RadialBasisFunctionInterpolation(points, values)
RadialBasisFunctionInterpolation(points, values, rbf, c)
f := RadialBasisFunctionInterpolation(...)
rbf - (optional) the radial basis function to be used; the default is multiquadric
c - (optional) the shape parameter for the gaussian, inversequadratic, multiquadric, and inversemultiquadric radial basis functions; the default is 0.5
The RadialBasisFunctionInterpolation command creates a function f(x1, ..., xn) interpolating scattered sample points in R^n, so that f evaluates to the corresponding entry of values at each sample point. The supported radial basis functions are gaussian, inversequadratic, multiquadric, inversemultiquadric, linear, cubic, and thinplatespline. The default is multiquadric. Using the gaussian, inversequadratic, multiquadric, or inversemultiquadric radial basis function results in a C^∞ interpolant. Using the linear, cubic, or thinplatespline radial basis function gives a piecewise smooth interpolant. This interpolation method can introduce local minima or maxima beyond the minimum or maximum sample value. The shape parameter c can take any nonzero real value; the effects of c and -c are the same. Numerical errors may be large if c is very close to 0 or very big. The default value of c is 1/2. The interpolant is a linear combination of radial basis functions centered at each point in points, with the coefficients chosen so that the result is an interpolant.
More precisely, the interpolant is of the form

f(x) = Σ_i α_i φ_c(‖x − x_i‖),

where x_i iterates over the points in points, φ_c(r) is determined by the rbf argument and explained in the table below, and the coefficients α_i are chosen so that f(x_i) = y_i, where y_i is the i-th entry of values. Each radial basis function takes the form φ_c(r), where r = ‖x − x_i‖, x is a query point, and x_i is a sample point. The following table gives the form of each radial basis function:

gaussian: φ_c(r) = exp(−(c r)^2)
inversequadratic: φ_c(r) = 1/(1 + (c r)^2)
multiquadric: φ_c(r) = sqrt(1 + (c r)^2)
inversemultiquadric: φ_c(r) = 1/sqrt(1 + (c r)^2)
linear: φ_c(r) = r
cubic: φ_c(r) = r^3
thinplatespline: φ_c(r) = r^2 ln(r)

XY := [[0, 0], [1, 0], [2, 0], [0, 1], [1, 1], [2, 1], [0, 2], [1, 2], [2, 2]]
Z := [0, 0, 0, 0, 1, 0, 0, 0, 0]
f := Interpolation:-RadialBasisFunctionInterpolation(XY, Z)
f := (Radial Basis Function interpolation object with 9 sample points, Radial Basis Function: multiquadric)
f(0.5, 0.5)
0.454268623296982810
M := Matrix([[1.5, 0.3], [0.7, 1.4], [1.2, 1.8]], datatype = float[8], order = C_order)
M := [[1.5, 0.3], [0.7, 1.4], [1.2, 1.8]]
f(M)
[0.279916011151627, 0.685802870361854, 0.257913080621478]
plot3d((x, y) -> f(x, y), 0..2, 0..2, labels = [x, y, z])
The Interpolation[RadialBasisFunctionInterpolation] command was introduced in Maple 2018.
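The same multiquadric interpolant can be reproduced outside Maple; a minimal NumPy sketch (names are mine, not part of the Maple API) that solves the linear system for the coefficients α_i:

```python
import numpy as np

def rbf_interpolator(points, values, c=0.5):
    """Multiquadric RBF interpolant: f(x) = sum_i alpha_i * sqrt(1 + (c*r_i)^2)."""
    pts = np.asarray(points, dtype=float)
    phi = lambda r: np.sqrt(1.0 + (c * r) ** 2)
    # Pairwise distances between sample points form the interpolation matrix.
    r = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    alpha = np.linalg.solve(phi(r), np.asarray(values, dtype=float))
    def f(x):
        d = np.linalg.norm(pts - np.asarray(x, dtype=float), axis=-1)
        return phi(d) @ alpha
    return f

# Same sample data as the Maple example above.
XY = [[0, 0], [1, 0], [2, 0], [0, 1], [1, 1], [2, 1], [0, 2], [1, 2], [2, 2]]
Z = [0, 0, 0, 0, 1, 0, 0, 0, 0]
f = rbf_interpolator(XY, Z)
```

Because the coefficients satisfy the interpolation conditions exactly, f reproduces each sample value up to round-off, e.g. f([1, 1]) is 1 and f([0, 0]) is 0.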
NumberOfSolutions - Maple Help
number of solutions of a regular chain
NumberOfSolutions(rc, R)
The command NumberOfSolutions(rc, R) returns the number of complex solutions of rc. If rc has positive dimension, then infinity is returned. If rc has dimension zero, the number of roots is returned. This command is part of the RegularChains[ChainTools] package, so it can be used in the form NumberOfSolutions(..) only after executing the command with(RegularChains[ChainTools]). However, it can always be accessed through the long form of the command by using RegularChains[ChainTools][NumberOfSolutions](..).
with(RegularChains): with(ChainTools):
R := PolynomialRing([x, a], {b, c})
R := polynomial_ring
sys := [a*x^2 + b*x + c]
dec := Triangularize(sys, R, output = lazard)
dec := [regular_chain, regular_chain]
map(Equations, dec, R)
[[a*x^2 + b*x + c], [b*x + c, a]]
map(Dimension, dec, R)
[1, 0]
map(NumberOfSolutions, dec, R)
[∞, 1]
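For a zero-dimensional regular chain, the count returned above equals (with multiplicity) the product of the main degrees of the chain's polynomials; a small sketch of that counting rule (a simplification for illustration, not the RegularChains implementation):

```python
def number_of_solutions(main_degrees):
    """Number of complex solutions, counted with multiplicity, of a
    zero-dimensional triangular chain: the product of the main degrees."""
    n = 1
    for d in main_degrees:
        if d < 1:
            raise ValueError("each polynomial must have positive main degree")
        n *= d
    return n

# The chain [b*x + c, a] from the example: both polynomials have main degree 1,
# matching the result 1 returned by NumberOfSolutions.
count = number_of_solutions([1, 1])
```

A chain whose polynomials have main degrees 2 and 3 would correspondingly have 6 complex solutions counted with multiplicity.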
j invariant - Maple Help
The j invariant of an elliptic curve
j_invariant(f, x, y)
f - polynomial in x and y representing a curve of genus 1
For algebraic curves with genus 1 one can compute a number called the j invariant. An important property of this j invariant is the following: two elliptic (i.e. genus 1) curves are birationally equivalent (i.e. can be transformed into each other with rational transformations over an algebraically closed field of constants) if and only if their j invariants are the same. The curve must be irreducible and have genus 1; otherwise the j invariant is not defined and this procedure will fail.
with(algcurves):
f := y^5 + 4/3 - 23/3*y^2 + 11*y^3 - 17/3*y^4 - 16/3*x^2 + 16/3*x^3 - 4/3*x^4:
Check that the genus is 1, because only then is the j invariant defined.
genus(f, x, y)
1
j_invariant(f, x, y)
-1404928/171
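For a curve already in short Weierstrass form y² = x³ + p·x + q, the j invariant has the closed form j = 1728·4p³/(4p³ + 27q²); a quick sketch of that special case (algcurves handles general genus-1 curves, which this does not):

```python
from fractions import Fraction

def j_invariant_weierstrass(p, q):
    """j invariant of the elliptic curve y^2 = x^3 + p*x + q."""
    p, q = Fraction(p), Fraction(q)
    disc = 4 * p**3 + 27 * q**2   # nonzero exactly when the curve is nonsingular
    if disc == 0:
        raise ValueError("singular curve: j invariant undefined")
    return 1728 * 4 * p**3 / disc

# Two classical values: y^2 = x^3 + 1 has j = 0, y^2 = x^3 + x has j = 1728.
j0 = j_invariant_weierstrass(0, 1)
j1728 = j_invariant_weierstrass(1, 0)
```

Using exact Fractions mirrors the exact rational output that the Maple command produces.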
Numerical simulation of jet formation and penetration characteristics in multi-point initiation mode | JVE Journals Xin Wang1, Changxiao Zhao2, Chong Ji3, Xinghua Li4, Mingshou Zhong5, Huayuan Ma6, Yun Gu7 1, 2, 3, 4, 5, 6College of Field Engineering, Army Engineering University of PLA, Nanjing, China 7Nuclear Industry Nanjing Construction Group Co., Ltd, Nanjing, China Received 2 April 2021; received in revised form 11 April 2021; accepted 18 April 2021; published 7 May 2021 In order to explore the application feasibility and effective gain of shaped charge jets under a multi-point initiation system, the jet forming characteristics of a 40 mm diameter shaped charge under multi-point initiation mode were analyzed, and the influence of the number of initiation points and the initiation radius on jet parameters was obtained. Simulation results of the jet penetrating a cylindrical shell covered charge show that the multi-point initiation system can effectively improve the jet tip velocity and the length of the jet, and thereby the impact initiation ability against the shell charge. The capability of the jet penetrating a cylindrical shell covered charge under multi-point initiation mode was evaluated. Keywords: shaped charge jet, multi-point initiation system, formation, penetration. Shaped charge jet (SCJ) is a kind of condensed high-speed penetrator, widely used for penetrating fortifications and armor targets, destroying unexploded ordnance and so on [1]. In practical application, the diversity of targets and the lightweight requirements of warheads demand that the shaped charge reduce its weight and size as much as possible, which poses a severe challenge to warhead design.
By changing the number of initiation points, a multi-point initiation system can increase the detonation pressure in the explosive and also change the collapse angle, thus affecting the stress state of the liner and its forming [2], which makes it a potential means to improve the characteristics of the SCJ. At present, multi-point initiation systems are widely used for rod penetrators and explosively formed projectiles, but their application is limited by the sensitivity of the shaped charge jet to initiation synchronization [3]. In this paper, based on a typical shaped charge structure of 40 mm caliber, the SCJ forming characteristics under different initiation modes were analyzed by numerical simulation. The influence of the number of initiation points and the initiation radius on the characteristic parameters of the jet was obtained, and a better initiation mode was identified. Numerical simulation of penetration into a cylindrical shell covered charge was carried out, so as to provide a reference for the application of multi-point initiation systems to small cone angle shaped charges, and a basis for their application to the destruction of unexploded ordnance. The shaped charge used in this paper was composed of a main charge, a liner and a shell. The main charge was 40 mm in diameter, 60 mm in height and 40° in cone angle; the explosive was 8701. The liner was made of copper with a wall thickness of 1.6 mm. The shell was 2 mm thick and made of aluminum alloy. The 8-node polyhedral solid element in the LS-DYNA software (Solid164) was used in the calculation model. The air domain, explosive and shaped charge structure adopted Eulerian elements, while the shell and charge of the cylindrical shell charge used a Lagrange mesh; thus the model was calculated by the fluid-structure coupling method. A 1/4 model was established to simplify the calculation. Fig. 1 shows the 1/4 finite element model of the shaped charge and its components. Fig. 1.
1/4 finite element model of shaped charge and its components c) Liner d) Shaped charge The liner material was described by the Johnson-Cook constitutive and failure models [4]:

{\sigma}_{y}=\left[A+B{\left({\bar{\epsilon}}^{p}\right)}^{n}\right]\left[1+C\,\mathrm{ln}\,{\dot{\epsilon}}^{*}\right]\left[1-{\left({T}^{*}\right)}^{m}\right],

where {\bar{\epsilon}}^{p} is the equivalent plastic strain, {\dot{\epsilon}}^{*} is the relative equivalent plastic strain rate, {T}^{*} is the relative temperature, A is the yield stress, B is the strain hardening modulus, n is the strain hardening index, C is the strain rate correlation factor, and m is the temperature correlation factor. The failure strain and damage accumulation are:

{\epsilon}_{f}=\left[{D}_{1}+{D}_{2}\,\mathrm{exp}\left({D}_{3}{\sigma}^{*}\right)\right]\left[1+{D}_{4}\,\mathrm{ln}\,{\dot{\epsilon}}^{*}\right]\left[1+{D}_{5}{T}^{*}\right],\quad D=\sum\frac{\Delta{\epsilon}_{y}}{{\epsilon}_{f}},

where D is the damage to a material element, \Delta{\epsilon}_{y} is the increment of accumulated plastic strain, and {\epsilon}_{f} is the accumulated plastic strain to failure under the current conditions of stress triaxiality {\sigma}^{*}, strain rate and temperature. Failure occurs when D = 1. Material parameters were selected from reference [5]. The explosive of the shaped charge was 8701; the HIGH_EXPLOSIVE_BURN material model and the JWL equation of state were selected, see reference [3] for specific parameters. The explosive in the shell charge was TNT; the ELASTIC_PLASTIC_HYDRO material model and IGNITION_AND_GROWTH_OF_REACTION_IN_HE were selected for the study of shock initiation characteristics. For air, the Null material model and the LINEAR_POLYNOMIAL equation of state were used. The material parameters were obtained from reference [5]. 3.1. Analysis of jet forming process The typical forming process of the SCJ is shown in Fig. 2. The initiation mode was center point initiation.
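A minimal sketch of evaluating the Johnson-Cook flow stress above; the parameter values below are illustrative copper-like magnitudes, not the paper's calibrated constants from reference [5]:

```python
import math

def johnson_cook_stress(A, B, n, C, m, eps_p, eps_rate_rel, T_rel):
    """Johnson-Cook flow stress:
    sigma_y = [A + B*eps_p^n] * [1 + C*ln(eps_rate_rel)] * [1 - T_rel^m]."""
    return ((A + B * eps_p ** n)
            * (1.0 + C * math.log(eps_rate_rel))
            * (1.0 - T_rel ** m))

# Illustrative parameters only (MPa); at the reference strain rate
# (eps_rate_rel = 1) and reference temperature (T_rel = 0) only the
# strain-hardening bracket contributes.
s = johnson_cook_stress(A=90.0, B=292.0, n=0.31, C=0.025, m=1.09,
                        eps_p=0.2, eps_rate_rel=1.0, T_rel=0.0)
```

Damage would then be accumulated per time step as D += Δε / ε_f, with failure declared once D reaches 1.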
From the numerical simulation results, it can be seen that at about 10.4 μs after initiation, most of the liner material has completed its movement towards the symmetry plane. At this time, the material redistributes its energy after the collision: the jet gradually forms at the head, and the pestle body forms at the tail. Due to the velocity difference between head and tail, the jet is stretched. The final shape of the jet formed at about 34.6 μs after initiation, and the velocity of the jet head was about 5500 m/s. Fig. 2. Shaped charge jet forming process with center point initiation (frames include t = 12.6 µs and t = 71 µs) Fig. 3 shows the propagation process of the detonation front in the case of center point initiation. When t = 0.2 μs, the spherical detonation front grows steadily and propagates into the unexploded explosive. When t = 2.2 μs, the spherical detonation wave reaches the top of the liner and starts to impact and squeeze it; the element pressure at the top of the liner suddenly jumps to 20.7 GPa. When t = 3.2 μs, the detonation products attached to the detonation wave continue to squeeze the liner, and the detonation wave begins to impact and squeeze the circumferential shell. When t = 7.6 μs, the peak pressure of the detonation front is 31.6 GPa, and the converged detonation wave moves towards the liner. After a series of complex interactions, the new wave system and detonation products impact and extrude the liner in a new round, and the maximum front pressure is 38.9 GPa. Fig. 3. The propagation process of the detonation front for center point initiation 3.2. Influence of initiation point Based on the above model, numerical simulations under four initiation modes with 2, 4, 8 and 12 initiation points (N) were carried out. The initiation points were distributed in a ring and the initiation radius was 20 mm. Fig.
4 shows the propagation state of the detonation wave in the charge and the pressure distribution on the upper surface of the liner. When multi-point initiation was adopted, the detonation wave converged and collided in the charge, and the peak pressure on the upper surface of the liner was 23.68 GPa, 26.30 GPa, 26.74 GPa and 25.51 GPa respectively, significantly larger than the 17.13 GPa of single point initiation. The distribution of the detonation wave initiated by two points was asymmetric (Fig. 4(d)). With the increase of the number of initiation points, the pressure distribution on the upper surface of the liner became more uniform, closer to the effect of ring initiation. Fig. 4. The propagation process of the detonation front for multi-point initiation Fig. 5 shows the variation of jet tip velocity with the number of initiation points. The initial velocity of the jet tip increases with the number of initiation points. Compared with single point initiation, the increment of the initial velocity of the jet head can reach 980 m/s, and the maximum increase of jet tip velocity can reach 15.63 %. Fig. 5. The variation of jet tip velocity with initiation point 3.3. Influence of initiation radius In order to explore the influence of the distribution radius of multiple initiation points on jet performance, eight initiation points on a ring of radius r were used, as shown in Fig. 6. The distribution radius r was 4 mm, 8 mm, 12 mm, 16 mm and 20 mm respectively. Extracting the pressure and impulse data of the element (H5601) at the top of the liner and plotting the curves in Fig. 7 shows that the maximum pressure and impulse of element H5601 increase with the initiation radius. We can get insight from the pressure propagation characteristics in Fig. 8. As shown in Fig.
8, the detonation waves generated at the initiation points superimposed for the first time at 0.8 μs, and then gradually converged towards the bottom of the charge and the unexploded area under the reflection of the shell; the detonation waves superimposed again at 2.6 μs to form a high pressure area, with an instantaneous high pressure of 119 GPa; the high pressure superposition area of the initiation ring then propagated downward along the central axis and acted on the charge, and the top of the liner was crushed and shaped. Fig. 6. Schematic diagram of initiation point position and pressure nephogram of liner (t = 6 μs) Fig. 7. The variation of maximum pressure and impulse at the observation point with r Fig. 8. Peak pressure variation of the explosive detonation front under the r = 20 mm condition Fig. 9. The change of jet tip velocity and jet length As the radius of the initiation ring becomes smaller, that is, the initiation points move closer to the central axis, the detonation wave does not grow sufficiently by the time of the first superposition, the instantaneous high pressure decreases, and the overall impulse acting on the liner also decreases. The change of jet tip velocity and jet length recorded in Fig. 9 reflects this rule well. Theoretically, the bigger the radius of the initiation ring, the better the jet state that can be obtained. 4. Impact initiation of jet on cylindrical shell covered charge In order to study the impact initiation ability of the SCJ against a charge with a shell, numerical simulation of the SCJ detonating a charge with a shell under single point initiation was carried out. The numerical model is shown in Fig. 10. The SCJ must keep its shape intact and its overall characteristics in a good state before impinging into the cylindrical shell. Based on the numerical analysis in the last section, a standoff distance h = 3d, namely 120 mm, was selected. Fig. 10. Schematic diagram of shaped charge impacting into cylindrical shell Fig.
11 shows the initiation process of the 10 mm thick cylindrical shell covered charge by the SCJ. Six observation points (#319005, #319083, #319161, #319239, #319317, #319315) were selected along the axis of the covered charge, shown in Fig. 12, to record the pressure. The pressure of the charge exceeded the critical initiation pressure of TNT by 10.4 GPa, so the charge is considered to have been successfully initiated. The shell was not broken down, which indicates that the strong shock wave produced by the jet impinging on the shell is the shock initiation mechanism. Fig. 11. Detonation growth process of covered charge Fig. 12. Observation point diagram and pressure curve In this paper, the analysis of SCJ forming and penetration into a shell covered charge under multi-point initiation mode was carried out, the detailed process of jet forming was revealed, and the reliable gain of the multi-point initiation mode for SCJ penetration was obtained. The conclusions are as follows: 1) The tip velocity of the jet increases with the number of initiation points. Compared with single point initiation, the maximum increase of jet tip velocity can reach 15.63 %. 2) The increase of the initiation radius is helpful for the full growth of the detonation wave before collision, which can effectively improve the jet velocity. Under the eight point initiation mode, the jet tip velocity at the largest initiation radius is 14.2 % higher than that of single point initiation. 3) The SCJ formed by single point initiation could initiate the 10 mm thick shell covered charge through the shock initiation mechanism.
[1] Hansenberg D. Consequences of Coaxial Jet Penetration Performance and Shaped Charge Design Criteria. Naval Postgraduate School, Monterey, California, 2010.
[2] Li W., Wang X., Li W. The effect of annular multi-point initiation on the formation and penetration of an explosively formed penetrator. International Journal of Impact Engineering, Vol. 37, Issue 4, 2010, p. 414-424.
[3] Liu Jian-Ging, et al. Formation of explosively formed penetrator with fins and its flight characteristics. Defense Technology, Vol. 10, Issue 2, 2014, p. 119-123.
[4] Johnson G. R., Cook W. H. Fracture characteristics of three metals subjected to various strains, strain rates, temperatures, and pressures. Engineering Fracture Mechanics, Vol. 21, 1985, p. 31-48.
[5] LS-DYNA Keyword User’s Manual V971. Livermore Software Technology Corporation (LSTC), Livermore, 2012.
Linear variable resistor in electrical systems - MATLAB - MathWorks United Kingdom
The Variable Resistor block models a linear variable resistor, described with the following equation: V = I·R, where R is the resistance, that is, the signal value at the control port. Connections + and - are conserving electrical ports corresponding to the positive and negative terminals of the resistor, respectively. R is a physical signal input port that controls the resistance value. The current is positive if it flows from positive to negative, and the voltage across the resistor is equal to the difference between the voltage at the positive and the negative terminal, V(+) – V(–).
R — Resistance control signal, Ohm. Input physical signal that specifies the resistance value.
Minimum resistance R>=0 — Minimum acceptable resistance. The minimum resistance value. If the input signal falls below this level (for example, turns negative), this minimum resistance value is used. The parameter value must be nonnegative.
See also: Resistor | Thermal Resistor
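The clamping behavior described for the Minimum resistance parameter can be mimicked in a few lines (a sketch of the documented behavior, not the Simscape implementation):

```python
def variable_resistor_voltage(current, r_signal, r_min=0.0):
    """V = I * R, with the control signal clamped at the minimum resistance.

    If the control signal falls below r_min (e.g. turns negative),
    r_min is used instead, matching the block's documented behavior.
    """
    if r_min < 0:
        raise ValueError("minimum resistance must be nonnegative")
    r = max(r_signal, r_min)
    return current * r

# A negative control signal falls back to the minimum resistance.
v_clamped = variable_resistor_voltage(2.0, -5.0, r_min=1.0)
v_normal = variable_resistor_voltage(2.0, 10.0, r_min=1.0)
```

The clamp guarantees the modeled resistance can never be negative, which would otherwise make the element a power source.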
Create inflationcurve object for interest-rate curve from dates and data - MATLAB - MathWorks Italia
The inflation curve is bootstrapped from zero-coupon inflation swap rates:

\begin{array}{l}I\left(0,{T}_{1Y}\right)=I\left({T}_{0}\right){\left(1+b\left(0;{T}_{0},{T}_{1Y}\right)\right)}^{1}\\ I\left(0,{T}_{2Y}\right)=I\left({T}_{0}\right){\left(1+b\left(0;{T}_{0},{T}_{2Y}\right)\right)}^{2}\\ I\left(0,{T}_{3Y}\right)=I\left({T}_{0}\right){\left(1+b\left(0;{T}_{0},{T}_{3Y}\right)\right)}^{3}\\ ...\\ I\left(0,{T}_{i}\right)=I\left({T}_{0}\right){\left(1+b\left(0;{T}_{0},{T}_{i}\right)\right)}^{i}\end{array}

where I\left(0,{T}_{i}\right) is the inflation index level at time {T}_{i}, I\left({T}_{0}\right) is the base index level at time {T}_{0}, and b\left(0;{T}_{0},{T}_{i}\right) is the zero-coupon inflation swap rate for maturity {T}_{i}. The annualized forward inflation rate over \left[{T}_{i-1},{T}_{i}\right] is

{f}_{i}=\frac{1}{\left({T}_{i}-{T}_{i-1}\right)}\mathrm{log}\left(\frac{I\left(0,{T}_{i}\right)}{I\left(0,{T}_{i-1}\right)}\right)

With a seasonal adjustment rate s\left(u\right), the index level follows

\begin{array}{l}I\left(0,{T}_{i}\right)=I\left({T}_{0}\right)\mathrm{exp}\left(\underset{{T}_{0}}{\overset{{T}_{i}}{\int }}f\left(u\right)du\right)\mathrm{exp}\left(\underset{{T}_{0}}{\overset{{T}_{i}}{\int }}s\left(u\right)du\right)\\ I\left(0,{T}_{i}\right)=I\left(0,{T}_{i-1}\right)\mathrm{exp}\left(\left({T}_{i}-{T}_{i-1}\right)\left({f}_{i}+{s}_{i}\right)\right)\end{array}

so that the index level at {T}_{i} is obtained from the level at {T}_{i-1} over the interval \left[{T}_{i-1},{T}_{i}\right].
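A small sketch of the bootstrapping relations above, with hypothetical zero-coupon inflation swap quotes (this is not the MATLAB inflationcurve API):

```python
import math

def inflation_index_levels(base_index, swap_rates):
    """I(0, T_iY) = I(T0) * (1 + b_i)^i for annual maturities i = 1, 2, ..."""
    return [base_index * (1.0 + b) ** i for i, b in enumerate(swap_rates, start=1)]

def forward_inflation_rate(index_prev, index_curr, dt):
    """f_i = log(I(0,T_i) / I(0,T_{i-1})) / (T_i - T_{i-1})."""
    return math.log(index_curr / index_prev) / dt

# Hypothetical zero-coupon inflation swap quotes for 1Y, 2Y, 3Y maturities
# and a base index level of 100.
levels = inflation_index_levels(100.0, [0.02, 0.025, 0.03])
f2 = forward_inflation_rate(levels[0], levels[1], 1.0)
```

Adding a seasonal term would multiply each annual step by exp(dt * s_i), per the second set of equations above.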
Climate Change/Science/Climate Modeling - Wikibooks, open books for an open world Climate Change/Science/Climate Modeling Climate models come in many forms, from very simple energy-balance models to fully coupled, three dimensional atmosphere-ocean-land numerical models. As computers have become faster, climate science has advanced commensurately. The equations that govern how fluids move in time and space (Navier-Stokes Equations) are complicated to solve, and when all the scales of motion and physical processes (radiative transfer, precipitation, etc.) are incorporated, the resulting problem is impossible to carry out analytically. Instead, climate scientists turn these systems of equations into a series of computer programs. The resulting set of programs is, in some cases, a "climate model." When the model is used to approximate the equations of motion on a sphere, it can be called a general circulation model (GCM). These are generic models, which can be specialized to simulate the ocean, atmosphere, or other fluid problems. Mostly because of limitations on computer power, these models do not resolve all scales of motion; instead a grid of points is established as an array of points where the equations are solved. Most modern atmospheric GCMs are run with horizontal grid spacing (distance between adjacent grid points) around 100 km, and with a number of vertical levels (usually around 30). The exact resolution depends on details of the model and the application. Because of this coarse grid spacing, small-scale (or "sub-grid-scale") phenomena (like individual clouds or even hurricanes) are not explicitly resolved. For detailed calculations of smaller scales, more specialized numerical models are often employed, though there are some very high resolution GCMs (e.g. Japan's NICAM[1]). 
To incorporate the effects of sub-grid-scale phenomena, conventional GCMs rely on statistical rules, called parameterizations, that describe how the processes work on average given the conditions within the grid cell. Parameterizations can be very simple or very complicated, depending on the complexity of the process and the level of understanding of the statistical behavior of the process. Much of the improvement in GCMs today is directly related to improving parameterizations, either by incorporating more elaborate rules to match measured quantities better or by using more sophisticated theoretical arguments for how the physics should work. Kinds of Models There are many classes of models, and within each class there are many implementations and variations. It is impossible to enumerate and describe every climate model that has ever been developed; even doing so for the published literature would be prohibitively difficult. In fact, there are entire volumes devoted to the history of numerical modeling of just the atmosphere; the American Institute of Physics has a brief description of AGCMs available online [2]. Here we discuss several classes of models, with an emphasis on atmospheric models. The discussion closely follows that of Henderson-Sellers and McGuffie (1987), which is an excellent resource on the subject (and has an updated edition). First of all, we restrict ourselves to numerical models, specifically those designed to be solved with computers. More generally, any equation or set of equations that represents the climate system is a climate model. Some of these can be solved analytically, but those are highly simplified models, which are sometimes incorporated in numerical models. The ultimate goal of climate models is to represent all physical processes that are important for the evolution of the climate system. This is a lofty goal, and will never truly be realized.
The climate system contains important contributions and interactions among the lithosphere (the solid Earth), the biosphere (e.g., marine phytoplankton, tropical rainforests), atmospheric and oceanic chemistry (e.g., stratospheric ozone), and even molecular dynamics (e.g. radiative transfer). In fluid dynamics, some systems are now modeled using "direct numerical simulation" (DNS), in which (nearly) all the active scales are explicitly resolved. This will never be feasible for the climate system: we cannot possibly represent every atom in it, as doing so would require a computer with essentially as many components as the system itself. Instead, climate modeling is limited to truly modeling the system; simplifying assumptions and empirical laws are used, the resolved motions are chosen to match the problem and/or the computing resources, and other processes are parameterized. However, these comprehensive climate models are not the only way to model the climate system. Simpler models have been developed over the years for many reasons. One common reason historically was the computational cost of running large computers; simpler models have fewer processes to represent, and often have fewer space and time points (lower resolution). Two extremely simple classes of climate models are one-dimensional energy balance models and one-dimensional radiative-convective models. The single dimension in each is typically latitude (north-south direction) and altitude (vertical column), respectively. A typical energy balance model (EBM) solves a small set of equations for the average temperature, T, as a function of latitude, {\displaystyle T(\phi )}. These models were introduced independently in 1969 by Budyko and Sellers. They are solved for the equilibrium temperature at each latitude based on the incoming and outgoing radiative fluxes and the horizontal transport of energy.
The radiative fluxes are simple schemes (usually) for the radiation reaching the surface, and often include some temperature dependent albedo (reflectivity) to represent ice-albedo feedback. The horizontal transport is typically given by an eddy diffusion term, which is just a coefficient multiplied by the meridional (north-south) temperature gradient. One of the most interesting aspects of these simple models is that they already produce multiple equilibria, having solutions for ice-free and ice-covered Earths as well as a more temperate solution (like the current climate). This result spurred much research in the sensitivity of the climate system. Radiative-convective models (RCM) are essentially models of an atmospheric column. They can be used to represent the global average atmosphere, a particular latitude (zone), or a particular location. The resolved dimension is vertical, so all the horizontal fluxes (like winds and advected scalars like temperature and moisture) must be passed to the column somehow. The early RCMs (due largely to S. Manabe and colleagues) have a background temperature structure (lapse rate) and a treatment of radiative fluxes through the column. When the radiative heating of the column brings the lapse rate beyond a critical or threshold lapse rate, a "convective adjustment" is used to reduce the instability. Given constant boundary conditions, the model will equilibrate such that the energy budget is balanced, giving a model of the vertical (especially temperature) structure of the atmosphere. The early RCMs were used to explore the effects of increasing carbon dioxide in the atmosphere. There are also combinations of EBMs with RCMs that give a simple two-dimensional representation of radiative-convective equilibrium. Another class of two-dimensional model is the axially symmetric model used, for example, by Held & Hou (1980) and Lindzen & Hou (1988).
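A zero-dimensional cousin of these EBMs shows the multiple equilibria directly. The sketch below iterates C dT/dt = S/4 · (1 − α(T)) − (A + B·T) with a step ice-albedo function; all parameter values are illustrative, not taken from Budyko or Sellers:

```python
def ebm_equilibrium(t_init, s0=1365.0, a=202.0, b=1.9,
                    alb_warm=0.30, alb_cold=0.62, t_ice=-10.0,
                    dt=0.5, n_steps=2000, heat_cap=10.0):
    """Iterate a zero-dimensional Budyko-type energy balance model:
    C dT/dt = S0/4 * (1 - albedo(T)) - (A + B*T), with a step
    ice-albedo feedback (cold, reflective below t_ice degrees C)."""
    t = t_init
    for _ in range(n_steps):
        albedo = alb_cold if t < t_ice else alb_warm
        absorbed = s0 / 4.0 * (1.0 - albedo)  # absorbed solar flux, W/m^2
        emitted = a + b * t                   # linearized outgoing longwave, W/m^2
        t += dt * (absorbed - emitted) / heat_cap
    return t
```

Starting from 15 °C the model settles near +19 °C (ice-free); starting from −40 °C it settles near −38 °C (ice-covered): two stable equilibria under identical forcing, the behavior described above.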
This is a dynamical model only, and has been used to study the meridional circulation in the absence of baroclinic eddies (midlatitude storm systems). While not truly climate models, these simple dynamical models have provided important theoretical understanding of the atmospheric circulation. In the ocean, there are simple box models that are somewhat analogous to the axially symmetric models of the meridional circulation of the atmosphere. These box models are traced back at least to Stommel, who used one to show the multiple equilibria of the thermohaline circulation in the Atlantic Ocean. Other two-dimensional models also exist. For example, there are simple equivalent barotropic models of the atmosphere. However, these have mostly been used in numerical weather prediction and theoretical atmospheric dynamics. Occupying a higher region of the modeling hierarchy are three-dimensional numerical models. In terms of dynamics, these are usually fully turbulent fluids, and can be applied to spherical geometry or some simplified geometry like the beta-plane. This class of models should probably be divided into several subclasses. Some are coupled models (atmosphere + ocean, for example) while others only contain a single component of the climate system. Some are described as climate models of intermediate complexity, which covers a large range of models. At and around the top of the climate model hierarchy are general circulation models (GCM), sometimes called global climate models. These are fully three-dimensional representations of the atmosphere and/or ocean solved in spherical geometry. They are designed to conserve energy (1st law of thermodynamics), momentum (Newton's 2nd law of motion), mass (continuity equation), and (usually) moisture. We will discuss GCMs in much greater detail later, including the primary assumptions that they include, and the uncertainty associated with the results. GCMs are the best available tools for studying climate change.
What Models Tell Us What Uncertainty in Simulations Means Why can't climate models predict climate change perfectly? There are many answers to this question, and most of them are at least partly true! Here we briefly describe what is meant by "uncertainty" in climate modeling. Before starting to describe the uncertainty associated with climate models, it is important to emphasize that climate models are the best tools currently available for studying the climate of Earth and other planets. Although they are far from perfect, sophisticated climate models embody the physical processes thought to be important in the real climate. That there is some uncertainty most decidedly does not mean that we can't trust climate model results, nor does it mean there is built-in "wiggle room" in the models. Different climate models, and here we mean sophisticated numerical models (usually of the whole globe), get different results for the same experiment. These differences are due largely to different ways of representing physical processes that happen on scales smaller than the distance between model points. These processes are usually called sub-gridscale processes, and the representations for them in numerical models are known as parameterizations. The main idea of a parameterization is that it uses information from the large scale and infers (based on some rules) what is likely happening on smaller scales. A good example of this is wind near mountains. GCMs might have grid points only every 100 km, but mountain ranges can have very drastic elevation changes over much shorter distances. Rather than try to represent the scale of the mountains, which would be very hard with current computers, GCMs have a sub-gridscale topography parameterization.
Depending on the details, it may affect the "roughness" of the surface or gravity waves induced by terrain, but the idea is the same: given that mountains change height on small scales, the GCM tries to model that behavior, at least to capture how mountains can affect the large-scale circulations the GCM does resolve. Since parameterizations are different, and given the large number of parameterized processes, it is no wonder GCMs get different results. The fact that the results are not more different is a testament to our current level of understanding of climate processes. Imagine taking a large number of GCMs and running them all the same way. For example, the IPCC had modeling centers around the world run the same experiments (basically increasing CO2 concentration) to compare each model's climate response. Because the models are built differently, and in some cases there are very fundamental differences between models, the results vary from model to model. If we just take the climate response, for example the change in surface air temperature for a given change in radiative forcing, from each model and find the average and standard deviation, that gives us an estimate of the "uncertainty" in the climate response. This is done in lieu of a real experiment because we only have one Earth, and definitely not enough time to run so many experiments! The above method gives a measure of expected climate response based on very different models. Another method is to use the same GCM, but slightly change parameters or even parameterizations to determine the strength of different processes. As a simple example, imagine that some GCM uses a single value for the albedo (reflectivity) of stratocumulus clouds. If the GCM is run ten times, each with a different value for that parameter, the results of a climate change experiment will change. How much the results differ will determine that GCM's sensitivity to stratocumulus albedo.
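The perturbed-parameter experiment just described can be sketched in a few lines. Here `climate_response` is a deliberately toy stand-in for a full GCM run (the formula and numbers are invented for illustration only); the point is the ensemble bookkeeping:

```python
import statistics

def climate_response(albedo, forcing=3.7, feedback=1.2):
    """Toy stand-in for one GCM run: 'equilibrium warming' as a
    function of a stratocumulus-albedo parameter. The formula is
    invented, used only to illustrate the ensemble machinery."""
    return forcing / (feedback + 2.0 * (albedo - 0.5))

def perturbed_parameter_ensemble(albedos):
    """Run the model once per parameter value; the mean and standard
    deviation of the responses estimate the parametric uncertainty."""
    responses = [climate_response(a) for a in albedos]
    return statistics.mean(responses), statistics.stdev(responses)
```

Running with several albedo values between 0.4 and 0.6 gives a spread whose standard deviation measures this one (toy) model's sensitivity to stratocumulus albedo, the quantity discussed above.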
This gives another measure of "uncertainty," since that model assumes there is only one value of the albedo, which may not be true in the real atmosphere. The distributed computing project ClimatePrediction.net uses such a methodology to study processes important to climate sensitivity. Another answer to our original question is that the system is not perfectly predictable. The climate system is chaotic, or at least "sensitive to initial data." This just means that we know the equations that govern fluid motion, and we have a pretty good idea of the physical processes that need to be included, but the system of equations has many solutions, and even if the system is perfectly deterministic (no random fluctuations), unless we also perfectly know the initial conditions, we may not get the right answer. In fact, in chaotic systems it has been shown that arbitrarily small errors in the initial conditions can give wildly different results after some amount of time. While the case of Earth's climate is unlikely to be that sensitive, it does mean that we shouldn't expect a perfect long-term (greater than 2 weeks) weather forecast to be on the local television station any time soon (or ever). Note that the science of climate prediction differs fundamentally from that of weather prediction. In weather prediction, the sensitivity to initial conditions is a basic limitation, as perfect knowledge of initial conditions is impossible. Climate prediction is not sensitive to initial conditions in the same way; the problem changes from an initial-value problem to a boundary-value problem. ^ M. Satoh, T. Matsuno, H. Tomita, H. Miura, T. Nasuno and S. Iga, "Nonhydrostatic icosahedral atmospheric model (NICAM) for global cloud resolving simulations," Journal of Computational Physics, Volume 227, Issue 7, 20 March 2008, Pages 3486-3514. Retrieved from "https://en.wikibooks.org/w/index.php?title=Climate_Change/Science/Climate_Modeling&oldid=2064082"
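The sensitivity to initial data discussed in the chapter is easy to demonstrate with a toy chaotic system. The logistic map below is of course not a climate model; it only illustrates how an initial error of 10⁻¹⁰ grows until two trajectories become uncorrelated:

```python
def logistic_trajectory(x0, r=4.0, n=50):
    """Iterate the logistic map x -> r*x*(1-x), a standard toy
    example of a chaotic (initial-condition-sensitive) system."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)  # perturbed initial condition
```

The two trajectories start indistinguishably close, yet after a few dozen iterations the offset has been amplified enormously, the same mechanism that limits weather forecasts to about two weeks.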
Redox (reduction–oxidation, /ˈrɛdɒks/ RED-oks, /ˈriːdɒks/ REE-doks[2]) is a type of chemical reaction in which the oxidation states of the reactants change.[3] Example of a reduction–oxidation reaction between sodium and chlorine, with the OIL RIG mnemonic[1] "Redox" is a combination of the words "reduction" and "oxidation". The term "redox" was first used in 1928.[5] The processes of oxidation and reduction occur simultaneously and cannot occur independently.[4] In redox processes, the reductant transfers electrons to the oxidant. Thus, in the reaction, the reductant or reducing agent loses electrons and is oxidized, and the oxidant or oxidizing agent gains electrons and is reduced. The pair of an oxidizing and reducing agent that is involved in a particular reaction is called a redox pair. A redox couple is a reducing species and its corresponding oxidizing form,[6] e.g., Fe2+ / Fe3+. The oxidation alone and the reduction alone are each called a half-reaction because two half-reactions always occur together to form a whole reaction. Oxidants Oxidation originally implied reaction with oxygen to form an oxide. Later, the term was expanded to encompass oxygen-like substances that accomplished parallel chemical reactions. Ultimately, the meaning was generalized to include all processes involving the loss of electrons. Substances that have the ability to oxidize other substances (cause them to lose electrons) are said to be oxidative or oxidizing, and are known as oxidizing agents, oxidants, or oxidizers. The oxidant (oxidizing agent) removes electrons from another substance, and is thus itself reduced. And, because it "accepts" electrons, the oxidizing agent is also called an electron acceptor.
Oxidants are usually chemical substances with elements in high oxidation states (e.g., H2O2, MnO4−, CrO3, Cr2O72−, OsO4), or else highly electronegative elements (O2, F2, Cl2, Br2) that can gain extra electrons by oxidizing another substance.[citation needed] Main article: Oxidizing agent Reducers Main article: Reducing agent Rates, mechanisms, and energies Standard electrode potentials (reduction potentials) Each half-reaction has a standard electrode potential Eo red, the potential when the half-reaction takes place at a cathode, measured relative to the standard hydrogen half-reaction 1⁄2 H2 → H+ + e−. The reduction potential is a measure of the tendency of the oxidizing agent to be reduced. Its value is zero for H+ + e− → 1⁄2 H2 by definition, positive for oxidizing agents stronger than H+ (e.g., +2.866 V for F2) and negative for oxidizing agents that are weaker than H+ (e.g., −0.763 V for Zn2+).[13] The cell potential is Eo cell = Eo cathode − Eo anode, and the oxidation potential of a half-reaction is Eo ox = −Eo red. Examples of redox reactions Metal displacement {\displaystyle {\ce {2NO3- + 10e- + 12H+ -> N2 + 6 H2O}}} Corrosion and rusting {\displaystyle {\ce {4Fe + 3O2 -> 2Fe2O3}}} {\displaystyle {\ce {Fe^{2+}->{Fe^{3+}}+e-}}} {\displaystyle {\ce {H2O2 + 2e- -> 2OH-}}} {\displaystyle {\ce {{2Fe^{2+}}+{H2O2}+2H+->{2Fe^{3+}}+2H2O}}} Disproportionation Redox reactions in industry Cathodic protection is a technique used to control the corrosion of a metal surface by making it the cathode of an electrochemical cell. A simple method of protection connects the protected metal to a more easily corroded "sacrificial anode", which acts as the anode. The sacrificial metal, rather than the protected metal, then corrodes. A common application of cathodic protection is in galvanized steel, in which a sacrificial coating of zinc on steel parts protects them from rust.[citation needed] Redox reactions are the foundation of electrochemical cells, which can generate electrical energy or support electrosynthesis.
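The sign conventions for electrode potentials can be captured in a few lines. The potentials below are the values quoted in this section; the couple labels are our own shorthand:

```python
# Standard reduction potentials (V) vs. the standard hydrogen electrode,
# whose H+ + e- -> 1/2 H2 half-reaction is defined as 0 V.
E_RED = {"F2/F-": 2.866, "H+/H2": 0.0, "Zn2+/Zn": -0.763}

def cell_potential(cathode, anode):
    """Eo_cell = Eo_cathode - Eo_anode; a positive result means the
    overall redox reaction is spontaneous as written."""
    return E_RED[cathode] - E_RED[anode]
```

Pairing the strong oxidant F2 (cathode) with the Zn2+/Zn couple (anode) gives Eo_cell = 2.866 − (−0.763) = 3.629 V.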
Metal ores often contain metals in oxidized states such as oxides or sulfides, from which the pure metals are extracted by smelting at high temperature in the presence of a reducing agent. The process of electroplating uses redox reactions to coat objects with a thin layer of a material, as in chrome-plated automotive parts, silver-plated cutlery, galvanized steel and gold-plated jewelry.[citation needed] Redox reactions in biology Redox cycling Redox reactions in geology Redox reactions in soils Main article: List of chemistry mnemonics "OIL RIG" — oxidation is loss of electrons, reduction is gain of electrons[20][21][22][23] "LEO the lion says GER [grr]" — loss of electrons is oxidation, gain of electrons is reduction[20][21][22][23] "LEORA says GEROA" — the loss of electrons is called oxidation (reducing agent); the gain of electrons is called reduction (oxidizing agent).[22] "PANIC" — Positive Anode and Negative Is Cathode. This applies to electrolytic cells, which release stored electricity and can be recharged with electricity. PANIC does not apply to cells that can be recharged with redox materials. These galvanic or voltaic cells, such as fuel cells, produce electricity from internal redox reactions. Here, the positive electrode is the cathode and the negative is the anode. ^ "redox – definition of redox in English | Oxford Dictionaries". Oxford Dictionaries | English. Archived from the original on October 1, 2017. Retrieved May 15, 2017. ^ "Redox Reactions". wiley.com. Archived from the original on May 30, 2012. Retrieved May 9, 2012. ^ Pingarrón, José M.; Labuda, Ján; Barek, Jiří; Brett, Christopher M. A.; Camões, Maria Filomena; Fojta, Miroslav; Hibbert, D. Brynn (2020). "Terminology of electrochemical methods of analysis (IUPAC Recommendations 2019)". Pure and Applied Chemistry. 92 (4): 641–694. doi:10.1515/pac-2018-0109. ^ Hudlický, Miloš (1990). Oxidations in Organic Chemistry. Washington, D.C.: American Chemical Society. p. 456.
ISBN 978-0-8412-1780-5. ^ a b Schmidt-Rohr, K. (2018). "How Batteries Store and Release Energy: Explaining Basic Electrochemistry". J. Chem. Educ. 95 (10): 1801–1810. Bibcode:2018JChEd..95.1801S. doi:10.1021/acs.jchemed.8b00479. ^ Schmidt-Rohr, K. (2015). "Why Combustions Are Always Exothermic, Yielding About 418 kJ per Mole of O2". J. Chem. Educ. 92 (12): 2094–2099. Bibcode:2015JChEd..92.2094S. doi:10.1021/acs.jchemed.5b00333. ^ "Titles of Volumes 1–44 in the Metal Ions in Biological Systems Series". Metals, Microbes, and Minerals - the Biogeochemical Side of Life. De Gruyter. 2021. pp. xxiii–xxiv. doi:10.1515/9783110589771-005. ISBN 9783110588903. S2CID 242013948. ^ Ponnamperuma, F.N. (1992). "The chemistry of submerged soils". Advances in Agronomy. 24: 29–96. doi:10.1016/S0065-2113(08)60633-1. ISBN 9780120007240. ^ Bartlett, R.J.; James, Bruce R. (1991). "Redox chemistry of soils". Advances in Agronomy. 39: 151–208. Retrieved from "https://en.wikipedia.org/w/index.php?title=Redox&oldid=1088749043"
Formal language - formulasearchengine The first formal language is thought to be the one used by Gottlob Frege in his Begriffsschrift (1879), literally meaning "concept writing", which Frege described as a "formal language of pure thought."[2] The following rules describe a formal language L over the alphabet Σ = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, +, = }: Every nonempty string that does not contain "+" or "=" and does not start with "0" is in L. The string "0" is in L. A string containing "=" is in L if and only if there is exactly one "=", and it separates two valid strings of L. A string containing "+" but not "=" is in L if and only if every "+" in the string separates two valid strings of L. No string is in L other than those implied by the previous rules. Under these rules, the string "23+4=555" is in L, but the string "=234=+" is not. This formal language expresses natural numbers, well-formed addition statements, and well-formed addition equalities, but it expresses only what they look like (their syntax), not what they mean (semantics). For instance, nowhere in these rules is there any indication that "0" means the number zero, or that "+" means addition. For finite languages one can explicitly enumerate all well-formed words. For example, we can describe a language L as just L = {"a", "b", "ab", "cba"}. The degenerate case of this construction is the empty language, which contains no words at all (L = ∅). However, even over a finite (non-empty) alphabet such as Σ = {a, b} there are infinitely many words: "a", "abb", "ababba", "aaababbbbaab", …. Therefore formal languages are typically infinite, and describing an infinite formal language is not as simple as writing L = {"a", "b", "ab", "cba"}.
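The five rules can be transcribed directly into a short membership test; this sketch (the function names are ours) treats numbers, sums, and equalities exactly as the rules do:

```python
def in_language(s):
    """Membership test for the example language over {0-9, +, =}."""
    def is_number(t):
        # rules 1 and 2: digits only, no leading "0" unless t is exactly "0"
        return t.isdigit() and (t == "0" or not t.startswith("0"))

    def is_term(t):
        # a number, or a "+"-separated sequence of numbers (rule 4)
        return bool(t) and all(is_number(p) for p in t.split("+"))

    if "=" in s:
        # rule 3: exactly one "=", separating two valid strings of L
        left, _, right = s.partition("=")
        return "=" not in right and is_term(left) and is_term(right)
    return is_term(s)
```

As the text notes, "23+4=555" is accepted even though it is false as arithmetic: the language fixes syntax, not semantics.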
Here are some examples of formal languages: L = Σ*, the set of all words over Σ; L = {"a"}* = {"a"n}, where n ranges over the natural numbers and "a"n means "a" repeated n times (this is the set of words consisting only of the symbol "a"). Closure properties of language families (whether the result of the operation on {\displaystyle L_{1}} and {\displaystyle L_{2}} is in the language family given by the column, when {\displaystyle L_{1}} and {\displaystyle L_{2}} are; after Hopcroft and Ullman): Union {\displaystyle \{w|w\in L_{1}\lor w\in L_{2}\}}: Yes No Yes Yes Yes Yes Yes. Intersection {\displaystyle \{w|w\in L_{1}\land w\in L_{2}\}}: Yes No No No Yes Yes Yes. Complement {\displaystyle \{w|w\not \in L_{1}\}}: Yes Yes No No Yes Yes No. Concatenation {\displaystyle L_{1}\cdot L_{2}=\{w\cdot z|w\in L_{1}\land z\in L_{2}\}}. Kleene star {\displaystyle L_{1}^{*}=\{\epsilon \}\cup \{w\cdot z|w\in L_{1}\land z\in L_{1}^{*}\}}. Reversal {\displaystyle \{w^{R}|w\in L\}}. Intersection with a regular language {\displaystyle \{w|w\in L_{1}\land w\in R\},R{\text{ regular}}}: Yes Yes Yes Yes Yes Yes Yes. A formal system (also called a logical calculus, or a logical system) consists of a formal language together with a deductive apparatus (also called a deductive system). The deductive apparatus may consist of a set of transformation rules, which may be interpreted as valid rules of inference, or a set of axioms, or both. A formal system is used to derive one expression from one or more other expressions. Although a formal language can be identified with its formulas, a formal system cannot be likewise identified by its theorems. Two formal systems {\displaystyle {\mathcal {FS}}} and {\displaystyle {\mathcal {FS'}}} may have all the same theorems and yet differ in some significant proof-theoretic way (a formula A may be a syntactic consequence of a formula B in one but not the other, for instance). ↑ Hopcroft and Ullman, Chapter 11: Closure properties of families of languages.
Retrieved from "https://en.formulasearchengine.com/index.php?title=Formal_language&oldid=219514"
Mixed tensor - Knowpia In tensor analysis, a mixed tensor is a tensor which is neither strictly covariant nor strictly contravariant; at least one of the indices of a mixed tensor will be a subscript (covariant) and at least one of the indices will be a superscript (contravariant). A mixed tensor of type or valence {\displaystyle \scriptstyle {\binom {M}{N}}}, also written "type (M, N)", with both M > 0 and N > 0, is a tensor which has M contravariant indices and N covariant indices. Such a tensor can be defined as a linear function which maps an (M + N)-tuple of M one-forms and N vectors to a scalar. Changing the tensor type Consider the following octet of related tensors: {\displaystyle T_{\alpha \beta \gamma },\ T_{\alpha \beta }{}^{\gamma },\ T_{\alpha }{}^{\beta }{}_{\gamma },\ T_{\alpha }{}^{\beta \gamma },\ T^{\alpha }{}_{\beta \gamma },\ T^{\alpha }{}_{\beta }{}^{\gamma },\ T^{\alpha \beta }{}_{\gamma },\ T^{\alpha \beta \gamma }} The first one is covariant, the last one contravariant, and the remaining ones mixed. Notationally, these tensors differ from each other by the covariance/contravariance of their indices. A given contravariant index of a tensor can be lowered using the metric tensor gμν, and a given covariant index can be raised using the inverse metric tensor gμν. Thus, gμν could be called the index lowering operator and gμν the index raising operator. Generally, the covariant metric tensor, contracted with a tensor of type (M, N), yields a tensor of type (M − 1, N + 1), whereas its contravariant inverse, contracted with a tensor of type (M, N), yields a tensor of type (M + 1, N − 1).
As an example, a mixed tensor of type (1, 2) can be obtained by raising an index of a covariant tensor of type (0, 3): {\displaystyle T_{\alpha \beta }{}^{\lambda }=T_{\alpha \beta \gamma }\,g^{\gamma \lambda }} where {\displaystyle T_{\alpha \beta }{}^{\lambda }} is the same tensor as {\displaystyle T_{\alpha \beta }{}^{\gamma }}, because {\displaystyle T_{\alpha \beta }{}^{\lambda }\,\delta _{\lambda }{}^{\gamma }=T_{\alpha \beta }{}^{\gamma }} with the Kronecker δ acting here like an identity matrix. Similarly, {\displaystyle T_{\alpha }{}^{\lambda }{}_{\gamma }=T_{\alpha \beta \gamma }\,g^{\beta \lambda },} {\displaystyle T_{\alpha }{}^{\lambda \epsilon }=T_{\alpha \beta \gamma }\,g^{\beta \lambda }\,g^{\gamma \epsilon },} {\displaystyle T^{\alpha \beta }{}_{\gamma }=g_{\gamma \lambda }\,T^{\alpha \beta \lambda },} {\displaystyle T^{\alpha }{}_{\lambda \epsilon }=g_{\lambda \beta }\,g_{\epsilon \gamma }\,T^{\alpha \beta \gamma }.} Raising an index of the metric tensor is equivalent to contracting it with its inverse, yielding the Kronecker delta, {\displaystyle g^{\mu \lambda }\,g_{\lambda \nu }=g^{\mu }{}_{\nu }=\delta ^{\mu }{}_{\nu }} so any mixed version of the metric tensor will be equal to the Kronecker delta, which will also be mixed. Wheeler, J.A.; Misner, C.; Thorne, K.S. (1973). "§3.5 Working with Tensors". Gravitation. W.H. Freeman & Co. pp. 85–86. ISBN 0-7167-0344-0. Index Gymnastics, Wolfram Alpha
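These rules can be checked concretely with a toy diagonal metric in two dimensions; the values and function names below are chosen only for illustration:

```python
def raise_last_index(T, g_inv):
    """T_{ab}^{l} = sum_g T_{abg} g^{gl}: contract the last covariant
    index of a type-(0,3) tensor with the inverse metric."""
    n = len(g_inv)
    return [[[sum(T[a][b][g] * g_inv[g][l] for g in range(n))
              for l in range(n)] for b in range(n)] for a in range(n)]

# Toy metric g_{mu nu} = diag(1, -1); its inverse happens to equal itself.
g = [[1.0, 0.0], [0.0, -1.0]]
g_inv = [[1.0, 0.0], [0.0, -1.0]]

# g^{mu lambda} g_{lambda nu} = delta^{mu}_{nu}, the Kronecker delta:
delta = [[sum(g_inv[m][l] * g[l][n] for l in range(2)) for n in range(2)]
         for m in range(2)]
```

With this metric, raising the last index leaves the 0-component of T alone and flips the sign of the 1-component, and `delta` comes out as the identity matrix, as the last equation above states.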
Energy phantoms - Wikiversity Energies Main articles: Physics/Energies and Energies Def. a quantity that denotes the ability to do work and is measured in a unit dimensioned in mass × distance²/time² (ML²/T²) or the equivalent is called energy. Def. a physical quantity that denotes the ability to push, pull, twist or accelerate a body and is measured in a unit dimensioned in mass × distance/time² (ML/T²), SI: newton (N), CGS: dyne (dyn), is called force. Newton's law of gravitation gives the force between two masses m1 and m2 separated by a distance r as {\displaystyle F=G{\frac {m_{1}m_{2}}{r^{2}}},\ } where the gravitational constant G is expressed in N m2 kg−2.[2] Coulomb's law gives the force {\displaystyle F} between a charge {\displaystyle q} at position {\displaystyle r_{q}} and a charge {\displaystyle Q} at position {\displaystyle r_{Q}}, separated by a distance {\displaystyle r}, as {\displaystyle F={qQ \over 4\pi \varepsilon _{0}}{1 \over {r^{2}}},} where {\displaystyle \varepsilon _{0}} is the vacuum permittivity; in a medium the Coulomb constant is {\displaystyle k_{e}=1/(4\pi \varepsilon _{0}\varepsilon ),} with {\displaystyle \varepsilon } the relative permittivity. The energy {\displaystyle E} needed to move a body a distance {\displaystyle D} against a force {\displaystyle F} is {\displaystyle E=F\cdot D.} Unknown forces Newton's second law {\displaystyle F=ma} relates the force {\displaystyle F}, the mass {\displaystyle m}, and the acceleration {\displaystyle a}, with {\displaystyle {1~{\rm {N}}=1~{\rm {kg}}{\frac {\rm {m}}{{\rm {s}}^{2}}}}.} Dimensionally, {\displaystyle {\mathsf {F}}={\frac {\mathsf {ML}}{{\mathsf {T}}^{2}}},} {\displaystyle {\mathsf {A}}={\mathsf {F/M}}={\frac {\mathsf {L}}{{\mathsf {T}}^{2}}},} {\displaystyle {\mathsf {A}}={\mathsf {F/Q}}={\frac {\mathsf {L}}{{\mathsf {T}}^{2}}},} {\displaystyle {\mathsf {P}}={\mathsf {E/M}}={\frac {{\mathsf {L}}^{2}}{{\mathsf {T}}^{2}}},} and {\displaystyle {\mathsf {P}}={\mathsf {E/Q}}={\frac {{\mathsf {L}}^{2}}{{\mathsf {T}}^{2}}},} where {\displaystyle {\mathsf {P}}} denotes a potential. c. The image at SIMBAD for the star at left, also called Gliese 35, shows a split color image of the star (assume the position of the star is between the two color spots), where the image is 2.4' x 2.4'. The star is not at the coordinates but along the line of travel. If the star slowed down between these two images, what is its speed in the SIMBAD image?
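The two force laws can be evaluated numerically. The constant values below are standard CODATA figures, inserted here for illustration (the page cites CODATA but does not print the numbers):

```python
import math

G = 6.674e-11            # gravitational constant, N m^2 kg^-2 (CODATA)
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m (CODATA)

def gravitational_force(m1, m2, r):
    """Newton: F = G m1 m2 / r^2, in newtons."""
    return G * m1 * m2 / r**2

def coulomb_force(q, Q, r):
    """Coulomb: F = q Q / (4 pi eps0 r^2), in newtons."""
    return q * Q / (4.0 * math.pi * EPS0 * r**2)
```

Two 1 kg masses 1 m apart attract with about 6.7 × 10⁻¹¹ N, while two 1 μC charges at the same distance repel with about 9.0 × 10⁻³ N, a quick illustration of the relative strengths of the two interactions.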
In astronomy, observations usually consist of detection of radiation followed by analysis, which usually includes assumptions about the forces and fields observed. The four known forces or interactions all stem from an electromagnetic type source that manifests itself at varying intensities depending on the collection of particles. ↑ Proposition 75, Theorem 35: p. 956 - I. Bernard Cohen and Anne Whitman, translators: Isaac Newton, The Principia: Mathematical Principles of Natural Philosophy. Preceded by A Guide to Newton's Principia, by I. Bernard Cohen. University of California Press 1999 ISBN 0-520-08816-6 ISBN 0-520-08817-4 ↑ "CODATA2006". ↑ F. Van Leeuwen (November 1, 2007). "Validation of the new Hipparcos reduction". Astronomy & Astrophysics 474 (2): 653-64. doi:10.1051/0004-6361:20078357. http://adsabs.harvard.edu/abs/2007A%26A...474..653V. Retrieved 2013-12-05. Retrieved from "https://en.wikiversity.org/w/index.php?title=Energy_phantoms&oldid=1967032"
1 School of Sciences, Universidad Nacional Agraria La Molina (UNALM), Lima, Peru. 2 School of Chemical Engineering and Textile (FIQT), Universidad Nacional de Ingeniería (UNI), Rímac, Lima, Peru. 3 School of Environmental Engineering, Universidad Nacional Tecnológica de Lima Sur (UNTELS), Villa El Salvador, Lima, Peru. 4 Department of Atmospheric Sciences at the Institute of Astronomy, Geophysics and Atmospheric Sciences (IAG), University of São Paulo, São Paulo, Brazil. 5 Centro Meteorológico Provincial Santa Clara, Cuba. 6 Instituto Geofísico del Perú (IGP), Calle Badajoz N° 169, Urb. Mayorazgo IV Etapa, Ate, Lima, Peru.

Abstract: Lima is the capital of the Republic of Peru. It is the most important city in the country and, like other Latin American metropolises, has multiple problems, including air pollution caused by particulate matter above air quality standards, emitted by 1.6 million vehicles. The "on-line" coupled meteorology and chemistry model WRF/Chem (Weather Research and Forecasting with Chemistry) has been applied to the Lima Metropolitan Area and validated against ground-level data from ten air quality stations of the National Service of Meteorology and Hydrology for the year 2016. The goal of this study was to estimate the concentration of PM2.5 particulate matter in the months of February and July of 2016. In both months, the model satisfactorily predicts temperature and relative humidity. The average observed PM2.5 concentrations in July are higher than in February, probably because the relative humidity in July is greater than in February. In February and July the standard deviations of the model exceed the observed standard deviations by factors of 2.4 and 3.7, respectively, indicating a greater dispersion in the model data.
In July, the model captures the characteristics of transport and shows characteristic peaks during rush hours; therefore, the model estimates transport behavior better in July than in February. Air quality is strongly influenced by vehicular transport. The PM2.5 particulate matter had an average bias that varied from −13.2 to 4.4 μg/m³ in February and from −9.63 to 11.65 μg/m³ in July, and a normalized average bias that varied from −0.68 to 0.43 in February and from −0.46 to 0.48 in July.

Keywords: Air Quality, Aerosol, WRF/Chem Model, PM2.5, Lima-Peru

The evaluation statistics used are the mean bias (MB), the normalized mean bias (NMB) and the root mean square error (RMSE), where Yp_i are the model predictions and Xo_i the observations:

\text{MB}=\frac{1}{N}\sum_{i=1}^{N}\left(Yp_{i}-Xo_{i}\right)

\text{NMB}=\frac{1}{N}\sum_{i=1}^{N}\left(\frac{Yp_{i}-Xo_{i}}{Xo_{i}}\right)

\text{RMSE}=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(Yp_{i}-Xo_{i}\right)^{2}}

The linear fits Xo_{i}=a+b\,Yp_{i} and Ynp_{i}=a+b\,Yp_{i} were also used.

Cite this paper: Reátegui-Romero, W., Sánchez-Ccoyllo, O., Andrade, M., Moya-Alvarez, A. (2018) PM2.5 Estimation with the WRF/Chem Model, Produced by Vehicular Flow in the Lima Metropolitan Area. Open Journal of Air Pollution, 7, 215-243. doi: 10.4236/ojap.2018.73011.
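The model-evaluation statistics above can be computed in a few lines. This is a minimal illustration; the sample PM2.5 values below are made up and are not taken from the paper:

```python
import math

def mean_bias(yp, xo):
    # MB = (1/N) * sum(Yp_i - Xo_i)
    return sum(p - o for p, o in zip(yp, xo)) / len(yp)

def normalized_mean_bias(yp, xo):
    # NMB = (1/N) * sum((Yp_i - Xo_i) / Xo_i)
    return sum((p - o) / o for p, o in zip(yp, xo)) / len(yp)

def rmse(yp, xo):
    # RMSE = sqrt((1/N) * sum((Yp_i - Xo_i)^2))
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(yp, xo)) / len(yp))

# Hypothetical predicted and observed PM2.5 values (ug/m3)
yp = [12.0, 18.0, 25.0, 30.0]
xo = [10.0, 20.0, 24.0, 28.0]
print(mean_bias(yp, xo))                      # 0.75
print(round(normalized_mean_bias(yp, xo), 4))
print(round(rmse(yp, xo), 4))
```

A positive MB means the model overestimates on average; NMB weights each error by the observation, so low observed concentrations dominate it.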
Write an equation to represent this situation and answer the question. Use the 5-D Process to help you define a variable and write an equation if necessary. Ella is trying to determine the side lengths of a triangle. She knows that the longest side is three times longer than the shortest side. The medium side is ten more than twice the shortest side. If the perimeter is 142 cm, how long is each side? Let x be the length of the shortest side. How do the other sides relate to the shortest side, x? The medium side is 10 + 2x and the longest side is 3x. Sum the lengths of the sides and set the sum equal to the perimeter in order to solve for x: x + (10 + 2x) + 3x = 142, so 6x + 10 = 142 and x = 22. The sides are 22 cm, 54 cm and 66 cm.
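The algebra above can be checked mechanically; this is a throwaway sanity check, not part of the original exercise:

```python
# Solve x + (10 + 2x) + 3x = 142 for the shortest side x
perimeter = 142
x = (perimeter - 10) / 6          # from 6x + 10 = 142
sides = [x, 10 + 2 * x, 3 * x]    # shortest, medium, longest
print(sides)                      # [22.0, 54.0, 66.0]
assert sum(sides) == perimeter
```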
Introduction to Robotics/Robotics and BoeBots/Quiz/Teachers - Wikiversity

Robotics and BoeBots: Quiz (For Teachers)

In 1 second: 1 s = x × 2 μs, and 1 s = 1 × 10⁶ μs, so

x = 10⁶ / 2 = 5 × 10⁵ = 500,000.

1 cm = 1 × 10⁻² m and 1 Gm = 1 × 10⁹ m, so

10⁹ / 10⁻² = 10¹¹ cm per Gm.
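Both conversions above can be checked numerically (a throwaway sketch; integer and power arithmetic keep the results exact):

```python
# Pulses of 2 microseconds in one second: 1 s = 10^6 us
pulses = 10**6 // 2
print(pulses)              # 500000

# Centimetres per gigametre: 1 Gm = 10^9 m, 1 cm = 10^-2 m
cm_per_gm = 10.0 ** (9 - (-2))
print(cm_per_gm)           # 1e+11
```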
analytic_extension - Maple Help

return the definition of the analytic extension of a given mathematical function

Calling Sequence: FunctionAdvisor(analytic_extension, math_function)

Parameters: math_function - Maple name of a mathematical function; 'analytic_extension' - literal name

The FunctionAdvisor(analytic_extension, math_function) command returns the definition of the analytic extension used by Maple to extend the function outside the classical domain, typically to the entire complex plane. Note: For most functions the domain of the classical definition is the entire complex plane. If the requested information is not available, the FunctionAdvisor command returns NULL.

Examples:

FunctionAdvisor(definition, GAMMA(z))

  GAMMA(z) = Int(_k1^(z-1) * exp(-_k1), _k1 = 0 .. infinity),  0 < Re(z)

FunctionAdvisor(analytic_extension, GAMMA)

  GAMMA(z) = Pi / (sin(Pi*z) * GAMMA(1-z)),  Re(z) < 0
What is the fastest algorithm for finding all shortest paths in a sparse graph?

In an unweighted, undirected graph with V vertices and E edges such that 2V > E, what is the fastest way to find all shortest paths in the graph? Can it be done faster than Floyd-Warshall, which is O(V³) but very fast per iteration? How about if the graph is weighted?

Since this is an unweighted graph, you could run a Breadth First Search (BFS) from every vertex v in the graph. Each run of BFS gives you the shortest distances (and paths) from the starting vertex to every other vertex. The time complexity of one BFS is O(V + E) = O(V), since E = O(V) in your sparse graph. Running it V times gives you an O(V²) algorithm overall.

For a weighted directed graph, Johnson's algorithm, as suggested by Yuval, is the fastest for sparse graphs. It takes O(V² log V + VE), which in your case turns out to be O(V² log V). For a weighted undirected graph, you could either run Dijkstra's algorithm from each node, or replace each undirected edge with two opposite directed edges and run Johnson's algorithm. Both of these give the same asymptotic times as Johnson's algorithm above for your sparse case. Also note that the BFS approach mentioned above works for both directed and undirected graphs.

Source: Link, Question Author: Jakob Weisblat, Answer Author: Juho
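The BFS-from-every-vertex idea can be sketched as follows. This is a minimal illustration on an adjacency-list graph; the example graph is made up:

```python
from collections import deque

def all_pairs_shortest_paths(adj):
    """BFS from every vertex of an unweighted graph.

    adj: dict mapping each vertex to a list of neighbours.
    Returns dist with dist[u][v] = length of a shortest u-v path.
    Total cost is O(V * (V + E)), i.e. O(V^2) when E = O(V).
    """
    dist = {}
    for source in adj:
        d = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in d:          # first visit = shortest distance
                    d[v] = d[u] + 1
                    queue.append(v)
        dist[source] = d
    return dist

# Small undirected example: path 0-1-2-3 plus a chord 0-2
g = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(all_pairs_shortest_paths(g)[0])   # {0: 0, 1: 1, 2: 1, 3: 2}
```

Each source's BFS visits every vertex at most once, which is where the O(V + E) per-source bound comes from.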
Plot multiple functions - Maple Help

Plot Multiple Functions?

Plotting Lists of Expressions

The following examples with the plot command provide a list of expressions to plot. Note: In Maple, a list is denoted using [ ] square brackets.

Example: Plotting Sine and Cosine on the Same Plot

plot([sin, cos])

Example: Plotting x, x^2 with Custom Colors

plot([x, x^2], x = 0..2, color = ["DarkViolet", "Red"])

Example: Plotting a Sequence

plot([seq(BesselJ(n, x), n = 1..4)], x = 0..2*Pi, filled)

Example: Plotting Multiple 3-D Plots

This also works for 3-D plots:

plot3d([x, sin(x)*y], x = -2*Pi..2*Pi, y = -5..5)

Plotting Multiple Plot Structures

You can also combine multiple plot structures and display them together using the plots:-display command:

Example: Plotting Multiple Plot Structures Together

Type the following:

plot1 := plot(sin):
plot2 := plot(cos, color = "Niagara Green"):
plots:-display([plot1, plot2])

The plots:-display command also works on 3-D plots:

plot3 := plot3d(sin(x)*y, x = -2*Pi..2*Pi, y = -5..5):
plot4 := plot3d(sqrt(y), x = -2*Pi..2*Pi, y = -5..5):
plots:-display([plot3, plot4])

You can also combine multiple plots using drag and drop.
Example: Combining the Plots for sin(x) and sin(x/2)

Type the following: sin(x)

Click sin(x) and, from the context panel, choose Plots > 2-D Plot.

Type the following: sin(x/2)

Drag the expression sin(x/2) onto the existing plot of sin(x).

Setting Curve Colors
Data Plots - Maple 2015 Updates

There is a new dataplot command for plotting numerical data. It is similar in scope to the plot and plot3d commands but is specifically designed for displaying data. It encompasses both 2-D and 3-D plots. The dataplot command is also available in the right-click context-sensitive "Plots" submenu.

Many Different Plots with One Command
New Intuitive Calling Sequences and Support for Different Data Types
More Options for 2-D Point Plots

With the dataplot command, you can generate a large variety of plots by simply specifying the type of plot you want. As a shortcut, you can also select Plots > Data Plot from the context menu by right-clicking on the following matrix in order to choose the type of data plot that you want to show.

M := Matrix([seq([seq(exp(-(x^2 + y^2)/100), x = -10.0 .. 10.0)], y = -10.0 .. 10.0)], datatype = float[8])

M is a 21 x 21 Matrix (Data Type: float[8], Storage: rectangular, Order: Fortran_order).

dataplot(M, surface)
dataplot(M, contour3d, color = "DarkBlue")
dataplot(M, density, colorscheme = ["Blue", "Green"])

The dataplot command allows several calling sequences, making it easier to generate plots without having to transform your data into the right format.
In addition to the calling sequence shown in the examples earlier, two more are available. Notice also that the data can be provided as a list, Vector, Matrix, or Array.

X := Vector([1, 2, 4, 7, 10, 12], datatype = float[8]):
Y1 := Vector([1, 2.5, 5, 6, 8, 7], datatype = float[8]):
Y2 := Vector([2, 4, 6, 9, 10, 12], datatype = float[8]):
Y3 := Vector([4, 5, 7, 9.5, 11, 13.5], datatype = float[8]):

This calling sequence for 2-D point plots makes it easy to plot different sets of y-values against a single set of x-values.

dataplot(X, [Y1, Y2, Y3]);

This calling sequence for 3-D surfaces allows you to adjust the x- and y-values associated with a grid of z-values.

M := Matrix([seq([seq(exp(-(x^2 + y^2)/100), x = -10.0 .. 10.0)], y = -10.0 .. 10.0)], datatype = float[8]):
dataplot([seq(i^2, i = 0..20)], 1..5, M);

A number of options available with the dataplot command allow you to change the look of 2-D point plots. Animate a point plot. In order to view the animation, right-click and choose Animation > Play.
dataplot(X, [Y1, Y2, Y3], animation)

The dataplot command automatically assigns colors to different datasets, but if you specify a single color, symbols are used to differentiate the datasets.

dataplot(X, [Y1, Y2, Y3], color = "Niagara Purple", symbolsize = 25)

The colorpalette option changes the color palette from which the default colors are chosen.

dataplot(X, [Y1, Y2, Y3], colorpalette = "Dalton");

The dataplot command allows you to generate a variety of statistical plots and to visualize Quandl datasets. Statistical plots such as bar charts and area charts are available.

dataplot([Y1, Y2, Y3], bar)
dataplot([Y1, Y2, Y3], areachart, color = ["MediumTurquoise", "MediumOrchid", "SaddleBrown"])

Quandl datasets can also be plotted. The following plot shows the population of Canada:

ref := DataSets:-Quandl:-Reference("FRED/CANPOPL"):
dataplot(ref, color = "Red");
Conditional on human-level AI by 2039, will there be by the end of 2019 an agent at least as good as AlphaStar using non-controversial, human-like APM restrictions? | Forecasting AI

There is also an unconditional version of this question. Asking about both allows us to properly update on the event, by multiplying our prior P(human-level AI) by the normalised strength of evidence to obtain the conditional probability.

Conditional on human-level AI before Jan 1st 2039 (see details below), will there be by the end of 2019 a published AI system, with performance (roughly) at least as good as the AlphaStar that defeated TLO and MaNa, and hard-coded knowledge (roughly) no greater than AlphaStar's, whose APM distribution has a tail comparable to that of human players?

We refer to the unconditional version of the question for the full background, description and resolution conditions. The phrase "a tail comparable to human players" is deliberately vague, as I don't want to end up with an ambiguous resolution by too narrowly constraining what possible APM restrictions might look like. Human-level AI is taken to mean a system that can pass a generalized intelligence test described by this previous Metaculus question.
Is C actually Turing-complete?

I was trying to explain to someone that C is Turing-complete, and realized that I don't actually know if it is, indeed, technically Turing-complete. (C as in the abstract semantics, not as in an actual implementation.) The "obvious" answer (roughly: it can address an arbitrary amount of memory, so it can emulate a RAM machine, so it's Turing-complete) isn't actually correct, as far as I can tell: although the C standard allows size_t to be arbitrarily large, it must be fixed at some length, and no matter what length it is fixed at, it is still finite. (In other words, although you could, given an arbitrary halting Turing machine, pick a length of size_t such that it will run "properly", there is no way to pick a length of size_t such that all halting Turing machines will run properly.) So: is C99 Turing-complete?

I'm not sure, but I think the answer is no, for rather subtle reasons. I asked on Theoretical Computer Science a few years ago and didn't get an answer that goes beyond what I'll present here. In most programming languages, you can simulate a Turing machine by: simulating the finite automaton with a program that uses a finite amount of memory; and simulating the tape with a pair of linked lists of integers, representing the content of the tape before and after the current position. Moving the pointer means transferring the head of one of the lists onto the other list. A concrete implementation running on a computer would run out of memory if the tape got too long, but an ideal implementation could execute the Turing machine program faithfully. This can be done with pen and paper, or by buying a computer with more memory, and a compiler targeting an architecture with more bits per word and so on, if the program ever runs out of memory. This doesn't work in C because it's impossible to have a linked list that can grow forever: there's always some limit on the number of nodes.
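The two-list tape representation described above is easy to sketch in a garbage-collected language, where the lists really can grow without a language-imposed bound. This is a minimal illustration (using Python lists as stacks; the class name and the tiny read/write/move sequence at the end are made up):

```python
class Tape:
    """Turing-machine tape as two stacks: cells to the left of the
    head, and the head cell plus everything to its right."""

    def __init__(self, blank="_"):
        self.blank = blank
        self.left = []            # cells left of the head (top = nearest)
        self.right = [blank]      # head cell is right[-1]

    def read(self):
        return self.right[-1]

    def write(self, symbol):
        self.right[-1] = symbol

    def move_right(self):
        # transfer the head cell onto the left stack
        self.left.append(self.right.pop())
        if not self.right:        # extend the tape with a blank cell
            self.right.append(self.blank)

    def move_left(self):
        if not self.left:         # extend the tape on the left
            self.left.append(self.blank)
        self.right.append(self.left.pop())

t = Tape()
t.write("1"); t.move_right()      # write "1", step right onto a blank
t.write("0"); t.move_left()       # write "0", step back left
print(t.read())                   # "1": the head is back on the first cell
```

Each move is O(1), and the tape grows only on demand, which is exactly the "ideal implementation" the answer appeals to and the thing a fixed-width C pointer cannot provide.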
To explain why, I first need to explain what a C implementation is. C is actually a family of programming languages. The ISO C standard (more precisely, a specific version of this standard) defines (with the level of formality that English allows) the syntax and semantics of a family of programming languages. C has a lot of undefined behavior and implementation-defined behavior. An "implementation" of C codifies all the implementation-defined behavior (the list of things to codify is in appendix J for C99). Each implementation of C is a separate programming language. Note that the meaning of the word "implementation" is a bit peculiar: what it really means is a language variant; there can be multiple different compiler programs that implement the same language variant.

In a given implementation of C, a byte has $2^{\texttt{CHAR\_BIT}}$ possible values. All data can be represented as an array of bytes: a type t has at most $2^{\texttt{CHAR\_BIT} \times \texttt{sizeof(t)}}$ possible values. This number varies in different implementations of C, but for a given implementation of C, it's a constant. In particular, pointers can only take at most $2^{\texttt{CHAR\_BIT} \times \texttt{sizeof(void*)}}$ values. This means that there is a finite maximum number of addressable objects.

The values of CHAR_BIT and sizeof(void*) are observable, so if you run out of memory, you can't just resume running your program with larger values for those parameters. You would be running the program under a different programming language, a different C implementation.

If programs in a language can only have a bounded number of states, then the programming language is no more expressive than finite automata.
The fragment of C that's restricted to addressable storage only allows at most $n \times 2^{\texttt{CHAR\_BIT} \times \texttt{sizeof(void*)}}$ program states, where n is the size of the abstract syntax tree of the program (representing the state of the control flow); therefore this program can be simulated by a finite automaton with that many states. If C is more expressive, it has to be through the use of other features.

C does not directly impose a maximum recursion depth. An implementation is allowed to have a maximum, but it's also allowed not to have one. But how do we communicate between a function call and its parent? Arguments are no good if they're addressable, because that would indirectly limit the depth of recursion: if you have a function int f(int x) { … f(…) … } then all the occurrences of x on active frames of f have their own address, and so the number of nested calls is bounded by the number of possible addresses for x.

A C program can use non-addressable storage in the form of register variables. "Normal" implementations can only have a small, finite number of variables that don't have an address, but in theory an implementation could allow an unbounded amount of register storage. In such an implementation, you can make an unbounded number of recursive calls to a function, as long as its arguments are register. But since the arguments are register, you can't make a pointer to them, and so you need to copy their data around explicitly: you can only pass around a finite amount of data, not an arbitrary-sized data structure that's made of pointers.

With unbounded recursion depth, and the restriction that a function can only get data from its direct caller (register arguments) and return data to its direct caller (the function return value), you get the power of deterministic pushdown automata. I can't find a way to go further.
(Of course you could make the program store the tape content externally, through file input/output functions. But then you wouldn't be asking whether C is Turing-complete, but whether C plus an infinite storage system is Turing-complete, to which the answer is a boring "yes". You might as well define the storage to be a Turing oracle: call fopen("oracle", "r+"), fwrite the initial tape content to it and fread back the final tape content.)

Source: Link, Question Author: TLW, Answer Author: Gilles 'SO- stop being evil'
Optimization and analysis of novel thermoelectric module | JVE Journals Pradyumn Mane1, Deepali Atheaya2 1Engineering Physics Department, School of Engineering and Applied Sciences, Bennett University, Tech Zone II, Greater Noida, 201310, UP, India 2Mechanical and Aerospace Engineering Department, School of Engineering and Applied Sciences, Bennett University, Tech Zone II, Greater Noida, 201310, UP, India Received 24 September 2019; accepted 1 October 2019; published 28 November 2019 Copyright © 2019 Pradyumn Mane, et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Optimization and analysis of a novel thermoelectric module is proposed in this research paper. Simulations of four thermoelectric modules were performed in COMSOL Multiphysics 5.4, and a detailed analysis of these thermoelectric modules was carried out. Three of the thermoelectric modules showed efficiency and power output above those of average thermoelectric modules. The analysis also indicated that lead-telluride-based thermoelectric modules could be used in isolated areas, due to the toxicity of lead, whereas tetrahedrite-based thermoelectric modules could be used in non-isolated areas due to their non-toxic properties. The proposed thermoelectric modules can be utilized in applications such as industries, deep space exploration, automobiles, thermal power plants, renewable electricity generation, hybrid renewable systems, etc. in an economically viable manner.
Keywords: thermoelectric module, lead telluride, tetrahedrites, skutterudites, perovskites.

Thermoelectricity works on the Seebeck principle: "The Seebeck effect is a phenomenon in which the temperature difference between two dissimilar electrical conductors or semiconductors produces a voltage difference between the two substances". Work to increase the performance efficiency ( \eta ) of thermoelectric modules has been going on for decades, and so far efficiencies of 5-8 % have been achieved [1]. In 2009, Alphabet Energy invested $49.5 million to develop a waste heat recovery thermoelectric generator, claimed an efficiency ( \eta ) of 5 %, and promised to go above 10 % in upcoming years. A group of researchers from the Swiss Federal Laboratories for Materials Testing and Research, the High Voltage Laboratories and the Institute of Physics of the ASCR experimented with GdCo0.95Ni0.05O3 and CaMn0.98Nb0.02O3 thermoelectric materials and modules and evaluated their power characteristics [2]. Fu et al. [3] designed quaternary alloys of Pb1-xMgxTe0.8Se0.2 and achieved a figure of merit ( zT ) of 2.2 at 820 K. Chen [1] from Queen Mary University of London provided a comprehensive study of the synthesis methods, crystal structures and thermoelectric properties of tetrahedrite compounds. Scientists from Panasonic studied the thermoelectric properties of skutterudites and achieved a figure of merit ( zT ) of 1.1 at 820 K for the composition CoSb2.875Te0.125 [4]. Recently, Lan et al. [5] predicted the fuel economy potential of a skutterudite thermoelectric generator and observed that 25 %-50 % of fuel efficiency is obtained in light-duty vehicles.
Simulations of various combinations of thermoelectric couples were performed, and it was noted that Pb1-xMgxTe0.8Se0.2 / n-type PbTe, Pb1-xMgxTe0.8Se0.2 / CoSb3-xTex, Pb1-xMgxTe0.8Se0.2 / CaMn0.98Nb0.02O3 and Cu12Sb4S13 / CoSb3-xTex indicated higher efficiency ( \eta ) than other thermoelectric couples [6]. In this research paper, a performance analysis of these optimized novel thermoelectric modules was carried out.

2. Analysis of thermoelectric module

Table 1 lists the simulated thermoelectric modules (built from Pb1-xMgxTe0.8Se0.2, n-type PbTe, CoSb3-xTex and CaMn0.98Nb0.02O3 legs) and their calculated and validated properties, including efficiency ( \eta ). From the table it can be concluded that Pb1-xMgxTe0.8Se0.2 / n-type PbTe, Pb1-xMgxTe0.8Se0.2 / CoSb3-xTex and Cu12Sb4S13 / CoSb3-xTex show efficiency ( \eta ) as predicted by Mane and Atheaya [6]. The detailed analysis of Pb1-xMgxTe0.8Se0.2 / CaMn0.98Nb0.02O3 showed efficiency ( \eta ) below 5 %; therefore, Pb1-xMgxTe0.8Se0.2 / CaMn0.98Nb0.02O3 cannot be used to develop an efficient thermoelectric module.

The dimensionless figure of merit ( ZT ) of a thermoelectric module is the key quantity for measuring thermoelectric efficiency ( \eta ) and is formulated as follows [1]:

ZT=\frac{\left({S}_{p}-{S}_{n}\right)^{2}T}{\left(\sqrt{{\rho }_{n}{k}_{n}}+\sqrt{{\rho }_{p}{k}_{p}}\right)^{2}}.

The thermoelectric performance efficiency ( \eta ) depends mainly on the figure of merit ( ZT ):

\eta =\frac{{T}_{H}-{T}_{C}}{{T}_{H}}\,\frac{\sqrt{1+ZT}-1}{\sqrt{1+ZT}+\frac{{T}_{C}}{{T}_{H}}}.

A thermoelectric voltage ( V ) is induced when a temperature gradient is created and is formulated as follows [1]:

V=\left({S}_{p}-{S}_{n}\right)T.

The thermoelectric current ( I ) was calculated by using Ohm's law.
The net resistance ( R ) of the thermoelectric module was calculated by summing the resistances of the thermoelectric legs:

{R}_{p}=12{\rho }_{p}\frac{l}{A},\quad {R}_{n}=12{\rho }_{n}\frac{l}{A},\quad R={R}_{p}+{R}_{n},\quad I=\frac{V}{R}.

The thermoelectric power output ( P ) is simply the product of the thermoelectric voltage ( V ) and the thermoelectric current ( I ):

P=VI.

All thermoelectric modules were simulated, and a detailed investigation was carried out, in COMSOL Multiphysics 5.4. The properties of the thermoelectric materials were also validated in the simulation. The highest efficiency was found to be 17 %, and the results for this thermoelectric module are discussed further below. The thermoelectric module was designed in COMSOL Multiphysics. The boundary conditions set on the module were 800 K and 300 K at the upper and lower surfaces, respectively. Tetrahedral-element meshing was set manually for the thermoelectric legs as well as the copper metallic plates. An extremely fine mesh with minimum element size 8.6×10⁻⁶ m and maximum element size 5.6×10⁻⁴ m was used for the thermoelectric legs to provide accurate results. An extra fine mesh with minimum element size 6.4×10⁻⁵ m and maximum element size 1.5×10⁻³ m was used for the copper plates to reduce the computational time. High-level element-quality optimization was set to perform efficient meshing.

Fig. 1. Temperature distribution along the surface of the thermoelectric module

Fig. 1 displays the temperature distribution along the surface of the thermoelectric module. The module has dimensions of 43 mm × 23 mm × 12 mm and comprises twenty-four thermoelectric legs attached to copper metallic plates. The temperature at the hot terminal was set at 800 K and the temperature at the cold terminal at 300 K. Fig. 1 shows that the temperature is evenly distributed along the surface of the thermoelectric module. Fig. 2.
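Plugging the efficiency formula into a few lines reproduces the 17 % figure from the values quoted in the text (T_H = 800 K, T_C = 300 K, ZT ≈ 1.3); this is a rough sketch, not part of the paper's COMSOL workflow:

```python
import math

def efficiency(t_hot, t_cold, zt):
    # eta = (T_H - T_C)/T_H * (sqrt(1+ZT) - 1) / (sqrt(1+ZT) + T_C/T_H)
    s = math.sqrt(1.0 + zt)
    return (t_hot - t_cold) / t_hot * (s - 1.0) / (s + t_cold / t_hot)

eta = efficiency(800.0, 300.0, 1.3)
print(round(100 * eta, 1))      # ~17.1 %

# Power output P = V * I from the quoted voltage and current
V, I = 3.63, 15.13
print(round(V * I, 2))          # 54.92 W
```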
Figure of merit ( ZT ) with respect to temperature difference ( \Delta T ) for the Pb1-xMgxTe0.8Se0.2 / n-type PbTe thermoelectric module

Fig. 2 demonstrates the figure of merit ( ZT ) of the Pb1-xMgxTe0.8Se0.2 / n-type PbTe thermoelectric module for various temperature differences ( \Delta T ). At a temperature difference of 100 K a figure of merit ( ZT ) of approximately 0.3 was achieved, whereas at a temperature difference of 500 K a figure of merit ( ZT ) of approximately 1.3 was achieved.

Fig. 3. Efficiency ( \eta ) with respect to figure of merit ( ZT ) for the Pb1-xMgxTe0.8Se0.2 / n-type PbTe thermoelectric module

The efficiency ( \eta ) of the Pb1-xMgxTe0.8Se0.2 / n-type PbTe thermoelectric module is presented in Fig. 3. It was observed that for a figure of merit ( ZT ) of 1.3, an efficiency ( \eta ) of 17 % was achieved. Consequently, the Pb1-xMgxTe0.8Se0.2 / n-type PbTe thermoelectric module had the highest thermoelectric performance efficiency ( \eta ), and it may be utilized to fabricate an efficient thermoelectric generator.

Fig. 4. Thermoelectric power output ( P ) with respect to thermoelectric current ( I ) for the Pb1-xMgxTe0.8Se0.2 / n-type PbTe thermoelectric module

Fig. 4 displays the thermoelectric power output ( P ) against the thermoelectric current ( I ) for the Pb1-xMgxTe0.8Se0.2 / n-type PbTe thermoelectric module. It was observed that for a thermoelectric current ( I ) of 15.13 A, a power output ( P ) of 54.92 W was achieved. The current can be increased further by increasing the area-to-length ratio of the thermoelectric module or by decreasing the resistivity of the thermoelectric module.

Fig. 5. Thermoelectric voltage output ( V ) for the Pb1-xMgxTe0.8Se0.2 / n-type PbTe thermoelectric module

Fig. 5 displays the thermoelectric voltage output ( V ) of the Pb1-xMgxTe0.8Se0.2 / n-type PbTe thermoelectric module. This analysis was carried out with a multislice plot and a volumetric plot, and the mean value was taken as the thermoelectric voltage output ( V ). The multislice plot displayed 3.56 V whereas the volumetric plot displayed 3.69 V; therefore, a thermoelectric voltage ( V ) of 3.63 V was used to perform the further analysis.
1) Pb1-xMgxTe0.8Se0.2 and n-type PbTe, Pb1-xMgxTe0.8Se0.2 and CoSb3-xTex, and Cu12Sb4S13 and CoSb3-xTex showed higher efficiency and thus could be used in making efficient thermoelectric generators.
2) Pb1-xMgxTe0.8Se0.2 and CaMn0.98Nb0.02O3 showed an efficiency below 5 % and thus could not be used in making an efficient thermoelectric generator.
3) Lead telluride based thermoelectric modules could be used in isolated areas due to the toxicity of lead.
4) The tetrahedrite based thermoelectric module could be used in non-isolated areas due to its non-toxic properties and higher cost-efficiency.
5) The Pb1-xMgxTe0.8Se0.2 and CoSb3-xTex thermoelectric module showed the maximum thermoelectric power output.
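The series-resistance and power-output relations used in the model ( {R}_{p}=12{\rho }_{p}l/A, {R}_{n}=12{\rho }_{n}l/A, R={R}_{p}+{R}_{n}, I=V/R, P=VI ) can be sketched as follows. The resistivities and leg geometry below are hypothetical placeholder values, since the paper does not list them here; only the module voltage of 3.63 V is taken from the simulation.

```python
# Sketch of the module's series-resistance and power-output relations.
# rho_p, rho_n, l and A are assumed placeholder values, not from the paper.
rho_p = 2.0e-5   # p-type leg resistivity, ohm*m (assumed)
rho_n = 1.5e-5   # n-type leg resistivity, ohm*m (assumed)
l = 10e-3        # leg length, m (assumed)
A = 4e-6         # leg cross-sectional area, m^2 (assumed)
V = 3.63         # module voltage from the simulation, V

R_p = 12 * rho_p * l / A   # twelve p-type legs in series
R_n = 12 * rho_n * l / A   # twelve n-type legs in series
R = R_p + R_n              # net module resistance
I = V / R                  # module current
P = V * I                  # power output, P = V*I
print(R, I, P)
```

With the assumed numbers this gives R = 1.05 ohm; increasing A/l or lowering the resistivities raises I and P, as the paper notes.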
Mane Pradyumn, Atheaya Deepali. Performance analysis of thermoelectric generator by using lead telluride, perovskites, skutterudites and tetrahedrites. WEENTECH Proceedings in Energy, Vol. 5, Issue 2, 2019, p. 66-78. [Publisher]
Pradyumn Mane, Deepali Atheaya
Absorption law - Wikipedia Law in algebra In algebra, the absorption law or absorption identity is an identity linking a pair of binary operations. Two binary operations, ¤ and ⁂, are said to be connected by the absorption law if: a ¤ (a ⁂ b) = a ⁂ (a ¤ b) = a. A set equipped with two commutative and associative binary operations ∨ ("join") and ∧ ("meet") that are connected by the absorption law is called a lattice; in this case, both operations are necessarily idempotent. Examples of lattices include Heyting algebras and Boolean algebras,[1] in particular sets of sets with union and intersection operators, and ordered sets with min and max operations. In classical logic, and in particular Boolean algebra, the operations OR and AND, which are also denoted by ∨ and ∧, satisfy the lattice axioms, including the absorption law. The same is true for intuitionistic logic. The absorption law does not hold in many other algebraic structures, such as commutative rings (e.g. the field of real numbers), relevance logics, linear logics, and substructural logics. In the last case, there is no one-to-one correspondence between the free variables of the defining pair of identities. ^ See Boolean algebra (structure)#Axiomatics for a proof of the absorption laws from the distributivity, identity, and boundary laws. Brian A. Davey; Hilary Ann Priestley (2002). Introduction to Lattices and Order (2nd ed.). Cambridge University Press. ISBN 0-521-78451-4. LCCN 2001043910. "Absorption laws", Encyclopedia of Mathematics, EMS Press, 2001 [1994]. Weisstein, Eric W. "Absorption Law". MathWorld.
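The absorption identities can be checked concretely for two of the lattices mentioned above, sets under union/intersection and totally ordered numbers under max/min. This is an illustrative sketch, not part of the article:

```python
# Absorption law: a ∨ (a ∧ b) = a ∧ (a ∨ b) = a, checked in two lattices.

# Lattice of sets: join is union, meet is intersection.
a, b = {1, 2, 3}, {2, 3, 4}
assert a | (a & b) == a          # union absorbs intersection
assert a & (a | b) == a          # intersection absorbs union

# Lattice of a totally ordered set: join is max, meet is min.
x, y = 5, 9
assert max(x, min(x, y)) == x    # max absorbs min
assert min(x, max(x, y)) == x    # min absorbs max
print("absorption holds in both examples")
```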
Price floor instrument from Black-Derman-Toy interest-rate tree - MATLAB floorbybdt - MathWorks Deutschland
Price a 10% Floor Instrument Using a BDT Interest-Rate Tree
Price a 10% Floor Instrument Using a Newly Created BDT Interest-Rate Tree
Compute the Price of an Amortizing Floor Using the BDT Model
Price floor instrument from Black-Derman-Toy interest-rate tree
[Price,PriceTree] = floorbybdt(BDTTree,Strike,Settle,Maturity)
[Price,PriceTree] = floorbybdt(___,FloorReset,Basis,Principal,Options)
[Price,PriceTree] = floorbybdt(BDTTree,Strike,Settle,Maturity) computes the price of a floor instrument from a Black-Derman-Toy interest-rate tree. floorbybdt computes prices of vanilla floors and amortizing floors. [Price,PriceTree] = floorbybdt(___,FloorReset,Basis,Principal,Options) adds optional arguments. Load the file deriv.mat, which provides BDTTree. BDTTree contains the time and interest-rate information needed to price the floor instrument. Use floorbybdt to compute the price of the floor instrument:
Price = floorbybdt(BDTTree, Strike, Settle, Maturity)
First set the required arguments for the three needed specifications. Set the floor arguments; the remaining arguments will use defaults:
FloorStrike = 0.10;
Use floorbybdt to find the price of the floor instrument:
Price = floorbybdt(BDTTree, FloorStrike, Settlement, Maturity, FloorReset)
Price = floorbybdt(BDTTree, Strike, Settle, Maturity, Reset, Basis, Principal)
Settlement date for the floor, specified as a NINST-by-1 vector of serial date numbers or date character vectors. The Settle date for every floor is set to the ValuationDate of the BDT tree; the floor argument Settle is ignored. Each floorlet payoff is based on max(FloorRate − CurrentRate, 0).
bdttree | capbybdt | cfbybdt | swapbybdt | floorbynormal
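The payoff expression above can be illustrated outside MATLAB. This is a minimal Python sketch of a single floorlet payoff, not a reimplementation of floorbybdt (which also discounts the payoffs through the BDT tree); the principal and accrual values are illustrative assumptions, not from the MathWorks example:

```python
# Minimal sketch of a floorlet payoff used by a floor instrument:
# each reset pays Principal * max(FloorRate - CurrentRate, 0) * accrual.
# Principal and accrual are assumed values for illustration only.
def floorlet_payoff(floor_rate, current_rate, principal=100.0, accrual=1.0):
    return principal * max(floor_rate - current_rate, 0.0) * accrual

print(floorlet_payoff(0.10, 0.08))  # rate below the 10% strike pays 2.0
print(floorlet_payoff(0.10, 0.12))  # rate above the strike pays 0.0
```

A full floor price is the sum of such payoffs over all resets, discounted along the interest-rate tree.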
Elwin Bruno Christoffel Elwin Bruno Christoffel (German: [kʁɪˈstɔfl̩]; 10 November 1829 – 15 March 1900) was a German mathematician and physicist. He introduced fundamental concepts of differential geometry, opening the way for the development of tensor calculus, which would later provide the mathematical basis for general relativity. Born: Montjoie, Prussia. Died: Strasbourg, German Empire. Known for: the Riemann–Christoffel tensor. Christoffel was born on 10 November 1829 in Montjoie (now Monschau) in Prussia to a family of cloth merchants. He was initially educated at home in languages and mathematics, then attended the Jesuit Gymnasium and the Friedrich-Wilhelms Gymnasium in Cologne. In 1850 he went to the University of Berlin, where he studied mathematics with Gustav Dirichlet (who had a strong influence on him)[1] among others, as well as attending courses in physics and chemistry. He received his doctorate in Berlin in 1856 for a thesis on the motion of electricity in homogeneous bodies, written under the supervision of Martin Ohm, Ernst Kummer and Heinrich Gustav Magnus.[2] After receiving his doctorate, Christoffel returned to Montjoie, where he spent the following three years in isolation from the academic community. However, he continued to study mathematics (especially mathematical physics) from books by Bernhard Riemann, Dirichlet and Augustin-Louis Cauchy. He also continued his research, publishing two papers in differential geometry.[2] In 1859 Christoffel returned to Berlin, earning his habilitation and becoming a Privatdozent at the University of Berlin. In 1862 he was appointed to a chair at the Polytechnic School in Zürich left vacant by Dedekind. He organised a new institute of mathematics at the young institution (it had been established only seven years earlier) that was highly appreciated. He also continued to publish research, and in 1868 he was elected a corresponding member of the Prussian Academy of Sciences and of the Istituto Lombardo in Milan. 
In 1869 Christoffel returned to Berlin as a professor at the Gewerbeakademie (now part of the Technical University of Berlin), with Hermann Schwarz succeeding him in Zürich. However, strong competition from the close proximity to the University of Berlin meant that the Gewerbeakademie could not attract enough students to sustain advanced mathematical courses, and Christoffel left Berlin again after three years.[2] In 1872 Christoffel became a professor at the University of Strasbourg, a centuries-old institution that was being reorganized into a modern university after Prussia's annexation of Alsace-Lorraine in the Franco-Prussian War. Christoffel, together with his colleague Theodor Reye, built a reputable mathematics department at Strasbourg. He continued to publish research and had several doctoral students, including Rikitaro Fujisawa, Ludwig Maurer and Paul Epstein. Christoffel retired from the University of Strasbourg in 1894, being succeeded by Heinrich Weber.[2] After retirement he continued to work and publish, with the last treatise finished just before his death and published posthumously.[1] Christoffel died on 15 March 1900 in Strasbourg. He never married and left no family.[2] Christoffel is mainly remembered for his seminal contributions to differential geometry. In a famous 1869 paper on the equivalence problem for differential forms in n variables, published in Crelle's Journal,[3] he introduced the fundamental technique later called covariant differentiation and used it to define the Riemann–Christoffel tensor (the most common method used to express the curvature of Riemannian manifolds). In the same paper he introduced the Christoffel symbols $\Gamma_{kij}$ and $\Gamma_{ij}^{k}$, which express the components of the Levi-Civita connection with respect to a system of local coordinates. 
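For reference, the symbols of the second kind can be written in terms of the metric tensor; this is the standard modern formula, not Christoffel's original 1869 notation:

```latex
% Christoffel symbols of the second kind, modern index notation
\Gamma^{k}_{ij} = \tfrac{1}{2}\, g^{kl}
  \left( \partial_i g_{jl} + \partial_j g_{il} - \partial_l g_{ij} \right)
```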
Christoffel's ideas were generalized and greatly developed by Gregorio Ricci-Curbastro and his student Tullio Levi-Civita, who turned them into the concept of tensors and the absolute differential calculus. The absolute differential calculus, later named tensor calculus, forms the mathematical basis of the general theory of relativity.[2] Christoffel contributed to complex analysis, where the Schwarz–Christoffel mapping is the first nontrivial constructive application of the Riemann mapping theorem. The Schwarz–Christoffel mapping has many applications to the theory of elliptic functions and to areas of physics.[2] In the field of elliptic functions he also published results concerning abelian integrals and theta functions. Christoffel generalized the Gaussian quadrature method for integration and, in connection with this, he also introduced the Christoffel–Darboux formula for Legendre polynomials[4] (he later also published the formula for general orthogonal polynomials). Christoffel also worked on potential theory and the theory of differential equations; however, much of his research in these areas went unnoticed. He published two papers on the propagation of discontinuities in the solutions of partial differential equations which represent pioneering work in the theory of shock waves. He also studied physics and published research in optics; however, his contributions here quickly lost their utility with the abandonment of the concept of the luminiferous aether.[2] Christoffel was elected as a corresponding member of several academies: Istituto Lombardo (1868) Christoffel was also awarded two distinctions for his activity by the Kingdom of Prussia: Order of the Red Eagle 3rd Class with bow (Schleife) (1893) Order of the Crown 2nd Class (1895) Christoffel, E. B. (1858). "Über die Gaußische Quadratur und eine Verallgemeinerung derselben". Journal für die Reine und Angewandte Mathematik (in German). 1858 (55): 61–82. doi:10.1515/crll.1858.55.61. ISSN 0075-4102. 
S2CID 123118038. Christoffel, E.B. (1869). "Ueber die Transformation der homogenen Differentialausdrücke zweiten Grades". Journal für die Reine und Angewandte Mathematik. 70. Retrieved 6 October 2015. Gesammelte Mathematische Abhandlungen. Leipzig: B. G. Teubner. 1910. 2 volumes, edited by Ludwig Maurer with the assistance of Adolf Krazer and Georg Faber;[5] Erster Band, Zweiter Band. (Service Commun de Documentation de l'Université Louis Pasteur, Strasbourg) ^ a b Windelband, Wilhelm (1901). "Zum Gedächtniss Elwin Bruno Christoffel's" (PDF). Mathematische Annalen (in German). 54 (3): 341–344. doi:10.1007/bf01454257. S2CID 122771618. Retrieved 2015-10-06. ^ a b c d e f g h Butzer, Paul L. (1981). "An Outline of the Life and Work of E. B. Christoffel (1829–1900)". Historia Mathematica. 8 (3): 243–276. doi:10.1016/0315-0860(81)90068-9. ^ Christoffel, E.B. (1869), "Ueber die Transformation der homogenen Differentialausdrücke zweiten Grades", Journal für die Reine und Angewandte Mathematik, B. 70 (70): 46–70, doi:10.1515/crll.1869.70.46, S2CID 122999847. ^ Christoffel, E. B. (1858), "Über die Gaußische Quadratur und eine Verallgemeinerung derselben", Journal für die Reine und Angewandte Mathematik (in German), 1858 (55): 61–82, doi:10.1515/crll.1858.55.61, ISSN 0075-4102, S2CID 123118038. ^ Eisenhart, Luther Pfahler (1914). "Book Review: E. B. Christoffel, Gesammelte mathematische Abhandlungen". Bulletin of the American Mathematical Society. 20 (9): 476–483. doi:10.1090/S0002-9904-1914-02522-4. MR 1559531. P.L. Butzer & F. Feher (editors), E. B. Christoffel: The Influence of His Work on Mathematics and the Physical Sciences, Birkhäuser Verlag, 1981, ISBN 3-7643-1162-2. O'Connor, John J.; Robertson, Edmund F., "Elwin Bruno Christoffel", MacTutor History of Mathematics archive, University of St Andrews. Elwin Bruno Christoffel at the Mathematics Genealogy Project. Wikiquote has quotations related to Elwin Bruno Christoffel.
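As a modern illustration of the Gaussian quadrature that Christoffel generalized in the 1858 paper cited above, the nodes and weights of a Gauss–Legendre rule can be obtained numerically; the use of NumPy is obviously an anachronistic convenience of this sketch, not part of the article:

```python
import numpy as np

# Gauss-Legendre quadrature on [-1, 1] with n nodes: an n-point rule
# integrates polynomials of degree <= 2n - 1 exactly.
nodes, weights = np.polynomial.legendre.leggauss(3)

# Example: integral of x^2 over [-1, 1] equals 2/3.
integral = np.sum(weights * nodes**2)
print(integral)
```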
SaveProfiles - Maple Help save profiling data to a file SaveProfiles(filename, proc1, proc2, ..., tab1, tab2, ..., opts) filename - string; file in which to save the profiles. opts - (optional) name of the form option, where option is one of 'append' or 'overwrite'; specifies save options. The SaveProfiles(filename) command saves profiling data, for all procedures for which it has profiling data, to filename. The SaveProfiles(filename, proc1, proc2, ...) command saves the profiling data for the specified procedures to the file. The SaveProfiles(filename, proc1, proc2, ..., tab1, tab2, ...) command reads profiling data from the currently profiled procedures and the specified tables of profiling data. If a procedure appears more than once in any of these sources, the profiles are joined together (as in Merge) and the data from the merged profiles is saved. If the file filename exists, then SaveProfiles raises an error. To avoid this error, specify the opts parameter as 'append' or 'overwrite'. 'append' specifies that the saved data is appended to the end of the existing file. 'overwrite' specifies that an existing file is overwritten by a new file. The data written for a procedure is the current profiling data (if any) for the procedure combined with any associated profiling data that was specified in a table. To reload the profiles into Maple, use the LoadProfiles function. 
with(CodeTools[Profiling]):
t := Build(procs = a, commands = ['a(0)', 'a(1)']):
PrintProfiles(a, t);
Profile(a);
a(2);
PrintProfiles(a, t);
SaveProfiles("file", a, t):
UnProfile();
PrintProfiles(a);
LoadProfiles("file", a);
PrintProfiles(a);
FileTools[Remove]("file");
CodeTools[Profiling][LoadProfiles]
The getNext method of the ModuleIterator returns two Arrays, L and R, each indexed from 1 to n. Their elements define the connections for the left and right branches, respectively. L[k] is the node to which the left branch of node k connects; if L[k] is 0 then it is connected to an external (terminal) node. R is similarly defined for the right branches.
with(Iterator):
Construct an iterator over binary trees with four internal nodes. In the loop, lr is assigned a sequence of two Arrays, L, R:
B := BinaryTrees(4):
for lr in B do printf("%d - %d\n", lr) end do;
Print(B, 'showranks'):
Number(B);
14
DigitalPuzzle(6);
7
DigitalPuzzle2(6);
7
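The count of 14 returned by Number for four internal nodes is the fourth Catalan number. A small cross-check via the Catalan recurrence (written in Python purely for illustration, since the help page itself is Maple):

```python
# The number of binary trees with n internal nodes is the Catalan
# number C(n): C(0) = 1, C(m) = sum_{i=0}^{m-1} C(i) * C(m-1-i).
def num_binary_trees(n):
    counts = [1]                      # C(0)
    for m in range(1, n + 1):
        counts.append(sum(counts[i] * counts[m - 1 - i] for i in range(m)))
    return counts[n]

print(num_binary_trees(4))  # → 14, matching Number(B) above
```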
Dynamic stiffness method for free vibration analysis of thin functionally graded rectangular plates | JVE Journals Manish Chauhan1, Vinayak Ranjan2, Prabhakar Sathujoda3 Copyright © 2019 Manish Chauhan, et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. In the present work, the dynamic stiffness method (DSM) is used to analyze the free vibration of a thin functionally graded rectangular plate. Classical plate theory (CPT) is used to develop the dynamic stiffness matrix of a functionally graded material (FGM) plate. For the free vibration analysis, the natural frequencies of the functionally graded material plate are estimated by using the DSM with the Wittrick-Williams algorithm for different aspect ratios and different boundary conditions. The present research compares the DSM natural frequency results with those available in the published literature. Keywords: dynamic stiffness method, free vibration, functionally graded material, CPT. The concept of functionally graded materials was first introduced by Yamanoushi et al. [1] in 1980 during the development of thermal-resistance materials for aerospace engineering applications. Functionally graded materials are known as a new class of composite materials, consisting of a mixture of ceramic and metal constituents. The ceramic constituents give high-temperature resistance, whereas the metal constituents enhance the mechanical performance and decrease the possibility of failure of the structure. Leissa [2] used the Ritz method to analyze the free vibration behaviour of the rectangular isotropic plate under the twenty-one possible boundary conditions. Bercin [3] analyzed the free vibration and mode shapes of the orthotropic plate by using the finite element method. 
Bercin and Langley [4] extended this work to develop the dynamic stiffness matrix for vibration analysis of plate structures. Boscolo and Banerjee [5] used the DSM for the analysis of free transverse vibration of the rectangular isotropic plate by using classical plate theory and first-order shear deformation theory. Chauhan et al. [6] used classical plate theory to analyze the free vibration of the isotropic plate for different boundaries by using the DSM. Shen and Yang [7] applied CPT to investigate the free vibration behavior of initially stressed, elastically founded functionally graded material (FGM) plates under impulsive lateral loading. Baferani et al. [8] used Navier and Levy type solutions for the free vibration analysis of the functionally graded plate under different boundary conditions by using CPT. Kumar et al. [9] used CPT to formulate the DSM with the Wittrick-Williams algorithm to extract the eigenvalues of the FGM plates. In this paper, we have analyzed the free vibration behavior of functionally graded material plates by using the dynamic stiffness method with the Wittrick-Williams algorithm to extract the natural frequencies under different boundary conditions. 2. Governing differential equation of the functionally graded material plate Fig. 1 shows a rectangular functionally graded plate of length a, width b and thickness h, where the material properties vary along the thickness as a power-law distribution [9], as given by Eq. (1): {V}_{c}\left(z\right)={\left(\frac{z}{h}+\frac{1}{2}\right)}^{k},\quad {V}_{m}\left(z\right)=1-{V}_{c}\left(z\right),\quad \left(-0.5h\le z\le 0.5h\right), where {V}_{c} and {V}_{m} denote the volume fractions of the ceramic and metal constituents, and k represents the power-law index, which takes a positive real value in Eq. (1). Fig. 1. Material geometry and coordinate system of the functionally graded plate Fig. 2. 
Boundary conditions for displacements and forces for a plate element. The displacement components {u}_{o}\left(x,y,z\right), {v}_{o}\left(x,y,z\right) and {w}_{o}\left(x,y,z\right) of the thin rectangular functionally graded plate, obtained using classical plate theory, are given by Eq. (2): {u}_{o}\left(x,y,z\right)={u}'\left(x,y\right)-\left(z-{z}_{0}\right)\frac{\partial {w}'}{\partial x},\quad {v}_{o}\left(x,y,z\right)={v}'\left(x,y\right)-\left(z-{z}_{0}\right)\frac{\partial {w}'}{\partial y},\quad {w}_{o}\left(x,y,z\right)={w}'\left(x,y\right), where {u}'\left(x,y\right), {v}'\left(x,y\right) and {w}'\left(x,y\right) are the mid-plane (i.e., z=0) displacement components. Fig. 1 shows that the material properties are nonhomogeneous in the transverse direction; because of this, the middle surface of the geometry has an in-plane displacement which cannot be neglected. Therefore, the middle surface of the FGM plate geometry does not coincide with the neutral surface. In this condition, the transverse coordinate must be referred to the neutral surface, {z}_{n}=z-{z}_{0}, where {z}_{0} is the distance between the mid-surface and the neutral surface of the plate, as shown in Fig. 1. Hamilton's principle is used to derive the fourth-order differential equation for the transverse deflection of a thin rectangular functionally graded plate under free vibration, given by Eq. (3): {D}_{eff}\left(\frac{{\partial }^{4}{w}'}{\partial {x}^{4}}+2\frac{{\partial }^{4}{w}'}{\partial {x}^{2}\partial {y}^{2}}+\frac{{\partial }^{4}{w}'}{\partial {y}^{4}}\right)+\rho h\frac{{\partial }^{2}{w}'}{\partial {t}^{2}}=0. The boundary conditions for the Levy-type solution in Fig. 
2 are given as: {V}_{x}:\ -{D}_{eff}\left(\frac{{\partial }^{3}{w}'}{\partial {x}^{3}}+\left(2-\upsilon \right)\frac{{\partial }^{3}{w}'}{\partial x\,\partial {y}^{2}}\right)\delta {w}',\quad {M}_{xx}:\ -{D}_{eff}\left(\frac{{\partial }^{2}{w}'}{\partial {x}^{2}}+\upsilon \frac{{\partial }^{2}{w}'}{\partial {y}^{2}}\right)\delta {\phi }_{y}, where {D}_{eff}=E{h}^{3}/12\left(1-{\upsilon }^{2}\right) is the effective bending stiffness, h the plate thickness, E the Young's modulus, \upsilon the Poisson's ratio of the given material, and {V}_{x}, {M}_{xx} and {\phi }_{y} are the shear force, the bending moment and the rotation of the bending plate. 3. Formulation of dynamic stiffness A Levy-type solution of Eq. (3) which satisfies the boundary conditions of Eq. (4) can be expressed in the following form [8]: {w}'\left(x,y,t\right)=\sum _{m=1}^{\infty }{W}_{m}\left(x\right){e}^{i\omega t}\mathrm{sin}\left({\alpha }_{m}y\right),\quad {\alpha }_{m}=\frac{m\pi }{L},\quad \left(m=1,2,\dots ,\infty \right), where \omega is the unknown natural frequency. Substituting Eq. (5) into Eq. (3) gives Eq. (6): \frac{{d}^{4}{W}_{m}}{d{x}^{4}}-2{\alpha }_{m}^{2}\frac{{d}^{2}{W}_{m}}{d{x}^{2}}+\left({\alpha }_{m}^{4}-\frac{\rho h{\omega }^{2}}{{D}_{eff}}\right){W}_{m}=0,\quad \left(m=1,2,\dots ,\infty \right). Two possible solutions of the ordinary differential Eq. (6) are obtained, depending on the nature of the roots. Here we show only one possible solution: if {\alpha }_{m}^{2}\ge \omega \sqrt{{I}_{0}/{D}_{eff}}, all roots are real ( {\alpha }_{1m},-{\alpha }_{1m},{\alpha }_{2m},-{\alpha }_{2m} ), with: {\alpha }_{1m}=\sqrt{{\alpha }_{m}^{2}+\omega \sqrt{\frac{{I}_{0}}{{D}_{eff}}}},\quad {\alpha }_{2m}=\sqrt{{\alpha }_{m}^{2}-\omega \sqrt{\frac{{I}_{0}}{{D}_{eff}}}}, where {I}_{0}=\rho h. 
The solution is then: {W}_{m}\left(x\right)={A}_{m}\mathrm{cosh}\left({\alpha }_{1m}x\right)+{B}_{m}\mathrm{sinh}\left({\alpha }_{1m}x\right)+{C}_{m}\mathrm{cosh}\left({\alpha }_{2m}x\right)+{D}_{m}\mathrm{sinh}\left({\alpha }_{2m}x\right). Using {w}' from Eq. (8) and Eq. (5), the shear force {V}_{x}, the rotation {\phi }_{y} and the bending moment {M}_{xx} can be expressed in the following form using Eq. (4): {\phi }_{ym}\left(x,y\right)={\phi }_{ym}\left(x\right)\mathrm{sin}\left({\alpha }_{m}y\right),\quad {V}_{xm}\left(x,y\right)={V}_{xm}\left(x\right)\mathrm{sin}\left({\alpha }_{m}y\right),\quad {M}_{xxm}\left(x,y\right)={M}_{xxm}\left(x\right)\mathrm{sin}\left({\alpha }_{m}y\right). The displacement boundary conditions for the plate are: at x=0, {W}_{m}={W}_{1}, {\phi }_{ym}={\phi }_{y1}; at x=b, {W}_{m}={W}_{2}, {\phi }_{ym}={\phi }_{y2}. Similarly, the force boundary conditions are: at x=0, {V}_{xm}=-{V}_{1}, {M}_{xxm}=-{M}_{1}; at x=b, {V}_{xm}=-{V}_{2}, {M}_{xxm}={M}_{2}. Applying the displacement boundary conditions, i.e., substituting Eq. (12) into Eqs. (8) and (9), the following matrix relationship is obtained: \left[\begin{array}{c}{W}_{1}\\ {\phi }_{y1}\\ {W}_{2}\\ {\phi }_{y2}\end{array}\right]=\left[\begin{array}{cccc}1& 0& 1& 0\\ 0& -{\alpha }_{1m}& 0& -{\alpha }_{2m}\\ {C}_{h1}& {S}_{h1}& {C}_{h2}& {S}_{h2}\\ -{\alpha }_{1m}{S}_{h1}& -{\alpha }_{1m}{C}_{h1}& -{\alpha }_{2m}{S}_{h2}& -{\alpha }_{2m}{C}_{h2}\end{array}\right]\left[\begin{array}{c}{A}_{m}\\ {B}_{m}\\ {C}_{m}\\ {D}_{m}\end{array}\right], or \delta =AC, where {C}_{hi}=\mathrm{cosh}\left({\alpha }_{im}b\right), {S}_{hi}=\mathrm{sinh}\left({\alpha }_{im}b\right), {C}_{i}=\mathrm{cos}\left({\alpha }_{im}b\right), {S}_{i}=\mathrm{sin}\left({\alpha }_{im}b\right), i = 1, 2. Applying the force boundary conditions, i.e., substituting Eq. (13) into Eqs. 
(10) and (11), the following matrix relationship is obtained: \left[\begin{array}{c}{V}_{1}\\ {M}_{1}\\ {V}_{2}\\ {M}_{2}\end{array}\right]=\left[\begin{array}{cccc}0& {R}_{1}& 0& {R}_{2}\\ {L}_{1}& 0& {L}_{2}& 0\\ -{R}_{1}{S}_{h1}& -{R}_{1}{C}_{h1}& -{R}_{2}{S}_{h2}& -{R}_{2}{C}_{h2}\\ -{L}_{1}{C}_{h1}& -{L}_{1}{S}_{h1}& -{L}_{2}{C}_{h2}& -{L}_{2}{S}_{h2}\end{array}\right]\left[\begin{array}{c}{A}_{m}\\ {B}_{m}\\ {C}_{m}\\ {D}_{m}\end{array}\right], or F=RC, where {R}_{i}={D}_{eff}\left({\alpha }_{im}^{3}-{\alpha }_{m}^{2}{\alpha }_{im}\left(2-\nu \right)\right) and {L}_{i}={D}_{eff}\left({\alpha }_{im}^{2}-{\alpha }_{m}^{2}\nu \right), i = 1, 2. Using Eqs. (15) and (17), the dynamic stiffness matrix K for the functionally graded (FG) plate can be formulated by eliminating the constant vector C to get Eq. (18): F=K\delta, with K=R{A}^{-1}. By using Eq. (19), the generalized dynamic stiffness matrix K is given by Eq. (20): \mathbf{K}=\left[\begin{array}{cccc}{s}_{vv}& {s}_{vm}& {f}_{vv}& {f}_{vm}\\ & {s}_{mm}& -{f}_{vm}& {f}_{mm}\\ & \mathrm{Sym}& {s}_{vv}& -{s}_{vm}\\ & & & {s}_{mm}\end{array}\right], where the six variable terms {s}_{vv}, {s}_{vm}, {s}_{mm}, {f}_{vv}, {f}_{vm}, {f}_{mm} can be expressed in the form given in [9]. Table 1. Non-dimensional natural frequencies ( \varpi =\omega {a}^{2}\sqrt{{\rho }_{c}h/{D}_{c}} ) for functionally graded square plates with S-S-S-S and S-F-S-F boundary conditions using the DSM. [The tabulated values for the various mode numbers (m, n) and power-law indices k are not reproduced here.] The dynamic stiffness matrix is used to obtain the natural frequencies of the functionally graded plate by applying the Wittrick-Williams algorithm [5]. The above procedure is used to formulate the DSM, and it has been implemented in a MATLAB program to compute the natural frequencies of the FGM plate for different boundary conditions with different power-law index ( k ) values, as shown in Tables 1-3, where {\rho }_{c} and {D}_{c} denote the density and the bending stiffness of the ceramic material. 
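The real-root case can be sanity-checked numerically. This minimal Python sketch, with arbitrary placeholder values for the parameters (none taken from the paper), verifies that the roots of Eq. (7) satisfy the characteristic polynomial of Eq. (6):

```python
import math

# Characteristic polynomial of Eq. (6):
#   r^4 - 2*alpha_m^2 * r^2 + (alpha_m^4 - rho*h*omega^2/D_eff) = 0,
# with I0 = rho*h. The numbers below are arbitrary placeholders chosen
# so that alpha_m^2 >= omega*sqrt(I0/D_eff) (the all-real-roots case).
alpha_m = 3.0
D_eff = 2.0
I0 = 1.0          # rho * h
omega = 4.0       # omega*sqrt(I0/D_eff) ≈ 2.83 <= alpha_m^2 = 9

s = omega * math.sqrt(I0 / D_eff)
a1 = math.sqrt(alpha_m**2 + s)    # alpha_1m from Eq. (7)
a2 = math.sqrt(alpha_m**2 - s)    # alpha_2m from Eq. (7)

def char_poly(r):
    return r**4 - 2 * alpha_m**2 * r**2 + (alpha_m**4 - I0 * omega**2 / D_eff)

print(char_poly(a1), char_poly(a2))  # both vanish to rounding error
```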
The letter m denotes the number of half-sine waves in the x direction, whereas n denotes the n-th lowest frequency for a given value of m. Table 2. Comparison of non-dimensional natural frequencies ( \varpi =\omega {a}^{2}\sqrt{{\rho }_{c}h/{D}_{c}} ) with results reported in the available published literature for the functionally graded plate with S-C-S-C boundary conditions, for several aspect ratios a/b and power-law indices k. [Table values not reproduced here.] Table 3. Comparison of non-dimensional natural frequencies \varpi =\omega {a}^{2}\sqrt{{\rho }_{c}h/{D}_{c}} of the square FGM plate with the published results of Chakraverty and Pradhan [12], for several k values and mode numbers (m, n). [Table values not reproduced here.] From Table 1, we observed that the natural frequencies decrease as the k value increases. This is because, as the k value increases, the metal constituent in the FGM plate increases and the stiffness of the plate is reduced. When we compared the natural frequency results of the FGM plates with those available in the published literature, we found that the natural frequency values reported at k = 0 in Tables 2-3 are nearly the same as those available in the literature [2, 11, 12]. On increasing the k value from 0.5 to 1.0, the maximum error increases from 5 % to 11 % relative to Chakraverty and Pradhan [12] in Table 3. The possible reasons for these results are discussed below. Chakraverty and Pradhan [12] considered the mid-plane surface geometry instead of the neutral surface when solving for the effective bending stiffness ( {D}_{eff} ), which increases the percentage error. For this reason, we observed that the error is smaller for k = 0 and higher for k = 1.0. The impetus of the present work is to formulate the dynamic stiffness matrix to estimate the natural frequencies of a thin rectangular functionally graded plate where two opposite sides of the plate are simply supported. 
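The k = 0 (fully ceramic, isotropic) limit offers a quick closed-form benchmark for the S-S-S-S square plate: classical plate theory gives \varpi = \pi^2 (m^2 + n^2), which reproduces the well-known fundamental value 19.739 reported by Leissa. A one-line sketch (the closed-form expression is the standard CPT result, not one derived in this paper):

```python
import math

# Non-dimensional frequency of a simply supported (S-S-S-S) square
# isotropic plate under classical plate theory:
#   w* = omega * a^2 * sqrt(rho*h/D) = pi^2 * (m^2 + n^2)
def ssss_nondim_freq(m, n):
    return math.pi**2 * (m**2 + n**2)

print(round(ssss_nondim_freq(1, 1), 3))  # → 19.739, the fundamental mode
```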
Classical plate theory is used to develop the dynamic stiffness matrix of a functionally graded material plate, whereas the transcendental nature of the dynamic stiffness matrix is handled by using the Wittrick-Williams algorithm; this formulation has been implemented in MATLAB to extract the natural frequencies of the FGM plate with the desired accuracy. The natural frequencies calculated by the DSM are compared with those available in the literature. Koizumi M. FGM activities in Japan. Composites Part B: Engineering, Vol. 28, Issues 1-2, 1997, p. 1-4. [Publisher] Leissa A. W. The free vibration of rectangular plates. Journal of Sound and Vibration, Vol. 31, Issue 3, 1973, p. 257-293. [Publisher] Bercin A. N. Analysis of orthotropic plate structures by the direct dynamic stiffness method. Mechanics Research Communications, Vol. 22, Issue 5, 1995, p. 461-466. [Publisher] Bercin A. N., Langley R. S. Application of the dynamic stiffness technique to the in-plane vibrations of plate structures. Computers and Structures, Vol. 59, Issue 5, 1996, p. 869-875. [Publisher] Boscolo M., Banerjee J. R. Dynamic stiffness elements and their applications for plates using first order shear deformation theory. Composite Structures, Vol. 89, Issue 3, 2011, p. 395-410. [Publisher] Manish Chauhan, Vinayak Ranjan, Baij Nath Singh. Comparison of natural frequencies of isotropic plate using DSM with Wittrick-Williams algorithm. Vibroengineering Procedia, Vol. 21, 2018, p. 59-64. [Search CrossRef] Yang J., Shen H. S. Dynamic response of initially stressed functionally graded rectangular thin plates. Composite Structures, Vol. 54, Issue 4, 2001, p. 497-508. [Publisher] Baferani A. H., Saidi A. R., Jomehzadeh E. An exact solution for free vibration of thin functionally graded rectangular plates. Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science, Vol. 225, Issue 3, 2011, p. 526-536. [Publisher] Kumar S., Ranjan V., Jana P. 
Free vibration analysis of thin functionally graded rectangular plates using the dynamic stiffness method. Composite Structures, Vol. 197, 2018, p. 39-53. [Publisher] Yin S., Yu T., Liu P. Free vibration analyses of FGM thin plates by isogeometric analysis based on classical plate theory and physical neutral surface. Advances in Mechanical Engineering, Vol. 5, 2013, p. 634584. [Publisher] Chakraverty S., Pradhan K. K. Free vibration of exponential functionally graded rectangular plates in thermal environment with general boundary conditions. Aerospace Science and Technology, Vol. 36, 2014, p. 132-156. [Publisher] Chakraverty S., Pradhan K. K. Free vibration of functionally graded thin rectangular plates resting on Winkler elastic foundation with general boundary conditions using Rayleigh-Ritz method. International Journal of Applied Mechanics, Vol. 6, Issue 4, 2014, p. 1450043. [Publisher]
Plot variable correlations - MATLAB corrplot

[R,PValue] = corrplot(X)
[R,PValue] = corrplot(Tbl)
[___] = corrplot(___,Name=Value)
corrplot(___)
corrplot(ax,___)
[___,H] = corrplot(___)

[R,PValue] = corrplot(X) plots Pearson's correlation coefficients between all pairs of variables in the matrix of time series data X. The plot is a numVars-by-numVars grid, where numVars is the number of time series variables (columns) in X, containing the following subplots: Each off-diagonal subplot contains a scatter plot of a pair of variables with a least-squares reference line, whose slope is equal to the displayed correlation coefficient. Each diagonal subplot contains the distribution of a variable as a histogram. The function also returns the correlation matrix displayed in the plots, R, and a matrix of p-values, PValue, for testing the null hypothesis that each pair of variables is uncorrelated against the alternative hypothesis of a nonzero correlation.

[R,PValue] = corrplot(Tbl) plots the Pearson's correlation coefficients between all pairs of variables in the table or timetable Tbl, and returns tables for the correlation matrix R and the matrix of p-values PValue. To select a subset of variables in Tbl for which to plot the correlation matrix, use the DataVariables name-value argument.

[___] = corrplot(___,Name=Value) uses additional options specified by one or more name-value arguments, using any input-argument combination in the previous syntaxes. corrplot returns the output-argument combination for the corresponding input arguments. For example, corrplot(Tbl,Type="Spearman",TestR="on",DataVariables=1:5) computes Spearman's rank correlation coefficient for the first 5 variables of the table Tbl and tests for significant correlation coefficients.

corrplot(___) plots the correlation matrix without returning outputs. corrplot(ax,___) plots on the axes specified by ax instead of the current axes (gca). ax can precede any of the input-argument combinations in the previous syntaxes.

[___,H] = corrplot(___) additionally returns handles to the plotted graphics objects H. Use elements of H to modify properties of the plot after you create it.

Plot and return Pearson's correlation coefficients between pairs of time series using the default options of corrplot. Input the time series data as a numeric matrix, then plot and return the correlation matrix between all pairs of variables in the data.

R = corrplot(Data)

The correlation plot shows that the short-term, medium-term, and long-term interest rates are highly correlated.

Plot correlations between time series, which are variables in a table, using default options.
Return a table of pairwise correlations and a table of corresponding significance-test p-values. Plot and return the correlation matrix, with corresponding significance-test p-values, between all pairs of variables in the data.

[R,PValue] = corrplot(TT)

R =
          INF_C     INF_G     INT_S     INT_M     INT_L
  INF_C   1         0.92665   0.74007   0.72867   0.7136
  INF_G   0.92665   1         0.59077   0.57159   0.55557
  INT_S   0.74007   0.59077   1         0.9758    0.93843
  INT_M   0.72867   0.57159   0.9758    1         0.98609
  INT_L   0.7136    0.55557   0.93843   0.98609   1

PValue =
          INF_C        INF_G        INT_S        INT_M        INT_L
  INF_C   1            3.6657e-18   3.2113e-08   6.6174e-08   1.6318e-07
  INF_G   3.6657e-18   1            4.7739e-05   9.4769e-05   0.00016278
  INT_S   3.2113e-08   4.7739e-05   1            2.3206e-27   1.3408e-19
  INT_M   6.6174e-08   9.4769e-05   2.3206e-27   1            5.1602e-32
  INT_L   1.6318e-07   0.00016278   1.3408e-19   5.1602e-32   1

corrplot returns the correlation matrix and the corresponding matrix of p-values in tables R and PValue, respectively. By default, corrplot computes correlations between all pairs of variables in the input table. To select a subset of variables from an input table, set the DataVariables option. Plot the correlation matrix for the selected time series.

corrplot(DataTable,DataVariables=prednames(2:end));

Plot Kendall's rank correlations between multiple time series and conduct a hypothesis test to determine which correlations are significantly different from zero. Load data on Canadian inflation and interest rates. Plot the Kendall's rank correlation coefficients between all pairs of variables, identifying which correlations are significantly different from zero by conducting hypothesis tests.

corrplot(DataTable,Type="Kendall",TestR="on")

The correlation coefficients highlighted in red indicate which pairs of variables have correlations significantly different from zero. For these time series, all pairs of variables have correlations significantly different from zero. Test for correlations greater than zero between multiple time series.
Load data on Canadian inflation and interest rates from Data_Canada.mat. Return the pairwise Pearson's correlations and corresponding p-values for testing the null hypothesis of no correlation against the right-tailed alternative that the correlations are greater than zero.

[R,PValue] = corrplot(DataTable,Tail="right");

  INF_G   1.8329e-18   1            2.3869e-05   4.7384e-05   8.1392e-05
  INT_L   8.1592e-08   8.1392e-05   6.7041e-20   2.5801e-32   1

The output PValue has pairwise p-values all less than the default 0.05 significance level, indicating that all pairs of variables have correlations significantly greater than zero.

By default, corrplot plots to the current axes (gca). corrplot does not support UIAxes targets.

Example: corrplot(Tbl,Type="Spearman",TestR="on",DataVariables=1:5) computes Spearman's rank correlation coefficient for the first 5 variables of the table Tbl and tests for significant correlation coefficients.

Type — Correlation coefficient
"Pearson" (default) | "Kendall" | "Spearman" | character vector
Correlation coefficient to compute, specified as a value in this table.
  "Pearson"   Pearson's linear correlation coefficient
  "Kendall"   Kendall's rank correlation coefficient (τ)
  "Spearman"  Spearman's rank correlation coefficient (ρ)
Example: Type="Kendall"

Rows — Option for handling rows in input time series data that contain NaN values
"pairwise" (default) | "all" | "complete" | character vector
Option for handling rows in the input time series data that contain NaN values, specified as a value in this table.
  "all"       Use all rows, regardless of any NaN entries.
  "complete"  Use only rows that do not contain NaN entries.
  "pairwise"  Use rows that do not contain NaN entries in column (variable) i or j to compute R(i,j).
Example: Rows="complete"

Tail — Alternative hypothesis
"both" (default) | "right" | "left" | character vector
Alternative hypothesis Ha used to compute the p-values PValue, specified as a value in this table.
  "both"   Ha: Correlation is not zero.
"right" Ha: Correlation is greater than zero. "left" Ha: Correlation is less than zero. Example: Tail="left" VarNames — Variable names to use in plots Variable names used in the plots, specified as a string vector or cell vector of strings of a length numVars. VarNames(j) specifies the name to use for variable X(:,j) or DataVariables(j). TestR — Flag for testing whether correlations are significant Flag for testing whether correlations are significant, specified as a value in this table. "on" corrplot highlights significant correlations in the correlation matrix plot using red font. "off" All correlations in the correlation matrix plot have black font. Example: TestR="on" 0.05 (default) | scalar in [0,1] Significance level for correlation tests, specified as a scalar in the interval [0,1]. Example: Alpha=0.01 Variables in Tbl for which corrplot includes in the correlation matrix plot, specified as a string vector or cell vector of character vectors containing variable names in Tbl.Properties.VariableNames, or an integer or logical vector representing the indices of names. The selected variables must be numeric. Example: DataVariables=[true true true false] or DataVariables=1:3 selects the first through third table variables. R — Correlations Correlations between pairs of variables in the input time series data that are displayed in the plots, returned as one of the following quantities: numVars-by-numVars numeric matrix when you supply the input X. numVars-by-numVars table when you supply the input Tbl, where numVars is the selected number of variables in the DataVariables argument. p-values corresponding to significance tests on the elements of R, returned as one of the following quantities: numVars-by-numVars table when you supply the input Tbl, where the variables specified by the DataVariables argument determines numVars and the names of the rows and columns of the output table. 
The p-values are used to test the null hypothesis of no correlation against the alternative hypothesis of a nonzero correlation, with test tail specified by the TestR argument. Handles to plotted graphics objects, returned as one of the following quantities: numVars-by-numVars matrix of graphics objects when you supply the input X numVars-by-numVars table of graphics objects when you supply the input Tbl, where the variables specified by the DataVariables argument determines numVars and the names of the rows and columns of the output table The setting Rows="pairwise" (the default) can return a correlation matrix that is not positive definite. The setting Rows="complete" returns a positive-definite matrix, but, in general, the estimates are based on fewer observations. corrplot computes p-values for Pearson’s correlation by transforming the correlation to create a t-statistic with numObs – 2 degrees of freedom. The transformation is exact when the input time series data is normal. corrplot computes p-values for Kendall’s and Spearman’s rank correlations by using either the exact permutation distributions (for small sample sizes) or large-sample approximations. corrplot computes p-values for two-tailed tests by doubling the more significant of the two one-tailed p-values. R2022a: corrplot returns results in tables when you supply a table of data If you supply a table of time series data Tbl, corrplot returns all outputs in separate tables. Rows and variables in the tables correspond to the variables specified by DataVariables. Before R2022a, corrplot returned each output as a matrix when you supplied a table of input data. Starting in R2022a, if you supply a table of input data and return any of the outputs, access results by using table indexing. For more details, see Access Data in Tables. collintest | corr
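The t-statistic transformation described above is easy to reproduce by hand. The sketch below (plain Python rather than MATLAB, and not the corrplot source) computes Pearson's r for one pair of series and the corresponding t = r·sqrt((numObs − 2)/(1 − r²)) on numObs − 2 degrees of freedom; the p-value would then come from the Student's t CDF, which is omitted here.

```python
import math

def pearson_r(x, y):
    """Pearson's linear correlation coefficient for two equal-length samples."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def t_statistic(r, num_obs):
    """Transform r into a t-statistic with num_obs - 2 degrees of freedom."""
    return r * math.sqrt((num_obs - 2) / (1.0 - r * r))

# Small made-up sample: y is nearly linear in x, so r is close to 1.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8]
r = pearson_r(x, y)
t = t_statistic(r, len(x))
print(r, t)  # r is very close to 1, so t is large
```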
EqualEntries - Maple Help

EqualEntries - compare elements inside two container structures

Calling Sequence: EqualEntries(A, B)

The EqualEntries command compares the elements of A with the elements of B. EqualEntries(A, B) returns true when both A and B are the same kind of structure, have the same number of elements, and contain the same elements in the same order. Some exceptions apply:

When comparing tables, both tables must have the same index/value pairs. The indices in both tables must be exactly the same; that is, an index of 1.0 in one table will not compare equally with 1.00 in the other table. The values at each index are compared using evalb.

When comparing rtables, both structures must have the same number of dimensions and the same bounds in each dimension. If A and B are Vectors of differing orientation, EqualEntries will return false. The values at each index are compared using evalb.

When comparing lists, EqualEntries is applied recursively on the elements of the lists. In this way, lists containing other mutable container objects, such as tables and rtables, can be compared in a deeper way than what evalb does.

When comparing sets, EqualEntries will verify that each element in set A has an equivalent entry in set B, and vice versa. This means that sets A and B need not be the same size for EqualEntries to return true. As one example, the set {1.0, 1.00} is considered to have the same entries as {1.000}.

Comparison of modules is currently undefined and may change in the future. All other data structures are directly compared using evalb.
Examples:

> a := <1, 2, 3>;
                a := Vector([1, 2, 3])
> b := Vector[column]([1, 2, 3]);
                b := Vector([1, 2, 3])
> EqualEntries(a, b);
                true
> EqualEntries([1.0, 1.00, 1.000], [1., 1., 1.]);
                true
> t1 := table({color = red, size = 4});
                t1 := table([size = 4, color = red])
> t2 := table({color = red, size = 5});
                t2 := table([size = 5, color = red])
> EqualEntries(t1, t2);
                false
> t2[size] := 4;
                t2[size] := 4
> EqualEntries(t1, t2);
                true
> EqualEntries({1.00, 1.0}, {1.});
                true

The EqualEntries command was introduced in Maple 15.

See Also: LinearAlgebra[Equal]
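For readers outside Maple, the table and list rules above can be mimicked in a few lines of Python (a rough analogue, not a port): dict keys play the role of table indices and must match exactly, while values, like list elements, are compared recursively.

```python
def equal_entries(a, b):
    """Rough Python analogue of Maple's EqualEntries for nested containers."""
    if isinstance(a, dict) and isinstance(b, dict):
        # Like Maple tables: the index sets must match exactly,
        # and the value stored at each index must compare equal.
        if set(a) != set(b):
            return False
        return all(equal_entries(a[k], b[k]) for k in a)
    if isinstance(a, (list, tuple)) and isinstance(b, (list, tuple)):
        # Like Maple lists: same length, elements compared recursively.
        if len(a) != len(b):
            return False
        return all(equal_entries(x, y) for x, y in zip(a, b))
    # Everything else: direct value comparison (the role evalb plays in Maple).
    return a == b

# Mirrors the t1/t2 example from the help page above.
t1 = {"color": "red", "size": 4}
t2 = {"color": "red", "size": 5}
print(equal_entries(t1, t2))   # False
t2["size"] = 4
print(equal_entries(t1, t2))   # True
```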
Interpret estimates for a Weibull regression model in SAS

f(x; \alpha, \beta) = \frac{\beta}{\alpha^{\beta}} x^{\beta - 1} \exp\left(-\left(\frac{x}{\alpha}\right)^{\beta}\right)

Why would you want to use a regression procedure instead of PROC UNIVARIATE? One reason is that the response variable (failure or survival time) might depend on additional covariates. A regression model enables you to account for additional covariates and still understand the underlying distribution of the random errors. A second reason is that the FMM procedure can fit a mixture of distributions. To make sense of the results, you must be able to interpret the regression output in terms of the usual parameters for the probability distributions.

The post Interpret estimates for a Weibull regression model in SAS appeared first on The DO Loop.

Related posts: Sliced survival graphs in SAS; Fit a mixture of Weibull distributions in SAS
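With the density written in the (α, β) form above, it is easy to sanity-check numerically. The sketch below (Python rather than SAS, with arbitrary example parameters, not taken from the post) evaluates the pdf and confirms by a crude Riemann sum that it integrates to approximately 1.

```python
import math

def weibull_pdf(x, alpha, beta):
    """Weibull density f(x; alpha, beta) = (beta/alpha^beta) x^(beta-1) exp(-(x/alpha)^beta)."""
    return (beta / alpha**beta) * x**(beta - 1) * math.exp(-((x / alpha) ** beta))

# Example (made-up) parameters: scale alpha = 2, shape beta = 1.5.
alpha, beta = 2.0, 1.5
h = 1e-3
xs = [h * i for i in range(1, 30000)]   # grid over (0, 30], far past the tail
area = sum(weibull_pdf(x, alpha, beta) for x in xs) * h
print(area)  # close to 1
```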
DChange - change coordinates in a LAVF object

Calling Sequence: DChange( tr, obj, vf, options )

tr - set of equations corresponding to the transformation, from the old variables on the left-hand side of the equations to the new variables on the right-hand side

The DChange method performs the change of variables tr in the LAVF object obj. The method returns a new LAVF object written with respect to the independent and dependent variables specified in vf. The vf argument is required. Other options are as for PDEtools:-dchange, and are ultimately passed through to it. This method is associated with the LAVF object; for more detail, see Overview of the LAVF object.

Examples:

> with(LieAlgebrasOfVectorFields):

Build a LAVF for the sl[2] action on the line:

> X := VectorField([[xi(x), x]]);
                X := xi(x) ∂/∂x
> Sys := LHPDE([diff(xi(x), x, x, x) = 0], indep = [x], dep = [xi]);
                Sys := [diff(xi(x), x$3) = 0], indep = [x], dep = [xi(x)]
> L := LAVF(X, Sys);
                L := [xi(x) ∂/∂x] &where [diff(xi(x), x$3) = 0]

Now set up another vector field with a change of variables:

> Y := VectorField([[eta(y), y]]);
                Y := eta(y) ∂/∂y
> DChange({x = y}, L, Y);
                [eta(y) ∂/∂y] &where [diff(eta(y), y$3) = 0]
Do subqueries add expressive power to SQL queries? - PhotoLens

Does SQL need subqueries? Imagine a sufficiently generalized implementation of the structured query language for relational databases. Since the structure of the canonical SQL SELECT statement is actually pretty important for this to make sense, I don't appeal directly to relational algebra, but you could frame this in those terms by making appropriate restrictions on the form of expressions.

An SQL SELECT query generally consists of a projection (the SELECT part), some number of JOIN operations (the JOIN part), some number of selection operations (in SQL, the WHERE clauses), and then set-wise operations (UNION, EXCEPT, INTERSECT, etc.) followed by another SQL SELECT query. Tables being joined can be the computed results of expressions; in other words, we can have a statement such as:

SELECT t1.name, t2.address
FROM t1
JOIN (SELECT id, address FROM t3) AS t2
  ON t2.id = t1.id
WHERE t1.salary > 50000;

We will refer to the use of a computed table as part of an SQL query as a subquery. In the example above, the second (indented) SELECT is a subquery.

Can all SQL queries be written in such a way as to not use subqueries? The example above can:

SELECT t1.name, t3.address
FROM t1
JOIN t3 ON t3.id = t1.id
WHERE t1.salary > 50000;

This example is somewhat spurious, or trivial, but one can imagine instances where considerably more effort might be required to recover an equivalent expression. In other words, is it the case that for every SQL query q with subqueries, there exists a query q' without subqueries such that q and q' are guaranteed to produce the same results for the same underlying tables?

Let us limit SQL queries to the following form:

SELECT <attributes>
FROM <a table, not a subquery>
JOIN <a table, not a subquery>
WHERE <condition>
  AND <condition>

And so on. I think left and right outer joins don't add much, but if I am mistaken, please feel free to point that out... in any event, they are fair game as well.
As far as set operations go, I guess any of them are fine... union, difference, symmetric difference, intersection, etc... anything that is helpful. Are there any known forms to which all SQL queries can be reduced? Do any of these eliminate subqueries? Or are there some instances where no equivalent, subquery-free query exists? References are appreciated... or a demonstration (by proof) that they are or aren't required would be fantastic. Thanks, and sorry if this is a celebrated (or trivial) result of which I am painfully ignorant.

There is some terminology confusion; a query block within parentheses is called an inner view. A subquery is a query block within either the WHERE or SELECT clause, e.g.

WHERE 3 < (SELECT count(1) FROM emp WHERE dept.deptno = emp.deptno)

In either case, an inner view or subquery can be unnested into a "flat" project-restrict-join. A correlated subquery with aggregation unnests into an inner view with grouping, which then unnests into a flat query:

SELECT deptno FROM dept d
WHERE 3 < (SELECT avg(sal) FROM emp e WHERE d.deptno = e.deptno)

SELECT d.deptno
FROM dept d,
     (SELECT deptno FROM emp GROUP BY deptno HAVING avg(sal) > 3) e
WHERE d.deptno = e.deptno

SELECT d.deptno FROM dept d, emp e WHERE d.deptno = e.deptno
GROUP BY d.deptno HAVING avg(e.sal) > 3

As for algebraic rules for query optimization, relational algebra is known to be axiomatized into the Relational Lattice, which simplifies query transformations as demonstrated here and there.

Source: Link. Question Author: Patrick87. Answer Author: Tegiri Nenashi.
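The unnesting claim is easy to check mechanically. Below is a small sqlite3 sketch (toy tables and values invented for the test) showing that the correlated-subquery form and the flattened inner-view-with-grouping form return the same departments.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE dept (deptno INTEGER)")
cur.execute("CREATE TABLE emp (deptno INTEGER, sal REAL)")
cur.executemany("INSERT INTO dept VALUES (?)", [(10,), (20,), (30,)])
cur.executemany("INSERT INTO emp VALUES (?, ?)",
                [(10, 1.0), (10, 2.0),    # avg 1.5  -> excluded
                 (20, 5.0), (20, 7.0),    # avg 6.0  -> included
                 (30, 4.0)])              # avg 4.0  -> included

correlated = """
    SELECT deptno FROM dept d
    WHERE 3 < (SELECT avg(sal) FROM emp e WHERE d.deptno = e.deptno)
"""
unnested = """
    SELECT d.deptno
    FROM dept d
    JOIN (SELECT deptno FROM emp GROUP BY deptno HAVING avg(sal) > 3) e
      ON d.deptno = e.deptno
"""
r1 = sorted(cur.execute(correlated).fetchall())
r2 = sorted(cur.execute(unnested).fetchall())
print(r1, r2)  # [(20,), (30,)] [(20,), (30,)]
```

This demonstrates equivalence on one instance only, of course; the question being asked is whether such a rewrite exists for every query.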
TestCoverageWorksheet - Maple Help

TestCoverageWorksheet - create a worksheet to determine coverage

Calling Sequence: TestCoverageWorksheet(p, paths, opts)

p - the name of a module or procedure, or a set or list of such names
paths - (optional) string or list or set of strings; paths to test files
opts - (optional) keyword options, see below

setlibname : truefalse or the symbol auto, default is auto. If set to true, Maple will insert a statement setting libname to its value in the current session. If set to false, Maple will not insert such a statement. If set to auto, Maple will behave as setlibname = true if libname has more than one component.

basedir : string or symbol, default currentdir(). Path relative to which paths is understood. The worksheet will contain a currentdir() command to go into this directory before reading the given paths, if any. If this is a string, it is understood as a literal path. If it is a symbol, it is expected to be a symbol understood by kernelopts; e.g., mapledir.

newsheet : truefalse, default true. Whether to insert the content as a new worksheet.

insert : truefalse, default false. Whether to insert the content into the current worksheet.

returnxml : truefalse, default false. Whether to return the content as XML. Otherwise, return NULL.

useshowstat : truefalse, default false. To show the results, you can use either showstat or CodeTools:-Profiling:-Coverage:-Print. The worksheet will include both commands, but one will be commented out; by default that is the showstat command. This option allows you to switch that around.

testfilemarkers : truefalse or the symbol auto, default is auto. If set to true, Maple will insert a call of the form CodeTools:-Test(1, 0, 'label' = str, 'quiet', 'boolout') before each read command. The string str indicates the file name that Maple is about to read. This makes it easier to interpret the result of the call to CodeTools:-TestFailures at the end of the worksheet: it will contain a test failure marking the start of each test file. Setting the option to false will omit these calls. If set to auto, the calls will be included if the number of test files specified is greater than one. If no test files are specified, this option is ignored.

This command creates a worksheet that can be used to examine test coverage of some code by some test files. This can then be used in the following cycle:

1. Run the worksheet by hitting the Execute Worksheet button (!!!).
2. See if all tests pass: if so, skip ahead to step 6.
3. Find which test fails. Because the read statements have semicolons after them, failure results will be shown in the worksheet. You should be able to search for them with Ctrl + F (Command + F on Mac), searching for "failed".
4. Adjust the code and load it, and/or adjust the failing test(s).
5. Return to step 1.
6. Find somewhere you want to improve test coverage. If you do not find such a place, you are done.
7. Add a test case that covers that code, and return to step 1.

The worksheet will contain commands to execute the following steps:

A restart command.
If so directed by the setting of the setlibname option, set the libname variable.
The definition of a command to recursively list the members of a module.
Calls to this command to find all recursive members of the given module, and select the procedures among them.
Calls to CodeTools:-Profiling:-Profile to turn on profiling of all these procedures.
Calls to read in order to read the test files. These test files should have calls to the CodeTools:-Test command to do their tests. They should not include restart commands. If you do not specify the paths argument, Maple will insert instructions where to include the read statements.
Calls to showstat and to CodeTools:-Profiling:-Coverage:-Print. One of these will be commented out.
The advantage of the latter is that you can quickly see if there are lines in the procedure that are not covered; the advantage of the former is that you see all lines of code, so you have context and can understand the code better.

A call to print all entries of CodeTools:-TestFailures(), so that if there are any failures, you are informed of them explicitly. It is useful to have this at the bottom of the worksheet, so that you see it immediately after you have run it with the Execute Worksheet button.

It is typically more useful to use the default options newsheet = true and insert = false, but using the opposite here allows for easier demonstration of what the result looks like in this help page.

> CodeTools:-Profiling:-Coverage:-TestCoverageWorksheet(NewPackage,
      {"test/mpl/123456.tst", "lib/NewPackage/tst/bar.tst", "lib/NewPackage/tst/foo.tst"},
      basedir = mapledir, newsheet = false, insert);

restart
libname := "/maple/cbat/active/169462/lib", "/maple/cbat/active/169462/toolbox/BlockImporter/lib", "/maple/cbat/active/169462/toolbox/GlobalOptimization/lib", "/maple/cbat/active/169462/toolbox/Grid/lib", "/maple/cbat/active/169462/toolbox/MatlabSymbolicToolbox/lib"
all_entries := CodeTools:-RecursiveMembers(NewPackage, 'output' = 'members'):
procs := select(type, all_entries, procedure)
map(CodeTools:-Profiling:-Profile, procs):
currentdir(kernelopts(mapledir)):
CodeTools:-Test(1, 0, 'label' = "next test file: test/mpl/123456.tst", 'quiet', 'boolout'):
read "test/mpl/123456.tst"
CodeTools:-Test(1, 0, 'label' = "next test file: lib/NewPackage/tst/bar.tst", 'quiet', 'boolout'):
read "lib/NewPackage/tst/bar.tst"
CodeTools:-Test(1, 0, 'label' = "next test file: lib/NewPackage/tst/foo.tst", 'quiet', 'boolout'):
read "lib/NewPackage/tst/foo.tst"
#map(showstat, procs):
map(CodeTools:-Profiling:-Coverage:-Print, procs):
map2(printf, "%s\n", CodeTools:-TestFailures()):

The CodeTools:-Profiling:-Coverage:-TestCoverageWorksheet command was introduced in Maple 2021.
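The same profile-then-report coverage loop can be imitated outside Maple. The Python sketch below (a toy stand-in for the CodeTools profiling machinery, with a made-up target function) records which lines of a function a test run executes, so untested branches show up as missing line numbers, much as uncovered statements show up in the Coverage:-Print output.

```python
import sys

def record_coverage(func, *args):
    """Run func(*args) and return the set of line numbers executed inside it."""
    executed = set()

    def tracer(frame, event, arg):
        # Only record line events for the function under test.
        if frame.f_code is func.__code__ and event == "line":
            executed.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return executed

def classify(n):   # toy function under test
    if n >= 0:
        return "non-negative"
    else:
        return "negative"

only_positive = record_coverage(classify, 5)
both_branches = only_positive | record_coverage(classify, -5)
print(len(both_branches) > len(only_positive))  # True: the negative branch was untested
```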
Some elementary results on the cohomology of graded Lie algebras
Bøgvad, Rikard

author = {B{\o}gvad, Rikard},
title = {Some elementary results on the cohomology of graded {Lie} algebras},

AU - Bøgvad, Rikard
TI - Some elementary results on the cohomology of graded Lie algebras

Bøgvad, Rikard. Some elementary results on the cohomology of graded Lie algebras, in Homotopie algébrique et algèbre locale, Astérisque, no. 113-114 (1984), 11 p. http://www.numdam.org/item/AST_1984__113-114__156_0/

(1) H. Bass, Finitistic dimension and a homological generalization of semi-primary rings, Trans. A.M.S. 95 (1960). | Article | MR 157984 | Zbl 0094.02201
(2) R. Bieri, Homological dimension of discrete groups, Queen Mary Coll. Math. Notes (1976). | MR 466344 | Zbl 0357.20027
(3) P. M. Cohn, Free rings and their relations, LMS Monographs, No. 2, Academic Press (London, New York, 1971). | MR 371938 | Zbl 0232.16003
(4) P. Farkas, Self-injective group rings, J. of Algebra 1 (1973). | MR 313300 | Zbl 0258.16008
(5) Y. Felix, J.-C. Thomas, Characterization of spaces whose rational LS-category is two, preprint, Lille (1982).
(6) C. Jacobsson, On local flat homomorphisms and the Yoneda Ext-algebra of the fibre, Reports, Dep. of Math., Univ. of Stockholm, 1982:8. | Zbl 0572.13002
(7) C. Löfwall, On the subalgebra generated by the one-dimensional elements of the Yoneda Ext-algebra, Reports, Dep. of Math., Univ. of Stockholm, 1976:5. | Zbl 0429.13008
(8) C. Löfwall, On the center of graded Lie algebras, Reports, Dep. of Math., Univ. of Stockholm (1982).
(9) C. Löfwall, J.-E. Roos, Cohomologie des algèbres de Lie graduées et séries de Poincaré-Betti non-rationnelles, Comptes Rendus Acad. Sci. Paris, 290, série A (1980), p. 733-736. | MR 577146 | Zbl 0449.13011
(10) J.-E. Roos, On the use of graded Lie algebras in the theory of local rings, in Commutative Algebra: Durham 1981, ed. R. Sharp, LMS Lecture Notes 72. | MR 693637 | Zbl 0523.13010
(11) J. Stallings, Centerless groups: an algebraic formulation of Gottlieb's theorem, Topology, vol. 4 (1965). | Article | MR 202807 | Zbl 0201.36001
(12) Y. Felix et al., On the homotopy Lie algebra of a finite complex, Publ. IHES 59 (1982).
ZigZag - in-place conversion of a mod m Matrix to ZigZag form

Calling Sequence
  ZigZag(m, A)

Description
  The ZigZag function applies a sequence of similarity transformations to the n x n mod m Matrix A to obtain the ZigZag form of A. A ZigZag form is an almost block diagonal structure having fewer than 2n nonzero entries.
  A ZigZag form can be used to obtain the Smith normal form and the Frobenius form of a Matrix. The Frobenius form of a Matrix is the same as the Frobenius form of its ZigZag form.
  This command is part of the LinearAlgebra[Modular] package, so it can be used in the form ZigZag(..) only after executing the command with(LinearAlgebra[Modular]). However, it can always be used in the form LinearAlgebra[Modular][ZigZag](..).

Examples
> with(LinearAlgebra[Modular]):
> p := 97;

                               p := 97

> A := Mod(p, Matrix(5, 5, (i, j) -> if i <= j then rand() else 0 end if), integer[]);

        [ 77  96  10  86  58 ]
        [  0  36  80  22  44 ]
        [  0   0  39  60  39 ]
        [  0   0   0  43  12 ]
        [  0   0   0   0  55 ]

> A2 := Copy(p, A):
> ZigZag(p, A2):
> A2;

        [ 77   1   0   0   0 ]
        [  0   0   1   0   0 ]
        [  0   0   0   1   0 ]
        [  0   0   0   0   1 ]
        [  0  44   8  36  76 ]

> Frobenius(A2) mod p, Frobenius(A) mod p;

        [ 0  0  0  0   7 ]    [ 0  0  0  0   7 ]
        [ 1  0  0  0  10 ]    [ 1  0  0  0  10 ]
        [ 0  1  0  0  49 ] ,  [ 0  1  0  0  49 ]
        [ 0  0  1  0   4 ]    [ 0  0  1  0   4 ]
        [ 0  0  0  1  56 ]    [ 0  0  0  1  56 ]
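The Frobenius form survives the conversion because ZigZag form is reached purely by similarity transformations, and similar matrices share the same characteristic polynomial mod p. The following is a minimal Python sketch of that invariance (illustrative only, not Maple code and not Maple's implementation), using the 5 x 5 example matrix above; `char_poly`, `mat_mul`, and the elementary transform `P` are helpers introduced here for the demonstration.

```python
# Similarity transformations mod p preserve the characteristic polynomial,
# which is why Frobenius(A) = Frobenius(ZigZag(A)) in the example above.

p = 97

def mat_mul(X, Y, p):
    """Product of two square matrices with entries reduced mod p."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) % p
             for j in range(n)] for i in range(n)]

def char_poly(A, p):
    """Coefficients [1, c_{n-1}, ..., c_0] of det(xI - A) mod p via the
    Faddeev-LeVerrier recurrence (divisions by 1..n are valid since n < p)."""
    n = len(A)
    M = [[1 if i == j else 0 for j in range(n)] for i in range(n)]  # M_1 = I
    coeffs = [1]
    for k in range(1, n + 1):
        AM = mat_mul(A, M, p)
        c = (-sum(AM[i][i] for i in range(n)) * pow(k, -1, p)) % p
        coeffs.append(c)
        M = [[(AM[i][j] + (c if i == j else 0)) % p for j in range(n)]
             for i in range(n)]
    return coeffs

# The upper-triangular example matrix from the help page.
A = [[77, 96, 10, 86, 58],
     [0, 36, 80, 22, 44],
     [0, 0, 39, 60, 39],
     [0, 0, 0, 43, 12],
     [0, 0, 0, 0, 55]]

n = 5
# Elementary similarity P = I + 3*E_{0,2}, with closed-form inverse I - 3*E_{0,2}.
P = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
Pinv = [row[:] for row in P]
P[0][2] = 3
Pinv[0][2] = (-3) % p

B = mat_mul(mat_mul(P, A, p), Pinv, p)   # B = P A P^{-1} mod p
assert char_poly(A, p) == char_poly(B, p)
```

Note that the bottom-right entry 56 of the Frobenius form above is consistent with this: the companion-matrix last column stores the characteristic polynomial's coefficients, and 56 is exactly the trace of A mod 97.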
Translate - translate a polyhedral set

Calling Sequence
  Translate(polyset, trans)

Parameters
  polyset - polyhedral set
  trans - translation vector: a polyhedral set representing a vertex, a list of rationals, a Vector, or a list/set of equations of the form coordinate = value

Description
  This command translates the polyhedral set polyset according to trans.

Examples
> with(PolyhedralSets):

Translate the cube by [4, 4, 4].

> c := ExampleSets:-Cube();

    c := { Coordinates : [x1, x2, x3]
           Relations   : [-x3 <= 1, x3 <= 1, -x2 <= 1, x2 <= 1, -x1 <= 1, x1 <= 1] }

> c_trans := Translate(c, [4, 4, 4]);

    c_trans := { Coordinates : [x1, x2, x3]
                 Relations   : [-x3 <= -3, x3 <= 5, -x2 <= -3, x2 <= 5, -x1 <= -3, x1 <= 5] }

> Plot([c, c_trans], orientation = [20, 78, 0]);

Compatibility
  The PolyhedralSets[Translate] command was introduced in Maple 2015.
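The translated relations follow a simple rule: a point x lies in the translated set exactly when x - t lies in the original, so each inequality a·x <= b becomes a·x <= b + a·t. A small plain-Python sketch (illustrative only, not the PolyhedralSets implementation; `translate` is a hypothetical helper) reproduces the bounds seen in the cube example:

```python
# Translating an H-representation {x : a.x <= b} by t shifts each
# bound b to b + a.t, leaving the normal vectors a unchanged.

def translate(inequalities, t):
    """inequalities: list of (a, b) pairs encoding a.x <= b."""
    return [(a, b + sum(ai * ti for ai, ti in zip(a, t)))
            for a, b in inequalities]

# The cube [-1, 1]^3 from the example, as the six inequalities +-x_i <= 1,
# listed in the same order as the Relations field above.
cube = [((0, 0, -1), 1), ((0, 0, 1), 1),
        ((0, -1, 0), 1), ((0, 1, 0), 1),
        ((-1, 0, 0), 1), ((1, 0, 0), 1)]

shifted = translate(cube, (4, 4, 4))
print([b for _, b in shifted])  # [-3, 5, -3, 5, -3, 5]
```

The printed bounds match the Maple output, where each -x_i <= 1 became -x_i <= -3 and each x_i <= 1 became x_i <= 5.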